Aug 1 2016

IBM Cognos TM1 – Multi-Threaded Queries (MTQ) in TM1 10.2

What is a Multi-Threaded Query?

Users may want to improve query processing performance by allowing a query to be split into multiple processing threads.
Multi-threaded queries allow IBM Cognos TM1 to automatically load balance a single query across multiple cores. This can improve efficiency and processing time for large queries and rule calculations.

Problems Solved by IBM Cognos TM1 Multi-Threaded Query

Previous Customer Concerns:

  • CPU Utilization: “I’ve got 16 cores and my CPU utilization is at 15%”
  • Server PVU Value: “More cores do not make my queries faster”
  • Data Scale: “TM1 Solutions have a data volume ceiling”
  • Rule Caution: “Rules slow down my queries to an unacceptable performance level”

New Multi-Threaded Query Approach:

  • Simple Configuration: a single tm1s.cfg parameter, MTQ=N (see the sketch after this list)
  • All UIs can leverage MTQ: TM1 multi-threads stargate cache creation
  • High Performance: Query speed improves relative to available cores
  • Manages Concurrency: Available cores are load balanced across queries
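
As a minimal sketch (the value shown is illustrative, not a recommendation), enabling MTQ is a one-line change in tm1s.cfg:

    # tm1s.cfg -- illustrative value only
    # MTQ=N sets the number of worker threads available to multi-threaded queries
    MTQ=8
    # Alternatively, MTQ=ALL uses all available processor cores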

Performance Tuning

The MTQ setting is part of overall performance tuning in the TM1 configuration file (tm1s.cfg); a combined sketch follows the list below:

  • MTQ (default 0)
  • MaximumCubeLoadThreads (default 0)
  • PersistentFeeders (default F)
  • ParallelInteraction (default T)
  • UseLocalCopiesForPublicDynamicSubsets (default T)
  • ViewConsolidationOptimization (default T)
  • ViewConsolidationOptimizationMethod (default ARRAY)
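
For reference, here is a hedged sketch of how these parameters appear in tm1s.cfg, shown with the default values listed above; appropriate values depend on your model and hardware:

    [TM1S]
    MTQ=0                                       # number of threads for multi-threaded queries
    MaximumCubeLoadThreads=0                    # threads used for cube load / feeder processing at startup
    PersistentFeeders=F                         # T persists calculated feeders to .feeders files
    ParallelInteraction=T                       # concurrent read/write access to cube data
    UseLocalCopiesForPublicDynamicSubsets=T     # local working copies of public dynamic subsets
    ViewConsolidationOptimization=T             # caches consolidated values during view calculation
    ViewConsolidationOptimizationMethod=ARRAY   # ARRAY or TREE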

Non-configuration performance improvements include:

  • Set the Windows Power Plan to High Performance (not Balanced)
  • Enable Hyperthreading (a BIOS setting in some cases)
  • Optimize use of the TM1 query cache by configuring the VMM and VMT parameters in the }CubeProperties control cube (see our accompanying blog post!), as sketched below
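
As a minimal, hypothetical sketch (the cube name and values are illustrative only, not recommendations), VMM and VMT can be set by writing to the }CubeProperties control cube from a TurboIntegrator process:

    # TurboIntegrator Prolog -- hypothetical cube name and values
    # VMM: memory (in KB) reserved per cube for storing stargate views
    # VMT: time threshold (in seconds) above which a calculated view is cached
    CellPutS('256', '}CubeProperties', 'Sales', 'VMM');
    CellPutS('3', '}CubeProperties', 'Sales', 'VMT');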

How Multi-Threaded Queries work

  • MTQ is applied to queries exceeding 10,000 cell visits
  • Creation of the TM1 ‘stargate’ cache is multi-threaded
  • Available server cores are applied to queries processing concurrently
  • MTQ automatically load balances cores across concurrent queries

Multiple Worker Threads Operate in Parallel

  • The query is split into worker threads, each performing its own transaction
  • Large, complex (rule-heavy) views will see significant gains, but often not linear to the number of cores assigned
  • Large, non-complex (rule-light or rule-free) views see the greatest gains, often close to linear to the number of cores assigned

How MTQ handles multiple user queries

  • Assumption: an 8-core server with MTQ=8
  • User 1 launches a large data query:
    • Query 1 is assigned 1 master thread and 7 worker threads
  • User 2 launches a second query:
    • Query 2 is queued and assigned 4 threads, leading to a 4/4 split between Queries 1 and 2
  • User 3 launches a third query:
    • Query 3 is queued and assigned 2 threads, while Queries 1 and 2 continue on 3 threads each

Test Results from the IBM Labs

Test results from the IBM Labs have revealed impressive performance gains for MTQ versus Single-Threaded Queries.

MTQ implemented on a Large TM1 Model Customer

The impact of additional cores on query times, with MTQ implemented, was investigated for a large TM1 model customer, as summarized below.

Customer Model Overview:

  • Model size: 75 GB in total, with individual cubes of 3 GB – 10 GB
  • Concurrent users: 20
  • Server: 64 cores, 512 GB RAM

An increase in cores does not have a pro-rata impact on query speed. This is attributed to the fact that the non-stargate operations associated with displaying a view still require a fixed amount of time.

MTQ vs MaximumCubeLoadThreads

  • Memory considerations
    • MaximumCubeLoadThreads can have a significant impact on Memory consumption due to duplicate feeders
    • MTQ setting also has an impact on memory consumption, but to a lesser extent
  • IBM recommendations regarding the number of cores
    • MTQ: maximum number of cores available
    • MaximumCubeLoadThreads: set to 50% of total cores (the old recommendation was the maximum number of cores minus 1); see the sketch below
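
As an illustrative sketch only (the 16-core server is an assumption), these recommendations translate into tm1s.cfg as follows:

    # tm1s.cfg -- hypothetical 16-core server
    MTQ=16                      # maximum number of cores available
    MaximumCubeLoadThreads=8    # roughly 50% of total cores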

Leverage MTQ in TI

The performance gains of MTQ can be leveraged within TurboIntegrator by calling a sub-process that will create a cached view using the ‘ViewConstruct’ TI function.
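
As a hedged sketch (process, cube, view, and parameter names are hypothetical), the parent process calls the sub-process, whose Prolog builds the cached view with ViewConstruct; MTQ is applied while the stargate cache is constructed, and subsequent reads of that view hit the pre-built cache:

    # Parent process (Prolog) -- hypothetical names
    ExecuteProcess('Build View Cache', 'pCube', 'Sales', 'pView', 'Monthly Report');

    # Sub-process 'Build View Cache' (Prolog), with string parameters pCube and pView
    # ViewConstruct builds and stores the stargate view in memory
    ViewConstruct(pCube, pView);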
