Dec 17 2019

Improving rules performance with }StatsByRule – Part I


The Cubewise Code division recently wrote a great article, “7 tips to writing faster IBM TM1 and Planning Analytics rules”. It shows different techniques for writing rules so that they perform better. I presented the same topic at the European Cubewise Conference in Paris this year and think it deserves a bit of background and some real-life examples. In this article, I want to focus on the “why”, “how” and “when” of measuring rule performance.

Functional database

IBM Planning Analytics powered by TM1 is an implementation of a functional database: a marriage between a multidimensional in-memory OLAP database and a spreadsheet-like, cell-oriented calculation engine. It takes the best from both worlds and unifies them in one powerful tool.

Multidimensionality lets you design a model that reflects the business complexity in your company and supports fast, efficient analysis in a pivot-table style. TM1 uses a unique sparse data consolidation algorithm that allows the creation of huge cubes with many dimensions and a literally unimaginable number of potential cells.

Each cell, as in Excel, can be a simple entered value or can be calculated from other cells, which in turn can be simple values or calculations. These calculations are called “Rules” and are the heart of TM1 modelling, as they allow you to model all the complexity needed by your business.
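As a minimal sketch, a rule file for a hypothetical Sales cube might look like the lines below. The cube, measure and feeder names are illustrative, not taken from any real model:

    SKIPCHECK;

    # calculate Revenue at leaf (N) level from two entered measures
    ['Revenue'] = N: ['Units'] * ['Price'];

    FEEDERS;

    # tell the sparse consolidation engine which cells the rule populates
    ['Units'] => ['Revenue'];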

“Why” you should monitor rules performance

TM1/Planning Analytics is extremely scalable and can deal with huge datasets. Rules provide real-time calculations that gather information from different modules and give one reliable answer about the result of the model's calculations. They are relatively easy and quick to develop and can evolve with your business to reflect changing needs. But, as somebody said, with great power comes great responsibility. The same result can be achieved in different ways, so each time you plan the design of your rules you should keep a few things in the back of your head:

  1. Maintainability of the model
    • Sometimes hard-coding a value in a rule is easier or saves time, but driver and mapping settings can change. You can foresee this! Creating parameter cubes that store drivers adds flexibility and reduces future maintenance (see the sketch after this list).
    • You cannot foresee everything, so rules should be easy for you or your colleagues to understand and adjust in the future.
  2. Traceability – users can use the tracing functionality to understand how values are calculated, which is especially helpful in more complex calculations, where it shows the data flow.
  3. Performance – rules must be fast enough to calculate the result in an acceptable time frame.
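For the maintainability point, a hedged sketch of the difference: the cube and element names below (Assumptions, Bonus Rate) are hypothetical, but the pattern of reading a driver from a parameter cube with DB instead of hard-coding it is the one described above.

    # hard-coded driver – works, but every change means editing the rule file
    ['Bonus'] = N: ['Salary'] * 0.1;

    # driver stored in a hypothetical 'Assumptions' parameter cube – planners
    # can change the rate as data, without touching the rule
    ['Bonus'] = N: ['Salary'] * DB('Assumptions', !Version, !Year, 'Bonus Rate');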

Some of these goals are contradictory and achieving all of them is not always possible, so compromises must be made. Until Planning Analytics 2.0, measuring the performance of different calculation methods was complicated because there was no dedicated tool. Developers usually trusted their experience, gut feeling and myths while writing rules; sometimes they used a stopwatch to check how fast rules calculated.

“How” rules can be monitored

The situation changed for the better with Planning Analytics version 2.0 when the }StatsByRule cube was added to TM1’s Performance Monitor. Now you can monitor how many times each rule is called and how long it takes to calculate each rule line.

It is very easy to enable rule performance monitoring: set the “RULE_STATS” property of the cube to “YES” and start the Performance Monitor.
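As a hedged sketch, the property can also be set from a TurboIntegrator process by writing to the }CubeProperties control cube (the cube name 'Sales' below is hypothetical); the Performance Monitor itself can be started from the client interface or by setting PerformanceMonitorOn=T in tm1s.cfg:

    # TI: flag the hypothetical 'Sales' cube for rule statistics capture
    CellPutS('YES', '}CubeProperties', 'Sales', 'RULE_STATS');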

Now, each time a user or a process queries rule-calculated values in the cube, the Performance Monitor tracks statistics and writes the results to the }StatsByRule cube. There are a few things you need to be aware of about how the mechanism works:

  • The Performance Monitor (a subprocess of TM1) gathers statistics and writes the result to the }StatsByRule cube in 60-second intervals, so you sometimes need to wait up to a minute to see the full result of a measurement.
  • You can change the configuration of the cubes you want to monitor while the Performance Monitor is running, but again it can take up to a minute for the changes to be picked up.
  • You can trigger statistics capture in two ways: by refreshing a view that contains rule-calculated cells (from any user interface), or by running a TI process that reads the same cells via CellGetN (see the sketch after this list).
  • The Performance Monitor stores incremented statistics: each time you execute a view or run a TI process, it adds the latest result to the overall result.
  • There is a performance overhead to monitoring rule calculations, so this feature should generally be used in development and not in production.
  • Clearing the statistics in the cube will not reset them; the Performance Monitor will restore the accumulated result after the next time interval.
  • Resetting the statistics can be achieved by:
    • Saving the rule on the cube
    • Restarting the Performance Monitor
    • Restarting the TM1 instance
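For the TI-based trigger, a minimal sketch (the cube, dimension and element names are hypothetical): looping over a dimension and reading each rule-calculated cell with CellGetN forces the rules to execute, which gives the Performance Monitor something to measure.

    cCube = 'Sales';                   # hypothetical cube
    cDim  = 'Product';                 # hypothetical dimension to loop over
    i = 1;
    WHILE(i <= DIMSIZ(cDim));
        sProduct = DIMNM(cDim, i);
        # reading the cell evaluates the rule behind it
        nValue = CellGetN(cCube, 'Actual', '2019', sProduct, 'Revenue');
        i = i + 1;
    END;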

When you open the }StatsByRule cube, you will see that it has three dimensions: }Cubes with the list of cubes, }LineNumber with the starting line of each monitored rule statement, and }RuleStats with the statistics measures:

  • Rule Text – the first 99 characters of the monitored rule. Newline characters are ignored and the text is shown on a single line
  • Total Run Count – how many times the rule was executed
  • Min Time (ms) – how long the fastest execution of the rule took, in milliseconds. Since a single rule execution often finishes in under one millisecond, this is frequently zero
  • Max Time (ms) – how long the longest execution of the rule took. Useful for tracking problematic cases if the value seems unreasonably high
  • Avg Time (ms) – calculated from two other measures: Total Time (ms) / Total Run Count. This is generally a better statistic to look at than the minimum or maximum, which can be outliers
  • Total Time (ms) – how long, in total, it took to calculate the rule line (Total Run Count x Avg Time)
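As a hedged illustration of the Avg Time relationship, a TI snippet could read the statistics back from the cube. The measure element names below are assumed to match the captions above, and the cube 'Sales' and line-number element '25' are hypothetical:

    # read two statistics for a hypothetical rule line starting at line 25
    nTotal = CellGetN('}StatsByRule', 'Sales', '25', 'Total Time (ms)');
    nCount = CellGetN('}StatsByRule', 'Sales', '25', 'Total Run Count');
    IF(nCount > 0);
        nAvg = nTotal / nCount;        # same value as the Avg Time (ms) measure
    ENDIF;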

When you start the analysis, the first thing to check is the Total Time (ms) measure: focus on the rule that takes the longest to execute. Using the techniques from the Cubewise Code article, you can try a different approach and make the rule faster.

Sometimes you may find that a rule identified as slow is not the one that actually causes the problem. When a rule's execution is measured, the reported time includes the time of the other rules it calls for its dependent calculations. For example, let's assume a rule was found in Cube A with an execution time of 1011 milliseconds. After analysing the data flow, it turned out that a chain of calculations provides the result: the data transfer from C to B and from B to A performs fine, and the real cause of the problem is Cube C, which performs a slow allocation.
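A hedged sketch of such a chain (cube, dimension and element names are illustrative): the rule in Cube A only pulls a value from Cube B, but because Cube B's value is itself rule-calculated from Cube C, Cube A's measured time includes the whole chain.

    # Cube A rule file: pull the result from Cube B
    ['Result'] = N: DB('Cube B', !Version, !Period, !Product, 'Result');

    # Cube B rule file: pull from Cube C, where the slow allocation happens
    ['Result'] = N: DB('Cube C', !Version, !Period, !Product, 'Result');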

Feeders can also be analysed using the }StatsByRule cube. The time shown for a feeder also includes the time of all other feeders it triggers. The Run Count shows how many times a particular feeder line fired, not the number of fed cells. If a feeder has a low Total Run Count but a long execution time, there is a good chance of overfeeding somewhere in the feeder chain.
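As a hedged sketch of what overfeeding can look like in a feeder statement (cube, dimension and element names are hypothetical):

    FEEDERS;

    # broad feeder: every Units cell feeds Revenue in every version, even if
    # Revenue is rule-calculated only in 'Actual' – a classic overfeed
    ['Units'] => ['Revenue'];

    # narrower feeder: restrict the source area so only the cells that
    # actually need feeding are fed
    ['Actual', 'Units'] => ['Revenue'];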

Although the measurements are very precise and can point to problematic areas, remember that they are environment-specific: the same rules can give different results, even on similar models. Some of the factors that affect rule performance are:

  • The sparsity of the cube
  • Order of the dimensions in the cube
  • Number of dimension elements
  • Length of the strings used by the rules
  • Resources on the server (e.g. CPU cores, CPU speed)
  • Instance configuration
  • Server hardware

“When” to measure rule execution time

Analysing rule performance is a time-consuming task, especially if you are trying different techniques to speed rules up. You shouldn't run it on a production server, as the analysis itself can cause instance stability problems. If you are tuning rules and changing them often, reconciliation differences, locks and the other usual suspects are a normal part of development and should be expected.

I see three main scenarios in which it is worth the effort to check rule performance.

  • Troubleshooting system performance – you’ve noticed a degradation in performance. In most situations, rules were tested on a smaller set of data than is used in the production environment; sometimes the amount of data has simply grown over time until it reached a problematic size.
  • Optimising rules during development – I usually check the rules on cubes that are likely to be the heavy ones: those that will either store a large amount of data or perform heavy calculations and allocations.
  • Learning – while some of the techniques that improve performance are used only in special cases (e.g. dimension shadowing), others (e.g. preferring the short notation over the long one) should become a habit.

In the following article, I will show how I performed such an analysis on a client database and compare the different techniques used. If you find this part interesting, I highly recommend reading the second one, as some of the findings are quite fascinating and sometimes counter-intuitive.
