- The time required to publish an Intelligent Cube.
- Peak memory usage of the Intelligence Server during cube publication.
- Intelligent Cube publication time:
This metric has three major components, as shown in the diagram:
  - P0–P1: Query execution on the data warehouse – the time taken for the data warehouse to execute the Intelligent Cube SQL and generate the result set data.
  - P1–P2: Data fetching and normalization – the time spent in the actual data fetching operation from the data warehouse to the Intelligence Server process.
  - P2–P3: Analytical Engine processing – the time spent performing the additional processing required to convert the warehouse data and store the cube structure within MicroStrategy.
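The arithmetic for these three components can be sketched as follows. This is a minimal illustration using the sample timestamps discussed later in this article (P0 = 15:35:38, P1 = 15:38:49, P2 = 15:45:12, P3 = 15:45:53); the variable names are our own and are not part of any MicroStrategy API.

```python
from datetime import datetime

# Sample timestamps from the cube publication walked through in this article.
fmt = "%H:%M:%S"
p0 = datetime.strptime("15:35:38", fmt)  # P0: SQL submitted to the warehouse
p1 = datetime.strptime("15:38:49", fmt)  # P1: first data received from the warehouse
p2 = datetime.strptime("15:45:12", fmt)  # P2: data fetch and normalization complete
p3 = datetime.strptime("15:45:53", fmt)  # P3: cube structure conversion complete

query_execution = p1 - p0   # P0–P1: query execution on the warehouse
fetch_normalize = p2 - p1   # P1–P2: data fetching and normalization
analytical      = p3 - p2   # P2–P3: Analytical Engine processing
total           = p3 - p0   # full publication time

print(query_execution, fetch_normalize, analytical, total)
```

Note that the three component durations sum exactly to the total, which is the consistency check applied throughout the rest of this article.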
- Intelligent Cube publication memory utilization:
There are four major memory-consuming components of the cube publication process:
  - M1: The memory used while cube data is being fetched from the data warehouse and processed on the Intelligence Server. This includes the memory required to normalize and persist the Intelligent Cube data.
  - M2: After the Intelligent Cube data is persisted, some process memory may be released by the Intelligence Server process; however, this release may not always be observed.
  - M3: Once the Analytical Engine begins to convert the data to the cube structure and normalize it, some additional memory may be used by the Intelligence Server process.
  - M4: After the Intelligent Cube is published, some memory may be released by the Intelligence Server process.
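These four components can be computed directly from RSS readings in the performance log. Below is a minimal sketch using the sample RSS values (in MB) measured later in this article; the variable names are our own.

```python
# Sample RSS readings (MB) from the performance log discussed in this article.
rss_start       = 148   # before the data fetch begins
rss_after_fetch = 771   # after fetch/normalization completes (end of the M1 period)
rss_peak        = 1748  # peak during Analytical Engine processing
rss_final       = 1268  # after the cube is published

m1 = rss_after_fetch - rss_start   # memory used while fetching and normalizing
m2 = 0                             # no release observed after persistence in this example
m3 = rss_peak - rss_after_fetch    # additional memory during cube-structure conversion
m4 = rss_peak - rss_final          # memory released after publication

print(m1, m2, m3, m4)
```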
- Intelligent Cube Report Execution Page:
When an Intelligent Cube report is run (and its data published), the report execution status is shown on the cube report page in MicroStrategy Desktop. The following screenshot shows the Intelligent Cube report execution page for a sample cube. As shown on this page, publication of this sample cube started at 15:35:38 and took 10 minutes and 18 seconds:
  - Start Time: 15:35:38
  - Execution Time: 10 minutes 18 seconds
Since the report execution page is only available when a user manually executes a cube report and waits for it to finish, this view is not used in the discussion that follows.
- SQL View Statistics:
When a cube report is run, additional information is available in the SQL view. This view can also be accessed through the History List: a cube report can be run by adding it to the History List, and once it is published, double-clicking on it opens the same editor. The following example shows how the SQL view statistics can be used to determine the processing times of the various cube publication steps:
  - Query Execution Start Time (T0): 3:35:38 PM
  - Query Execution Finish Time (T3): 3:45:52 PM
(T3 – T0) should be less than or equal to the "Execution Time" reported on the report page. The difference between this value and the reported execution time is the overhead of executing the report through MicroStrategy Desktop.
  - Total Duration in the Query Engine (T4): 10:13.85
  - SQL Duration (T5): 10:13.41
  - Analytical Duration (T6): 0:00
  - Other Processing Duration in QE (T7): 0:00.44
(T5 + T6 + T7) should be identical to the value for T4, and should be approximately equal to (T3 – T0).
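The consistency check on these durations can be verified with a short calculation, using the sample values above:

```python
from datetime import timedelta

t5 = timedelta(minutes=10, seconds=13.41)  # SQL Duration
t6 = timedelta(0)                          # Analytical Duration
t7 = timedelta(seconds=0.44)               # Other Processing Duration in QE
t4 = timedelta(minutes=10, seconds=13.85)  # Total Duration in the Query Engine

# The component durations should sum exactly to the Query Engine total.
assert t5 + t6 + t7 == t4
```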
Users can also look in the SQL view for the report for a SQL pass called "Populate Report Data" (shown below as Pass3). This SQL pass indicates the time spent in the Analytical Engine during cube publication.
Sample Code/Error:
NATION a16
where pa11.O_ORDERKEY = pa12.O_ORDERKEY and
pa11.P_PARTKEY = pa12.P_PARTKEY and
pa11.S_SUPPKEY = pa12.S_SUPPKEY and
pa11.O_ORDERKEY = a13.O_ORDERKEY and
pa11.S_SUPPKEY = a14.S_SUPPKEY and
pa11.P_PARTKEY = a15.P_PARTKEY and
a14.S_NATIONKEY = a16.N_NATIONKEY
Pass3 – Duration: 0:00:40.36
Pass4 – Duration: 0:00:00.07
drop table ZZTKP010045SP000
If we consider T2 as the start of "Populate Report Data" and T3 as the Query Engine execution finish time:
  - Time to Populate Report Data (T3 – T2): 00:40.36
- Make sure that the system time of the Desktop client and the Intelligence Server are synchronized. If they are not, they should be adjusted. Any time difference can be accounted for, but doing so makes the process more difficult and increases the chance of errors.
- The start time of the Intelligent Cube publication should be noted first. For example, the sample cube publication started at 15:35:38. Users should mark this point in the log (highlighted in light yellow in the screenshot below).
The start time (P0) is then 3:35:38 PM. This is T0 from the discussion above.
- Once the Intelligent Cube publication start time has been located, users need to locate the end time. Note that the counter "Total Completed Cube Reports" is 0 at the beginning of publication; when the cube is published, this counter should increase to 1. In the example below, the counter increased to 1 at 15:45:55, so cube publication ended at 15:45:55.
The Intelligent Cube finish time (P3) is 15:45:55. This should be close to T3 from the discussion above.
Note that in this example P3 and T3 are not exactly the same. There is a 3-second difference between the two values because of the finite resolution of the performance logging tool. In such cases, the value of P3 should be adjusted accordingly (using the SQL view statistics or other methods).
For this example, we could also use the change in the server process RSS reading to approximate the end time as 15:45:53. If adjusted properly, the Intelligent Cube publication execution time is P3 – P0 = 10 minutes 15 seconds. This should be close to (T3 – T0).
- When cube publication starts, the Intelligence Server sends the SQL requests to the database and waits until the SQL is executed and results are returned to the server. This period can be identified in the performance logs: there will be minimal CPU activity, no RSS increase, and little network traffic received on the Intelligence Server network port.
Once the process starts to receive data from the data warehouse, the counters for network KBytes received and the Intelligence Server process RSS start to increase. Because there will always be some background network traffic, users should identify this period by the sharp increase in the amount of network data received.
In the example below, the counter for KBytes received shows significant network traffic at 15:38:49. A couple of seconds later, the counter for the process RSS starts to increase as well (the data received is processed and stored in Intelligence Server memory).
Users can mark this time as P1 and calculate the time spent waiting on the data warehouse as (P1 – P0) = 15:38:49 – 15:35:38 = 3 minutes 11 seconds.
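Identifying P1 from the log amounts to finding the first sample where the network counter jumps well above its baseline. Below is a minimal sketch of that idea; the sample values and the threshold are hypothetical illustrations, not values from any MicroStrategy tool.

```python
# Hypothetical per-second performance-log samples: (time, KBytes received).
# A sharp jump in network traffic marks P1, the start of the data fetch.
samples = [
    ("15:38:47", 12),      # baseline network chatter
    ("15:38:48", 15),
    ("15:38:49", 20480),   # sharp increase: data fetch begins
    ("15:38:50", 21500),
]

# Threshold chosen well above the baseline chatter (assumption for this sketch).
SPIKE_THRESHOLD_KB = 1024

# First timestamp whose received-bytes reading exceeds the threshold is P1.
p1 = next(t for t, kb in samples if kb > SPIKE_THRESHOLD_KB)
print(p1)  # 15:38:49
```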
- While data is returned from the database, the counters for KBytes received and the process RSS and size will continue to increase. For this example we have chosen the Intelligence Server method for data normalization, so during this time the data is also normalized. When the KBytes received counter drops back to a small value, this marks the end of the data fetch and normalization period. In this example, the end time (P2) is 15:45:12.
The total data fetch and normalization time is (P2 – P1) = 15:45:12 – 15:38:49 = 6 minutes 23 seconds.
Users can now also obtain some of the memory measurements from the performance logs:
  - M1: The RSS increase during this period: (771 – 148) = 623 MB.
  - M2: In this example, the RSS value does not drop, so M2 = 0.
During the data fetch time, the MSTRSvr process CPU time should be fairly low. If Intelligence Server normalization is used, there may be usage spikes in the process CPU time when normalization is performed.
- Once the data is fetched from the data warehouse and normalized, it is converted to the cube structure by the Analytical Engine component of the Intelligence Server. During this time users may observe constant CPU activity on the Intelligence Server and an increase in process memory. After the data is converted to the cube structure, there may also be a reduction in process memory usage.
The Intelligence Server Analytical Engine time is (P3 – P2) = 15:45:53 – 15:45:12 = 41 seconds.
  - M3: 977 MB (1748 – 771). Note that 1748 is the peak memory usage during the Analytical Engine processing of the cube data, not the memory usage right before the end time. The exact formula for this value is {Max(memory) – Start(memory)}.
  - M4: 480 MB (1748 – 1268). This is the memory released after the job completes, measured from the peak down to the final usage.
Now that all the performance counters have been analyzed, users can compare these values to the counters collected earlier through Desktop and the SQL view:
  - P3 – P0 = 3:11 + 6:23 + 0:41 = 10:15. This should be close to (T3 – T0).
  - P3 – P2 = 41 seconds. This should be close to the value for (T3 – T2).
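The final reconciliation above can be sketched as a tolerance check: the component times measured from the performance log should agree with the SQL view totals to within the logging tool's resolution (a few seconds). The tolerance value here is our own assumption for this example.

```python
from datetime import timedelta

# Component times derived from the performance log (this article's sample cube).
p1_p0 = timedelta(minutes=3, seconds=11)   # warehouse query execution
p2_p1 = timedelta(minutes=6, seconds=23)   # data fetch and normalization
p3_p2 = timedelta(seconds=41)              # Analytical Engine processing

# Totals reported by the SQL view statistics.
t3_t0 = timedelta(minutes=10, seconds=14)  # query start to finish
t3_t2 = timedelta(seconds=40.36)           # Populate Report Data pass

# Allow a small tolerance for the finite resolution of the logging tool (assumption).
TOLERANCE = timedelta(seconds=3)
assert abs((p1_p0 + p2_p1 + p3_p2) - t3_t0) <= TOLERANCE
assert abs(p3_p2 - t3_t2) <= TOLERANCE
```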