Over the past few years I have developed a list of ways to improve performance in SSAS and in two of the tools that use it: ProClarity and PerformancePoint. Below are those performance tips along with other important items to be aware of (if you are not using ProClarity, just skip the steps that mention ProClarity or PAS):
1) Manually add aggregations to the cube via the Aggregation Manager in the BIDS Helper tool. This is most useful when you spot a long-running query in a SQL Server Profiler trace (see My Quick Guide to SSAS Query Tuning Part One) and want to add an aggregation to improve that query's performance. See ADLC Step 5: Create Custom Aggregations
2) When you process a cube, be aware that it will clear the ProClarity Analysis Server (PAS) cache
3) When you deploy a cube, it will usually clear the PAS cache
4) When you change OLAP security, it will clear the PAS cache
5) Use the Aggregation Design Wizard in SSAS to add multiple aggregations. This should be the first thing you do after a cube has been completed. See ADLC Step 1: Create Initial Aggregation Designs
6) Use the Usage Based Optimization in SSAS to add aggregations based on what queries the users are executing (first you must configure the analysis services query log and run a trace for at least a week). See ADLC Step 4: Run Usage-Based Optimization (UBO) Wizard
7) Add partitions to the cube – best to separate by date such as year or month
8) Limit publishing of PAS books, as that will clear the PAS cache. Create separate production and test books so that publishing test books does not affect production
9) Add a new SSAS hierarchy instead of using named sets. This will also allow drill-to-detail
10) Add a new SSAS hierarchy instead of using a ProClarity custom measure that filters items in a hierarchy
11) Add a named calculation to the data source view in SSAS instead of creating a new calculated member on the cube
12) Re-organize the SharePoint site to improve performance: Don’t use pages in PerformancePoint dashboards and limit the number of reports per page
13) Increase the PAS lease period from the default of 6 hours to 24 hours
14) Implement cache warming in SSAS via an SSIS package: Build Your Own Analysis Services Cache-Warmer in Integration Services or SSAS Cache Warming Using SSIS
15) Replace StrToMember with MemberValue to increase performance, though this won’t help caching if you are using dynamic security
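A hedged sketch of what that rewrite can look like (the cube, dimension, and measure names here are only illustrative):

```mdx
-- Slower: the member has to be resolved from a string at query time
WITH MEMBER [Measures].[Sales To Date] AS
  SUM(NULL : StrToMember("[Date].[Date].&[20100630]"),
      [Measures].[Sales Amount])
SELECT [Measures].[Sales To Date] ON COLUMNS
FROM [Sales]

-- Often faster: compare the typed MemberValue property instead,
-- which the formula engine is better able to evaluate in block mode
WITH MEMBER [Measures].[Sales To Date] AS
  SUM(FILTER([Date].[Date].[Date].MEMBERS,
             [Date].[Date].CurrentMember.MemberValue <= CDate("2010-06-30")),
      [Measures].[Sales Amount])
SELECT [Measures].[Sales To Date] ON COLUMNS
FROM [Sales]
```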
16) Warm the PAS cache: ProClarity Analytic Server cache warmer sample
17) Use the CREATE CACHE statement to warm the Storage Engine cache. See How to warm up the Analysis Services data cache using Create Cache statement?
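A minimal sketch of the statement, assuming illustrative cube, measure, and hierarchy names:

```mdx
-- Pre-loads the Storage Engine cache with this subcube; run it, for
-- example, from a scheduled job right after processing finishes
CREATE CACHE FOR [Sales] AS
'({ [Measures].[Sales Amount] },
  [Date].[Calendar Year].[Calendar Year].MEMBERS,
  [Product].[Category].[Category].MEMBERS)'
```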
18) Tune the algorithms used in your MDX (e.g. use named sets to avoid recalculating set expressions)
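For example (illustrative names), hoisting an expensive set expression into a named set means it is evaluated once per query rather than once per cell:

```mdx
-- The TopCount is re-evaluated every time the calculation is hit
WITH MEMBER [Measures].[Top 10 Sales] AS
  SUM(TOPCOUNT([Product].[Product].[Product].MEMBERS, 10,
               [Measures].[Sales Amount]),
      [Measures].[Sales Amount])
SELECT [Measures].[Top 10 Sales] ON COLUMNS FROM [Sales]

-- The TopCount is evaluated once and the named set is reused
WITH SET [Top 10 Products] AS
  TOPCOUNT([Product].[Product].[Product].MEMBERS, 10,
           [Measures].[Sales Amount])
MEMBER [Measures].[Top 10 Sales] AS
  SUM([Top 10 Products], [Measures].[Sales Amount])
SELECT [Measures].[Top 10 Sales] ON COLUMNS FROM [Sales]
```

Note that a named set is fixed when the query starts, so this rewrite only applies when the set does not need to vary with the current cell context.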
19) Tune the implementation of your MDX. Do this via the Analyze button in MDX Studio
20) Within the formula engine, avoid calculations running in the ‘Query’ context: you are forced into it if you use non-deterministic functions like Now(), or if users belong to roles that use dynamic security, even when those users actually share the same permissions
21) Track the performance of PAS using the web page http://iis06/PAS/perfstats.asp
22) In ProClarity, avoid checking the filter options “Filter empty rows” and “Filter empty columns”. With them selected, the generated MDX uses the NON EMPTY keyword, which greatly degrades performance. Instead, uncheck those boxes and use a filter such as: show rows above 0%
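To illustrate, this is roughly the difference in the MDX that gets run (names are only illustrative):

```mdx
-- With "Filter empty rows" checked, the row axis is wrapped in NON EMPTY
SELECT [Measures].[Sales Amount] ON COLUMNS,
  NON EMPTY [Customer].[Customer].[Customer].MEMBERS ON ROWS
FROM [Sales]

-- A "show rows above 0%"-style option expressed as a value filter instead
SELECT [Measures].[Sales Amount] ON COLUMNS,
  FILTER([Customer].[Customer].[Customer].MEMBERS,
         [Measures].[Sales Amount] > 0) ON ROWS
FROM [Sales]
```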
23) Look at partitioning. Try to either split the measure groups so that the majority of the queries hit a small partition (current month for example), or split the measure groups so that the queries are distributed evenly across many partitions (by product for example). Or do both
24) Adjust server settings
25) If using PerformancePoint, check out Dashboard warm up tool
Performance tuning articles:
BIDS Helper – A Visual Studio.Net add-in with features that extend and enhance the functionality of the SQL Server 2005 and SQL Server 2008 BI Development Studio (BIDS).
MDX Studio by Mosha Pasumansky (video: Supercharge MDX performance using MDX Studio). MDX Studio is a tool that helps users of Microsoft Analysis Services analyze complex MDX expressions, monitor the performance characteristics of MDX queries, and gain insight into how MDX interacts with other UDM features such as attribute relationships. It provides a unique visual representation of MDX expressions and allows the user to interact with different stages of MDX execution.
Caching and Cache-Warming in Analysis Services (requires registration)
Ten steps to take to improve performance:
3) MDX tuning
4) Manual aggregation design
5) SSAS cache warming of Formula Engine and Storage Engine with SSIS cache warming package
6) ProClarity cache warming
7) SSAS cache warming of Storage Engine (CREATE CACHE statement)
8) Create partitions
9) Adjust server settings
10) Add new hardware (scale out or scale up)
Measure Group Cache – this is a storage engine cache, so data from it must be passed back to the formula engine before any calculations are executed. Hitting this cache probably means that something in your query or calculations prevents the formula engine from doing any caching (e.g. functions like Now() and UserName() are non-deterministic, so their results cannot be cached at the formula engine level).
Flat Cache – this is a formula engine cache used to store individual cell values, but it is only used when a query is executing in cell-by-cell mode. Seeing it is therefore an indication that your query is not executing in block mode (see the description below). The Flat Cache is a better type of cache to hit than the Measure Group Cache.
Calculation Cache – this is a formula engine cache and is probably the best type of cache because it is used in block mode and stores the final results of any calculations.
Block computation versus cell-by-cell
When the Formula Engine has to evaluate an MDX expression for a query, it can basically do so in one of two ways. It can evaluate the expression for each cell returned by the query, one at a time, an evaluation mode known as “cell-by-cell”; or it can try to analyze the calculations required for the whole query, find situations where the same expression would need to be calculated for multiple cells, and instead do it only once, an evaluation mode known variously as “block computation” or “bulk evaluation”. Block computation is only possible in some situations, depending on how the code is written, but is often many times more efficient than cell-by-cell mode. As a result, we want to write MDX code in such a way that the Formula Engine can use block computation as much as possible, and when we talk about using efficient MDX functions or constructs this is what we in fact mean.

Given that different calculations in the same query, and different expressions within the same calculation, can be evaluated using block computation and cell-by-cell mode, it’s very difficult to know which mode is used when. Indeed, in some cases Analysis Services can’t use block mode anyway, so it’s hard to know whether we have written our MDX in the most efficient way possible. One of the few indicators we have is the Perfmon counter “MDX\Total Cells Calculated”, which basically returns the number of cells in a query that were calculated in cell-by-cell mode; if a change to your MDX increments this value by a smaller amount than before, and the query runs faster, you’re doing something right.
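As an illustration of the kind of rewrite this implies (names are only illustrative), a common pattern is replacing Count(Filter(...)), which tends to run cell-by-cell, with a Sum over an IIF, which the Formula Engine can evaluate in block mode:

```mdx
-- Prone to cell-by-cell evaluation: Filter tests every member individually
WITH MEMBER [Measures].[Big Customers] AS
  COUNT(FILTER([Customer].[Customer].[Customer].MEMBERS,
               [Measures].[Sales Amount] > 1000))
SELECT [Measures].[Big Customers] ON COLUMNS FROM [Sales]

-- Block-mode friendly: the condition is evaluated over the whole set at once
WITH MEMBER [Measures].[Big Customers] AS
  SUM([Customer].[Customer].[Customer].MEMBERS,
      IIF([Measures].[Sales Amount] > 1000, 1, NULL))
SELECT [Measures].[Big Customers] ON COLUMNS FROM [Sales]
```

Comparing the “MDX\Total Cells Calculated” counter before and after such a rewrite is a good way to confirm that the second form really is running in block mode.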