Off to the races: Running Runstats by Howard Goldberg

One of the most common maintenance activities performed on a Db2 LUW database is running Runstats on tables and indexes. Accurate database statistics are vital for the Db2 optimizer to produce optimal plans. The optimizer uses this information to estimate the cost of the many possible access plans for each query and ultimately chooses the most efficient one. The Runstats utility is the vehicle Db2 uses to collect key object information such as the number of rows in a table (card - syscat.tables), the number of pages allocated (fpages - syscat.tables) and used (npages - syscat.tables) by a table, and the index clustering ratio (clusterratio/clusterfactor - syscat.indexes). Runstats is an I/O-intensive operation that can degrade application query and ETL performance while it is in progress. The breadth and depth of Runstats collection can be adjusted to enrich the information collected or to improve Runstats elapsed times. However, it is a delicate balance to collect enough information to produce optimal query plans without disrupting application performance. This article discusses best practices related to the Runstats utility and techniques to improve its performance.
A typical Runstats command
A common Runstats command is the following:
- runstats on table SCHEMA.SYSTABLES with distribution on all columns and sampled detailed indexes all
This Runstats command collects key table- and index-level statistics for the specified table (SCHEMA.SYSTABLES in this example) along with its associated indexes. When the “with distribution” option is specified, column value frequency counts and row counts between column data values (quantiles) are collected. The amount of distribution detail collected for each column is controlled by the num_freqvalues and num_quantiles options of the Runstats command, whose default values are 10 and 20 respectively. If your data is highly skewed, it may be beneficial to increase the num_freqvalues and num_quantiles values. The Runstats command listed below demonstrates how to increase the frequency and quantile counts.
- runstats on table SCHEMA.SYSTABLES on all columns with distribution default num_freqvalues 50 num_quantiles 100 and sampled detailed indexes all
Please note that if a higher number of frequent values and/or quantiles is used, more memory will be consumed during the execution of the Runstats command. You should ensure that the stat_heap_sz database configuration parameter is set to AUTOMATIC to optimize memory utilization. To review the column distribution values collected by the “with distribution” option, use the SQL query listed in Figure 1.
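Figure 1 is not reproduced here, but a minimal sketch of such a query against the SYSCAT.COLDIST catalog view (the schema and table names are placeholders) might look like the following:
select colname,
       type,        -- 'F' = frequent value, 'Q' = quantile
       seqno,
       colvalue,
       valcount
from syscat.coldist
where tabschema = 'SCHEMA'
  and tabname   = 'SYSTABLES'
order by colname, type, seqno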
When the “sampled detailed indexes all” option is specified, the Runstats utility collects details about the cost of the I/O required to fetch data pages from the table using each index. The “sampled” option provides detailed index statistics with approximately the same accuracy as collecting them without sampling, while using only a fraction of the CPU and memory. Please refer to the Runstats command listed below for an example of using the “sampled detailed indexes all” option. As of Db2 version 10.1, the sampled option is used by default when the “and detailed indexes all” option is used.
- runstats on table SCHEMA.SYSTABLES on all columns with distribution default num_freqvalues 50 num_quantiles 100 and sampled detailed indexes all
Key Runstats information
When was Runstats last run on an object?
The last time a Runstats command was executed on a table, along with the table’s associated statistics profile, can be found in the syscat.tables catalog table. A statistics profile stores a table’s Runstats command definition in the catalog so it can be reused. To gather this information, refer to the SQL listed in Figure 2.
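Figure 2 is not shown here; a minimal sketch of the kind of query it describes, reading the STATS_TIME and STATISTICS_PROFILE columns of SYSCAT.TABLES (the schema and table names are placeholders), is:
select tabschema,
       tabname,
       stats_time,           -- last time Runstats updated the statistics
       statistics_profile    -- Runstats command saved with the SET PROFILE option
from syscat.tables
where tabschema = 'SCHEMA'
  and tabname   = 'TEST'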

In addition, when you explain an SQL statement using db2exfmt, the last statistics time for each object is displayed at the end of the explain output. This is valuable information when analyzing an explain plan to ascertain whether an object’s statistics are current. Please refer to Figure 3 below for a sample of an explain plan formatted with db2exfmt.
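To generate that kind of output, a typical workflow (hedged; the database name, statement, and output file are illustrative, and the explain tables must already exist) is to explain the statement and then format the most recent plan:
- explain plan for select * from SCHEMA.TEST where col1 = 10
- db2exfmt -d MYDB -1 -o explain_test.out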

Historical Runstats analysis
Each time a Runstats command is executed, the details about the execution are stored in $IBM_DB_DIR/db2dump/events/db2optstats*.log. For example:
2017-12-17-12.42.29.809640-300 E1296386E815 LEVEL: Event
PID : 19988 TID : 140218946545408 PROC : db2sysc 0
INSTANCE: instxxx NODE : 000 DB : XXXDB
APPHDL : 0-4975 APPID: *LOCAL.instxxx.xxx
AUTHID : SCHEMA HOSTNAME: hostxxxx
EDUID : 586 EDUNAME: db2agent (xxxDB) 0
FUNCTION: DB2 UDB, relation data serv, sqlrLocalRunstats, probe:220
COLLECT : TABLE AND INDEX STATS : Object name with schema : AT "2017-12-17-12.42.29.809595" : BY "User" : success
OBJECT : Object name with schema, 13 bytes
SCHEMA .TEST
IMPACT : None
DATA #1 : String, 152 bytes
RUNSTATS ON TABLE "SCHEMA"."TEST" ON ALL COLUMNS WITH DISTRIBUTION ON ALL COLUMNS AND SAMPLED DETAILED INDEXES ALL
-- table statistics profile was used
This log is very useful for determining when a Runstats command was executed, its elapsed time, and which Runstats features were used in collecting the statistics. However, parsing through the logs can be difficult and time consuming. To make this log easier to consume and analyze, the SYSPROC.PD_GET_DIAG_HIST table function can be used to query the events log file with SQL. The SQL in Appendix A can be used to identify long-running Runstats commands. Figure 4, sorted in descending order by ELP_TIME_SS, shows a sample output when using this SQL.
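The Appendix A SQL is not reproduced here, but a hedged sketch of querying the statistics event log through SYSPROC.PD_GET_DIAG_HIST (the seven-day window is illustrative, and the elapsed-time calculation behind ELP_TIME_SS is omitted) looks like this:
select t.timestamp,
       t.objname_qualifier,              -- object schema
       t.objname,                        -- object name
       t.eventtype,                      -- e.g. COLLECT
       substr(t.fullrec, 1, 200) as runstats_detail
from table(sysproc.pd_get_diag_hist(
        'OPTSTATS',                      -- facility: statistics events
        cast(null as varchar(30)),       -- record type: no filter
        cast(null as varchar(18)),       -- impact: no filter
        current timestamp - 7 days,      -- start of window
        current timestamp)) as t         -- end of window
order by t.timestamp desc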
Optimizing the Runstats command
Adjust the “with distribution” option
Additional column distribution statistics are collected when the “on all columns with distribution” option is added to the Runstats command. This column list can be reduced by using the “with distribution on key columns” option, in which case the Runstats command only collects distribution statistics on columns that participate in indexes. Refer to the commands below for examples.
- runstats on table SCHEMA.SYSTABLES on all columns with distribution default num_freqvalues 50 num_quantiles 100 and sampled detailed indexes all
- runstats on table SCHEMA.SYSTABLES on key columns with distribution default num_freqvalues 50 num_quantiles 100 and sampled detailed indexes all
Add sampling to the Runstats command
To improve Runstats elapsed times, the “tablesample” and “indexsample” options can be used to sample the data and index pages examined to populate the object’s catalog statistics. There are two types of sampling: Bernoulli and system. The Bernoulli technique samples rows, whereas system sampling samples pages. System sampling is preferred because it provides better performance. The sampling rates can be adjusted, but 10 percent is a good starting point for both the tablesample and indexsample options. The Runstats command below shows an example of using both the table and index sampling options.
- runstats on table SCHEMA.SYSTABLES with distribution on all columns and sampled detailed indexes all tablesample system (10) indexsample system(10)
Sampling can also be combined with statistical views (statviews). A statview is a view used to capture statistics that provide better estimates of the answer set (rows returned) when joining multiple tables. A statview can be implemented by creating a view over a join between two tables and enabling it with an “alter view … enable query optimization” command. The optimizer automatically determines whether a statview will be beneficial when optimizing a query. The Runstats command below is an example of running Runstats on a statview with sampling.
- runstats on table SCHEMA.systables_SV with distribution tablesample system(10)
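As a hedged sketch (the FACT_TABLE, DIM_TABLE, and column names are illustrative), a statview can be created over a join, enabled for the optimizer, and then sampled with Runstats:
create view SCHEMA.SYSTABLES_SV as
  select f.col1, f.col2, d.col3
  from SCHEMA.FACT_TABLE f
  join SCHEMA.DIM_TABLE d
    on f.dim_key = d.dim_key
alter view SCHEMA.SYSTABLES_SV enable query optimization
runstats on table SCHEMA.SYSTABLES_SV with distribution tablesample system (10)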
Drop unused tables and indexes and redundant indexes
Runstats is an I/O-intensive operation; reducing the I/O by eliminating unneeded objects will improve Runstats elapsed times and also reduce the overall database DML overhead caused by maintaining these unnecessary objects. Db2 keeps track of unused objects and maintains a “lastused” column in the syscat.tables and syscat.indexes catalog tables. The query listed in Figure 5 can be used to identify unused table and index objects in the database.
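Figure 5 is not reproduced here; a minimal sketch of such a query using the lastused columns (the 90-day cutoff and the exclusion of SYS schemas are illustrative choices) is:
select 'TABLE' as objtype, tabschema, tabname, lastused
from syscat.tables
where tabschema not like 'SYS%'
  and lastused < current date - 90 days
union all
select 'INDEX', indschema, indname, lastused
from syscat.indexes
where indschema not like 'SYS%'
  and lastused < current date - 90 days
order by lastused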

In addition to dropping unused indexes, dropping redundant indexes is also beneficial in reducing Runstats times. A redundant index is one whose columns are the leading (high-order) columns of another composite index; a single index can be maintained rather than two. In the example below, index SCHEMA.x1 is redundant since all of its columns are the leading columns of SCHEMA.x2. The SCHEMA.x1 index can be dropped because the optimizer can use SCHEMA.x2 instead, and Runstats will have one fewer object to process.
- create index SCHEMA.x1 on SCHEMA.test (col1, col2)
- create index SCHEMA.x2 on SCHEMA.test (col1, col2, col3)
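As a hedged sketch (not from the article), candidate redundant indexes can be surfaced by comparing the COLNAMES strings in syscat.indexes; treat the output only as candidates and verify uniqueness, include columns, and ascending/descending order before dropping anything:
select a.tabschema, a.tabname,
       a.indname as redundant_candidate, a.colnames as candidate_cols,
       b.indname as covering_index,      b.colnames as covering_cols
from syscat.indexes a
join syscat.indexes b
  on a.tabschema = b.tabschema
 and a.tabname   = b.tabname
 and a.indname  <> b.indname
 and b.colnames like a.colnames || '%'   -- leading-prefix match (approximate)
where a.tabschema not like 'SYS%'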
Reorg fragmented indexes
A Runstats command must scan an index to collect key information used by the optimizer. Most tables have at least one index, but heavily used tables may have many indexes to enhance query performance. As more indexes are added, the efficiency of a Runstats command degrades. In addition, normal DML activity fragments indexes and creates noncontiguous pages. A badly fragmented index can be identified when the number of sequential pages (sequential_pages) recorded for the index in the syscat.indexes catalog table is near zero and/or the number of empty leaf pages (num_empty_leafs) is high. Fragmented indexes cause additional physical I/O and can significantly degrade the performance of a Runstats command and of non-matching index scans. The SQL in Figure 6 can be used to proactively identify highly fragmented indexes. The repair for a fragmented index is to drop and recreate it or, preferably, to reorganize it using the REORG command.
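Figure 6 is not reproduced here; a hedged sketch of such a query, with purely illustrative size and fragmentation thresholds, is:
select indschema, indname, tabschema, tabname,
       nleaf, sequential_pages, num_empty_leafs
from syscat.indexes
where indschema not like 'SYS%'
  and nleaf > 10000                        -- only sizeable indexes
  and (sequential_pages < nleaf * 0.1      -- few contiguous leaf pages
       or num_empty_leafs > nleaf * 0.2)   -- many empty leaf pages
order by nleaf desc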
To help optimize object-scanning operations, Db2 has features such as sequential prefetch and sequential detection. These algorithms anticipate scanning activity and proactively prefetch data pages into the bufferpool ahead of the scan operation. Db2 V10.1 introduced readahead prefetching to more efficiently prefetch the noncontiguous pages caused by badly clustered table data or low-density index pages. Even with readahead prefetching, highly fragmented indexes add significant time to a Runstats command. It is highly recommended to monitor large indexes and proactively reorganize them to eliminate or mitigate index fragmentation and ensure optimal Runstats run times. Well-organized indexes also benefit query processing during non-matching index scans.
Common Runstats mistakes
Not enclosing column groups in parentheses () to collect column group statistics
The optimizer cannot detect correlation between columns used together in a WHERE clause unless there is an index on all of the columns or column group statistics are present. When column statistics are evaluated independently, as with city, state, and zip, the optimizer may underestimate the cardinality and choose a less optimal plan because it cannot detect the correlation between the column values. If a composite index on these columns is present, the optimizer can use the first2keycard, first3keycard, first4keycard, or fullkeycard statistics to ascertain the combined cardinality of the columns. Column group statistics (similar to index key cardinality) collect cardinality statistics for each group of columns specified, without creating an index structure. The example in Figure 7 shows both the good and bad syntax when collecting group statistics.
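Figure 7 is not reproduced here; as a hedged illustration with hypothetical column names, the inner set of parentheses is what defines the column group:
- Column group statistics collected on (city, state, zip):
- runstats on table SCHEMA.test on all columns and columns ((city, state, zip)) with distribution and sampled detailed indexes all
- Only individual column statistics collected, no group statistic:
- runstats on table SCHEMA.test on columns (city, state, zip) with distribution and sampled detailed indexes all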

Not using Statistics Profiles
Failing to use statistics profiles is more a missed opportunity than a mistake. A statistics profile stores a table’s Runstats command definition in the catalog. Using Runstats profiles standardizes database statistics collection and is extremely useful in agile environments where data churn and application releases are high and frequent ad hoc statistics collection is required. The commands to set and use a Runstats profile are the following:
- Set a Runstats Profile
- runstats on table SCHEMA.test on all columns with distribution and sampled detailed indexes all set profile only
- Use a Runstats Profile
- runstats on table SCHEMA.test use profile
Runstats data collection on Db2 Database Partitioning Feature (DPF) databases and skewed tables
A common mistake when running Runstats on a Db2 DPF database is that the cardinality (number of rows) of a table can be set to zero due to table skew. In Db2 DPF, Runstats executes on a single database partition and the results are extrapolated to represent the cumulative statistics across all partitions in the database. The partition used to gather these statistics is the first database partition in the database partition group that the table is partitioned on. If a table is created in a tablespace using the IBMDEFAULTGROUP (nodes 1 - X) partition group, a well-distributed table will have data on node 1 through node X. If a user connects to node 1 and initiates a Runstats command, and there is no data on node 1, then the card column in syscat.tables for the table will be populated with zero. This gives the optimizer false information and causes it to underestimate the number of rows in the table. For example, in Figure 8, use case A, the table SCHEMA.HISTOGRAMBIN was distributed by HISTOGRAM_TYPE, a low-cardinality column, which resulted in severe table skew. In use case B, the table SCHEMA1.HISTOGRAMBIN was distributed by the column PARTITION_KEY, a high-cardinality column, and the table was well distributed across all nodes with no skew. In use case C, the table SCHEMA.HISTOGRAMBIN_distrand was distributed using the random distribution method added in Db2 version 11, which produced an evenly distributed table without choosing a specific distribution key.
Figure 8 - Table distribution use cases

In Figure 9 below, the card column is shown for the SCHEMA.histogrambin table in each use case; the skewed SCHEMA.histogrambin table has a card of zero.
Figure 9 – Card values for skewed and un-skewed tables.
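Before relying on single-partition extrapolation, it can help to verify how evenly a table’s rows are spread across the database partitions. A minimal sketch using the DBPARTITIONNUM function against the table from use case A is:
select dbpartitionnum(histogram_type) as db_partition,
       count(*) as row_count
from SCHEMA.HISTOGRAMBIN
group by dbpartitionnum(histogram_type)
order by 1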

Summary
Many Runstats options were covered in this article, and they can be overwhelming to new developers or to developers switching database platforms. Listed below is a summary of Runstats best practices that will ensure Runstats runs optimally in most use cases.
- Use Runstats profiles
- Use the “with distribution” option
- Use “and sampled detailed indexes all”
- Keep the num_freqvalues and num_quantiles at their defaults
- If you have to increase them, use 50/100 respectively. Please note this is just my rule of thumb for these parameters.
- Use system(10) when running Runstats on statviews
- Use sampling on tables larger than 50 GB
- Once sampling is chosen, use “tablesample system (10) indexsample system(10)”
The most common Runstats commands
- runstats on table SCHEMA.test on all columns with distribution and sampled detailed indexes all set profile only
- runstats on table SCHEMA.test use profile
Conclusion
Effective scheduling of the Runstats utility and leveraging its various data collection features will strike the proper balance between feeding the optimizer critical information and keeping the utility from being intrusive to the application.
Appendix A – Historical Runstats monitoring using table function
