Introduction
This article is one in a series that focuses on improving performance in Db2 Warehouse. This first part provides some background information and lays out the first and most important aspect of query optimization, namely cardinality estimation, or estimating the number of rows that flow through the various operators in a query execution plan. Subsequent articles will discuss other topics of interest to query performance.
Relational Database Management Systems (RDBMS)
Relational databases have stood the test of time, having been around since the 1980s. Db2, the flagship database system from IBM, has evolved from the early 1980s to its current form factors, including Db2 for z/OS, Db2 on various Linux, UNIX and Windows platforms, the IBM Integrated Analytics System (IIAS), Db2 Warehouse on Cloud and Db2 Warehouse. This article is primarily focused on improving performance in the warehouse offerings: IIAS, Db2 Warehouse on Cloud and Db2 Warehouse on Linux, UNIX and Windows platforms.
SQL, the language of choice for interacting with most relational databases, has also stood the test of time, with the SQL standard evolving to incorporate other interface paradigms including procedural languages, object-relational extensions, XML, JSON, spatial and so on. This article focuses on the core SQL interface and does not delve into the performance of these extensions because, by and large, most data warehouse queries are expressed in the core SQL language.
Data warehouse databases managed in relational database systems have grown exponentially in terms of data volumes. Data warehouse databases, and the queries against them, are characterized by:
- very large fact tables often running into billions of rows
- large number of dimension tables
- large number of joins
- large aggregation queries
- large amounts of data ingested into the warehouse
Warehouse query performance is critical given the massive amounts of data that are processed, the complexity of the analytical queries and the ever-increasing expectation of instantaneous results. Fortunately, advances in hardware as well as software have helped make it possible to deal with these complex queries and massive volumes of data.
Db2 does an outstanding job with performance, particularly with the strategic switch to column organized tables. On occasion, however, the optimizer may not have all the information it needs to arrive at the optimal query execution plan. This article provides some techniques to diagnose performance bottlenecks in query execution plans and to improve performance under these circumstances.
Data Warehouse technology in Db2
A brief background on some of the relevant Db2 technology is summarized here. An awareness of these various components and features is fundamental to understanding what affects performance and how to exploit these technologies.
Query Optimizer

Automatic Query Rewrite: SQL is a complex language and there are many ways of writing SQL to achieve the same result. This component of the Db2 query optimizer automatically considers semantically equivalent versions of the query and rewrites the SQL into a better performing version of the statement. The goal of this component is to tackle common patterns of inefficient SQL.
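As a purely illustrative example (the table and column names are hypothetical), the two statements below request the same result, assuming CUST_ID is the primary key of CUSTOMERS; query rewrite aims to ensure that a formulation such as the IN subquery ends up with a plan comparable to the explicit join:

SELECT O.ORDER_ID
FROM ORDERS O
WHERE O.CUST_ID IN (SELECT C.CUST_ID FROM CUSTOMERS C WHERE C.COUNTRY = 'USA');

SELECT O.ORDER_ID
FROM ORDERS O, CUSTOMERS C
WHERE O.CUST_ID = C.CUST_ID
  AND C.COUNTRY = 'USA';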
Query Plan Optimization: This component of the Db2 query optimizer chooses the query execution plan based on a cost model. Complex queries on large tables can be vulnerable to poorly performing plans. The optimizer has long been a strength of Db2, particularly when it has good statistical information. It has been adapted, and continues to be refined, to exploit recent enhancements in the warehouse space. The column organized table feature and the emphasis on minimal manual tuning both drive revisions within the Db2 optimizer.
Parallelism

Multipartition Parallelism (MPP): This form of parallelism, based on database partitioning inherited from the traditional “Database Partitioning Feature” (DPF) in Db2, allows scalability to handle very large databases using a divide and conquer approach. This is done by suitably partitioning the tables across multiple logical partitions that can also be spread across physical machines.
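In practice, a large table is distributed across the database partitions by hashing on a suitable column. A minimal sketch, with illustrative table and column names:

CREATE TABLE SALES (
    SALE_ID  BIGINT NOT NULL,
    CUST_ID  BIGINT,
    AMOUNT   DECIMAL(12,2)
)
DISTRIBUTE BY HASH (CUST_ID);

Choosing a distribution key with many distinct values spreads the rows evenly, and choosing one that is also a frequent join key helps collocate joins on the same partition.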
Symmetric multiprocessing (SMP): In addition to parallelism through database partitioning, SMP parallelism uses multiple processors to perform database operations within the same operating system image, sharing common memory, to further parallelize the work within each MPP logical partition.
Column organized tables

This is the default table organization in Db2 Warehouse. It reduces I/O by reading only the pages of the columns that are relevant to the query. For a query that accesses 4 out of 20 columns in a table, a row organized table requires reading pages that contain all the columns, while a column organized table avoids reading the pages of the 16 columns that are not needed by the query. Column level dictionaries also allow for better compression, given that the dictionaries are customized to the domain of values in each column. Any functionality not yet handled in the column organized processing engine is compensated for and handled in the row organized processing engine. The row engine is also used as the interface to the applications.
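Column organization is the default in Db2 Warehouse, but it can also be requested explicitly on the CREATE TABLE statement. A minimal sketch, with illustrative names:

CREATE TABLE SALES_FACT (
    SALE_DATE  DATE,
    STORE_ID   INTEGER,
    QUANTITY   INTEGER,
    AMOUNT     DECIMAL(12,2)
)
ORGANIZE BY COLUMN;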
Compression

Db2 uses a frequency-based, order-preserving, dictionary-based compression. This allows a very compact representation of highly skewed data. As an example, if two countries, say 'USA' and 'China', are the most frequent values, occupying say 40% of the values in the "Country" column, they can be represented by 1 bit, while the next 8 most frequent countries, occupying say 30% of the values, could be represented by 3 bits, and the infrequent values may be left unencoded.
Compression in Db2 is not just used for compacting the data to save storage; it is also exploited as part of the processing within the column engine. The dictionary codes are ordered according to the values. This allows a simple equality predicate to be applied directly to the encoded column values after converting the predicate constant string to a dictionary code. Additionally, range predicates can be applied because of the order preserving encoding. Small OR and IN list predicates can also be applied on the compressed data itself, avoiding the need to decompress encoded data when the predicates are not satisfied. The encoded data is also compacted in a manner that can be conveniently vectorized for processing, exploiting modern hardware.
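For example, all of the predicate forms in the following illustrative query (table and column names are hypothetical) can typically be evaluated directly against the dictionary codes, with values decoded only for rows that qualify:

SELECT COUNT(*)
FROM CUSTOMER_ADDRESS
WHERE COUNTRY = 'USA'                          -- equality on the encoded value
  AND STATE BETWEEN 'Arizona' AND 'Colorado'   -- range, thanks to order-preserving codes
  AND CITY IN ('Phoenix', 'Tucson');           -- small IN list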
Synopsis

The use of an auxiliary synopsis table, which stores the MIN and MAX value of each column for strides of up to 1024 rows, helps reduce I/O by allowing Db2 to skip reading strides that it determines are not needed for the query. For example, with a predicate C1 = 10, by looking at MIN_C1 and MAX_C1 in the synopsis table, Db2 only needs to access the third stride (with MIN_C1 = 8 and MAX_C1 = 15).
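As an illustration (the values for strides 1, 2 and 4 are hypothetical), the synopsis entries for C1 might look as follows; only the third stride can contain rows satisfying C1 = 10, so the other strides are skipped:

Stride 1: MIN_C1 = 20,  MAX_C1 = 45
Stride 2: MIN_C1 = 50,  MAX_C1 = 93
Stride 3: MIN_C1 = 8,   MAX_C1 = 15
Stride 4: MIN_C1 = 101, MAX_C1 = 250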
Modern Hardware Exploitation

Db2 exploits modern hardware and adopts novel software techniques to process queries that must run against large volumes of data and to process bulk INSERT, UPDATE or DELETE operations. Internally, there is a maniacal focus within Db2 on minimizing access to main memory by maximizing processing within the cache. With processors that support SIMD (Single Instruction, Multiple Data), there is a significant performance benefit: vectors of data are loaded into and retrieved from registers with a single instruction, and operations on those registers are likewise carried out with a single instruction.
The Formatted EXPLAIN
The formatted EXPLAIN of the optimizer plan is useful to gain insight into the Db2 optimizer's choice of the query execution plan (QEP). The plan is captured by prefixing the statement with
EXPLAIN PLAN FOR
and is then formatted with the db2exfmt tool. The formatted output contains a graphical overview of the plan, details for each operator (including cardinality estimates and the filter factors of predicates), and an extended diagnostics section; these are the parts referenced throughout the rest of this article.
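As a brief illustration (the database name, statement and output file name are arbitrary), the plan is captured and then formatted as follows; the -1 option tells db2exfmt to use default values, which pick up the most recently explained statement:

EXPLAIN PLAN FOR
SELECT COUNT(*) FROM CALISTO.CUSTOMER_ADDRESS WHERE COUNTRY = 'USA';

db2exfmt -d mydb -1 -o query1.exfmt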
Tips on analyzing and improving query performance
After looking at a few formatted EXPLAIN plans, you should be well on your way to improving performance. There are four steps that should be considered, in order:
- Tackling abnormal Cardinality estimates
- Tackling the most expensive operators
- Using miscellaneous tricks to improve performance
- Rewriting SQL to improve performance
In this article, we look at the first step; subsequent articles will consider the other steps.
STEP 1: Tackling abnormal cardinality estimation
Here, we look at diagnosing some common cardinality estimation issues; addressing them might help the optimizer come up with better plans. With simple queries and query execution plans involving a few tables and a few simple predicates, it might be possible to get relatively good estimates. Warehouse application queries, on the other hand, may involve many tables with inner joins, outer joins, multiple complex predicates and subqueries containing aggregations, so cardinality estimation can be quite a challenge. The optimizer usually does a good enough job in choosing a query execution plan if it estimates a result size that is within an order of magnitude of the actual result size. Getting good cardinality estimates at the bottom of the plan should be the focus of the exercise. This is because estimation errors near the bottom of the plan propagate and compound as rows flow up through joins and aggregations, so an error at a leaf can distort every choice made above it.
Basic Cardinality Issues
If the source tables in a query do not have statistics collected, it is fairly likely that there will be cardinality estimation issues. You can look at the extended diagnostics section in the formatted EXPLAIN to determine this. Note that if there are no extended diagnostics, it is likely because the EXPLAIN diagnostic tables were not created; consider creating them by running the EXPLAIN DDL that is shipped with Db2. The following is an example of a missing statistics diagnostic. If this appears on the target of an INSERT, RUNSTATS should be collected after the INSERT is done.
Diagnostic Details: EXP0020W Table has no statistics. The table
"CALISTO"."TABLE1" has not had runstats run
on it. This may result in a sub-optimal access
plan and poor performance
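A minimal remediation for this diagnostic might look like the following (the RUNSTATS options shown are one common choice, not the only one). The EXPLAIN tables themselves can be created by running the EXPLAIN.DDL script shipped with Db2 under sqllib/misc, for example with db2 -tf EXPLAIN.DDL.

RUNSTATS ON TABLE CALISTO.TABLE1 WITH DISTRIBUTION;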
Another telltale sign that the cardinality estimate for a predicate may be off (underestimated or overestimated) is a Filter Factor of 0.04 in the operator details. This is the default value used when the optimizer cannot use statistics for the predicate (a column that genuinely has exactly 25 distinct values would also legitimately show 0.04).
24) Sargable Predicate,
Comparison Operator: Equal (=)
Subquery Input Required: No
Filter Factor: 0.04
Predicate Text:
--------------
(UPPER(Q10.USER_INITIAL) = 'C')
It may not always be possible, but if there is a way to remove the expression around the column and/or apply an expression to the constant side instead of the column side, that might give better cardinality estimates. For example, the predicate above might be written as USER_INITIAL = 'C' OR USER_INITIAL = 'c'.
There are some predicates over complex query snippets where it is difficult to compute statistics; these too have their filter factors set to the default estimate of 0.04 and are not easily altered externally.
In an emergency, there is an undocumented setting of the DB2_SELECTIVITY registry variable that allows you to specify the SELECTIVITY on a predicate.
db2set DB2_SELECTIVITY=ALL -im
Alternatively, this registry variable can be set as an optimization guideline appended to the end of the statement, using the standard guideline syntax, for example:
/* <OPTGUIDELINES>
     <REGISTRY>
       <OPTION NAME='DB2_SELECTIVITY' VALUE='ALL'/>
     </REGISTRY>
   </OPTGUIDELINES> */
With this setting, you will be able to use something like the following. Here, the SELECTIVITY value is computed as <the expected number of rows satisfying the predicate> divided by <the number of rows in the table>.
SELECT * FROM T1 WHERE UPPER(USER_NAME) = 'CALISTO' SELECTIVITY 0.0006
Note that this may not always work if Db2 decides to transform the predicate into a different form.
Underestimation with statistical correlation between multiple local predicates
Underestimation is particularly troublesome because Db2 could decide to allocate too little memory, or could pick the actually very large stream to build the hash table in a hash join, if it assumed that stream was small. Cardinality underestimation generally occurs when the statistics available to the optimizer are inadequate. One example is when multiple predicates are applied to the rows of a table and each predicate is treated independently when computing the combined filtering effect. The filter factors of the individual predicates are multiplied to get the combined filter factor of the set of predicates; multiplying this by the number of rows in the table gives the estimate of how many rows satisfy the set of predicates.
The cardinality here is computed assuming the columns are independent. However, these columns could be strongly correlated. In the following example, not many countries have a state called California and not many states have a city called Los Angeles.

The details of the seventh operator (search for "7) TBSCAN" in the details section of the formatted EXPLAIN) show the filter factors of the predicates:
(Q2.COUNTRY = 'USA') Filter Factor: 0.05
(Q2.STATE = 'California') Filter Factor: 0.002000
(Q2.CITY = 'Los Angeles') Filter Factor: 0.0000667
The cardinality estimate with just the basic statistics is computed as follows
180000 * 0.05 * 0.002 * 0.0000667 = 0.0012
The solution for getting better estimates is to collect column group statistics, as follows (adding other column groups on the table as needed for this or other queries):
RUNSTATS ON TABLE CALISTO.CUSTOMER_ADDRESS ON ALL COLUMNS
AND COLUMNS ((COUNTRY, STATE, CITY)) ⬅️ NOTE 1
WITH DISTRIBUTION AND INDEXES ALL SET PROFILE;
NOTE 1: Note the double parentheses for a single column group. For multiple groups, each column group is enclosed in its own parentheses and the groups are separated by commas, for example ((C1, C2), (C5, C6, C7)).
If the combined number of distinct values of all 3 columns together is 12,500, the cardinality estimate with column group statistics is computed as follows
180000 * ( 1 / 12500 ) = 14.4
This is likely a significantly better estimate of the number of rows with customers from Los Angeles.
Underestimation with statistical correlation between multiple join predicates
Cardinality estimates like 2.43817e-05 in the formatted EXPLAIN (scientific notation with a negative exponent, i.e., a small fraction of a row) are telltale signs that you need to correct cardinalities. The example above used equality local predicates; one might instead see a hash join with the following join predicates:
(Q2.COUNTRY = Q3.COUNTRY) Filter Factor: 0.05
(Q2.STATE = Q3.STATE) Filter Factor: 0.002000
(Q2.CITY = Q3.CITY) Filter Factor: 0.0000667
You need to do the same when you have multiple equality join predicates between two tables, that is, add column group statistics in a similar manner. Here you can pick any one of the tables directly involved in the join.
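For instance (the table name is illustrative), the column group would be collected on one of the two tables involved in the join, in the same way as before:

RUNSTATS ON TABLE CALISTO.CUSTOMER ON ALL COLUMNS
AND COLUMNS ((COUNTRY, STATE, CITY))
WITH DISTRIBUTION SET PROFILE;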
Note that column group statistics are not used when the predicates involve columns from different tables. For example, even though the left hand side columns of the following predicates are from a single table, the right hand side columns are from different tables:
(Q2.COUNTRY = Q3.COUNTRY)
(Q2.STATE = Q4.STATE)
(Q2.CITY = Q5.CITY)
Underestimation with join predicate data skew
If column group statistics have already been considered, or if you have a single join predicate, and you still see significant underestimation, it could be due to very high skew in the join column. If so, this join may be a good candidate for a statistical view. Statistical views are views on which you can collect statistics. Once a statistical view has been created and RUNSTATS has collected its statistics, it can be used automatically to correct the optimizer's estimates for the join. One can get skew information by looking at the frequency statistics in the catalogs, as shown below.
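For example, assuming distribution statistics have been collected, a query like the following lists the most frequent values recorded for the join column used in the statistical view example below:

SELECT COLVALUE, VALCOUNT
FROM SYSCAT.COLDIST
WHERE TABSCHEMA = 'TPCDS'
  AND TABNAME = 'INVENTORY'
  AND COLNAME = 'INV_DATE_SK'
  AND TYPE = 'F'
ORDER BY VALCOUNT DESC;

The TYPE = 'F' rows are the frequency statistics; a few values accounting for a large share of the VALCOUNT totals indicates significant skew.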
In the example below, the statistical view is on the join of the two tables inventory and date_dim because inv_date_sk has significant skew.
CREATE VIEW TPCDS.INV_DATE AS
(SELECT date_dim.* ⬅️ Typically only need dimension table predicate columns
FROM inventory, date_dim ⬅️ Join of INVENTORY and DATE_DIM
WHERE inv_date_sk = d_date_sk ); ⬅️ Join predicate of interest
ALTER VIEW TPCDS.INV_DATE ⬅️ signals that this is a statistical view and can
ENABLE QUERY OPTIMIZATION; ⬅️ be used by the optimizer
RUNSTATS ON TABLE TPCDS.INV_DATE ⬅️ RUNSTATS collected on the view (join result).
WITH DISTRIBUTION;
Underestimation with overloaded dimensions
Another scenario that often causes underestimation is an "overloaded dimension", for example a date dimension that stores dates from 1950 to 2050. When this dimension is joined to a fact table that only has around 10 years of data, the result can be significant cardinality underestimation.
This cardinality can be corrected within the optimizer by using a statistical view just like the one created in the previous skew scenario; a sketch follows below. This situation is very common in data warehouses with a date or period dimension.
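A minimal sketch, assuming a TPC-DS style fact table STORE_SALES joined to DATE_DIM on SS_SOLD_DATE_SK = D_DATE_SK (these names are illustrative and are not taken from the earlier example):

CREATE VIEW TPCDS.SALES_DATE AS
(SELECT date_dim.*
 FROM store_sales, date_dim
 WHERE ss_sold_date_sk = d_date_sk);

ALTER VIEW TPCDS.SALES_DATE
ENABLE QUERY OPTIMIZATION;

RUNSTATS ON TABLE TPCDS.SALES_DATE
WITH DISTRIBUTION;

Because the view result contains only the dates that actually join to fact rows, the statistics collected on it reflect the roughly 10 years of data rather than the full 1950 to 2050 range.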
Conclusion
In this article, we looked at some background as it pertains to performance in Db2 Warehouse. We looked at the components of the Db2 optimizer and some key features that are relevant to data warehouse query performance. The formatted EXPLAIN was briefly described to help with analyzing performance. The first step to improving performance is to get the cardinality estimates as close as possible to the actual number of rows that flow through the various operators. Subsequent articles will continue with other techniques, namely looking at expensive operators, miscellaneous tips and tricks, and tips on writing SQL queries to get good performance.