Introduction
In Db2, the best way for a developer writing SQL to understand how a query gathers the data for its result set is the Db2 explain tool. I find too many developers do not know what to do, or where to start, when it comes to SQL tuning, program tuning, or application tuning. To me it all starts with each individual SQL statement. Not all performance issues are related to the SQL code itself, but it is the one area developers are empowered to evaluate and change. In Part 1, I talked about some of the first steps a developer should go through when it comes to tuning queries. In this article, I will go through the initial steps in evaluating a SQL query using the Db2 explain output.
I stated in Part 1 that it would be especially helpful if more developers were familiar with reading and analyzing Db2 explain output. If developers looked for just a few of the basic, obvious problem areas in an explain output, many of the performance issues that arise could be minimized. In my opinion, developers have two tasks when writing code:
- To get the output right. Nothing is worse than bad data or bad output, whether in reports, query results, output files, etc.
- To get the results as quickly as possible. In many IT departments this task seems to have been handed to a database analyst: developers write the code, and the database analyst checks each query for efficiency by evaluating the Db2 explain before promoting the code into production. This is a developer task! It does not take long to execute a Db2 explain and look for some of the obvious areas that may be causing a performance issue. All SQL developers should know how to run and analyze Db2 explain output, especially for the obvious performance areas.
Db2 explains can be run through most query tools, and the same basic information is shown in each. For deeper analysis of what Db2 is doing in the gathering of a result set, the IBM Data Studio tool is excellent. This tool does a lot of things, but the two things it does best, in my opinion, are query tuning and the developing and testing of native SQL PL stored procedures.
Whatever tool a developer may be using, the following steps should be followed when evaluating Db2 explain output. What comes out of a Db2 explain is the access path the optimizer has chosen as the best possible way to retrieve the result set the query needs. There may be multiple ways to gather the data; the optimizer evaluates each of them, applies costing information, and selects the path it thinks is the most efficient. When it comes to tuning a query, it is important to know exactly what Db2 is doing in the gathering and retrieval of the result set, and the Db2 explain provides this information.
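For example, on Db2 for z/OS you can populate a PLAN_TABLE and read the access path directly. The following is a minimal sketch, assuming a PLAN_TABLE already exists under your authorization ID and using the Db2 sample EMPLOYEE table; the query number 100 is arbitrary:

-- Explain the statement, tagging the output rows with QUERYNO 100
EXPLAIN PLAN SET QUERYNO = 100 FOR
SELECT EMPNO, LASTNAME
FROM EMPLOYEE
WHERE LASTNAME = 'SMITH';

-- Read back the access path the optimizer chose
SELECT QUERYNO, QBLOCKNO, PLANNO, TNAME,
       ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
FROM PLAN_TABLE
WHERE QUERYNO = 100
ORDER BY QBLOCKNO, PLANNO;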
Step 1: Check to see if Db2 shows a table scan on any of the tables in the query
- Indexes defined on tables serve a number of purposes, and one of the main ones is to speed up the gathering of data being retrieved for queries. A table scan means that Db2 will read through every row in the table to get the data it needs to satisfy the query, and Db2 may choose this whether the result set contains a few rows or many. Typically we do not want table scans on any table in any query, especially in transactional queries, where response time is more critical than in batch. We usually want to see the optimizer choosing indexes to help retrieve the data needed. There are times when a table scan can be very helpful, and those are noted later in this article. Db2 may choose to scan a table for a number of reasons, and it is up to the developer to know why so they can make any changes needed to improve the query's performance.
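If you are reading raw PLAN_TABLE output on Db2 for z/OS, a table (tablespace) scan shows up as ACCESSTYPE = 'R'. A quick check, continuing the sketch above and assuming the same QUERYNO:

-- Flag any table being scanned in the explained query
SELECT QUERYNO, TNAME, ACCESSTYPE
FROM PLAN_TABLE
WHERE QUERYNO = 100
  AND ACCESSTYPE = 'R';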
Why are table scans problematic at times?
- Increased CPU time, because every row in the table must be checked against the query logic.
- Disk I/O: all of a table's data may not be in a buffer pool, so physical I/O is needed. Sequential prefetch helps minimize the I/O when table scans occur, but for some tables there can still be a lot of it.
- Filling up the assigned buffer pool. Data is always brought from physical disk into an area of memory called a buffer pool, and a buffer pool is often shared by a number of tables. Once data is brought into a buffer pool, it stays there until the buffer pool fills up and pages begin dropping out. Db2 knows which data pages for a table are in the buffer pool and which are not. Obviously, the more of a table's data pages in buffer pool memory, the better; but one table taking up most of a buffer pool can hurt every other table's response times.
So why might Db2 choose a table scan?
- Are any predicates coded in a non-indexable way that takes possible index choices away from the optimizer? (See the example following this list.)
- Do the predicates in the query not match any available indexes on the table? Know your indexes on a table!
- The table could be small and Db2 decides a table scan may be faster than index processing.
- The catalog statistics could say the table is small. This is more common in test environments where the Runstats utility is not executed very often.
- Are the predicates such that Db2 estimates the query will retrieve a large enough amount of data to warrant a table scan? Some explain tools show the number of rows Db2 expects a query to return (the IBM Data Studio tool is very good at this).
- Are the predicates such that Db2 picks a non-clustered index, with the needed rows scattered across so many data pages, relative to the total number of pages in the table, that a table scan is cheaper? Know how the data is physically clustered in the tablespace!
- Are the tablespace files or index files physically out of shape and need a REORG?
- Are there no predicates at all, so the query wants every row?
- Sometimes there are just too many conditions in the logic to return the results needed any other way. This is quite typical with many predicates that are OR’d together.
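To illustrate the non-indexable predicate point above, here is a sketch using the Db2 sample EMPLOYEE table and assuming an index exists on LASTNAME. Both queries return the same rows, but only the second gives the optimizer an indexable predicate:

-- Non-indexable: the function wrapped around the column blocks any index matching
SELECT EMPNO
FROM EMPLOYEE
WHERE SUBSTR(LASTNAME, 1, 1) = 'S';

-- Indexable rewrite: the column stands alone, so an index on LASTNAME can be matched
SELECT EMPNO
FROM EMPLOYEE
WHERE LASTNAME LIKE 'S%';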
There are times in writing queries when a table scan may be more efficient than any index choice (see the example following this list):
- When the result set will be a high percentage of the rows in the table. Depending on a number of variables, Db2 may choose a table scan when it thinks 30% or more of the table's data is needed. For most queries this is not the case, but for those needing a large percentage of the data in a table, a scan can be the most efficient access.
- When the number of data pages containing needed rows is high relative to the total number of pages in the table. This is quite common when the alternative would be a non-clustering index retrieving a high number of rows.
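For example, a reporting query like the following touches every row by design, so a table scan with sequential prefetch is a reasonable access path (sketched against the Db2 sample EMPLOYEE table):

-- Every row is needed to compute the averages, so scanning the table is appropriate
SELECT WORKDEPT, AVG(SALARY)
FROM EMPLOYEE
GROUP BY WORKDEPT;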
Step 2: Did the optimizer choose an index? Is it index scan or index matching?
- It is one thing to see that an index has been chosen by the optimizer to help retrieve the result set. But is the chosen index being scanned, or are there matching predicates that will be used in the index processing? It is up to the developer to see and understand how efficiently the index is being used. Along with showing that an index was chosen, some explain tools go further and show the number of matching columns used in the index processing. If the number of matching columns is 0, the index file will be scanned; the more matching columns, the faster the index processing. One of the common reasons for an index scan is a missing predicate on the leading column of a composite index. For example, if there were an index on columns (LASTNAME, FIRSTNME, MIDINIT) and the only predicates were:
WHERE FIRSTNME = 'JOE'
AND MIDINIT = 'A'
If Db2 were to choose this index, it would have to scan through all of the last names looking for any 'JOE', 'A' combination.
If the matching column count shows 0, the access method is called a nonmatching index scan, and all the index values in the index file are read. If the matching column count is greater than 0, the access method is called a matching index scan, and the predicates let Db2 navigate the index file very efficiently to find the index values matching the criteria.
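Continuing the example above, here is a sketch of the difference, assuming the composite index on (LASTNAME, FIRSTNME, MIDINIT) exists on the sample EMPLOYEE table; the MATCHCOLS values are what an explain would typically report:

-- Nonmatching index scan: no predicate on leading column LASTNAME (MATCHCOLS = 0)
SELECT EMPNO
FROM EMPLOYEE
WHERE FIRSTNME = 'JOE'
  AND MIDINIT = 'A';

-- Matching index scan: all three index columns supplied (MATCHCOLS = 3)
SELECT EMPNO
FROM EMPLOYEE
WHERE LASTNAME = 'STERN'
  AND FIRSTNME = 'JOE'
  AND MIDINIT = 'A';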
Note: Be careful, because some explain tools (like IBM Data Studio) always label the access graph node IXSCAN when an index is chosen by the optimizer. Even though the node says IXSCAN, the index may or may not actually be scanned; the tool makes you click on the node to see the matching column count. Many explain tools do not show this value at all.

Step 3: Are there any sorts occurring?
- There are a number of sorts that may occur in the execution of a query, and the Db2 explain output will show any that will take place. Data sorts may occur for any of the following reasons: ORDER BY, GROUP BY, DISTINCT, UNION, joins, and subqueries. Db2 sorts are very efficient, but they cost as much as their size, so if the explain output shows a sort occurring, it is important to know how large that sort will be. Know your query and its output! Does the query really need the sort that is occurring? If the sorts occurring are small and needed, they will most likely not be the source of a query runtime problem.
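A common example of an avoidable sort is using UNION where UNION ALL would do. This sketch assumes the two branches cannot return duplicate rows (or that duplicates are acceptable):

-- UNION forces a sort to remove duplicates across the two result sets
SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT = 'A00'
UNION
SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT = 'B01';

-- UNION ALL returns the same rows here without the duplicate-removal sort
SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT = 'A00'
UNION ALL
SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT = 'B01';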
Step 4: Check for any stage 2 predicates
- This was discussed in Part 1 of this article. It is brought up again because some Db2 explain output highlights stage 2 predicates. It is hard to know which predicates are stage 1 versus stage 2, because there are so many ways to write SQL predicate logic, and the stage 1/stage 2 lists have changed with each Db2 version. Rows retrieved in processing have certain predicates applied during stage 1 and others applied during stage 2, and stage 2 predicates cost more to evaluate (about 10% more) than stage 1. While the optimizer has moved more predicate processing into stage 1 over the last few versions of Db2, stage 2 predicates still exist. We need to look for them and, where possible, rewrite them as stage 1; some can be rewritten and some cannot. If a query has a stage 2 predicate that its logic requires and that cannot be rewritten, that is OK. Do not take it out of the query and handle the logic in application code.
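A classic example of a stage 2 predicate that can be rewritten as stage 1 is a function wrapped around a column. Here is a sketch against the Db2 sample EMPLOYEE table:

-- Stage 2: YEAR() must be evaluated against each row after it is retrieved
SELECT EMPNO
FROM EMPLOYEE
WHERE YEAR(HIREDATE) = 2006;

-- Stage 1 (and indexable) rewrite: the column stands alone against a date range
SELECT EMPNO
FROM EMPLOYEE
WHERE HIREDATE BETWEEN '2006-01-01' AND '2006-12-31';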
Conclusion
I have found on the projects I have been a part of that the more developers on a team know how to execute and read Db2 explain output, the fewer performance problems we see in production. It also puts the task of evaluating SQL back onto the developers, which is where it should be, since they know their data best. Anyone writing and developing queries should go through these first steps of analyzing the explain output for any query that is not performing as it should. 'Getting empowered' is what we want for developers: the more they know, the more empowered they become, and the fewer problems we have in production environments.