DB2 LUW Tuning Hash Joins - by Joe Geller

There are a number of fairly important tuning tips that do not get much attention.  In this article I will describe one of my favorites – Hash Joins.

 

HASH Joins

For batch jobs that process large portions of tables, hash joins are often the best choice.  Nested loop joins are good when there are indexable predicates on the inner table and the inner table is clustered in the same order as the outer table.  Otherwise, hash joins can perform much better.  This is because there is one pass of each table and sorts are not needed.  A single pass of a table using prefetch is the best you can do if you need to process the entire table.
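
As a concrete illustration (the table and column names here are hypothetical), a batch query like the one below reads large portions of both tables and joins on an equality predicate, with no selective, indexable predicate on the inner table – a typical hash join candidate:

    -- Hypothetical batch query: large scans of both tables and an equijoin.
    -- With no selective, indexable predicate on the inner table, the
    -- optimizer will typically choose a hash join (HSJOIN).
    SELECT c.cust_id, c.cust_name, SUM(o.amount) AS total_amount
    FROM customer c
    JOIN orders o
      ON o.cust_id = c.cust_id
    GROUP BY c.cust_id, c.cust_name;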

That was a general guideline.  There are, however, a few potential problems that can negatively affect performance.  We’ll take a look at those and how to solve them.  To solve performance problems you need to:

  • have a good understanding of the access path
  • know what the potential problems are
  • know how to recognize them
  • know how to solve them

Hash Join Process

  • DB2 builds a hash table in memory by reading one of the tables (the inner table – the right hand side of the access path graph).  The hash table is a set of memory blocks (called buckets).
    1. Each key is hashed by an algorithm and the key and needed columns are stored in the corresponding bucket in the hash table.
    2. More than one key can hash to the same bucket.  DB2 will maintain a chain of those keys.

  • DB2 then reads the second table (the outer table – the left hand side) and probes the hash table (see the explain sketch after this list).

    1. Each key is hashed and the corresponding bucket is searched for a matching key.
    2. For each matching key, DB2 passes the joined data to the next step in the access path graph.
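
To see which table DB2 chose as the build (right hand) side, explain the statement and format the plan.  A minimal sketch, assuming the explain tables already exist and reusing the hypothetical query from above:

    -- Populate the explain tables (they must already exist; they can be
    -- created with the EXPLAIN.DDL script or SYSPROC.SYSINSTALLOBJECTS).
    EXPLAIN PLAN FOR
    SELECT c.cust_id, c.cust_name, SUM(o.amount) AS total_amount
    FROM customer c
    JOIN orders o
      ON o.cust_id = c.cust_id
    GROUP BY c.cust_id, c.cust_name;

    -- Then format the most recent plan from a shell:
    --   db2exfmt -d MYDB -1 -o plan.txt
    -- In the plan graph, the right hand input of the HSJOIN operator is the
    -- build side and the left hand input is the probe side.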

How big is the hash table?  And, how do we know how big it is?

The simple answer is that the number of buckets is the estimated cardinality of the right hand side of the join – the number of rows DB2 estimates will be returned, based on the statistics in the catalog.
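
Those estimates can be read straight out of the explain tables.  A simplified sketch – it looks only at the most recent explain and skips the requester and statement key columns a production query would also match on:

    -- Estimated rows (STREAM_COUNT) flowing into each operator of the most
    -- recently explained statement.  For an HSJOIN operator, the stream
    -- feeding its inner (build) side approximates the hash table size.
    SELECT o.operator_id,
           o.operator_type,
           s.stream_count AS estimated_rows
    FROM explain_operator o
    JOIN explain_stream s
      ON s.target_id = o.operator_id
     AND s.explain_time = o.explain_time
    WHERE o.explain_time = (SELECT MAX(explain_time) FROM explain_operator)
    ORDER BY o.operator_id;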

What can go wrong?

Memory

The hash table comes from sort memory (sortheap).

  • If your sortheap is not large enough, you will get hash overflows in which some of the data has to be written to the temp tablespace. This will affect performance.
  • Generally DB2 will pick the smaller table (after predicates are applied) for the right hand side. This will require less memory.  If DB2 picks the wrong table (see below for details on why this could happen), you can waste memory and not perform as well.
  • You want to make sure your sortheap is big enough for your hash joins (see the sketch after this list).
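
A quick way to check and adjust those settings from SQL (the page count below is illustrative, not a recommendation; running db2 update db cfg from a shell works just as well):

    -- Current sort memory settings (values are in 4 KB pages).
    SELECT name, value, deferred_value
    FROM sysibmadm.dbcfg
    WHERE name IN ('sortheap', 'sheapthres_shr');

    -- Raise sortheap through ADMIN_CMD; with self-tuning memory enabled it
    -- can also be set to AUTOMATIC.
    CALL SYSPROC.ADMIN_CMD('UPDATE DB CFG USING SORTHEAP 20000');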

Long chains of keys in each bucket

It is normal for more than one key to hash to the same bucket.  Searching a chain of several keys is not a problem.  However, if DB2’s estimates are way too small, the hash table will be too small and the chains will get very long.  If the chains average 1,000 keys, every probe from the outer table has to walk roughly 1,000 entries instead of one or two, and the CPU time will be enormous.

Why would DB2’s estimates be that far off?  What is wrong with the Filter Factors?

  • The catalog statistics may be old
  • Distribution statistics have not been collected
  • Host variables are used in the predicates and there is an uneven distribution of values
  • Correlated columns
  • There are many situations where DB2 (or any DBMS) cannot accurately estimate the filter factor. Host variables with range predicates in particular are a problem.  This past year I did a presentation at the IDUG Conferences on Filter Factors. It was recorded and put on the IDUG Content Blog last December: 

The ABCs of Filter Factors (aka Selectivity) http://www.idug.org/p/bl/ar/blogaid=568

  • If you are joining more than 2 tables, the right hand side may itself be a join of several tables. If the filter factors and estimates are incorrect anywhere along the way, they will stay wrong at each step in the process (see the RUNSTATS sketch after this list).
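
The first line of defense is current statistics, including distribution statistics and, for correlated columns, column group statistics.  A sketch (the schema, table, and column group are hypothetical):

    -- Collect table, distribution, and detailed index statistics, plus a
    -- column group statistic on two correlated columns (STATE, CITY).
    CALL SYSPROC.ADMIN_CMD(
      'RUNSTATS ON TABLE MYSCHEMA.ORDERS
         ON ALL COLUMNS AND COLUMNS ((STATE, CITY))
         WITH DISTRIBUTION
         AND DETAILED INDEXES ALL');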

How do we fix the problems?

  • Recognize that there is a problem
    1. Look at the Explain output. Check if the estimated cardinalities make sense.  If they appear too low, then you will have a problem and will need to work on fixing them (again, see my presentation).
    2. When you test the query, look at the performance metrics. MON_GET_PKG_CACHE_STMT is the best tool for analyzing query performance (see the query sketch after this list).
      1. If the CPU time is very high, be suspicious. Go back to the Explain and see if the hash table size is close to the actual data.
      2. Look in MON_GET_PKG_CACHE_STMT for hash overflows for this statement.
  • Fix the problem – very simply, getting accurate cardinality estimates is the most important thing for tuning hash joins.
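
Here is the query sketch promised above – a starting point against MON_GET_PKG_CACHE_STMT.  The hash join counters shown were added in later Db2 versions, so check that your level has them:

    -- Find the costliest statements and their hash join overflow counts.
    -- The -2 argument means "all database members".
    SELECT total_cpu_time,
           total_hash_joins,
           hash_join_overflows,
           hash_join_small_overflows,
           SUBSTR(stmt_text, 1, 80) AS stmt
    FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS t
    ORDER BY total_cpu_time DESC
    FETCH FIRST 10 ROWS ONLY;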

 
