
DB2 LUW ‘Performance First Aid’ with monreport.dbsummary

By Steve Rees posted May 02, 2017 08:53 AM

  

Investigating performance problems doesn’t have to be hard.  There are some basic steps you can take to find common bottlenecks in your system – ‘performance first aid’ if you will.  We’re going to look at some of these basic problems and how to find them, using just DB2’s built-in capabilities.

Performance problems are almost always about time – how long something takes to run, or how many things (transactions, statements, etc.) can be done in a certain amount of time.  So, for most performance problems, we primarily want to understand where time is being spent.   This is a fundamental principle that applies not only to the basics that we discuss here, but also to more unusual issues you may run into from time to time.

Fortunately, DB2 has many effective ways to figure out where time is going.  The ‘mon_get’ monitor table functions and administrative views provide loads of helpful data about DB2 performance.  Because this is ‘first aid’ and not brain surgery, we’re going to go with the easiest, most ubiquitous tool – monreport.dbsummary. 

A bit of background on dbsummary

Dbsummary is a clever stored procedure, shipped with DB2 as part of the monreport module.  It produces a nice textual report on activity in the system over a period of time.  Snapshots are very outdated these days, but many people still pine for the report-like output format snapshots produce.  This is where dbsummary fills a bit of a gap, giving output which reminds me of a modernized database snapshot.  It captures the necessary monitor data on throughput, time spent, rows read, etc. from DB2’s mon_get  functions, waits for a while (default 10 seconds), captures the same data again and calculates the differences, to see just what happened during the monitoring interval. 

There’s good reason for dbsummary to calculate these delta values and not to just display the raw data it captures – simply because most of the useful performance metrics that DB2 tracks are cumulative since database activation.  A spike or dip or other recent behaviors that you’re interested in can easily get lost if you just look at the running totals that DB2 provides.  I always recommend calculating delta values when using the mon_get functions directly, but fortunately, dbsummary does it for us. 

Dbsummary couldn’t be much easier to use.  You just call it, with an optional argument specifying how large the measurement window should be, in seconds.

[Figure: calling monreport.dbsummary]
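
In plain SQL, the call is just the following (a minimal sketch; the 30 below is an arbitrary example interval):

  -- Use the default 10-second monitoring interval
  CALL MONREPORT.DBSUMMARY()

  -- Or specify the interval explicitly, e.g. 30 seconds
  CALL MONREPORT.DBSUMMARY(30)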

As an added bonus, the monreport module also includes other stored procedures for investigating lock waits, the heaviest statements, etc.  We’ll delve into those a bit below.

Spending time on time spent

Time spent in DB2 tends to break down into two types – processing time and wait time.  Processing time is broadly a good thing to see being accumulated, since it generally accounts for the engine actually doing work.  Wait time, on the other hand, is not so desirable, since it represents where work has to pause for a bit, while the engine waits for something to happen.  DB2 breaks down wait time into many possible categories, such as various types of IO wait time, network wait time, lock wait time, etc.  There are some pretty esoteric types of wait time too – but for ‘first aid’ purposes, we’re going to focus on the most common types. The full list is available in the Knowledge Center – this is a page I have bookmarked and refer to regularly.

We’re going to approach our first aid diagnosis using a simple step-by-step process of elimination.  We start at the highest, broadest diagnostic question, and we work our way down, refining as we go.  We’re starting with system level issues, showing up as unacceptable wait times and unusual processing, and once we’ve looked those over (spoiler alert!) we’ll have a look at individual statements. The following diagram gives an idea of the steps we’re going to follow:

[Figure: flow chart of the first-aid diagnostic steps]

 

Q1: Is the system slow within DB2, or above DB2?

It’s not always the database’s fault.  Sometimes, there can be a bottleneck at a layer above DB2, such as the application layer, the network, etc.  Before we start digging into database stuff, we should do a quick sanity check of client idle wait time (CIWT).  This is the time DB2 spends waiting for the next request from the client – kind of the ‘white space’ between requests.  Obviously, if there is a problem getting requests into DB2 in a timely fashion, the overall throughput and response time are going to suffer.  We calculate the ratio of time spent above DB2 (CIWT) to time spent inside DB2 (total request time), and if that ratio is around 4-5x or less, then the higher layers aren’t likely to be a huge problem and we search inside the database layer for any bottlenecks.  Once the ratio gets to around 10x (i.e., 10x or more time is spent in the application or client or network layers than is spent inside DB2), then we turn our focus above DB2.  The decision is often not black and white, but more about ‘shades of grey’ – which area to investigate first, rather than necessarily being able to completely rule anything out.  As they say – it depends!
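
If you would rather check this ratio directly from the mon_get interfaces, a minimal sketch against MON_GET_CONNECTION looks like the following (both elements are in milliseconds and cumulative since each connection started, so deltas over a known interval are more precise):

  -- Ratio of time spent above DB2 (client idle wait) to time spent inside DB2
  SELECT SUM(client_idle_wait_time) AS ciwt_ms,
         SUM(total_rqst_time)       AS rqst_ms,
         DECIMAL(SUM(client_idle_wait_time), 31, 2)
           / NULLIF(SUM(total_rqst_time), 0) AS ratio
  FROM TABLE(MON_GET_CONNECTION(NULL, -2))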

We’re going to look at a couple of simple examples, and then look at a minor ‘gotcha’ you’ll probably need to consider.  First, here’s an example of dbsummary output showing a ratio that is way below our 4-5x threshold, so all is well in the layers above DB2:

[Figure: dbsummary output showing a ratio well below the 4-5x threshold]

Next, let’s look at a case where there appears to be a significant bottleneck above DB2, slowing down the flow of new requests to the database.

[Figure: dbsummary output suggesting a significant bottleneck above DB2]

Before we can trust a positive diagnosis, we need to make sure CIWT isn’t being artificially inflated by a bunch of idle connections – which actually happens quite a lot.  Suppose there were several DB2 CLP sessions connected to the database but not doing anything.  Each of those would be contributing a large amount to CIWT, and skewing our calculation.  We can be more accurate if we exclude the idle wait time they contribute.  Near the bottom of the dbsummary report is a list of current database connections and basic information about what each one has been doing.

To improve our calculation, we’re going to tweak it a bit by ignoring the client idle wait time that arises from any idle connections.  To do this, we count the number of connections which have done very little or no work during the monitoring period, and we multiply that by how long (in milliseconds) dbsummary was running.  This is 10s by default, or whatever you specified when you called dbsummary.  In my example above, dbsummary ran for 10s, and there were two idle connections that did basically nothing.
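
Put as a simple formula (the numbers are purely illustrative; yours come from your own report):

  adjusted client idle wait time = reported client idle wait time
                                   - (idle connections x interval in ms)
                                 = reported value - (2 x 10,000 ms) in this example

  adjusted ratio = adjusted client idle wait time / total request time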

[Figure: the per-connection section of the dbsummary report, showing two idle connections]

Based on these tweaks, we see the ratio is still fairly high – more than 8x as much time spent above DB2 as inside it – so in this case we would still have good reason to go looking there for issues before diving into DB2.

[Figure: the ratio recalculated with idle connections excluded, still more than 8x]

If our check of client idle wait time indicates that time inside DB2 dominates, then we move on to the next step, to check for bottlenecks in the database.

Q2: Is the system mostly processing, or mostly waiting?

This is an important question, since it determines where the investigation will go next.  To find the answer, we examine dbsummary output for the percent of time spent waiting.

[Figure: dbsummary output showing the percentage of time spent waiting]

In this example, we see a high percentage of time spent waiting, so that directs us away from spending time (yet) looking for any ‘CPU hogs’ in the system.  As a general rule-of-thumb, anything less than about 25% wait time is pretty good (no major blockages in the system), but when it starts to get above 40% or so, we should dig in.

A quick note about ‘requests’ vs. ‘activities’.  I tend to focus more on DB2’s request-based metrics rather than activity-based metrics, because requests cover more types of database interactions.  It usually doesn’t matter too much if you’re just looking around in dbsummary output, but once you start querying the mon_get functions directly and doing calculations, using request metrics is generally preferable.
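
For reference, here is a minimal request-based sketch of the same percentage straight from the mon_get functions (the counters are cumulative since activation, so capture them twice and use the deltas if you need the current picture):

  -- Percentage of request time spent waiting, across all service subclasses
  SELECT DECIMAL(100.0 * SUM(total_wait_time), 31, 2)
           / NULLIF(SUM(total_rqst_time), 0) AS pct_wait
  FROM TABLE(MON_GET_SERVICE_SUBCLASS(NULL, NULL, -2))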

If wait time was low, we could skip the next few checks and go to Q6.  Otherwise, we look for a few ‘usual suspects’ of wait time – disk reads, transaction log writes and lock wait.

Q3: Are we waiting on disk reads?

Dbsummary provides a nice breakdown of some of the important types of wait time within DB2.   There are actually several other types of wait time that are not included in the dbsummary output, but our goal here is ‘first aid’, so dbsummary is often adequate.  Below is some sample output showing what looks like a pretty significant bufferpool read bottleneck.

[Figure: dbsummary wait-time breakdown showing a significant bufferpool read bottleneck]

One important note about how dbsummary presents wait times.  You’ll notice how it says TOTAL_WAIT_TIME is 100%.  This is what it always shows, and can be a little alarming – but it doesn’t mean our system is waiting 100% of the time.  It’s just showing us how the wait time in the system divides up.   That’s why in the previous step (Q2) we first checked to see how much wait time there was in total, rather than going straight to this breakdown list.  Our example was 86% - so the breakdown here is showing us that 99% of the 86% is bufferpool read time.  Likewise, if the system was running really well - with, say, only 5% wait time – the breakdown would still show amounts out of 100% - i.e., how that 5% broke down.  Dbsummary’s use of “percents of percents” may seem a bit awkward at first, but you get used to it.  One other sort of odd thing – because dbsummary doesn’t show all types of wait time, it’s possible that the categories it does show don’t add up to 100%.  This happens when one or more of the wait time types it doesn’t show is a significant factor in the system.

Bufferpool read time is fairly common as a dominant wait time, especially in a system that has grown ‘organically’ over time with the absorption of large volumes of data.  How to reduce it?  We could try to do fewer synchronous reads:

  • Increase bufferpool size? Low hit ratio values in dbsummary output will indicate if this idea makes sense (see the sketch after this list for checking hit ratios directly).  If you already have more than about four bufferpools, you can also consider combining some of them to give DB2 more flexibility on what pages are buffered, and where.  Enabling self-tuning memory can also be very helpful when multiple bufferpools are used.
  • Promote index access? The Design Advisor can give good suggestions on indexing and other improvements, especially for transactional systems which are still doing lots of tablescans.
  • Promote asynchronous reads, i.e. prefetching? Make sure tables which are heavily scanned have an adequate prefetch size, and consider enabling DB2_PARALLEL_IO.  Setting aside a small amount of bufferpool memory for block-based activity could help too.
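
For the hit-ratio check mentioned in the first bullet, here is a minimal sketch against MON_GET_BUFFERPOOL (again, counters are cumulative since activation, so deltas give a truer picture of current behaviour):

  -- Combined data + index hit ratio per user bufferpool
  SELECT VARCHAR(bp_name, 20) AS bufferpool,
         100 - DECIMAL(100.0 * SUM(pool_data_p_reads + pool_index_p_reads), 31, 2)
               / NULLIF(SUM(pool_data_l_reads + pool_index_l_reads), 0) AS hit_ratio_pct
  FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2))
  WHERE bp_name NOT LIKE 'IBMSYSTEMBP%'
  GROUP BY bp_name

  -- If self-tuning memory is in use, a bufferpool can be handed over to it with
  -- ALTER BUFFERPOOL <bufferpool-name> SIZE AUTOMATIC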

If you can’t reduce the number of synchronous reads, perhaps you can make reads faster.  It’s fairly straightforward to determine the average bufferpool read time at the DB2 level, and you can get a long-term average from MON_BP_UTILIZATION (but it’s blind to peaks and valleys since it doesn’t use deltas.)  Your storage administrator can also help determine if the IO subsystem is getting overwhelmed, and if you ask really nicely, perhaps she will set you up with some faster storage…
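
One way to get that average is sketched below (like MON_BP_UTILIZATION, it is an average since activation, so capturing it twice and working with the deltas says more about what is happening right now):

  -- Approximate average physical read time per bufferpool, in milliseconds
  SELECT VARCHAR(bp_name, 20) AS bufferpool,
         DECIMAL(SUM(pool_read_time), 31, 2)
           / NULLIF(SUM(pool_data_p_reads + pool_index_p_reads + pool_xda_p_reads), 0)
           AS avg_read_ms
  FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2))
  WHERE bp_name NOT LIKE 'IBMSYSTEMBP%'
  GROUP BY bp_name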

Q4: Are we waiting on the Transaction Log?

Another fairly common source of database wait time is writing to the transaction log.  Almost all activities which change database content (e.g. INSERT, UPDATE, DELETE, etc.) will be logged, to allow for rollback and/or recovery.  If a large volume of these activities is happening at once, the load on the log can be significant, and it can become a bottleneck.  Log disk wait time is reported by dbsummary, in the same breakdown list we used for bufferpool reads.  Here’s a different example showing a logging bottleneck:

[Figure: dbsummary wait-time breakdown showing a logging bottleneck]

(Don’t forget – before we come looking at how the wait time breaks down, we verify that overall wait time is high enough to be significant.)  In a system with lots of write activity, there will be lots of logging, so seeing log disk wait time account for up to about 25% of total wait time can be fairly normal.  But if we get beyond that, we should drill down a bit on log performance.  A good next step is to determine the average log write time (which, alas, dbsummary doesn’t give us).

[Figure: calculating the average log write time]
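
If you want to reproduce the calculation yourself, here is a minimal sketch against MON_GET_TRANSACTION_LOG (assuming log_write_time is reported in milliseconds, like the other mon_get time metrics):

  -- Rough average log write time in milliseconds, since database activation
  SELECT DECIMAL(SUM(log_write_time), 31, 2)
           / NULLIF(SUM(num_log_write_io), 0) AS avg_log_write_ms
  FROM TABLE(MON_GET_TRANSACTION_LOG(-2))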

A couple of things about this.  First, this calculation gives us the average since database activation, so it’s almost guaranteed to give us a lower value than when the system is really under load[1].  Second, a reasonable target is about 2-5ms for average log write time, so this example shows us that slow log write operations are contributing to our overall log bottleneck.  If this occurs, it’s good to make sure the basics are covered:

  • If possible, make sure the log path and its storage aren’t shared with other things like tablespaces
  • Ensure log buffer size (LOGBUFSZ) is large enough to avoid overflows. Around 2000 4 KB pages is a good starting point – see the sketch below.
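
For the second bullet, the change is a one-liner from the CLP (MYDB is a placeholder database name; the new size generally takes effect at the next database activation):

  -- LOGBUFSZ is specified in 4 KB pages; 2048 pages is roughly an 8 MB log buffer
  UPDATE DB CFG FOR MYDB USING LOGBUFSZ 2048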

As with the bufferpool read issue we looked at earlier, this is another good time to make friends with the storage administrator, and see if a small amount of fast storage (e.g. solid state disk) is available for the logs.

[1] A more accurate technique is to select the raw log_write_time (LWT) and num_log_write_io (NLWI) elements twice, say 10s apart, and then calculate the average log write time using the delta values: (LWT2 – LWT1) / (NLWI2 – NLWI1)

 

Q5: Are we waiting on Locks?

If wait time is high (Q2) but we’re not mainly stuck on bufferpool reads (Q3) or log disk writes (Q4), then lock wait time is a good next culprit to check.  If lock waits are an issue, there’s not too much a DBA can do to tune it away, at least not without some help from the application folks.  To figure out which SQL statements, which database objects, etc., are involved in the lock waits, we can use another routine from the monreport module: monreport.lockwait.
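
Like dbsummary, it is just a stored procedure call:

  -- Each call reports on lock waits in progress at that moment;
  -- running it a few times helps separate patterns from one-off events
  CALL MONREPORT.LOCKWAIT()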

Monreport.lockwait is a bit different from dbsummary, in that it shows information for lock wait situations that are happening right now – rather than over a span of time like dbsummary does.  So if dbsummary indicates high lock wait time, a reasonable next step is to call monreport.lockwait several times during the period when lock wait is an issue.  The reason to collect output multiple times is that a lock wait in any one of them might just be a random event – but if you see similar waits occurring multiple times, you have much more reason to suspect that you’ve found a pattern and are zeroing in on the root cause.  Each call to monreport.lockwait generates a report showing lots of helpful information:

  • Application handles of both the lock holder and the waiter
  • The type of the lock – row, table, etc.
  • The name of the lock – what table it is, and other information to zero in on what’s being locked
  • The modes the lock is held in and requested in – shared, exclusive, etc.
  • The SQL statement of the requester which is waiting on the lock

This information helps you diagnose which SQL statement(s) and application(s) are causing the lock wait time, so you can have a more productive conversation with the application developers on how to avoid significant lock wait.  If lock wait does turn out to be an issue, a few basic causes to look for include unnecessarily high isolation levels in the application, large table or index scans where something more precise (touching fewer rows) could be achieved, and excessively large units of work which end up holding large numbers of locks for extended periods of time.

There are many sources of wait time, but these are three of the biggest.  Let’s move on to processing.

Q6: Are we spending lots of time Compiling SQL?

If the system is slower than we want it to be, and wait time isn’t a huge issue (Q2), then we need to look into processing time.  One of the first things I like to check is whether we’re spending a lot of time compiling SQL.  This isn’t necessarily a bad thing – sometimes it’s unavoidable, for example in warehouse environments with complex SQL and high optimization levels – but it’s good to avoid if we can.  As with wait time, dbsummary gives us a breakdown of how processing time is spent (but with fewer categories.)

[Figure: dbsummary breakdown of processing time, showing 17% spent compiling SQL]

In the example above, the system is spending 17% of its processing time compiling SQL, and only 2% actually executing SQL.  This seems like a bit of a waste, especially in an environment running lightweight transactional SQL.  In a case like this, it’s worthwhile having a look at the SQL in the system, either with monreport.pkgcache (a good “top 10” report about SQL in the system) or something a bit more exhaustive like mon_get_pkg_cache_stmt.  You may find that there’s an application in the system which is producing SQL with embedded literal values (e.g. “SELECT c1 FROM t WHERE c2 = 512”, “SELECT c1 FROM t WHERE c2 = 811”, etc.) rather than using parameter markers (e.g. “SELECT c1 FROM t WHERE c2 = ?”).  SQL with literal values is very helpful when the SQL is complex and the literal values help DB2 choose the best plan.  However, when this is done with simple SQL, it’s a waste of CPU and can cause the package cache to overflow.

If you see something like this, the best solution is to have a chat with the application folks and see if they can change the code to use parameter markers.  Unfortunately, changing applications often seems to be an insurmountable problem – luckily, DB2 has a ‘statement concentrator’ feature that can remove some of the sting from gratuitous use of literals.  It can be turned on at the database level, but it’s a better idea to do it at the connection level.
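
At the database level, turning it on looks something like this (MYDB is a placeholder; the client drivers expose an equivalent connection-level setting, such as the statementConcentrator property in the JDBC driver):

  -- Have DB2 replace literals with parameter markers when compiling dynamic SQL
  UPDATE DB CFG FOR MYDB USING STMT_CONC LITERALS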

If you’re not seeing a ton of SQL with literals, then it could simply be that there is a manageable number of parameterized statements, but just too many for the current package cache size – in which case increasing the package cache (PCKCACHESZ) may be all that’s needed.

Q7: Are there problems with individual statements?

Having looked over some main culprits at the system level (which can affect many statements at once), the next thing to do is look at individual statements, to see if there are significant problems among them.  Dbsummary isn’t really geared to look at individual statements, so we’ll have to branch out a bit.  Following the same principle as for the system itself, we want to concentrate on the hottest, most active statements (where the system is spending most of its time).  As I mentioned above, a good basic source for statement-level information is monreport.pkgcache, which gives a number of top-10 lists of SQL statements – by CPU usage, by number of executions, by number of rows read, etc. – and generally both total and average values.  Statements which stand out at the top of one or more of these lists are usually the ones we want to pay most attention to.  You can also query mon_get_pkg_cache_stmt directly, to compile larger lists, possibly ordered by other metrics than the small number that monreport.pkgcache supports – for example, the top 25 statements by average execution time.
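
A minimal sketch of that kind of query (dynamic SQL only; the metrics are cumulative for as long as a statement stays in the package cache):

  -- Top 25 dynamic statements by average execution time, in milliseconds
  SELECT num_executions,
         DECIMAL(stmt_exec_time, 31, 2) / NULLIF(num_executions, 0) AS avg_exec_ms,
         rows_read,
         CAST(SUBSTR(stmt_text, 1, 100) AS VARCHAR(100)) AS stmt_snippet
  FROM TABLE(MON_GET_PKG_CACHE_STMT('D', NULL, NULL, -2))
  ORDER BY avg_exec_ms DESC
  FETCH FIRST 25 ROWS ONLY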

A full treatment of how to investigate hot statements would need its own blog entry (as well as other tools besides those in the monreport module that we’re focusing on here), but an important basic best practice is to ensure that catalog statistics for the objects referenced in hot statements are up to date.  Without accurate statistics, the DB2 optimizer may not be able to arrive at the optimal access plan.  Something else to consider is that often the cost of a statement (CPU usage, run time, etc.) is proportional to the number of rows read, so techniques to reduce this (e.g. indexing, range partitioning, multi-dimensional clustering, etc.) are often helpful in improving the overall efficiency and run time of a statement.  As mentioned above, the Design Advisor can be helpful in this area.
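
For the statistics point, the classic first step is a RUNSTATS on the tables behind the hot statements (the schema and table name here are placeholders):

  -- Refresh table, distribution and detailed index statistics
  RUNSTATS ON TABLE myschema.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL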

 

Wrapping up

Monreport.dbsummary provides a very useful set of basic metrics that help us tackle common performance bottlenecks in DB2.  In fact, it provides many other metrics that we didn’t look at here – for example, time spent waiting on the network while receiving input or sending back result sets (very useful for cloud-based environments), plus basic throughput in transactions and statements per second, package cache activity, per-connection and per-member statistics, etc.  It does have some idiosyncrasies – the use of ‘percents of percents’ comes to mind – but the overall benefits of things like built-in delta calculations make it a tool that I use all the time.

In cases where additional data & capability are needed, dbsummary makes a good branching-off point to other tools like monreport.pkgcache, the Design Advisor and the mon_get table functions.  No matter which tools you end up using, whether for first aid or full-on brain surgery, the ‘decision tree’ approach can be very helpful, since it helps avoid spending time on areas which aren’t likely part of the real problem.  For example, right near the beginning, we determined whether the system was mainly waiting, or mainly processing.  If wait time dominates, why spend time looking for a CPU bottleneck?  In our simple examples, the amount of time saved this way is small, but when problems and investigations get more complicated, the advantages are much larger.

 

 

 

 

