Some musings on workload management best practices for homogeneous workloads (e.g. OLTP)

This is a follow-on entry to this one discussing how workload management and DB2 pureScale get along...

I have been making notes for a while now on potential contents for a prescriptive best practices document for OLTP-style workloads, similar to what I produced a few years back for warehouses. I never seem to find the time to put it all together, so I thought I would surface some of the primary thoughts in draft form through this blog for public review and comment... enjoy ;)

 


The available DB2 workload management best practices document (Implementing DB2® Workload Management in a Data Warehouse) focuses on environments in which DB2 is processing a mixed workload, which I refer to generically as a heterogeneous workload: one where the incoming work varies greatly in size and impact to the system.

In that environment, the initial workload management paradigm recommended is to group the work into independent subsets based on each subset's estimated impact to the system, and then to place resource controls (e.g. the CONCURRENTDBCOORDACTIVITIES concurrency threshold) on each subset such that the greater the estimated impact, the more restrictive the control. The basic idea is to manage the large ones and let the small ones play.
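As a sketch of that warehouse-style paradigm, the DDL below caps a heavy-query service class at a much lower concurrency than a light-query one. The service class names and limits are purely illustrative, not prescriptive:

```sql
-- Heavy queries: allow only 5 to run at once, queue the rest
CREATE THRESHOLD CAP_HEAVY
  FOR SERVICE CLASS HEAVY_QUERIES ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONCURRENTDBCOORDACTIVITIES > 5
  AND QUEUEDACTIVITIES UNBOUNDED
  CONTINUE;

-- Light queries: a much looser cap
CREATE THRESHOLD CAP_LIGHT
  FOR SERVICE CLASS LIGHT_QUERIES ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONCURRENTDBCOORDACTIVITIES > 200
  AND QUEUEDACTIVITIES UNBOUNDED
  CONTINUE;
```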

This approach does not work well when the incoming work does not vary greatly in terms of type or estimated impact such as is the case with many OLTP workloads; I refer to this type of workload generically as a homogeneous workload.

Yet, the same business issues and requirements face DB2 when processing these workloads: we still need to allocate resources efficiently and to ensure system stability through the various peaks and valleys of demand. So, as in the warehouse, we should still have a workload management infrastructure in place to allow these objectives to be met.

Here is a first draft of a proposal for a workload management paradigm aimed at homogeneous workloads.

 

Best practices approach for managing homogeneous workloads

A homogeneous workload is one where the incoming work does not vary significantly in estimated resource impact to a DB2 system: the activities all effectively look similar and cannot be treated differently based solely on their impact to the system. A different approach is needed for this type of workload.

The simplest approach is to apply a control on the database as a whole. For example, if the system can handle 1000 concurrent queries, we apply a concurrency threshold of 1000 on all queries so that the system is not overloaded.
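A minimal sketch of such a database-wide cap (the threshold name is illustrative):

```sql
-- Cap total concurrent coordinator activities for the whole database;
-- work beyond the limit is queued rather than rejected
CREATE THRESHOLD DB_WIDE_CAP
  FOR DATABASE ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONCURRENTDBCOORDACTIVITIES > 1000
  AND QUEUEDACTIVITIES UNBOUNDED
  CONTINUE;
```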

A couple of issues arise with this type of simplistic approach:

  1. For sub-second queries, the overhead of applying a concurrency threshold will cause a degradation in performance, so this approach should only be taken when dealing with longer queries or when the system is in actual danger of being overrun and becoming inaccessible.
  2. This approach assumes that there is no differentiation in business priority for incoming work.

 

 

A couple of alternative approaches can be used to address the shortcomings of the simplistic model:

Priority Aging

A paradigm where work is lowered in priority the longer it runs, and lower priority work gets less resource.

All work is treated equally from the start but the longer something runs, the less resource it gets.

Block Allocation

A paradigm where a specific amount of resource is put aside for each different set of work based on its inherent (initial) priority.

The priority of work is known when it enters the system and, upon entry, it is guided to a set of pre-defined resources allocated for that priority where it will execute until completed.

This has also been referred to as a "pipeline" or "highway" approach, where the size of the "pipeline" or the number of lanes in the "highway" (both of which determine how much resource is given) is based on the business priority of the work going through it.

 

Priority aging

This approach leverages the more sophisticated CPU controls introduced with the WLM dispatcher in DB2 10.1 (see this blog entry) to permit an effective implementation of a popular, historical approach to workload management where work comes into the system at the highest level of priority and then ages to lower priorities over time. The lower the priority, the less resource is given, with the result that work gets less resource the longer it stays in the system.

Customers who like this approach do not fear the system being swamped and have no other workload management requirements; they just want to bias the system to give preference to shorter queries. This approach also has the benefit of not imposing the overhead required for concurrency controls on sub-second queries.

The basic model is that as individual activities enter the system, they are mapped to the highest priority service subclass, where they begin executing. After a period of time, a set amount of resource consumption, or some other criterion is met, the activity is deemed to have exhausted its welcome in this service class and is moved to the next (lower) service subclass.

The DB2 implementation is a series of service subclasses within the same superclass, with the first service subclass representing high priority and each subsequent subclass representing a lower priority. The amount of CPU resource allocated to each service class is controlled using WLM dispatcher CPU limits or CPU shares, with the high priority service subclass getting the majority portion and each class below it getting less.
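A sketch of such a hierarchy using WLM dispatcher soft CPU shares. The class names and share values are hypothetical, and this assumes the dispatcher has been enabled (the wlm_dispatcher database manager configuration parameter):

```sql
CREATE SERVICE CLASS OLTP_SC;

-- Descending soft CPU shares: HIGH_SC gets the lion's share under contention
CREATE SERVICE CLASS HIGH_SC   UNDER OLTP_SC SOFT CPU SHARES 6000;
CREATE SERVICE CLASS MEDIUM_SC UNDER OLTP_SC SOFT CPU SHARES 3000;
CREATE SERVICE CLASS LOW_SC    UNDER OLTP_SC SOFT CPU SHARES 1000;
```

Soft shares only constrain a class when the CPU is contended, which fits the aging model: an otherwise idle system still lets low priority work use spare cycles.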

DB2 provides mechanisms, in the form of DB2 thresholds, to support aging work based on either CPU consumption or rows read (an analog of I/O consumption). One of these thresholds is defined on each service subclass and, when a threshold is violated, the offending activity is remapped to the next service subclass representing a lower priority.

(Note that I do not mention concurrency controls as a possible control mechanism for this approach. This is because concurrency thresholds do not reflect remapped activities and so all activities would simply be controlled by the first concurrency threshold they encountered when they entered the system.)

 

[Figure: priorityaging.jpg]

 

In this example, every activity gets to spend its first five seconds of CPU time in the high priority service class. Then the CPUTIMEINSC threshold kicks in and the activity is remapped to the medium priority service class. And so on.
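The thresholds driving an example like this might look as follows, assuming hypothetical subclasses HIGH_SC, MEDIUM_SC, and LOW_SC under a superclass OLTP_SC (threshold names and check intervals are illustrative):

```sql
-- After 5 seconds of CPU time in HIGH_SC, demote the activity to MEDIUM_SC
CREATE THRESHOLD AGE_HIGH_TO_MEDIUM
  FOR SERVICE CLASS HIGH_SC UNDER OLTP_SC ACTIVITIES
  ENFORCEMENT MEMBER
  WHEN CPUTIMEINSC > 5 SECONDS CHECKING EVERY 1 SECOND
  REMAP ACTIVITY TO MEDIUM_SC;

-- After another 5 seconds in MEDIUM_SC, demote to LOW_SC
CREATE THRESHOLD AGE_MEDIUM_TO_LOW
  FOR SERVICE CLASS MEDIUM_SC UNDER OLTP_SC ACTIVITIES
  ENFORCEMENT MEMBER
  WHEN CPUTIMEINSC > 5 SECONDS CHECKING EVERY 1 SECOND
  REMAP ACTIVITY TO LOW_SC;
```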

When work reaches the lowest priority service class, it could stay there until completion, or you could add an additional threshold on the low priority class that moves the activity back up to the high priority class after some other value of CPU time (e.g. 60 seconds) to start the whole cycle over again. Or you could choose to simply cancel it (e.g. STOP EXECUTION) depending on your mood :)

This is just one example implementation. You can do the same thing using the ROWSREADINSC threshold, and you can use more or fewer priority levels.

The advantage of this approach is that it is a simple configuration while still introducing a very natural prioritization scheme that biases itself in favour of short running queries. It also does not impose any additional overhead on short running activities.

The downside is that if work piles up in the lowest priority service class, this priority aging approach is more vulnerable to the unintended consequences that naturally occur when work is slowed down. Once a piece of work is allowed to start executing, it not only consumes CPU and I/O resources but also acquires a number of other resources such as memory and locks. Slowing down its processing can extend how long those resources are held, which can cause spikes in resource usage levels or slow-downs in other work due to lock waits, inefficient sorts, etc.

 

Block Allocation (aka Pipeline)

For customers who want to apply different business priorities to incoming work, the block allocation approach, also known by other names such as the pipeline approach, can be used to ensure that system resource priority reflects the business priority of the work being executed within the system.

The basic concept of this approach is that a series of different resource blocks or pipelines are created, with each one reflecting a specific business priority assigned to the incoming work. The actual system resources assigned to the block/pipeline reflect that business priority, with more resources being given to higher priority work. The end result is that each business priority is effectively assigned a different capacity for work, which affects the throughput and performance characteristics of any work assigned to it.

The DB2 implementation is a set of service classes (super or sub, it doesn't matter) with each representing a different business priority and each getting a different amount of system resource. The actual resource control can be concurrency thresholds, CPU limits, CPU shares, or some mixture of these. DB2 workloads are created to represent incoming connections of different business priorities and aim their work at the appropriate service class.
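A sketch with two priority tiers, routing connections by application name. All of the class names, workload names, and application names here are hypothetical:

```sql
-- Two tiers with different CPU allocations
CREATE SERVICE CLASS GOLD_SC   SOFT CPU SHARES 7000;
CREATE SERVICE CLASS SILVER_SC SOFT CPU SHARES 3000;

-- Route connections to the tier matching their business priority
CREATE WORKLOAD ORDERS_WL  APPLNAME ('orderentry') SERVICE CLASS GOLD_SC;
CREATE WORKLOAD REPORTS_WL APPLNAME ('reporting')  SERVICE CLASS SILVER_SC;

-- Connections can only be assigned to a workload they have USAGE on
GRANT USAGE ON WORKLOAD ORDERS_WL  TO PUBLIC;
GRANT USAGE ON WORKLOAD REPORTS_WL TO PUBLIC;
```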

 

[Figure: TieredWLM.jpg]

 

This diagram shows an example of a tiered service configuration similar to the one shown in the Stage 3 scenarios in the DB2 workload management best practices document for data warehouses. The primary differences are that the work in each tier (i.e. each service superclass) is not further separated into different subclasses and the concurrency controls are applied at the superclass level.

In this implementation, one can think of each tier being a different size pipeline or highway with the concurrency level determining the size.
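In threshold terms, the superclass concurrency limit is what sets the pipeline's width. A sketch, using hypothetical GOLD_SC and SILVER_SC superclasses and illustrative limits:

```sql
-- The GOLD pipeline is wider than the SILVER one
CREATE THRESHOLD GOLD_WIDTH
  FOR SERVICE CLASS GOLD_SC ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONCURRENTDBCOORDACTIVITIES > 300
  AND QUEUEDACTIVITIES UNBOUNDED
  CONTINUE;

CREATE THRESHOLD SILVER_WIDTH
  FOR SERVICE CLASS SILVER_SC ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONCURRENTDBCOORDACTIVITIES > 100
  AND QUEUEDACTIVITIES UNBOUNDED
  CONTINUE;
```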

The advantage of this approach is that you can clearly assign, monitor, and control the throughput and performance of work with different business priorities which allows you to make the system respond appropriately to the needs of your business.

The main disadvantage of this approach is that you need to be able to identify and create workloads for the different sets of work according to their business priority in order to map the work appropriately.
