WLM Dispatcher: Choosing between CPU shares and CPU limits

The premier workload management enhancement in DB2 10.1 for LUW was the introduction of the Workload Management Dispatcher, or "WLM dispatcher". This completely new workload management infrastructure allows DB2 itself to offer explicit control over CPU access, similar to that offered by operating system products like AIX WLM, but with more flexible options. The dispatcher technology effectively replaced the need for integration with AIX WLM or Linux WLM in the majority of cases where those products had been used with DB2 9.7 and earlier.

The key benefits of the WLM dispatcher technology are that it is available on all DB2 platforms and it offers a much simpler interface for controlling CPU. Two new resource control attributes were introduced on service classes to provide direction to the WLM dispatcher: CPU limits and CPU shares.
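Before either control takes effect, the dispatcher itself must be enabled at the instance level. A minimal sketch, using the database manager configuration parameters documented for DB2 10.1 (the second parameter is only needed if you intend to use CPU shares):

```sql
-- Enable the WLM dispatcher for the instance
UPDATE DBM CFG USING WLM_DISPATCHER YES;

-- Additionally enable CPU shares control
-- (CPU limits are honored even without this)
UPDATE DBM CFG USING WLM_DISP_CPU_SHARES YES;
```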

CPU limits impose absolute, fixed CPU resource allocations (as a percentage of the CPU available to DB2). If a service class is limited to 10%, that limit applies irrespective of what else is running on the system: even if the rest of the system is idle, a service class with a 10% limit is still only allowed to use 10% of the CPU. This is similar to the hard limit feature offered by AIX WLM.
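As a sketch, the 10% cap described above could be applied like this (the service class name REPORTING is hypothetical):

```sql
-- Cap this class at 10% of the CPU available to DB2,
-- even when the rest of the system is completely idle
ALTER SERVICE CLASS REPORTING CPU LIMIT 10;
```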

CPU shares are a relative control of CPU consumption. The relative nature of the allocation is derived from the relative number of shares allocated among the service classes. The amount of CPU made available to a service class can change over time and will vary depending on how many other service classes are active at the same time.

CPU shares are dynamic and react to what other competing work is actually running; the allocated amount of CPU changes as competing work comes and goes. Limits are fixed and do not change (except by operator intervention).
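To make the relative nature concrete, here is a sketch with two hypothetical service class names (1000 soft CPU shares is the default allocation in DB2 10.1):

```sql
-- When both classes are busy, PROD_HIGH is entitled to
-- 3000 / (3000 + 1000) = 75% of the CPU, PROD_LOW to 25%.
-- With soft shares, when PROD_LOW goes idle, PROD_HIGH can
-- grow to consume all of the CPU available to DB2.
CREATE SERVICE CLASS PROD_HIGH SOFT CPU SHARES 3000;
CREATE SERVICE CLASS PROD_LOW  SOFT CPU SHARES 1000;
```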


Choosing between the two

A common question is: which one should you use? A CPU limit is easier for humans to understand, and the CPU shares concept takes a bit of thought to get straight in your head... but the CPU shares approach is by far the more powerful and better suited to real-world workloads.

The simple rule of thumb I use for choosing between these two options is to always use CPU shares unless you:

  • want to put an explicit, permanent maximum on the CPU consumption of a subset of work
  • or want a fixed sub-division of resources between different service classes on a database.

In general, CPU shares allow you to achieve a more flexible, elastic configuration.

Note that CPU shares and CPU limits can be used together in different combinations to achieve various objectives. For example, consider a scenario where we allow low priority work to consume the whole machine when high priority work is absent. If we wanted to preserve some minimum reserve of CPU that is always available for high priority work (e.g. enough for it to respond well to an initial request when it starts up again), we could apply a CPU limit to the low priority work. That limit caps its CPU consumption even when the CPU shares allocation would allow more.
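A sketch of that combination, with hypothetical class names: the low priority class keeps its shares, but also gets a ceiling that holds 30% of the CPU in reserve for the high priority class:

```sql
-- BATCH can grow toward the whole machine via its shares,
-- but never beyond 70%, leaving 30% in reserve for OLTP
ALTER SERVICE CLASS BATCH CPU LIMIT 70;
ALTER SERVICE CLASS OLTP SOFT CPU SHARES 4000;
```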

I'll cover some of the differences between hard and soft CPU shares in a future entry...
