DB2 - L


Yet another bufferpool question

  • 1.  Yet another bufferpool question

    Posted Apr 11, 2022 04:09 PM
    Hi guys,

we are reviewing our buffer pool layout these days. Since we aren't experts in this topic, we hired an expert to support us.
    Now he came up with some points:
    - Currently we have 2 big buffer pools for 4K data (one for tables and one for indexes). We have a hit ratio (w/o p/f) during our online window between 85%-93%. Is that really too low? The hit ratio with p/f is between 70%-90%.
    - Our expert is pushing us to move from 4K buffer pools to 16K buffer pools. It seems that IBM has recommended this configuration over the last years. Honestly, I didn't attend such sessions at IDUG, but I have never heard about such a move. My latest information is that Db2 reads the data in 4K pages. I don't see advantages besides for tables with long record lengths (let's say > 1K).

    Besides, I read some older comments in this forum. It seems that (for some of you) the I/O rates are more important than the hit ratios. How do I know which I/O rates are okay and which ones aren't?

    Wolfgang Beikircher, Raiffeisen Information Service

  • 2.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 07:33 AM



    Let me answer the last question first, before Joel or another wizard pops in.


    "Lower I/O rates are always good.  The lower, the better, throughout process execution." I/O costs more than CPU, other things being equal, in both electricity/heat and in elapsed time.


    It's possible to hit a point of diminishing returns with complexity-of-operation (and increasing analyst time costs) replacing I/O, and a CPU increase from internal handling (instead of I/O) causing an increase in 4HRA and licensing charges.  That's what you're worried about, right?  That's a ways down the road in tuning, enough so that I've never seen any evidence of it happening in my thirty-two years (!) in DB2-DBA. 


    HOWEVER: Lead indicator - if you're to the point that it's actually a valid worry, you should expect that the first symptom will be a sharp rise in TCB time for your task.  It may still be worthwhile, if elapsed-time is your performance metric!  Faster reply to the customer, and/or shorter critical path during the batch window, are always important to the enterprise's bottom line.


    As to your other question (about 16K BPs), I've never seen a presenter who advocated that, either.  Watch this space, and we'll see if anyone can point back to it.





    Philip Sevetson

    Computer Systems Manager


    5 Manhattan West

    New York, NY 10001


    917-991-7052 m

    212-857-1659 f



  • 3.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 08:00 AM
    Howzit Wolfgang?

    I concur with Phil; I can't say I've seen or heard any proponents of the 16K bufferpool school of thought. Reduced I/O is key, and bufferpool residency is more important than hit ratios IMHO. In my experience: separate indexes from tablespaces; separate random access from sequential access; tune bufferpool thresholds to meet application requirements; and at the end of the day, look at the metrics to guide your choices.

    Best of luck


    Bruce Williamson
    Commonwealth Bank of Australia

  • 4.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 08:22 AM
    Hey guys, thanks for your replies.

    For the 16K buffer pool I'm still searching the IDUG archives. Since I'm contradicting an expert, I would like to have some arguments. :)
    The only presentation I found was from 2020 and talked about compressed indexes. Those should be placed in 8K or, better, 16K buffer pools. But since we don't compress indexes, that argument doesn't apply.

    As for the I/Os, I know that the lower they are, the better. The thing is, I don't know if our rates are already low. So we are looking at the hit ratio.

    Bruce, we took exactly your approach: separated indexes and tablespaces (it was already that way in the existing design). Separating random access from sequential is a bit harder. Here we rely on the explain information to tell whether the tablespace/index is read via sequential prefetch. Do you have a better approach?

    Wolfgang Beikircher, Raiffeisen Information Service

  • 5.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 08:43 AM
    For the indexes, I am slowly migrating the very large and very busy indexes to larger buffer pools, primarily 8K pools. There is something called index fan-out, which increases with a larger page size: the basic notion is that non-leaf pages will point to more underlying pages. This improves page residency times for non-leaf pages and therefore improves buffer efficiency and I/O rates, regardless of whether you choose to compress indexes. Hopefully the "expert" is looking at things like FREEPAGE and PCTFREE, clustering, compression, I/O patterns, quantity of SQL, SQL quality, etc., and not just buffer pools, which, in my opinion, are a less-than-5% return on investment. As always, your results may vary. Good luck!
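    To make the fan-out idea concrete, here is a back-of-the-envelope sketch (not Db2 internals; the per-entry and per-page overhead figures are purely assumed for illustration) showing how entries per non-leaf page, and therefore the number of index levels, change with page size:

    ```python
    import math

    def fanout(page_size_bytes, key_len, entry_overhead=10, page_header=100):
        """Approximate non-leaf fan-out: how many child-page entries fit
        on one non-leaf page. Overhead figures are assumed, not Db2's."""
        return (page_size_bytes - page_header) // (key_len + entry_overhead)

    def index_levels(leaf_pages, fo):
        """Non-leaf levels needed so the tree can address leaf_pages pages."""
        return max(1, math.ceil(math.log(leaf_pages, fo)))

    for size in (4096, 8192, 16384, 32768):
        fo = fanout(size, key_len=20)
        print(f"{size:>5} bytes/page: fan-out {fo}, "
              f"{index_levels(1_000_000, fo)} non-leaf level(s) for 1M leaf pages")
    ```

    The larger the fan-out, the fewer non-leaf pages there are to keep resident, which is where the buffer efficiency gain comes from.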

    Here is an article I wrote concerning index performance that attempts to explain index fan-out.

    Dan (not an expert, but rather an anti-consultant)

  • 6.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 09:47 AM
    I'm going to go out on a limb here, but generally speaking, if applications are reasonably well designed, online applications should be predominantly short-duration random access, while batch is more typically longer-running sequential (hopefully via clustering sequence). Depending on workload patterns, don't disregard changing bufferpool thresholds to suit online/batch workloads.

    At a high level IFCID 199 will give you a lot of useful info regarding bufferpool/dataset I/O statistics.

    What performance monitoring tools do you have? No I'm not a vendor but it may give you more options …

    Bruce Williamson
    Commonwealth Bank of Australia

  • 7.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 09:33 AM
    I have always determined which bufferpool size to use based on average record length, with the idea being to fill each page as full as possible without wasting space. Db2 has a limit of 255 records per page, so you wouldn't want to put a table or index with a small average record length into an 8K, 16K, or 32K bufferpool, because you would be wasting page space and negatively impacting bufferpool density. I use all four page sizes. Mostly we use 4K and 32K page sizes, but we do have a few tables using 8K and 16K page sizes. We do have one index with a large record length that I put into a 32K bufferpool. That change alone increased performance on that application around 1000%.
    Filling the page with as close to 255 rows as possible can decrease getpages and potentially decrease I/O. I would recommend studying your tables and average record lengths and moving tablespaces and possibly indexes to the appropriate bufferpool/page size.
    As for how Db2 reads the data: it used to always read in 4K pages, but not anymore. Db2 reads 4K bufferpools in 4K pages, 8K bufferpools in 8K pages, 16K bufferpools in 16K pages, and 32K bufferpools in two 16K pages. So proper page size does make a difference on getpages.

    Russell Peters, Central Technology Services

  • 8.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 10:19 AM
    Going back to the original questions:
    - Is your hit ratio too low?
      - As others have said (including you), it really is the I/O rates that matter, or even just the # I/Os.  I/Os take time and it is time that is the measure of performance that matters.  Hit ratio can be an indication that you may have more I/O than ideal but by itself isn't the problem.
      - The first thing to look at if your hit ratio is low is to consider if an increase in the bufferpool size (i.e. more pages, not bigger pages) would help.
    - Would going to 16K pages help?
      - In general it is the total size of the pool that matters more than the size of each page.  The page size could help or hurt depending on the access pattern.  If you are generally reading data from nearby (i.e. a small # of) pages, then a larger page size may reduce the # I/Os needed. But if the data is accessed from random pages spread throughout a large table, then reading 16K at a time may use more buffer space than is needed.
    - Finally, if someone makes a blanket recommendation like that without more detail and suggestions of how to make a determination, they are likely not an "expert" and I would tend to not trust anything they say.
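    For anyone following along, a minimal sketch of the two hit-ratio variants mentioned in this thread (counter names are assumed, not exact Db2 statistics field names); counting prefetched pages as misses is what lets the ratio go negative when prefetch reads in pages the application never touches:

    ```python
    def hit_ratio_no_prefetch(getpages, sync_reads):
        """Hit ratio counting only synchronous reads as misses."""
        return 100.0 * (getpages - sync_reads) / getpages

    def hit_ratio_with_prefetch(getpages, sync_reads, prefetch_pages):
        """Also counts prefetched pages as misses; can go negative when
        prefetch brings in more pages than the application ever touches."""
        return 100.0 * (getpages - sync_reads - prefetch_pages) / getpages

    print(hit_ratio_no_prefetch(10_000, 900))            # 91.0
    print(hit_ratio_with_prefetch(10_000, 900, 11_000))  # -19.0
    ```

    This is why the ratio by itself says little: the same workload can look fine or terrible depending on which formula you use, while the sync I/O count is what actually costs time.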


    Joe Geller

  • 9.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 11:23 AM

    Hit ratio has always been interesting, and a mostly useless metric.  Especially so when dynamic prefetch appeared on the scene.

    Then DB2 would frequently read pages into the pool, and the application never accessed them.  The hit ratio would sometimes even go negative.


    Over the past dozen years +, I see increasing examples of "death by random I/O" as the size of objects has increased exponentially, and access became more random.


    The saviors here are, first, having gigabytes of memory to dedicate to pools, and second, the huge performance advances in DASD, i.e. SSD.


    Now we frequently see average synch I/O times of 1-2 ms, where they used to be 18-20+ ms 30 years ago.


    Using larger page sizes with more data rows per page is a great way to reduce physical reads, and the cpu cost of I/Os.

    It does, however, require application analysis to determine whether a larger page size will be a benefit.





    Joel Goldstein

    Responsive Systems



    Buffer Pool Tool® for DB2

       The only one that works !!




  • 10.  RE: Yet another bufferpool question

    Posted Apr 12, 2022 05:00 PM
    There's really no such thing as an I/O rate that's objectively "okay" or "not okay". You would need to know the environment to determine that.

    I personally don't think a HR of 85-93% is that bad on a huge catch-all pool. Joel is, of course, right that a hit ratio by itself doesn't really tell you anything. It could just be the nature of the data access you're doing that you don't get a lot of re-use. You might be able to divine whether your pools are configured adequately from a combination of hit-ratio, residency times, and some info on the working-sets from heavily-accessed objects in the pool (SMF199).

    As far as moving EVERYTHING to 16K pools...I wouldn't accept a blanket recommendation like that unless the "expert" can provide a thesis for why it might help you, and that would likely be on a case-by-case basis.

    Can I come up with a reason why NOT to use 16K pages? Well...if you are using LOCKSIZE PAGE, you could end up locking more rows in a single page and giving yourself concurrency issues. If you can't make the pool large enough to avoid significant page stealing, one stolen page would mean a lot more rows would be out-of-memory with each "steal" event.

    If you DO find a link to the 16K guidance, post it up...I'd like to see IBM's justification for it, for my own edification.

    Mark Wieczorkowski
    Db2 Systems Programmer, SSA/DCS
    Principal - Solipsistic, LLC

  • 11.  RE: Yet another bufferpool question

    Posted May 05, 2022 02:18 AM
    Hi guys,

    I want to thank you all very much for your input. It helped me greatly in the discussion!
    I was able to convince the project team to switch focus: away from "just focus on the hit ratio" towards "let's look at the synch I/O". Since we have a mixed workload, I think this is a good solution.

    Wolfgang Beikircher, Raiffeisen Information Service