LOGBUFSZ

Gopalan Venkatramani

LOGBUFSZ


Hi all - My production database snapshot shows me this,

 

db2 get snapshot for database on CMSDB | grep "Log pages"
Log pages read = 7124
Log pages written = 78773292

I know it has something to do with LOGBUFSZ. Has anyone taken steps that succeeded in bringing the reads down to 0?

I've never done this tuning before, and I know these snapshots are old school compared to the new-generation MON_GET functions, lol.

Kindly assist...

 

Log buffer size (4KB) (LOGBUFSZ) = 2160
Log file size (4KB) (LOGFILSIZ) = 49152
Number of primary log files (LOGPRIMARY) = 240
Number of secondary log files (LOGSECOND) = 12

Regards

Glenn

DB2 LUW

Ian Bjorhovde

LOGBUFSZ
(in response to Gopalan Venkatramani)
Glenn,

Log pages read is incremented in 2 cases:

1 - When a transaction is rolled back, and DB2 has to read the log pages from disk to undo it.
2 - When CUR_COMMIT is enabled, and a transaction is trying to read the committed version of the row, and the log record(s) are no longer in the log buffer.


Increasing LOGBUFSZ can help with the second case, but you can’t prevent the first case - unless you don’t allow applications to perform ROLLBACK ;-)

Your goal should be to minimize log pages read, but achieving a value of zero may not be possible.  You have very few log pages read (relative to number of pages written), which is what you want to see.  Are you having an issue with logging on the system right now?
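To see whether the second case even applies to you, it's worth checking whether currently-committed semantics are enabled. A quick sketch (database name CMSDB taken from your post; the exact cfg output text can vary by version):

```shell
# Check whether currently-committed semantics (CUR_COMMIT) are enabled
db2 get db cfg for CMSDB | grep -i "CUR_COMMIT"
```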


Ian Bjorhovde
IBM Gold Consultant




Gopalan Venkatramani

RE: LOGBUFSZ
(in response to Ian Bjorhovde)

Ian - Thanks for the details. I wasn't aware of the first point; that's a good piece of information :) But what do you mean by an issue with logging on the system? If you mean transaction logging, I don't see any issues with the transaction logging part of the DB. Since this is happening in Prod, I'm a little worried.

 Glenn

DB2 LUW

Robert Wright

RE: LOGBUFSZ
(in response to Gopalan Venkatramani)

Glenn

Apologies for the late reply but hopefully you'll find this useful even if you haven't already converted from the old to the new interfaces :)

You haven't mentioned which version of Db2 you are using. If it is 10.1 or later, have you considered using the MON_GET_TRANSACTION_LOG table function? It has a number of fields that may help you. For example:

  • NUM_LOG_BUFFER_FULL - is this non-zero? If so, it's an indicator that you could increase your LOGBUFSZ
  • CUR_COMMIT* - these will help you decide whether the log reads are related to CUR_COMMIT processing
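For instance, a sketch of such a query (column names are as I recall them from the 10.5 documentation - verify them against your fixpack level):

```sql
-- One row per member; pass -1 for the current member, -2 for all members
SELECT MEMBER,
       NUM_LOG_BUFFER_FULL,        -- times agents had to wait for log buffer space
       LOG_READS,                  -- log pages read from disk
       CUR_COMMIT_DISK_LOG_READS,  -- disk log reads driven by currently-committed
       CUR_COMMIT_TOTAL_TIME       -- time spent on currently-committed processing
FROM TABLE(MON_GET_TRANSACTION_LOG(-2)) AS T
```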

Other metrics to identify the number of rollbacks are available in 10.5 and later via the MON_GET_DATABASE table function:

  • TOTAL_APP_ROLLBACKS
  • INT_ROLLBACKS
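A sketch of pulling those counters, with a rough rollback percentage thrown in (again, column names per the 10.5 documentation as I recall them):

```sql
-- Rollback counters plus the share of transactions that rolled back
SELECT TOTAL_APP_COMMITS,
       TOTAL_APP_ROLLBACKS,
       INT_ROLLBACKS,
       DEC(TOTAL_APP_ROLLBACKS * 100.0 /
           NULLIF(TOTAL_APP_COMMITS + TOTAL_APP_ROLLBACKS, 0), 5, 2) AS PCT_ROLLBACK
FROM TABLE(MON_GET_DATABASE(-2)) AS T
```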

Finally, how much of an impact are the log reads having? This might determine whether or not you will gain much from your analysis. To understand this you can use the Time Spent metrics, which are recorded at a number of levels. A few calculations will quickly give you the percentage of your total request time spent on rollback processing time, log buffer wait time and log disk wait time.
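One way to sketch that last calculation, assuming the time-spent metrics are being collected (LOG_BUFFER_WAIT_TIME, LOG_DISK_WAIT_TIME and TOTAL_RQST_TIME are the column names as I recall them):

```sql
-- Percentage of total request time spent waiting on the transaction log
SELECT DEC((LOG_BUFFER_WAIT_TIME + LOG_DISK_WAIT_TIME) * 100.0 /
           NULLIF(TOTAL_RQST_TIME, 0), 5, 2) AS PCT_LOG_WAIT
FROM TABLE(MON_GET_DATABASE(-2)) AS T
```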

HTH

Regards

Rob

 

MetricsAnalysis New Zealand, Limited

e: [login to unmask email]
m: +64 21 726 437
w: https://www.metricsanalysis.co.nz/unique-value-proposition
Edited by: Robert Wright @ Dec 12, 2017 - 02:38 PM (Pacific/Wellington)

Gopalan Venkatramani

RE: LOGBUFSZ
(in response to Robert Wright)


Hi Robert - This is good information. BTW, I'm on 10.5 FP7, so let me check and I will get back to you.

 

Glenn

DB2 LUW

Gopalan Venkatramani

RE: LOGBUFSZ
(in response to Gopalan Venkatramani)

Hi Robert - This is what I found 

TOTAL_APP_ROLLBACKS        INT_ROLLBACKS
-------------------        -------------
           37673214             32967493

 

 NUM_LOG_BUFFER_FULL
--------------------
0

 

So since NUM_LOG_BUFFER_FULL is zero, I assume we don't need to touch the parameter... Let me know.

 

 

Glenn

DB2 LUW

Robert Wright

RE: LOGBUFSZ
(in response to Gopalan Venkatramani)

Glenn

Although it looks like a high number of rollbacks, the interface returns the metrics since the database was first connected to/activated.

What is the percentage of rollbacks to total transactions? If it is high then you may have an application design issue.

What are your CUR_COMMIT stats looking like? If these are low in comparison to SELECTs using CS then you could consider disabling CUR_COMMIT. Disabling CUR_COMMIT may reduce the amount of data logged, allowing more log records per buffer, meaning the chance of a log record being in the buffer for your rollbacks is higher.
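If the stats did point that way, the change itself is a one-liner - but note this is a sketch only, and disabling currently-committed changes CS locking behaviour (readers can again block behind writers), so trial it in a test environment first:

```shell
# CMSDB taken from the earlier post; valid CUR_COMMIT values include ON and DISABLED
db2 update db cfg for CMSDB using CUR_COMMIT DISABLED
```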

Note that you should monitor and analyse your system over a time period before deciding what actions to take.

Your log buffer size isn't that large at 2160 pages. Unless you are on a memory-restricted system, you could consider increasing it to see whether it has any effect; this is a core part of your system performance. Remember to increase DBHEAP by the same amount, as the log buffer is allocated from there.
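As a sketch, with example values only (here doubling LOGBUFSZ from 2160 to 4320 4K pages - pick a size based on your own monitoring):

```shell
# Example values only: double LOGBUFSZ from 2160 to 4320 4K pages
db2 update db cfg for CMSDB using LOGBUFSZ 4320
# If DBHEAP is a fixed value rather than AUTOMATIC, raise it by the same amount
db2 get db cfg for CMSDB | grep -i DBHEAP
```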

You should consider doing a workload analysis before you decide on your action(s). There's an IBM Best Practice document that you can use to guide you. It walks through a number of scenarios and the analysis to be performed. 

Regards

 

Robert Wright
MetricsAnalysis New Zealand, Limited
e: [login to unmask email]
m: +64 21 726 437
w: https://www.metricsanalysis.co.nz/unique-value-proposition

Robert Wright

RE: LOGBUFSZ
(in response to Gopalan Venkatramani)

There is also a good presentation available, "Demystifying Logging & Recovery in DB2 for LUW" by Kelly Schlamb of IBM.

An interesting fact is that log records must be written to disk before the buffer pool page they relate to is written to disk (write-ahead logging; see slide 4). The implication of this is that poorly managed/configured buffer pools can cause an unnecessary increase in log disk activity.

 

Robert Wright
MetricsAnalysis New Zealand, Limited
e: [login to unmask email]
m: +64 21 726 437
w: https://www.metricsanalysis.co.nz/unique-value-proposition