DATE: 2008-05-22 (10:30 AM - 11:30 AM)
SPEAKERS: Steve Rees (IBM Toronto Lab)

There are many good reasons to measure the performance of your DB2 statement, application, or system. Maybe you want to track total system load over time for capacity planning, or you need to monitor response times for your SLA. Perhaps you're looking for the batch job that has been slowing down your system lately. Benchmarking isn't hard, but there are things you need to know to make it easier to do, and more likely to produce the data you need. This presentation will take you through the principles & practices of benchmarking performance. We will look at some cornerstones of benchmarking - repeatability, accuracy, relevance, and manageability - and how to achieve these with a minimum of effort. We will also cover practical issues, such as which tools & mechanisms to use, how to control variability, environmental issues that can complicate things, and how to tell whether you can trust your data.

EXP. LEVEL: Intermediate, Advanced

OBJECTIVES:
- Reasons to benchmark a system, including pre- and post-migration tests, capacity planning, and evaluating SLAs.
- Core benchmarking concepts, including relevance, repeatability, accuracy, and manageability; trade-offs between using production or test systems, and on-line vs. off-line benchmarking.
- Basic benchmarking practice, covering decisions on what types of systems to benchmark, what tools to use, what data to collect and how to collect it, and how to analyze and understand the data.
- More advanced topics, such as extending benchmarking to multi-layer systems, controlling variability, and using benchmark data for capacity planning.

Benchmarking is very useful, and it doesn't have to be mysterious, difficult, or unreliable once the basic techniques are understood. We will underscore this using concrete examples.
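One of the cornerstones above - repeatability - can be checked with a simple harness that times a workload over several runs and reports its variability. The sketch below is a minimal, generic illustration, not material from the session; the body of `run_workload` is a hypothetical stand-in that you would replace with your actual DB2 statement or batch job.

```python
import statistics
import time

def run_workload():
    # Hypothetical placeholder for the workload under test
    # (e.g., executing a DB2 statement or a batch job).
    return sum(i * i for i in range(100_000))

def benchmark(n_runs=10, warmup=2):
    """Time the workload over several runs and summarize repeatability."""
    # Warm-up runs let caches and buffer pools settle before measuring.
    for _ in range(warmup):
        run_workload()
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_workload()
        timings.append(time.perf_counter() - start)
    mean = statistics.mean(timings)
    stdev = statistics.stdev(timings)
    # Coefficient of variation: a low value suggests a repeatable,
    # trustworthy measurement; a high value signals uncontrolled variability.
    cv = stdev / mean
    return mean, stdev, cv
```

A run with a high coefficient of variation is a hint that environmental factors (other load on the system, caching effects, background jobs) are interfering, and that the numbers should not yet be trusted.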