F03 - Extreme scaling with DB2 pureScale and DB2 compression

Session Number: 353
Track: Big Data and Analytics
Session Type: Podium Presentation
Primary Presenter: Peter Plachta [senior software engineer - IBM]
Time: Apr 30, 2013 (03:15 PM - 04:15 PM)
Room: Bonaire 5&6

Speaker Bio: Peter is a 12-year veteran of the DB2 kernel. He worked on page-cleaning algorithms during DB2's first TPC-C benchmark, designed the Deep Compression algorithm, and now owns locking in pureScale.
Audience experience level: Beginner, Intermediate
Presentation Category: Data Management, Big Data
Presentation Platform: DB2 for Linux, UNIX, Windows
Audiences this presentation will apply to: Data Architects, Database Administrators, IT Managers
Technical areas this presentation will apply to: Data Warehousing and Business Intelligence, pureScale
Objective 1: Get a first look at locking in pureScale
Objective 2: Understand basic locking configuration parameters in pureScale, as well as basic monitoring options
Objective 3: Review compression and adaptive compression analysis of actual customer data
Objective 4: Compare compression best-case vs. worst-case analysis

Abstract: DB2 pureScale recently hit the market with a shared-data solution that scales up to 128 machines. Oracle RAC has been around for 6 years and has barely scaled to 8.
DB2 10.1 features a compression solution that can save 80%+ of your data: twice what Oracle can do.

I will discuss how we achieved these landmarks and what is a world first in our approach. What does this mean for your production system sizes and performance?