IDUG Content Committee


Shadow Tables – Performance observations using InfoSphere Change Data Capture

By Somu Chakrabarty posted Jun 16, 2015 11:05 AM

  

by Somu Chakrabarty and Iqbal Goralwalla, Triton Consulting 

IBM introduced the concept of shadow tables in DB2 10.5 Fix Pack 4 (also known as the “Cancun Release”) to avoid maintaining separate OLTP and OLAP databases and to improve analytic query performance within a single OLTP database without extensive tuning and indexing effort. In essence, shadow tables bring analytics to the data. A shadow table is a column-organised copy of a row-organised table that includes all columns, or a subset of the columns, of the source row table. As such, it leverages the power of BLU Acceleration, which results in fast reporting. The DB2 optimiser automatically routes transactional queries to the row tables, whilst analytic queries are sent to the columnar shadow tables. This enables fast reporting directly on the transactional system without impacting the transactional workload, and eliminates the need to replicate data into a separate data warehouse or data mart. OLTAP (Online Transactional Analytic Processing), depicted in the diagram below, is now a reality.

[Figure: cdc1.jpg]
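For illustration, the DDL behind a shadow table looks roughly as follows. This is a minimal sketch; the schema, table and column names (APP.SALES, APP.SALES_SHADOW and so on) are hypothetical, and the exact sequence of statements should be checked against the DB2 documentation. A shadow table is defined as a replication-maintained MQT over the row-organised source table:

-- Source row-organised table (hypothetical)
CREATE TABLE APP.SALES (
  SALE_ID   INTEGER NOT NULL PRIMARY KEY,
  CUST_ID   INTEGER NOT NULL,
  SALE_DATE DATE,
  AMOUNT    DECIMAL(12,2)
) ORGANIZE BY ROW;

-- Column-organised shadow table, maintained by CDC replication rather than by DB2
CREATE TABLE APP.SALES_SHADOW AS
  (SELECT SALE_ID, CUST_ID, SALE_DATE, AMOUNT FROM APP.SALES)
  DATA INITIALLY DEFERRED REFRESH DEFERRED
  ENABLE QUERY OPTIMIZATION
  MAINTAINED BY REPLICATION
  ORGANIZE BY COLUMN;

-- Take the shadow table out of set integrity pending state (CDC will maintain its contents)
SET INTEGRITY FOR APP.SALES_SHADOW ALL IMMEDIATE UNCHECKED;

-- The shadow table needs a primary key matching that of the source table
ALTER TABLE APP.SALES_SHADOW ADD PRIMARY KEY (SALE_ID);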

InfoSphere Change Data Capture (CDC) is log-based replication software (a component of the InfoSphere Data Replication product) that captures data changes in a source data store in near real time and then asynchronously applies them to a target data store, based on table mappings configured in the InfoSphere CDC Management Console GUI application. InfoSphere CDC asynchronously replicates the insert/update/delete statements applied to the source table to the shadow table, thereby realising the benefits of the DB2 BLU columnar table feature for analytics and reporting in an OLTP environment. By default, all applications access the source tables. Queries are automatically routed by the optimiser to the source table (row-organised) or the shadow table (the column-organised copy of the source table) using a latency-based algorithm that prevents applications from accessing the shadow table when the latency is beyond a user-defined limit.

To implement shadow tables we therefore first have to install and configure InfoSphere CDC. The CDC install binaries are supplied as part of the DB2 installation from DB2 10.5 Fix Pack 4 onwards. Since the shadow table concept is part of the BLU columnar feature, only the Advanced Workgroup and Advanced Enterprise editions of DB2 come with CDC; from Fix Pack 5 onwards it is also available as part of the DB2 BLU Acceleration In-Memory offering for DB2 ESE and WSE.
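Once shadow tables and CDC are in place (and the latency and special-register conditions described later in this article are met), whether the optimiser has actually routed a given query to a shadow table can be verified with the standard DB2 explain facility, assuming the explain tables have been created. The database and table names below are the hypothetical ones used in the sketch above:

-- Explain the analytic query without executing it
SET CURRENT EXPLAIN MODE EXPLAIN;
SELECT CUST_ID, SUM(AMOUNT) FROM APP.SALES GROUP BY CUST_ID;
SET CURRENT EXPLAIN MODE NO;

-- Format the captured plan from the operating system command line, e.g.:
--   db2exfmt -d MYDB -1 -o sales_query.exfmt
-- The access plan shows whether APP.SALES (row-organised) or APP.SALES_SHADOW was scanned.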

The steps for using CDC with DB2 BLU shadow tables can be summarised as follows:

  • Installation of the CDC Access Server
  • Installation of a CDC instance for DB2 LUW
  • Installation of the CDC Management Console
  • CDC subscription and table mapping configuration, for those row-organised tables for which we want a columnar shadow table (see the scripted sketch after this list)
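The subscription and its table mappings are normally configured in the Management Console GUI, but CDC also ships a command-line scripting interface, CHCCLP, that can drive an already configured subscription – for example to start mirroring without the GUI. The following is only a rough sketch under our assumptions; the host name, port, user, datastore and subscription names are all hypothetical and the exact command syntax should be checked against the CDC documentation:

connect server hostname localhost port 10101 username admin password ********;
connect datastore name CDC_MYDB;
select subscription name SHADOW_SUB;
start mirroring;
disconnect server;
exit;

Whether mirroring is started from the GUI or from a script like this, the additional DB2 applications shown later in this article then appear.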

When the Access Server is started, it creates a process that consumes memory and CPU. The Access Server can be installed on the same server as the database containing the shadow tables. However, for a production system you can install the Access Server on a separate server to avoid any overhead. When we then start the CDC replication instance for DB2 LUW, another process is started.
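For reference, on Linux/UNIX the two processes mentioned above are typically started from the command line along the following lines. The instance name is hypothetical, the commands are run from the respective CDC installation bin directories, and the exact utilities can vary by platform and CDC version:

# Start the CDC Access Server process
nohup ./dmaccessserver &

# Start the CDC replication instance for DB2 LUW
# (the instance itself is created beforehand with the dmconfigurets utility)
nohup ./dmts64 -I CDC_MYDB_INSTANCE &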

The CDC replication instance has to reside on the same server as the database containing the shadow tables. In the database where the row tables and the BLU shadow tables exist, we can see one new application running (application handle 56773 in the example below); the Management Console GUI (with at least one active or inactive subscription) then initiates another application (application handle 56775 in the example below):

Auth Id    Application Name     Appl.      Application Id                Status
                                Handle
---------- -------------------- ---------- ----------------------------- -----------
DB21054B   dmts64-java          56773      *LOCAL.db21054b.150323145420  UOW Waiting
DB21054B   dmts64-java          56775      *LOCAL.db21054b.150323145731  UOW Waiting
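A listing like the one above can be produced from the DB2 command line (SHOW DETAIL adds the status column; the database name MYDB is illustrative):

db2 "LIST APPLICATIONS FOR DATABASE MYDB SHOW DETAIL"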

 

After subscription mirroring is started, we see more applications in the DB2 instance:

 

Auth Id  Application    Appl.      Application Id                 Status
         Name           Handle
-------- -------------- ---------- ------------------------------ ------------
DB21054B dmts64-java    15111      *LOCAL.db21054b.150311220819   UOW Waiting
DB21054B dmts64-java    15124      *LOCAL.db21054b.150311220912   UOW Waiting
DB21054B dmts64-java    15102      *LOCAL.db21054b.150311220810   UOW Waiting
DB21054B dmts64-java    15115      *LOCAL.db21054b.150311220823   UOW Waiting
DB21054B dmts64-java    15082      *LOCAL.db21054b.150311220628   UOW Waiting
DB21054B dmts64-java    15113      *LOCAL.db21054b.150311220821   UOW Waiting
DB21054B dmts64-java    15080      *LOCAL.db21054b.150311220626   UOW Waiting
DB21054B dmts64-java    56773      *LOCAL.db21054b.150311214858   UOW Waiting

 

In our experience, these applications and processes do not impose any significant overhead on server or database performance, as we show below.

Impact on the OLTP workload when an analytic workload is also running – before and after CDC is active

The shadow table feature (and the associated CDC) is used in a predominantly OLTP environment in which you also want to run some analytic/reporting workload. So the first question that naturally comes to mind is how much impact/overhead CDC will have on the existing OLTP system. Obviously, if CDC creates too much overhead then the whole concept of shadow tables is jeopardised. The good news is that our extensive testing shows the overhead of running CDC on an OLTP system to be very low. In our tests the Access Server and the DB2 user database containing the shadow tables coexisted on the same server; the overhead would be even lower if the Access Server were installed on a different server.

We ran our test cases on a single database with two scenarios: a pure OLTP workload without any other analytic/reporting workload, and an OLTP workload running alongside some analytic/reporting workload. In both cases the response times of the same OLTP workload were only slightly longer when CDC was active (up to 1.07% longer). For the testing we used one subscription containing three row-organised tables as source – two dimension tables (a few thousand records) and one fact table with around 5 million records – and the three corresponding columnar shadow tables as target.

[Figure: cdc2.jpg]

System-level overhead (CPU/memory/disk) of maintaining CDC and shadow table replication

Another aspect of CDC usage is system-level (CPU/memory/disk) overhead. In our extensive testing we have not noticed any significant increase in CPU or memory usage while CDC is active compared to running without CDC. In addition, the Access Server can be installed on a different machine to further minimise the CPU/memory overhead of the Access Server process. The disk read rate actually improves when shadow tables are used.

[Figure: cdc3.jpg]

As can be seen above, disk read rates decreased once the access path was routed to the shadow tables, and decreased further when separate bufferpools were used, as you would expect.
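Giving the shadow tables their own table space and bufferpool, as in the last measurement, keeps the columnar data from competing with the OLTP tables for bufferpool pages. A minimal sketch, with purely illustrative names and sizes:

-- A dedicated 32K bufferpool and table space for the shadow tables
CREATE BUFFERPOOL SHADOW_BP SIZE 100000 PAGESIZE 32K;

CREATE TABLESPACE SHADOW_TS PAGESIZE 32K
  MANAGED BY AUTOMATIC STORAGE
  BUFFERPOOL SHADOW_BP;

-- The shadow table DDL then simply adds an IN SHADOW_TS clause so that the
-- column-organised copy is placed in its own table space.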

CDC and latency performance

To enable the use of shadow tables, the optimiser must be directed to consider them for query optimisation when the CDC replication latency is within an acceptable limit. Latency-based routing is a performance technique that directs a query to a shadow table only if the replication latency is within a defined limit. Replication latency is the amount of time it takes for a transaction against a source row-organised table to be applied to its columnar shadow table. Not all applications can tolerate latency between the source row-organised table and the shadow table – this is a business decision, and different applications may have different latency limits. We must therefore first identify which applications can tolerate latency and can make use of shadow tables. The refresh age is the DBA/administrator's way of telling the DB2 optimiser how much time lag between the source table and the shadow table the business tolerates; it is the value of the CURRENT REFRESH AGE special register, expressed as a timestamp duration.

CDC needs one subscription per database to replicate between the row tables and the corresponding columnar shadow tables in that database. CDC replication involves a capture engine that scrapes the DB2 logs for update deltas and feeds these deltas to the apply engine, which maintains the shadow tables. The apply to the shadow tables is further optimised to leverage the new DB2 10.5 index-scan-driven update feature for column-organised tables. Normally, CDC applies deltas to the target tables as quickly as it can. However, for shadow tables and column-organised tables in general, it is advantageous to apply deltas to the target in chunks so as to increase the size of each apply transaction. CDC offers a system parameter, acceptable_latency_in_seconds_for_column_organized_tables, to accomplish this by delaying the apply to the target tables by up to a certain number of seconds. We used the default value of 5 seconds, which was good for our workload.

Replication latency information is shared automatically from InfoSphere CDC to the DB2 server through a table called SYSTOOLS.REPL_MQT_LATENCY. This table needs to be created manually, in every DB2 database in which shadow tables are used, as part of the CDC setup process.
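Putting the DB2 side of this together, the pieces that make latency-based routing work are intra-partition parallelism, the special registers set by the application (or by a connect procedure), and the SYSTOOLS.REPL_MQT_LATENCY table that CDC populates. The sketch below reflects our understanding and should be checked against the documentation; the value 500 is a timestamp duration meaning 5 minutes and should be set to whatever lag the business tolerates:

-- Column-organised (shadow) table access requires intra-partition parallelism;
-- it can be enabled for the current connection:
CALL SYSPROC.ADMIN_SET_INTRA_PARALLEL('YES');

-- Tell the optimiser how stale a shadow table may be (here 5 minutes) and that
-- replication-maintained tables may be considered when routing queries:
SET CURRENT REFRESH AGE 500;
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION REPLICATION;

-- The latency table that CDC updates; it is created once per database with the
-- SYSINSTALLOBJECTS procedure:
CALL SYSPROC.SYSINSTALLOBJECTS('REPL_MQT', 'C', NULL, NULL);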

Conclusion

The DB2 BLU shadow table concept is implemented using CDC, which is a product external to DB2. Some may not be happy about using an external product like CDC to implement something inside DB2. However, in our experience the use of CDC does not add significant overhead to the normal functioning of shadow tables.

 

