DB2 Log Archiving/User Exit with Amazon S3

Due to my quick clicking without reading screen prompts, idug.org suffered a recent outage. Since then I have been spending some time with Billy Sundarrajan improving our backup systems. Billy and I decided to update our backup strategy using Amazon's S3 service. For those of you who are not familiar with Amazon Simple Storage Service (S3), it's basically a storage service that lets you store and retrieve practically unlimited amounts of data from anywhere. Amazon worries about availability and durability for you, and makes it easy to keep geographically diverse copies of your backups.

There are no upfront costs; you only pay for what you use. The current price is around $0.150 per GB per month for storage and $0.10 per GB for data transfer. Once you factor in all the costs of a tape system and Tivoli Storage Manager, this can be a very cost-effective way to back up small DB2 databases with a proven technology.
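To put some rough numbers on it (back-of-the-envelope math at the rates above, not a quote): keeping 20 GB of backup images and archived logs in S3 works out to about 20 × $0.15 = $3.00 per month for storage, plus roughly $0.10 per GB transferred, so uploading or restoring that same 20 GB adds about $2.00 each time.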

I figured it would be a fun little exercise to ensure IDUG's DB2 Express-C database, along with its logs, is always backed up to S3. Below is a basic DB2 user exit script for UNIX that handles archiving logs to and retrieving them from S3. It's not as clean as I would like because the s3cmd command does not return consistent error codes, so the script has to grep its output for errors instead.

#!/bin/sh
# Basic Amazon S3 user exit to store archived db2 logs
#
# To install:
#1. Ensure s3cmd v0.9.9 command is installed and s3cmd --configure has been run under
#    the instance owner
#2. Copy this script to /bin/db2uext2
#3. Ensure LOGRETAIN is set to on
#4. db2 update db cfg using logarchmeth1 userexit
#
#Go change enough data to cause a log archive to be filled up and check your s3 account
#to ensure the log was archived properly.
#
# Example call: /home/rob/sqllib/bin/db2uext2 -OSLinux -RLSQL09053 -RQARCHIVE -DBSAMPLE 
#-NNNODE0000 -LP/home/rob/rob/NODE0000/SQL00001/SQLOGDIR/ -LNS0000004.LOG
#
# Revision History
#1.0 - Rob Williams - July 17 2009 - Initial Implementation
#
#
RC_DEFECT=16   # Software error return code

# Strip the two-character flag prefix (-DB, -NN, -LP, -LN) off each argument
SERVER_NAME=`hostname`
DATABASE_NAME=`echo $4 | awk '{print substr($1,4)}'`
NODE_NAME=`echo $5 | awk '{print substr($1,4)}'`
LOG_PATH=`echo $6 | awk '{print substr($1,4)}'`
LOG_FILE=`echo $7 | awk '{print substr($1,4)}'`

# First check to make sure the bucket exists
#
# NOTE: The error codes are all over the place for s3cmd; it does not return
#       proper error codes or follow standards, so we grep its output instead
DIR_NEXISTS=`s3cmd ls s3://$SERVER_NAME-$DATABASE_NAME-$NODE_NAME 2>&1 | grep ERROR | wc -l`
if [ "$DIR_NEXISTS" = "1" ]
then
   # The bucket is missing; create it, and give up if that fails too
   VERIFY=`s3cmd mb s3://$SERVER_NAME-$DATABASE_NAME-$NODE_NAME 2>&1 | grep ERROR | wc -l`
   if [ "$VERIFY" = "1" ]
   then
      exit $RC_DEFECT
   fi
fi

if [ "$3" = "-RQARCHIVE" ]
then
   s3cmd put $LOG_PATH/$LOG_FILE s3://$SERVER_NAME-$DATABASE_NAME-$NODE_NAME/$LOG_FILE
   # For some reason s3cmd prints nothing if it can't find the file,
   # so verify the upload by listing the object we just stored
   VERIFY=`s3cmd ls s3://$SERVER_NAME-$DATABASE_NAME-$NODE_NAME/$LOG_FILE 2>&1 | wc -l`
   if [ "$VERIFY" = "0" ]
   then
      exit $RC_DEFECT
   fi
elif [ "$3" = "-RQRETRIEVE" ]
then
   # s3cmd get takes the S3 object first, then the local destination
   VERIFY=`s3cmd get s3://$SERVER_NAME-$DATABASE_NAME-$NODE_NAME/$LOG_FILE $LOG_PATH/$LOG_FILE 2>&1 | grep ERROR | wc -l`
   if [ "$VERIFY" = "1" ]
   then
      exit $RC_DEFECT
   fi
else
   exit $RC_DEFECT
fi

exit 0
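If you want to try it out end to end, the sequence below is a rough sketch of the install steps from the script header. It assumes the instance owner rob and the SAMPLE database from the example call above, so adjust the paths and names for your own system. One caveat: if the database was previously using circular logging, DB2 will put it into backup pending state after this change and demand an offline backup before you can connect again.

# Install the script where DB2 expects to find the user exit
cp db2uext2 /home/rob/sqllib/bin/db2uext2
chmod 755 /home/rob/sqllib/bin/db2uext2

# Tell DB2 to call the user exit for log archiving
db2 update db cfg for sample using LOGARCHMETH1 USEREXIT

# Only needed if the database was previously using circular logging
db2 backup database sample to /tmp

# Force a log switch instead of waiting for a log file to fill up
db2 connect to sample
db2 archive log for database sample
db2 terminate

# Confirm the archived log made it into the bucket
s3cmd ls s3://`hostname`-SAMPLE-NODE0000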