Migrating to DB2

Table of Contents

1. Introduction 
2. Meet DB2 
3. Sybase Migration Project Description 
3.1. Translation Process 
3.2. ITK Translation Process 
4. Translation of Triggers 
4.1. Row-level versus Statement-level Triggers 
4.2. Compound SQL (compiled) versus Compound SQL (in-lined) 
4.3. Before and After Triggers
4.3.1. Exposition One 
4.3.2. Exposition Two 
4.3.3. Exposition Three 
5. Trigger Classification 
6. Wrap-up and Other Not So Correlated Truths
7. Acknowledgements 
8. References 

1. Introduction

I’m sitting here in San Francisco International Airport waiting for my flight back to Brussels. During the past five days I have had the opportunity to attend the zIM Tech Sales and Sales Bootcamp 2013, hosted in the IBM Silicon Valley Laboratory (SVL).

During the opening speech it was a pleasure to hear that DB2 has been doing well over the last year and that there are great plans for future developments still ahead of us.

“Industry leading performance, scale and reliability on your choice of platform from Linux, UNIX and Windows to z/OS” is a slogan that probably sounds familiar to each of us, knowing IBM turns it into reality release after release. Did you know that more than 95 percent of the worldwide Fortune 500 companies are running DB2?

If you open an IT magazine these days you will be exposed to topics ranging from transactional to deep analytics, scoring, big data, social media and a lot of hardware systems that support and accelerate these techniques. I experienced the same when attending the sessions during the Bootcamp. Tremendous efforts are being made in order to enable DB2 to profit from all this new technology and allow us to interact with our information assets swiftly so that we can continuously improve our increasingly complex and dynamic business services and processes.

Every year Gartner performs a global analysis of IT spending trends. Key findings from this year's Gartner IT Key Metrics report are:

  • 55 percent of global IT budgets are spent on infrastructure & operations
  • 33 percent of global IT budgets are spent on applications
  • 12 percent of global IT budgets are spent on IT overhead

According to another study, technology is top-of-mind for CEOs, but 68 percent of IT operation costs go to management and administration. That last figure is not sustainable going forward, and it is one of the arguments for this month’s topic, migrating to DB2.

The cost of database administration can greatly exceed the cost of the database software and hardware, so it’s critical that IT staff time be used effectively and efficiently. DB2 includes numerous autonomic features that free IT staff from many administration tasks. The other reasons I have heard for migrating away from a current database server are performance, scalability and reliability, license cost, declining technical support from the database vendor and, to finish the list, the customer’s or application vendor’s conflicting strategic directions.

Today applications are more consciously written to be agnostic of the database technology, so that a change from one database vendor to another should not be labor intensive. Such applications should only require a testing cycle to gain confidence that neither functional nor non-functional behavior has been adversely affected. However, there are still an awful lot of applications that require more effort.

An interesting list of Redbooks is available that can be helpful if you want to migrate from Sybase, SQL Server or MySQL to DB2 (the keyword to use for your search is “conversion”). The latest version of DB2 LUW includes extensive native support for the PL/SQL procedural language, new data types, scalar functions, improved concurrency, built-in packages, OCI, SQL*Plus, and more. These features ease developing applications that run on both DB2 and Oracle and can help simplify the process of moving from Oracle to DB2 LUW.
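To give an idea of what that compatibility means in practice, here is a minimal sketch (my own illustration, not from the article or IBM documentation) of a PL/SQL procedure that should compile unchanged on DB2 LUW when the database was created with the Oracle compatibility vector enabled; the table EMP and its columns are assumed:

-- Oracle-style PL/SQL accepted natively by DB2 LUW (9.7 and later)
CREATE OR REPLACE PROCEDURE raise_salary (p_empno IN NUMBER, p_pct IN NUMBER)
IS
BEGIN
  UPDATE emp SET sal = sal * (1 + p_pct / 100) WHERE empno = p_empno;
  IF SQL%ROWCOUNT = 0 THEN
    -- implicit cursor attributes and RAISE_APPLICATION_ERROR are part of
    -- DB2's PL/SQL support
    RAISE_APPLICATION_ERROR(-20001, 'Employee not found');
  END IF;
END;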

In March IDUG will publish the “Oracle to DB2 Migration Lessons Learned” white paper from The Fillmore Group in which they detail business and operational advantages of switching from Oracle to DB2. To complement the white paper, later in the month The Fillmore Group and IDUG will run a live webcast titled “Zero Outage Oracle to DB2 Migrations” in which more practical tips are disclosed.  

In this article I will mainly discuss a Sybase to DB2 migration project for which I have the privilege of being the technical authority. I will limit the focus of the article to a small but important piece of the project, namely the translation of the triggers. The chapter “Translation of Triggers” is divided into several parts, allowing me to go into detail when explaining the options we considered before coming to a solution. If you are not interested in the pure technical details I suggest you skip sections 4.1, 4.2 and 4.3.

To begin, I will outline the MEET DB2 tool in general and show a report produced from part of the Sybase DDL of that migration project, which will give you some insight into the scope of the project.

2. Meet DB2

If you are curious about how compatible your Oracle applications are with DB2 I can refer you to the MEET DB2 tool that evaluates the degree to which DB2 will natively support the procedural logic and data definitions used in an Oracle database. Source procedural SQL and DDL from input files are processed by MEET DB2 to determine which statements and objects found are immediately transferable to DB2. All incompatibilities that are found are identified in a report that provides both summary information as well as detailed technical information about each incompatible feature.

MEET DB2 may be used by IBM customers solely to evaluate potential migration of their own applications. If you are a customer, you will need to provide your email address during the tool startup. MEET DB2 does NOT send any source code to the server. Only the line numbers and object names of the PL/SQL features that are detected are included in the report. You will be prompted to save the report data file using the 'Save Report' button, and e-mail it to the provided server e-mail address. You will then receive an answer from the MEET DB2 server a few minutes later. As an alternative, you can contact your local IBM representative or an IBM Information Management Business Partner to run an analysis of your applications using MEET DB2.

MEET DB2 previously also evaluated the compatibility of Sybase ASE applications that you might migrate to DB2, together with a separately priced feature named SQL Skin. SQL Skin is a layer between the Sybase application and the DB2 engine, in which the Sybase syntax is transformed into DB2 syntax and where the representation of output data can be transformed on the way back to the application. The report in this case states how compatible the application is for a migration from Sybase to DB2 with SQL Skin.

Executive Summary 

92.2% of statements immediately transferable to IBM DB2.

MEET DB2 has estimated that 92.2% of statements and 71.8% of objects are immediately transferable to IBM DB2. The technical report below is provided to detail exact instances and locations of potential issues to simplify the migration process.

Technical Summary

[Figure: technical summary of the MEET DB2 report]

Clicking through to the detailed technical report generates the following list:

[Figure: statistics from the detailed technical report]

Next in the report is a list of all statements flagged as “to verify” and the number of occurrences:

[Figure: list of statements flagged “to verify” with occurrence counts]

You can click on any of the flagged features, and this is really the added value of the tool: you get the type of error and the exact occurrence and line number in the DDL where the incompatibility can be “cured”:

[Figure: detailed feature description for a flagged incompatibility]

After modifying all of the flagged statements you should be able to use the DDL and create the objects in the new DB2 database.

The estimates from the MEET DB2 tool are especially reliable for Oracle migrations. What I have witnessed myself is that using the SQL Skin layer can be successful in case you want to retire a Sybase application, but implementations in an agile environment are less successful, because there is not much harmony between developing in Sybase and seizing the benefits of the latest native DB2 features. It really becomes a nightmare if you want to mix procedures and triggers written in T-SQL with those natively written for DB2 in SQL PL and have them interact. The SQL Skin layer will be withdrawn from marketing soon.

In November 2012 there was a series of articles in developerWorks illustrating the process and methodology to be followed for migrating from Sybase ASE 12.x/15.x to DB2 9.7.x/DB2 10.1.x. I recommend reading the four papers if you are considering migrating from Sybase to DB2 or if you want to migrate from any other database to DB2.

3. Sybase Migration Project Description

The project in which our SuadaSoft team is currently involved has some prerequisites that are easy to define but not as easy to live up to. The look and feel of the application should be as close to Sybase as possible, meaning that an extra blank in the resultset of a call or statement is a difference that needs to be reported and managed. Another prerequisite is that the time necessary for the code freeze in Sybase has to be as short as possible, which limits the number of manual interventions, especially if you take into account the number of objects to translate.

In general the migration of triggers is less straightforward than the migration of stored procedures due to the difference in behavior of triggers in Sybase and DB2. You will see in the next paragraph that the trigger code is first translated the same way as we translate the stored procedures and then post-processed.

Scope of the project: the goal is to migrate more than 40 Sybase databases into one DB2 database, as the different applications use multiple database schemas and we want to avoid the extra complexity of setting up homogeneous federated access. There are about 16,000 stored procedures, 3,500 triggers, 4,000 tables, over 5,000 views and 7,000 indexes, representing approximately two million lines of code in the triggers and stored procedures, making a purely manual migration not feasible. We use the following translation process in order to get all the objects translated and created in DB2.

3.1. Translation Process

 

[Figure: overview of the translation process]

We start with a DDL file that contains all table, view, default, trigger, stored procedure… definitions, and we split that file into many smaller files, one object per file, grouped per type of object.

1. The ITK Translate engine, built with ANTLR3 (ANother Tool for Language Recognition), is something we have developed ourselves based on the Migration Toolkit for Sybase from IBM. I’ll explain more about the ITK in the next section. All of the objects are translated by the ITK, but for the triggers we need extra steps.

2. In step two we take the translated objects and process them to make them more readable: SQL statements are restructured so that the different clauses are aligned and spread over several lines. This allows the developers to easily read the generated code, which is important for maintenance once everything is migrated and in production. During this step we also add extra logic to compensate for functionality that doesn’t have a direct equivalent in DB2.

3. In this step the already translated and beautified trigger code is processed by the Trigger Convert Engine. I’ll cover why we use trigger classifications in the section of that name. The Trigger Convert Engine splits the triggers into several DB2 before and after triggers where necessary and modifies some of the clauses depending on the type of trigger.

4. Security is processed in a fourth step. An important issue the team encountered is that in Sybase user names can have a length of 30, whereas DB2 by default only supports a length of eight. We solved this by using a security exit in DB2 and a login proc.

At the end of step one we get a report that identifies the number of successfully translated objects, and at the end of step three we get the number of objects that were successfully created in DB2. DBAs will already suspect that we have to pause between step one and step three in order to create the tables, indexes and views before we can actually create the triggers and stored procedures. In the weeks following the first run we don’t have to recreate the tables, as the DDL for tables doesn’t change that often. The reporting we do allows us to calculate our compatibility level just like the MEET DB2 tool does. Many Friday evenings I have waited to get the new figures, and it was awesome to see the translation figures climb to 100 percent.

3.2. ITK Translation Process

 

[Figure: the ITK translation process]

The above figure depicts the ITK translation process. The Parser (2) reads the DDL split by the Splitter (1) process and builds an Abstract Syntax Tree (3) (AST). This tree represents the language structure of the Sybase DDL. The Walker (4) ‘walks’ this tree and translates the elements into DB2 DDL by matching the AST to string templates. When errors occur in the translation, additional files are generated to provide more detail on the failures (5). Successfully translated DDL is listed in an output CSV file.

4. Translation of Triggers

Our original idea was to translate all Sybase triggers into DB2 AFTER STATEMENT LEVEL triggers, but early in the analysis it quickly turned out that this wasn’t the best option. In subtopics 4.1, 4.2 and 4.3 I describe, with examples, the DB2 trigger options you simply can’t ignore. There is a short conclusion in each subtopic so that the idea, or the goal to reach, is briefly summarized. I have highlighted the key of the translation exercise in blue.

Let me first wrap up some of the options in a trigger definition that are of importance in this context.

For DB2 a trigger defines a set of actions that are performed in response to an insert, update, or delete operation on a specified table. The statement that causes a trigger to be activated includes a set of affected rows. These are the rows of the subject table that are being inserted, updated, or deleted. The trigger granularity specifies whether the actions of the trigger are performed once for the statement (I will refer to them as statement level triggers) or once for each of the affected rows (which I will refer to as row-level triggers).

The triggered action can refer to the values in the set of affected rows using transition variables. Transition variables use the names of the columns in the subject table, qualified by a specified name that identifies whether the reference is to the old value (before the update) or the new value (after the update). The new value can also be changed using the SET VARIABLE statement in before, insert, or update triggers.

Another means of referring to the values in the set of affected rows is to use transition tables. Transition tables also use the names of the columns in the subject table, but specify a name to allow the complete set of affected rows to be treated as a table. Transition tables can only be used in AFTER triggers (that is, not with BEFORE and INSTEAD OF triggers), and separate transition tables can be defined for old and new values.
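To make the difference concrete, here is a minimal sketch of the two styles (the table and column names are my own, not from the article):

-- Transition VARIABLES in a row-level trigger: one affected row at a time
CREATE TRIGGER trg_row AFTER UPDATE ON orders
REFERENCING OLD AS o NEW AS n
FOR EACH ROW
INSERT INTO orders_log VALUES (o.order_id, o.status, n.status);

-- Transition TABLES in a statement-level trigger: the complete set of
-- affected rows is available as a table
CREATE TRIGGER trg_stmt AFTER UPDATE ON orders
REFERENCING OLD TABLE AS t_del NEW TABLE AS t_ins
FOR EACH STATEMENT
INSERT INTO orders_log
  SELECT d.order_id, d.status, i.status
  FROM t_del d JOIN t_ins i ON d.order_id = i.order_id;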

According to the Sybase ASE documentation, a trigger is a stored procedure that instructs the system to take one or more actions when a specific change is attempted.

The difference of interest is the “stored procedure” part. In Sybase you can use the same T-SQL commands in triggers as in stored procedures. A second fact is that Sybase ASE triggers all have statement-level granularity, implying that there are transition tables that can be used.

4.1. Row-level versus Statement-level Triggers


In order to stay as close as possible to the Sybase logic, it would be best if the DB2 triggers use statement-level behavior. What follows is an example to clarify that both row-level and statement-level triggers are possible, but that the migration effort for row-level triggers is much more cumbersome.

I start with a DB2 statement level trigger that does some validations before doing an insert into a second table luc_tr:

[Figure: the statement-level trigger on table luc, inserting into luc_tr]
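Since the original code is only shown as an image, here is a hedged reconstruction of what such a statement-level trigger looks like; only the names luc, luc_tr, dkey and omit come from the article, the column list and the trigger body are assumptions:

CREATE TRIGGER luc_u AFTER UPDATE ON luc
REFERENCING OLD TABLE AS t_del NEW TABLE AS t_ins
FOR EACH STATEMENT
BEGIN ATOMIC
  -- the validations would go here; then the old and new image of every
  -- affected row is logged into luc_tr
  INSERT INTO luc_tr SELECT dkey, omit, 'delete' FROM t_del;
  INSERT INTO luc_tr SELECT dkey, omit, 'insert' FROM t_ins;
END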

The next result (the trigger is fired by the update statement that changes the value of the column OMIT from ‘v’ to ‘w’) is as you would expect: one row per qualifying row for the delete and one for the insert:

[Figure: the resultset in luc_tr, one ‘delete’ and one ‘insert’ row per qualifying row]

Let’s find out what would happen if we turn this trigger into a row-level trigger…

[Figure: the same trigger changed to FOR EACH ROW]

...we would get following result:

[Figure: the resultset in luc_tr, every qualifying row now processed twice]

It probably doesn’t take you long to understand that the current trigger code is not executed once, but once for every row that qualifies, meaning in our case twice.

In order to correct this we have to modify the trigger again:

[Figure: the corrected row-level trigger]
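A hedged sketch of the corrected row-level version: the transition tables become transition variables, and in general every SQL statement in the body needs an extra predicate on the primary key (here dkey) so that it touches only the row currently in transition:

CREATE TRIGGER luc_u AFTER UPDATE ON luc
REFERENCING OLD AS t_del NEW AS t_ins
FOR EACH ROW
BEGIN ATOMIC
  -- only the row in transition is logged; statements that still read the
  -- subject table would need an added "WHERE dkey = t_ins.dkey" condition
  INSERT INTO luc_tr VALUES (t_del.dkey, t_del.omit, 'delete');
  INSERT INTO luc_tr VALUES (t_ins.dkey, t_ins.omit, 'insert');
END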

Conclusion: The point I want to make here is that for a translation engine it is more complex to translate the statement level trigger into a row-level trigger as all of the SQL statements in the trigger body have to be examined and adapted. In the above example it was fairly simple as the column “dkey” is a primary key. For tables with primary keys you only need to add one where condition, but what if there is no primary key, or the uniqueness is determined by a set of columns? On purpose I only used an SQL statement without too many conditions, but in reality there are SQL statements with left, right or full outer joins where the tables can be nested table expressions with sub-queries. I have seen queries that easily have over 250 lines of code. Translating those into row-level is challenging, even if it is automated.

4.2. Compound SQL (compiled) versus Compound SQL (in-lined)

A compound SQL (compiled) statement cannot be specified if the trigger definition includes a REFERENCING OLD TABLE clause or a REFERENCING NEW TABLE clause (remember, this is used in statement level triggers). A compound SQL (compiled) statement also cannot be specified for a trigger definition in a partitioned database environment.

Why would I be interested in using a compound SQL (compiled) trigger? There is a topic in the Information Center that lists the SQL statements that can be executed in routines. As you could have guessed, more SQL statements can be executed in the compiled flavor than in the in-lined flavor. You can find the entire list via the references below.

Here is the part of the list that hurts the most when one has to translate Sybase T-SQL triggers into DB2 triggers:

[Figure: SQL statements available in compound SQL (compiled) but not in compound SQL (in-lined)]

Conclusion: Since we have chosen to use statement-level triggers, we have to use the in-lined flavor. The lack of cursor handling and of the return/rollback possibilities has taken most of the effort to compensate for or circumvent. The cursors can be rewritten as FOR loops, and the rollbacks could most of the time be replaced by signals. For rollbacks that were NOT accompanied by raiserror a different behavior applies: in DB2 a signal always throws a message or SQLCODE, while in Sybase, if there is no raiserror prior to the rollback, you don’t get a message. For the returns we had to invert parts of the program logic in such a way that the return is no longer necessary.
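Here is a hedged illustration of those two rewrites (my own example, not the project’s code): a T-SQL cursor over the affected rows becomes an in-lined FOR loop, and raiserror followed by rollback becomes a SIGNAL, which also rolls back the triggering statement:

CREATE TRIGGER ord_chk AFTER INSERT ON orders
REFERENCING NEW TABLE AS t_ins
FOR EACH STATEMENT
BEGIN ATOMIC
  -- the T-SQL cursor loop becomes a FOR loop, which is allowed in
  -- compound SQL (in-lined)
  FOR r AS SELECT order_id, qty FROM t_ins DO
    IF r.qty <= 0 THEN
      -- raiserror + rollback becomes a SIGNAL; it throws a message and
      -- rolls back the triggering statement
      SIGNAL SQLSTATE '75001'
        SET MESSAGE_TEXT = 'Quantity must be positive';
    END IF;
  END FOR;
END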

4.3. Before and After Triggers


NO CASCADE BEFORE
Specifies that the associated triggered action is to be applied before any changes caused by the actual update of the subject table are applied to the database. It also specifies that the triggered action of the trigger will not cause other triggers to be activated.

AFTER
Specifies that the associated triggered action is to be applied after the changes caused by the actual update of the subject table are applied to the database.

And of course I want to use FOR EACH STATEMENT
Specifies that the triggered action is to be applied only once for the whole statement. This type of trigger granularity cannot be specified for a BEFORE trigger. If specified, an UPDATE or DELETE trigger is activated, even if no rows are affected by the triggering UPDATE or DELETE statement.

Making a choice here isn’t too hard, you would think? Well, you guessed wrong. Triggers are used, among other things, for enforcing business rules or enriching changes (e.g. ensuring that every change to a record contains the id of the user who introduced the change). I often encountered triggers with update statements on the subject table in order to pin the id of the user who modified the row.

4.3.1. Exposition One


Let’s have a look at the following example, in which we have a trigger that does a small validation of the rows that have been updated by the triggering action and then modifies some of the columns again for auditing reasons. It is coded as an after trigger for each statement:

[Figure: the AFTER statement trigger that updates its own subject table]
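As the trigger itself is only shown as an image, here is a hedged reconstruction of the pattern (the table and column names are my own; dkey follows the earlier example):

CREATE TRIGGER audit_u AFTER UPDATE ON audited_tab
REFERENCING NEW TABLE AS t_ins
FOR EACH STATEMENT
BEGIN ATOMIC
  -- stamping the user id updates the subject table again, so this very
  -- trigger is activated again; when DB2 compiles a statement that fires
  -- the trigger, the potential endless cascading surfaces as SQLCODE -101
  UPDATE audited_tab a
     SET modified_by = SESSION_USER
   WHERE a.dkey IN (SELECT dkey FROM t_ins);
END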

When we create this trigger it executes successfully.

Out of our 16,000 stored procedures, you can easily imagine I have at least one stored procedure that wants to modify the subject table of the trigger:

[Figure: the stored procedure updating the trigger’s subject table]

As you can see, due to the recursion we might create when executing the stored procedure (the update in the stored procedure fires the trigger, and the trigger gets fired again each time it encounters the update on the same subject table), DB2 already indicates that there is a possible problem. It returns SQLCODE -101, which says that the statement could not be processed because it exceeds a system limit for either length or complexity, or because too many constraints or triggers are involved.

When we execute a simple update we get the same SQLCODE -101. If you use a monitor you will see that once you start running the query it first stays for a while in the “compiling a SQL statement” state and then finishes with the error:

[Figure: the update failing with SQLCODE -101]

I can easily fool the DB2 engine and avoid the SQLCODE -101 by adding <<for update of “omit”>> to the trigger event clause, making sure that the trigger code doesn’t execute for the updates in the trigger body that don’t include the “omit” column, but that guarantees other problems at execution time. In that case the creation of the stored procedure works, and you can start executing it. You can also start executing the update statement, but you will never see it finish successfully. When I did the test, after waiting for a few hours during which the processors of my computer were producing a lot of heat and consumed all of my simultaneously produced solar panel energy, I decided to give up and cancelled the update. Not something anyone would even consider doing outside his own playground.

In the above example we got SQLCODE -101. A second message that might occur with other triggers is the SQLCODE below; when you encounter this message the query and trigger will complete, but the optimizer indicates there might be a problem:

[Figure: the warning SQLCODE indicating possible cascading]

If I translate the trigger into a before trigger we will quickly see the result we intended to get:

[Figure: the trigger rewritten as a BEFORE trigger]

To turn the after statement trigger into a before row-level trigger I had to change the activation time from AFTER to BEFORE, replace the transition table references with transition variable references and translate all queries into regular procedure statements. For instance, the following query:

[Figure: the query before and after the rewrite]
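A hedged sketch of the kind of rewrite the figure shows, reusing the assumed names from the reconstruction above:

-- in the AFTER statement trigger: an UPDATE correlated with the
-- transition table
UPDATE audited_tab a
   SET modified_by = SESSION_USER
 WHERE a.dkey IN (SELECT dkey FROM t_ins);

-- in the BEFORE row-level trigger: the join disappears and the audit
-- column is simply assigned through the "t_ins" transition variables
SET t_ins.modified_by = SESSION_USER;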

Notice the join condition disappeared. That is because there is only one row in the transition phase: the “t_del” variables contain the values before any change is applied and the “t_ins” variables contain the values after the update would have been applied. Notice, however, that by manipulating the “t_ins” values in the trigger code, the final image of the row will be what you specified in the “t_ins” variables in the trigger.


The creation and the execution of the stored procedure are successful now, without changing anything in the procedure code; only the trigger changed from AFTER to BEFORE:

[Figure: successful creation and execution of the stored procedure]

4.3.2. Exposition Two


Here is a second case where a BEFORE trigger in DB2 is the correct option:

[Figure: the AFTER trigger failing with the cascading error]

The message tells us that the activation of “u#triggertest” of type trigger would exceed the maximum level of indirect SQL cascading. Cascading of indirect SQL occurs when a trigger activates another trigger (possibly through referential constraint delete rules) or a routine, containing SQL, invokes another routine. The depth of this cascading is limited to 16 for triggers and 64 for routines.

It can be easily fixed by making use of a BEFORE trigger:

[Figure: the fixed BEFORE trigger]
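A hedged reconstruction of that fix (the names u#triggertest, triggerTest, a and b come from the article; the exact body is assumed):

CREATE TRIGGER u#triggertest
NO CASCADE BEFORE UPDATE ON triggerTest
REFERENCING OLD AS o NEW AS n
FOR EACH ROW
-- "a" is derived from the OLD value of "b" before the update is applied,
-- so no further trigger activation can cascade
SET n.a = o.b + 1;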

Explanation: the original value of “a” was set to one and “b” was set to two. By issuing the update statement <<update triggerTest set b = 3>> the value of “b” changes to three. In the trigger code we specify that “a” must become the old value of “b”, which was two, plus one, so “a” becomes three.

4.3.3. Exposition Three


A third use case for before triggers is when the triggered action has to be applied before any changes caused by the actual update of the subject table are applied to the database.

An example would be that you have a table that contains a column that is used for audit reasons. The column needs to contain a validation value hashed with the end-user ID of the person who modified the row. In the table definition the column is defined as NOT NULL and the code to determine the default is too complex to put in the table definition. The business has also decided that the logic needs to be part of the database to make sure that the same rules are applied regardless of the client issuing the insert/update.

The code to determine the default will have to be in a before trigger, because the code of a before trigger is executed before the original insert/update statement is applied to the database. If the code to determine the default stayed in an after trigger, the original insert would be executed first and fail because of the null value for the column. For an after update trigger, it would mean that the row is first updated and then updated again in the after trigger with the correct logic for that column.
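A hedged illustration of such a trigger; the table, the column and the placeholder logic standing in for the customer’s complex hashing code are my own:

CREATE TRIGGER fill_audit
NO CASCADE BEFORE INSERT ON customer
REFERENCING NEW AS n
FOR EACH ROW
WHEN (n.audit_val IS NULL)
  -- placeholder for the real hashing logic: the NOT NULL column is filled
  -- before the INSERT is applied, so the INSERT no longer fails
  SET n.audit_val = SESSION_USER CONCAT '#' CONCAT CHAR(CURRENT TIMESTAMP);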

Conclusion: In examples one and two we noticed that AFTER statement-level triggers cause cascading. The third example illustrates that we cannot avoid the usage of BEFORE triggers altogether. After reaching this conclusion we had to say farewell to the idea of translating all Sybase triggers into DB2 AFTER statement-level triggers.

5. Trigger Classification


What has not been discussed yet is that in Sybase you can have mixed triggers, meaning that the same trigger can be activated in response to an insert, update or delete on the subject table. This is possible in DB2 as well, but only for compound SQL (compiled) triggers. Remember that those are only for row-level triggers, and that involves too much manual intervention to translate all of the SQL statements. Not that we are work-shy, but each manual intervention means risk of human error and would require a long code freeze. The goal is to limit the code-freeze period for switching from Sybase to DB2 as much as possible: the customer can afford a month of code freeze for non-critical code, and must be able to update the critical code until the very last moment.

Another option in Sybase is the update predicate (if update(column)), which can be used within a Sybase trigger to test whether a particular column’s value has been modified by the triggering SQL statement. The online documentation I have read so far always suggests translating that as a check whether the old transition variable differs from the new transition variable. For performance reasons, and in order to have the same behavior in the case of null values, it is better to move the test into the trigger-event clause of the update trigger. This implies, however, that the entire code block that is executed when the update predicate is true needs to be split off into a new DB2 trigger, which contains only the code block used within the original Sybase trigger, as sketched below.
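A hedged sketch of that split (the names are my own):

-- Sybase side: a block guarded by the update predicate
--   if update(status)
--   begin
--     /* code block A */
--   end
-- DB2 side: the test moves into the trigger event clause, and the guarded
-- block becomes the whole body of a separate trigger
CREATE TRIGGER orders_u_status
AFTER UPDATE OF status ON orders
REFERENCING NEW TABLE AS t_ins
FOR EACH STATEMENT
BEGIN ATOMIC
  -- only code block A, translated, lives here
  INSERT INTO status_log SELECT order_id, status FROM t_ins;
END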

Why do we have to classify triggers? In the previous sections you can find some of the reasons, which boil down to the fact that in Sybase ASE there is one type of trigger and there is a lot of flexibility in what you can code within the T-SQL body of the trigger. It becomes clear that having an engine that translates line by line is not going to work well if you have to translate about 400,000 lines of code where some of the objects contain up to 1,500 lines. In DB2 we have the necessity:

• to mix “before” with “after” triggers
• to mix statement-level for the after triggers with row-level for the before triggers
• to be able to split code into multiple triggers in case there are update predicates involved

In our migration project we decided to use a mechanism that classifies the triggers so that we can apply different rules for the different types of triggers. It allows us to give our ITK and Trigger Convert Engine hints on how to translate the Sybase trigger, which gives us the possibility to blend the different types of DB2 triggers. Let’s have a look at the classifications used:

[Figure: the trigger classification table]

We started with translating the triggers of classes 10, 20 and 30, because they were the most straightforward to translate and the translation of one Sybase trigger resulted in one DB2 trigger.

For the sub-division of class 40 our Trigger Convert Engine has the intelligence to figure out what parts need to be split off to the indicated trigger type.

Class 50 means that we injected an annotation (an annotation is a comment that we were allowed to put in the Sybase code; those comments are interpreted by both the ITK and the Trigger Convert Engine). The triggers of class 50 need to pass twice through the Trigger Convert Engine: in the first pass they are re-classified into classes 10-40, and during the second round they are treated as simple triggers of the respective class.

Class 61 is a special type of trigger: in Sybase these contain the statement “return” on the first line, followed by a lot of other logic. Why create such triggers, you may ask? Well, originally the rest of the code was executed, but it has temporarily been put at rest. We need to translate the entire trigger in case the original business logic has to be activated again.

Class 70 contains most of the annotations in order to come to a good result when translating and actually running the application. They have to pass once through the ITK and several times through the Trigger Convert Engine. Below you can find an example of the annotations we had to inject:

[Figure: example of the annotations we had to inject]

I’m not going to explain all of the gory details of the annotations, as it doesn’t make much sense without having the code of the ITK and Trigger Convert Engine, and it is also a piece of our secret sauce!

Thanks to the classification and the combination of annotations, the customer can continue to modify the Sybase sources during the migration project. This is necessary in a large-scale migration process.

6. Wrap-up and Other Not So Correlated Truths


Over the past decade I have become an experienced traveler; mileage and different languages have become trivial. However, I still remember my first trip from Brussels to San Jose back in 2000. Weather conditions in Chicago (layover) caused several planes to land simultaneously, which was a precursor to queuing at immigration services. With only 25 minutes left and having to move between several terminals, I would have appreciated some guidelines. Most people in my immediate proximity seemed to speak only Spanish (which I didn’t expect and unfortunately still don’t speak). I got my connecting flight in time thanks to the airport trains, but once airborne I felt a little bit uncomfortable. This isn’t anything a Coke Light can’t fix, I thought, but when I asked the flight attendant she didn’t understand. How could I have known that I should have asked for a Diet Coke?

During the migration project I re-experienced much the same as in the anecdote above. There are several milestones in the migration project that each have their own deadlines. Often there was stress because of small and unexpected issues, like the difference in the handling of null values or the format of timestamps, that seemed Chinese to me, which I regret I don’t speak either. At times we needed help from IBM in order to stay on track, and our train has absolutely proven to be the cooperation with the customer, which provided us with the much-needed guidelines. I have learnt that there still is a difference: Sybase print can be directly translated into dbms_output.put, but if you try that with a JDBC client you get “nice try, but no Coke Light nor Diet Coke.”

We encountered a lot of issues where the existing literature suggests translating a Sybase function into similar existing DB2 functions. When we migrated the data, we noticed that we had to use different data domains in DB2 compared to what is suggested in the public literature on replicating from Sybase to DB2. In a large-scale migration project, simply translating one to one isn’t going to get you to the destination like an experienced traveler.

The majority of the lessons learned have been achieved thanks to the extreme level of testing we conducted. The test scenarios have been provided by the customer, and a lot of those scenarios are built using a methodology we discussed together at the beginning of the project. Talking about the tests and how we collect and compare results would take me too far today, but I believe there will be a focus topic around testing later this year.

7. Acknowledgements


I would like to thank the entire IDUG Content Committee, Eddy Coppens and IDUG Headquarters for reviewing and putting the article online and Hans Miseur (who is always there to listen and confer with).

Without the SuadaSoft team this article wouldn’t have been possible either; Patrick Ericx (our ITK developer), Herman Cop (our Trigger Convert Engine developer) and Wim Savonet (our team lead, who also provided the pictures used) have my special thanks.

From IBM I thank Camalla Haley Baruwa (Migration Consultant, for teaching me the dances she danced before), William Kulju (DB2 product manager, for assisting me in defining the best practices and bringing me in contact with his colleagues in the IBM labs) and Curt Cotner (distinguished engineer, for allowing us to use the same methodology as the old-time MTK and for his cooperation in developing our test driver).

And last but not least, I would like to thank the customer with a quote from a friend, Laura “thank you for the challenge; it was my source of inspiration and creativity.”

8. References


MEET DB2 Tool

Gartner IT Key Metrics Report

SQL Statements That Can Be Executed in Routines
