DB2 11 Application Developer for z/OS - Part 10 - Advanced Programming Topics

One could say that we are almost at the end of our journey. The DB2 11 Application Developer for z/OS certification test consists of 10 sections, so this post should be the end of the story, right? Well, sort of.

  • First, thank you if you read these blogs, hopefully you found them interesting and useful for future reference. Indeed, this is the last posting in this series. Please remember that you can collect the links on the IBM Knowledge Center and create a PDF from your collection.
  • Secondly, if you are going to take the exam, I wish you good luck; hopefully the links helped you a bit to study the relevant material.
  • And last, but not least, this is only a beginning, in my opinion. The learning process almost never ends, so keep educating yourself as long as you like :-)

Let me start this blog post with a list of links to the previous blog entries:

  1. Part 1 covers the Database objects basics
  2. Part 2 shows how to retrieve and manipulate the data
  3. Part 3 is about the distributed access to DB2
  4. Part 4 shows how to code SQL in application programs
  5. Part 5 discusses the processing options that affect the program design
  6. Part 6 brings performance considerations
  7. Part 7 defines and discusses Units of work and restart processing
  8. Part 8 gives some hints for application design and lifecycle
  9. Part 9 shows how to access non-structured data

The last part of the test covers Advanced Programming Techniques - working with stored procedures and user-defined functions, triggers, temporal tables, temporary tables, column masks, and more. There is a lot of material to cover, so let's try to summarize it in the following sections.

Ability to work with Stored Procedures

  • Stored procedures are a great feature to simplify the application logic as the code can be written once and called from different parts.
  • Therefore, probably the main use case for an application developer is calling a stored procedure directly from the program: assigning values to the input parameters (IN and INOUT), processing the output parameters (OUT and INOUT), and possibly retrieving result sets using locators. Keep in mind that if an OUT parameter is not set within the procedure, NULL is returned, while if an INOUT parameter is not set within the procedure, its input value is returned.
  • There are also examples of how to call a stored procedure from C, COBOL, and Java.
  • You can learn more details about stored procedures in the section called Implementing stored procedures. It not only contains a nice introduction to stored procedures and their benefits, but also shows how to create a stored procedure, how to call it from an application program, and how to debug and configure stored procedures.
  • To create a stored procedure, you can pick one of three flavors:
  1. External stored procedures are implemented in an external programming language (Java, C, COBOL, PL/I, REXX, HLASM). The Application programming guide shows the steps needed to create such a procedure.
  2. External SQL procedures are written in SQL, but are created, implemented, and executed like external procedures: they are converted to C programs and compiled like the external procedures.
  3. Native SQL procedures are also written in SQL, but are created by a single SQL statement and do not have any associated external program. These procedures are executed as SQL bound into a package. See more details about the differences between external and native procedures. Native SQL stored procedures can also exploit the SQL control statements.
  • DB2 comes with a set of supplied stored procedures. It is worth reviewing them as they can be very helpful even in application programs.
  • The administrative tasks for stored procedures include:
  • There is also a great Redbook about Stored Procedures - Stored Procedures: Through the CALL and Beyond. It might help you even if it is written for DB2 9.
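To make the parameter-passing discussion above concrete, here is a minimal sketch of a native SQL procedure and a call to it. The names (UPDATE_SALARY, EMP, SALARY, EMPNO) are hypothetical and only illustrate the IN/OUT mechanics:

```sql
-- A native SQL procedure: created by a single SQL statement,
-- no external program. Names and logic are illustrative only.
CREATE PROCEDURE UPDATE_SALARY
       (IN  P_EMPNO  CHAR(6),
        IN  P_RATE   DECIMAL(5,2),
        OUT P_SALARY DECIMAL(9,2))
  LANGUAGE SQL
BEGIN
  UPDATE EMP
     SET SALARY = SALARY * P_RATE
   WHERE EMPNO = P_EMPNO;
  -- set the OUT parameter; if we did not, the caller would see NULL
  SELECT SALARY INTO P_SALARY
    FROM EMP
   WHERE EMPNO = P_EMPNO;
END
```

From an embedded-SQL program the call would look like `EXEC SQL CALL UPDATE_SALARY (:EMPNO, :RATE, :SALARY);`, with host variables supplying the IN values and receiving the OUT value.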

Ability to make use of UDF's

  • The introduction to User defined functions (UDFs) describes a UDF as a routine that can be invoked from other SQL statements and that returns a value (scalar) or a table.
  • More details about functions in general, including the types of UDFs, function invocation, and function resolution, can be found in the SQL reference.
  • This is how you can invoke a UDF from an application program. Make sure that DB2 invokes the intended function - you can verify this in DSN_FUNCTION_TABLE after running EXPLAIN.
  • There are cases when DB2 casts the arguments passed to UDFs.
  • DB2 also comes with a set of built-in functions, which you can call from your applications.
  • The procedure for defining a UDF depends on the type of the function. You can refer back to the previous blog (part 2), which shows how to create a User Defined Function and lists the different types of functions.
  • Also see the UDF samples, which are shipped with DB2.
  • The administrative operations for UDFs are:
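As a small illustration of the scalar flavor described above, here is a sketch of an SQL scalar UDF and its invocation in a query. The function and table names (KM_TO_MILES, TRIPS, DISTANCE_KM) are made up for the example:

```sql
-- An SQL scalar UDF: takes one value, returns one value.
CREATE FUNCTION KM_TO_MILES (KM DOUBLE)
  RETURNS DOUBLE
  LANGUAGE SQL
  RETURN KM * 0.621371;

-- Invoked like any built-in function; after EXPLAIN,
-- DSN_FUNCTION_TABLE shows which function DB2 resolved to.
SELECT TRIP_ID, KM_TO_MILES(DISTANCE_KM) AS DISTANCE_MILES
  FROM TRIPS;
```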

Ability to use Triggers

  • Triggers are sets of rules (in SQL) that are activated when an insert, delete, or update operation occurs on a specified table or view. If you are not familiar with triggers, read more in the introduction to DB2 and also see the ways to define triggers.
  • There is a nice overview of triggers in the Application programming and SQL guide. It describes the main use cases of triggers - controlling and monitoring data changes in DB2, and also shows the parts of the triggers such as trigger name, subject table or view, activation time (before or after), triggering event (insert, update, delete), granularity (for each row, or for the statement), transition variables (old, new data) and tables, and triggered action (condition and body).
  • You can define a trigger using the CREATE TRIGGER statement.
  • You can even define triggers on tables with XML columns, but there are a few restrictions.
  • The body of a trigger can include only one SQL statement; if you need to perform more actions or some advanced logic within a trigger, you can invoke user-defined functions or stored procedures from within the trigger body.
  • You can also learn more about DB2 triggers in the DB2 for z/OS Application Programming topics Redbook.
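A minimal sketch of the trigger parts listed above (name, subject table, activation time, triggering event, granularity, triggered action); the tables and columns (EMP, COMPANY_STATS, NBEMP) are hypothetical:

```sql
-- AFTER trigger on INSERT, fired once per inserted row.
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP           -- activation time + event + subject table
  FOR EACH ROW MODE DB2SQL      -- granularity
  BEGIN ATOMIC                  -- triggered action
    UPDATE COMPANY_STATS
       SET NBEMP = NBEMP + 1;
  END
```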

Ability to manage relationships between tables using Referential Integrity

  • Referential integrity is an important concept in relational databases in general. Make sure to review the Database design notes, which discuss logical and physical design in general. Do not forget about normalization of the data.
  • The Application programming guide discusses referential integrity from the developer's perspective.
  • Referential integrity is defined using parent and foreign keys, which establish the relationship between the parent and dependent tables, and by a set of rules (insert, update, and delete rules), which enforce the integrity.
  • One could also ask oneself - how do you maintain referential integrity when using data encryption? Here are some tips.
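The parent/foreign key relationship and a delete rule can be sketched in DDL like this; the tables (DEPT, EMP) and the choice of ON DELETE SET NULL are illustrative:

```sql
-- Parent table: the primary key is the parent key.
CREATE TABLE DEPT
  (DEPTNO   CHAR(3)     NOT NULL PRIMARY KEY,
   DEPTNAME VARCHAR(36) NOT NULL);

-- Dependent table: the foreign key references the parent key,
-- and the delete rule says what happens to dependents when a
-- parent row is deleted.
CREATE TABLE EMP
  (EMPNO  CHAR(6) NOT NULL PRIMARY KEY,
   DEPTNO CHAR(3),
   CONSTRAINT EMP_DEPT FOREIGN KEY (DEPTNO)
     REFERENCES DEPT (DEPTNO)
     ON DELETE SET NULL);
```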

Ability to work with Temporal and Archive Tables

  • Temporal tables (introduced in DB2 10 and improved in DB2 11) and archive tables (introduced in DB2 11) can be very helpful in simplifying applications, because tasks that previously had to be managed by the application logic can now be handled by DB2 itself.
  • There are three types of temporal tables - system, business (sometimes called application temporal), and bitemporal tables. You can find more details in the SQL reference.
  • The administration guide discusses temporal tables in more detail. It defines a temporal table in general, and it shows:
  • From the application developer perspective, the most important thing is how to query temporal tables using the period specification of the table reference, or using the CURRENT TEMPORAL SYSTEM_TIME or CURRENT TEMPORAL BUSINESS_TIME special registers (new in DB2 11).
  • For the application period tables, the UPDATE and DELETE statements allow updating and deleting of rows based upon a business period. See the period clause (FOR PORTION OF BUSINESS_TIME) of these statements.
  • An archive table stores old (deleted) rows from an archive-enabled table.
  • The administration guide shows how to create an archive table.
  • When querying the archive-enabled table (also called the base table), you can specify whether you want to include the rows from the archive table using the GET_ARCHIVE built-in global variable.
  • When deleting rows from the base table, you can specify whether a copy should be stored in the archive table using the MOVE_TO_ARCHIVE global variable.
  • Also, have a look at two recent great blogs about temporal and archive tables at idug.org: Dan Luksetich wrote an article about the complexity of time travel queries, Kurt Struyf and Martin Hubel show how to manage data over time.
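The query-side features above can be sketched as follows; the tables (POLICY, ORDERS) are hypothetical, and the global variables are assumed to live in the SYSIBMADM schema as in the DB2 11 documentation:

```sql
-- Time travel: query a system-period temporal table as of a
-- point in time via the period specification.
SELECT POLICY_ID, COVERAGE
  FROM POLICY
   FOR SYSTEM_TIME AS OF TIMESTAMP '2016-01-01 00:00:00';

-- Archive transparency: include archived rows when querying
-- the archive-enabled (base) table.
SET SYSIBMADM.GET_ARCHIVE = 'Y';
SELECT * FROM ORDERS;

-- Have DB2 move deleted rows into the archive table.
SET SYSIBMADM.MOVE_TO_ARCHIVE = 'Y';
DELETE FROM ORDERS WHERE STATUS = 'CLOSED';
```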

Ability to work with MQTs

  • Materialized query tables (MQTs) improve the performance of complex queries on large data by pre-computing the results and as such they may be very helpful especially for warehouse applications.
  • The definition of an MQT includes a query in the form of a subselect, as can be seen in the CREATE TABLE statement.
  • There are two types of MQTs: system-managed and user-managed.
  • The REFRESH TABLE statement refreshes the data in an MQT. This is the only way to refresh a system-managed MQT; for a user-managed MQT you can use INSERT, UPDATE, DELETE, TRUNCATE, MERGE, or LOAD. REFRESH TABLE is not recommended for user-managed MQTs.
  • The queries in the application programs can be rewritten by the DB2 optimizer using Automatic Query Rewrite (AQR) to use a MQT if the query would cost less.
  • Make sure your queries are eligible for AQR so that DB2 can consider them.
  • You can check whether an MQT will be used by looking into the PLAN_TABLE: TABLE_TYPE='M' indicates that the table used is an MQT.
  • Application programmers can enable AQR by two special registers: CURRENT REFRESH AGE and CURRENT MAINTAINED TABLE TYPES.
  • The Managing performance guide contains several hints and tips on how to create an MQT, populate system-managed and user-managed MQTs, and enable the table for AQR.
  • It also gives some recommendations for MQT and base table design and shows examples shipped with DB2.
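Putting the pieces above together, here is a sketch of a system-managed MQT, its refresh, and the special registers that open it up to AQR; the tables and columns (SALES, SALES_SUMMARY, REGION, AMOUNT) are made up:

```sql
-- System-managed MQT over a subselect.
CREATE TABLE SALES_SUMMARY AS
  (SELECT REGION, SUM(AMOUNT) AS TOTAL
     FROM SALES
    GROUP BY REGION)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY SYSTEM
  ENABLE QUERY OPTIMIZATION;

-- The only way to populate/refresh a system-managed MQT.
REFRESH TABLE SALES_SUMMARY;

-- Let AQR consider (possibly stale) system-maintained MQTs.
SET CURRENT REFRESH AGE ANY;
SET CURRENT MAINTAINED TABLE TYPES SYSTEM;
```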

Ability to work with Temporary Tables

  • Temporary tables can be useful for storing the intermediate results.
  • Make sure to understand the distinctions between base tables and temporary tables.
  • DB2 provides two kinds of temporary tables - created global temporary tables (CGTTs) and declared global temporary tables (DGTTs). One of the main differences between them is that CGTTs have a permanent and sharable description in the DB2 catalog, while DGTTs have none.
  • Declared global temporary tables can be logged or not logged; created global temporary tables are always not logged.
  • A created temporary table is defined using the CREATE GLOBAL TEMPORARY TABLE statement. You can even set the default statistics for this kind of table.
  • A declared temporary table is created within the application using the DECLARE GLOBAL TEMPORARY TABLE statement. The qualifier for a DGTT is SESSION.
  • There are only a few utilities that you can run against temporary tables.
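A minimal DGTT sketch, showing the SESSION qualifier mentioned above; the table and source names (TEMP_ORDERS, ORDERS) are hypothetical:

```sql
-- Declared in the application; the definition lives only for
-- the duration of the session, not in the DB2 catalog.
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_ORDERS
  (ORDER_ID INTEGER,
   TOTAL    DECIMAL(9,2))
  ON COMMIT DELETE ROWS
  NOT LOGGED;

-- Use it like any table, always qualified with SESSION.
INSERT INTO SESSION.TEMP_ORDERS
  SELECT ORDER_ID, TOTAL
    FROM ORDERS
   WHERE STATUS = 'OPEN';
```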

Ability to work with Data Masking techniques

  • Data masking techniques provide mechanisms to secure confidential data. In DB2 you can use row-level and column-level access control, implemented as row permissions and column masks.
  • A column mask describes an access control rule for a column in the form of an SQL CASE expression.
  • A column mask can be created using the CREATE MASK statement.
  • To secure your data, you may also consider data encryption and encrypting the data using DB2 built-in functions.
  • A row permission describes a row access control rule for a table in the form of an SQL search condition. The important thing is that a row permission acts as a filter applied to the table before any other operation. It also ensures that a row to be inserted or updated conforms to the definition (similar to the WITH CHECK OPTION clause for a view).
  • A row permission can be created using the CREATE PERMISSION statement.
  • Before implementing column masks, make sure you fully understand the possible shortcomings. I wrote a blog post with an example of how to circumvent a trivial column mask.
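A sketch of the two access-control constructs discussed above; the table (EMP), column (SALARY), and the authorization groups (PAYROLL, HR) are hypothetical:

```sql
-- Column mask: a CASE expression decides what each user sees.
CREATE MASK EMP_SALARY_MASK ON EMP
  FOR COLUMN SALARY RETURN
    CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
         THEN SALARY
         ELSE NULL
    END
  ENABLE;

-- Row permission: a search condition filters rows before
-- any other operation is applied.
CREATE PERMISSION EMP_ROW_ACCESS ON EMP
  FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'HR') = 1
  ENFORCED FOR ALL ACCESS
  ENABLE;

-- Neither takes effect until access control is activated.
ALTER TABLE EMP ACTIVATE COLUMN ACCESS CONTROL;
ALTER TABLE EMP ACTIVATE ROW ACCESS CONTROL;
```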

Emil Kotrc
CA Technologies


Nice collection of materials for the exam

December 30, 2016 02:36 PM by Monickaraj Duraipandi

I want to thank Emil Kotrc for this wonderful blog of study material collection. I passed the DB2 11 Developer for z/OS exam today by reading through the materials mentioned in this blog! This blog was very useful to prepare for the exam, and I also read DB2 11 Exam C2090-313 Practice Questions by Robert N. Wingate in preparation. Thank you for this nice blog.


Thanks Emil Kotrc for the wonderful blog

September 24, 2017 01:28 AM by Harish Kumar Mummidisetty

Firstly, I want to thank Emil Kotrc for the blog, which is so informative. I cleared my certification exam yesterday by going through this blog for just 4 hours.
