
DB2 SQL Enhancements You May Have Missed

By Fred Sobotka posted Jan 01, 2016 11:55 PM

  

Chances are that your business applications and the DB2 LUW databases that power them were built before IBM released DB2 10.1 in April 2012. While it’s no secret that the last two DB2 LUW releases delivered substantial performance gains that make upgrading to DB2 10.1 or even 10.5 a no-brainer, they also offer several new and improved SQL features, many of which can benefit existing DB2 applications with minimal disruption to their design. This article highlights some relatively approachable SQL enhancements that, once adopted, could at least simplify if not accelerate older DB2 applications.

LTRIM and RTRIM learned a new trick

The LTRIM and RTRIM scalar functions gained a second argument in DB2 10.1 and can now trim anything you want from the beginning or end of a string (Example 1). This should reduce the need for additional function calls that would otherwise have been necessary for trickier string edits. Mainframe shops are more likely to be familiar with the latest LTRIM and RTRIM, having first encountered them back in DB2 Version 9.1 for z/OS. The classic, single-argument LTRIM and RTRIM functions (SYSFUN schema) are still available for applications to use, and are the only option for trimming CLOB values.

VALUES LTRIM(' $     +409.320', x'090a0d' || ' $¥£€¤+');

1
------------------
409.320

1 record(s) selected.

Example 1. LTRIM (SYSIBM schema) is called to remove multiple leading characters (whitespace, currency symbols, and the plus sign). The three-byte hexadecimal constant in the second argument specifies tab, newline, and carriage return. The new LTRIM and RTRIM functions are Unicode-aware and can handle valid UTF-8 data (including multi-byte characters) in any of their arguments.
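
RTRIM gained the same optional second argument for trailing characters. As a quick sketch (the input value here is hypothetical), the following call strips trailing blanks and a trailing currency abbreviation; note that each character in the second argument is treated as a separate member of the trim set:

VALUES RTRIM('409.320 kr   ', ' kr');
-- returns '409.320'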


More manageable triggers

In DB2 9.7, a single trigger could be defined to handle multiple DML operations (any non-repeating combination of INSERT, UPDATE, or DELETE), but there wasn’t a straightforward way to determine which specific operation caused the trigger to fire. If you wanted a multi-event trigger to use different values for, say, an INSERT operation than for a DELETE, the trigger’s block of compound SQL had to examine the old and new versions of the row and look for NULL values in the primary key columns to infer which operation occurred. Not surprisingly, that sort of logic was often implemented instead as separate triggers containing more straightforward SQL. DB2 10.1 allows multi-event triggers to take entirely different courses of action based on user-defined criteria such as the triggering event (Example 2), which is easily obtained via three new event predicates: INSERTING, DELETING, or UPDATING.

CREATE OR REPLACE TRIGGER lpstore.sales_activity
AFTER INSERT OR DELETE OR UPDATE OF sales_status ON lpstore.mediaitem
REFERENCING NEW AS n OLD AS o FOR EACH ROW
BEGIN
   IF INSERTING THEN
      INSERT INTO webshop.new_arrivals(item_id) VALUES (n.item_id);
   ELSEIF (DELETING AND (sales.sold(o.sales_status) = 0)) THEN
      UPDATE lpstore.releases r
         SET r.withdrawn = r.withdrawn + 1
       WHERE r.release_id = o.release_id;
   ELSEIF (UPDATING AND (sales.sold(n.sales_status) = 1)) THEN
      CALL lpstore.process_commission(n.item_id);
   END IF;
END@

Example 2. A multiple-event trigger serves as a consolidation point for various DML operations on the same base table.
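
For comparison, the pre-10.1 pattern described above had to infer the operation from NULL transition values. A rough sketch of that approach (same hypothetical tables, and assuming item_id is the primary key of lpstore.mediaitem) shows why it was clumsier than the event predicates:

-- Pre-10.1 workaround: no event predicates, so test the transition
-- variables' key columns for NULLs to deduce which operation fired the trigger
CREATE OR REPLACE TRIGGER lpstore.sales_activity_old
AFTER INSERT OR DELETE ON lpstore.mediaitem
REFERENCING NEW AS n OLD AS o FOR EACH ROW
BEGIN
   IF o.item_id IS NULL THEN        -- no old row, so this was an INSERT
      INSERT INTO webshop.new_arrivals(item_id) VALUES (n.item_id);
   ELSEIF n.item_id IS NULL THEN    -- no new row, so this was a DELETE
      UPDATE lpstore.releases r
         SET r.withdrawn = r.withdrawn + 1
       WHERE r.release_id = o.release_id;
   END IF;
END@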


Leaner and more flexible table functions

User-defined table functions (UDTFs) pose design challenges when a set-oriented approach isn’t a good fit for the task at hand. Even seemingly modest processing logic can require unwieldy, multi-layered SQL queries before it can be expressed as a table function. What’s more, all of the queries, loops, and other SQL processing inside a traditional UDTF must run to completion before any results are returned to the caller, which makes proper tuning all the more important. Developers wrestling with these issues would do well to explore pipelined table functions, a new type of UDTF in DB2 10.1 that is geared towards loop-based processing. Unlike traditional UDTFs, pipelined table functions return their output a row at a time via the PIPE statement (Example 3), and are therefore able to exit early if the caller has stopped consuming rows.

CREATE OR REPLACE FUNCTION unshipped_items(
   boxtype CHAR(3), destination CHAR(3))
RETURNS TABLE (item_id VARCHAR(50),
   weight_kg DECIMAL(6,2), ucc128label BLOB(128 K))
LANGUAGE SQL
BEGIN
   DECLARE item_id VARCHAR(50);
   DECLARE item_kg DECIMAL(6,2);
   DECLARE AT_END INTEGER DEFAULT 0;
   DECLARE C1 CURSOR FOR SELECT i.item_id, i.ship_kg
      FROM inv.solditems i
      WHERE i.shipdate IS NULL
        AND i.ship_to = destination
        AND i.shipbox = boxtype
      ORDER BY ship_kg ASC;
   DECLARE CONTINUE HANDLER FOR SQLSTATE '02000'
      SET AT_END = 1;

   OPEN C1;
   MYLOOP: LOOP
      FETCH C1 INTO item_id, item_kg;
      IF AT_END = 1 THEN
         LEAVE MYLOOP;
      END IF;

      -- Return the current row to the caller
      PIPE (item_id, item_kg, getBarcode(item_id));
   END LOOP;
   CLOSE C1;
   RETURN;
END@

Example 3. Function unshipped_items() is a pipelined UDTF that iterates over a cursor and calls an external UDF to generate a barcode image for each fetched row. Because pipelined UDTFs can exit early, calling unshipped_items() and applying a filter such as FETCH FIRST 10 ROWS ONLY would result in no more than ten internal calls to getBarcode(). By contrast, a traditional, non-pipelined UDTF would ultimately call getBarcode() for every row returned by its internal query against inv.solditems, regardless of how few or how many rows the caller actually consumed.
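
One way to take advantage of that early exit when calling the function (the box type and destination codes below are hypothetical):

-- Only the ten lightest unshipped items are produced, so getBarcode()
-- is invoked no more than ten times
SELECT t.item_id, t.weight_kg, t.ucc128label
FROM TABLE(unshipped_items('LRG', 'ORD')) AS t
FETCH FIRST 10 ROWS ONLY;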


A toolbox of built-in global variables

Determining the who, what, and where of the current session/connection became a lot easier in DB2 10.1 with the addition of several global variables (List 1) that are accessible to any static or dynamic SQL expression:

CLIENT_HOST          CLIENT_IPADDR        CLIENT_ORIGUSERID
CLIENT_USRSECTOKEN   MON_INTERVAL_ID      NLS_STRING_UNITS
PACKAGE_NAME         PACKAGE_SCHEMA       PACKAGE_VERSION
ROUTINE_MODULE       ROUTINE_SCHEMA       ROUTINE_SPECIFIC_NAME
ROUTINE_TYPE         TRUSTED_CONTEXT

List 1. Built-in global variables (SYSIBM schema). All of the variables listed above are session global variables except for MON_INTERVAL_ID, which is a database global variable.
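
Because they behave like any other SQL expression, these variables can be dropped into existing statements without changing application code. A brief sketch follows; the audit table is hypothetical:

-- Quick interactive check; the values returned depend on the current connection
VALUES (CLIENT_HOST, CLIENT_IPADDR);

-- Hypothetical audit insert that records where a statement came from
INSERT INTO audit.activity_log (logged_at, host, ip_addr, pkg_schema, pkg_name)
VALUES (CURRENT TIMESTAMP, CLIENT_HOST, CLIENT_IPADDR, PACKAGE_SCHEMA, PACKAGE_NAME);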

Given its unassuming name, you’d be forgiven for not recognizing MON_INTERVAL_ID as the standout item in the list. Like a DB2 sequence, MON_INTERVAL_ID is a BIGINT counter with database-level scope, and its value can only be advanced by calling a dedicated stored procedure, MON_INCREMENT_INTERVAL_ID. The difference, as you’ll soon see, lies in how two of DB2’s most powerful monitoring features (event monitors and usage lists) use MON_INTERVAL_ID to deliver a major assist to anyone capturing and analyzing performance monitoring data.

As of DB2 10.1, event monitors and usage lists now store the current MON_INTERVAL_ID number in every event record they write. By incrementing MON_INTERVAL_ID at the beginning of each download session (Example 4), a designated agent program or SQL script (one per database) could be implemented as a simple, stateless cron job that knows exactly which monitoring event records to retrieve without remembering or evaluating any details from previous invocations.

  1. Connect to DB2.
  2. Before closing the current monitoring interval, capture its value into a session variable.
    VALUES MON_INTERVAL_ID INTO MY_MON_ID;
  3. Start a new monitoring interval (but don’t use its value).   
    CALL MON_INCREMENT_INTERVAL_ID(?);
  4. Retrieve rows from all usage lists and unformatted event tables for the previous interval.
    SELECT … FROM … WHERE MON_INTERVAL_ID = MY_MON_ID;
  5. Disconnect from DB2.
  6. Publish the results to the monitoring platform, performance dashboard, ETL process, etc.
  7. Terminate.

Example 4. Overview of a stateless data collector program that periodically downloads recent DB2 monitoring event records based solely on MON_INTERVAL_ID. Without the context provided by MON_INTERVAL_ID, the data collector program would need to be stateful, preserving details from when it last ran in order to know what to retrieve for the current download session.
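
Expressed as actual SQL, steps 2 through 4 of that collector might look like the sketch below. The variable name, the schema, and the event table being queried are assumptions for illustration; the key point is simply that every captured record carries MON_INTERVAL_ID.

-- One-time setup: a global variable (session-scoped value) to hold the
-- interval being harvested
CREATE OR REPLACE VARIABLE my_mon_id BIGINT;

-- Step 2: remember the interval that is about to be closed out
SET my_mon_id = MON_INTERVAL_ID;

-- Step 3: start a new interval; the OUT parameter returns the new value,
-- which this collector does not need
CALL MON_INCREMENT_INTERVAL_ID(?);

-- Step 4: pull only the records written during the captured interval
-- (monitor.uow_event is a hypothetical event monitor target table)
SELECT *
FROM monitor.uow_event
WHERE mon_interval_id = my_mon_id;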

Think “nip and tuck”, not “brain transplant”

When it comes to older applications, developers (and management) may prefer to leave well enough alone, especially if the last DB2 upgrade went smoothly. Those willing to roll up their sleeves and retrofit some aging SQL will find a wealth of minimally invasive options in DB2 10.1 and newer, including the lesser-known features covered in this article. It’s certainly easier to justify revisiting old SQL in order to resolve a performance issue, but you should also consider tackling the app’s more complex, awkwardly written queries and routines. As the SQL modifications are deployed, watch for reductions in timeouts, deadlocks, application errors, and user complaints as signs you’re on the right track.


Fred Sobotka
