Why Use Routines in DB2

Why do we have routines in DB2? They allow us to include custom functionality inside our databases. But why do we need them? Should we use them? In this era of application-centric development, many developers tend to think of databases as simply a place to keep their information between invocations of the application. Agile developers want to define their own tables and columns based on their specific needs. On the other side, the data science and analytics trends look at most data as simple files. The business world is much more complicated than that, and we need to straddle these trends.

The ability to write our own routines makes DB2 an integral part of our solutions rather than just a simple data layer. With the addition of stored procedures, triggers, and user-defined functions, we have a fully active database. We can extend our databases in ways we haven’t even thought of yet. You can see this in how IBM has exploited these features to enhance the way we administer our databases and monitor performance.

Stored Procedures

It all started with stored procedures. They were the initial step that enabled the placement of simple and complex logic inside the database. In the early days of client/server computing, the client programs ran on desktop computers and connected directly to the database to retrieve and update data. Stored procedures introduced server-based control logic to ensure our data was treated consistently regardless of the client logic.

Their use also alleviated some major security concerns with simple 2-tier applications. The programs executing on the client could be compromised, allowing almost unlimited potential for bypassing application logic. In fact, in those 2-tier applications, the end user had to have data access privileges against our databases, allowing them to bypass the application entirely with ad hoc query tools. Keeping critical logic on the database server prevented inappropriate client access.

Today, stored procedures continue to give us that central point of control. When we need to support diverse client platforms, stored procedures can isolate our applications from database changes. Application servers may run in the data center or in the cloud. We can write those stored procedures in SQL or in high-level languages.
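
For example, a funds transfer can be enforced in one place on the server. Here is a minimal SQL PL sketch; the TRANSFER_FUNDS procedure and the ACCOUNTS table it updates are hypothetical names used only for illustration.

CREATE PROCEDURE TRANSFER_FUNDS (IN P_FROM_ACCT INTEGER,
                                 IN P_TO_ACCT   INTEGER,
                                 IN P_AMOUNT    DECIMAL(11,2))
  LANGUAGE SQL
BEGIN
  -- Both updates run under server-side control, regardless of the client
  UPDATE ACCOUNTS SET BALANCE = BALANCE - P_AMOUNT WHERE ACCT_ID = P_FROM_ACCT;
  UPDATE ACCOUNTS SET BALANCE = BALANCE + P_AMOUNT WHERE ACCT_ID = P_TO_ACCT;
END

A client invokes it with a simple CALL TRANSFER_FUNDS(1001, 2002, 250.00) and never needs direct UPDATE privileges on the underlying table.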

The primary limitation of stored procedures stems from the fact that they are, at their core, distributed programs called from a client: there must be a client program that makes the call.

User-Defined Functions

The relational model includes the concept of custom domains for columns. Domains extend the basic data types, and DB2 implemented them with user-defined distinct data types. To translate data to and from columns with a distinct data type, you need functions to handle the appropriate conversions. These can and should be simple SQL-based functions for optimal performance, but we can build on them to provide fairly complex functionality as well.
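
As a simple illustration, suppose we define a distinct type for temperatures. The CELSIUS type and FAHRENHEIT function below are hypothetical, and whether the WITH COMPARISONS clause is needed depends on the platform and version.

CREATE DISTINCT TYPE CELSIUS AS DECIMAL(5,2) WITH COMPARISONS;

-- Simple SQL scalar function that converts the distinct type,
-- casting it back to its base type for the arithmetic
CREATE FUNCTION FAHRENHEIT (P_TEMP CELSIUS)
  RETURNS DECIMAL(5,2)
  LANGUAGE SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  RETURN (CAST(P_TEMP AS DECIMAL(5,2)) * 9 / 5) + 32;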

In DB2 for z/OS, we can use the IBM-supplied MQSEND function to place a message on a WebSphere MQ queue. We can take this basic concept and apply it to other message transport products (like Kafka) with a little bit of work. This can provide a performance benefit because we don’t have to transport the data from the database to the application server and then format the messages.
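
As a rough sketch of that usage, the statement below sends one message per qualifying row. The ORDERS table and its columns are hypothetical, and the schema that qualifies MQSEND, along with the service name it accepts, depends on the platform and the MQ configuration.

SELECT MQSEND('MYCO.ORDER_SERVICE',
              'Order ' CONCAT CHAR(ORDER_NO) CONCAT ' shipped')
  FROM ORDERS
 WHERE SHIP_DATE = CURRENT DATE;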

XML support in DB2 started with functions and the newer JSON features use functions as well. Support for user-defined functions has enabled IBM developers to enhance DB2 quickly with features that support non-relational technologies. We are just starting to see the benefits of integrating products from the big data area and we don’t need to wait for DB2 to supply exactly what we need.

User-Defined Table Functions

User-defined table functions boost the usefulness of programmable routines in DB2 to a whole new level. With table functions, our routines supply data to the running SQL statement as if it existed in a table. We can join that data with other tables and use the full power of DB2.
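
Here is a minimal SQL sketch of the idea; the DEPT_EMPLOYEES function and the EMPLOYEE table it reads are hypothetical.

CREATE FUNCTION DEPT_EMPLOYEES (P_DEPT CHAR(3))
  RETURNS TABLE (EMPNO    CHAR(6),
                 LASTNAME VARCHAR(15),
                 SALARY   DECIMAL(9,2))
  LANGUAGE SQL
  READS SQL DATA
  NO EXTERNAL ACTION
  DETERMINISTIC
  RETURN
    SELECT EMPNO, LASTNAME, SALARY
      FROM EMPLOYEE
     WHERE WORKDEPT = P_DEPT;

-- Referenced in the FROM clause like any other table
SELECT E.LASTNAME, E.SALARY
  FROM TABLE(DEPT_EMPLOYEES('D11')) AS E
 ORDER BY E.SALARY DESC;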

XMLTABLE is a good example of a table function: it pulls data elements from an XQuery expression and presents them as normal relational columns. DB2 for LUW supplies many table functions, like MON_GET_ACTIVITY_DETAILS, that pull performance data from memory blocks and make it available for analysis with any tool you’d like.
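
A hedged example of that pattern, assuming a hypothetical ORDERS table with an XML column named ORDER_DOC, might look like this:

SELECT O.ORDER_NO, X.ITEM_ID, X.QTY
  FROM ORDERS O,
       XMLTABLE('$d/order/line' PASSING O.ORDER_DOC AS "d"
                COLUMNS
                  ITEM_ID VARCHAR(10) PATH '@id',
                  QTY     INTEGER     PATH 'quantity') AS X;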

Generic table functions are even more flexible. They allow you to specify the output table definition at the time you reference the function instead of when you create it. This enables dynamic functions that can act as interfaces to Hadoop, Spark, or other data sources.
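
Sketching just the invocation side (the function itself would be an external routine, typically written in Java, and READ_FEED and its columns are hypothetical), the output columns are declared right in the query:

SELECT T.CUST_ID, T.ORDER_TOTAL
  FROM TABLE(READ_FEED('sales_feed'))
       AS T (CUST_ID     INTEGER,
             ORDER_TOTAL DECIMAL(11,2));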

We’ve barely scratched the surface of what is possible with table functions. There are so many possibilities.

Conclusion

We could probably ignore routines entirely and still build our applications. After all, we just need unlimited resources and we can do almost anything. That doesn’t mean we should. We’ll have a lot of data in DB2 for the foreseeable future. With the big data bubble, we’re seeing a plethora of alternate data storage options and functional ways to deal with data. Routines let our programs operate on the data inside DB2 so that we don’t have to send it across the network. The ability to create our own routines in DB2 is a feature we should take advantage of where it makes sense.

 
