IBM released the initial version of Data Server Manager (DSM) on Friday March 20th 2015. DSM is IBM's new delivery mechanism for the monitoring, tuning and administration of IBM database environments. Initially it is available only for DB2 for LUW, supporting all currently supported versions of that product (V9.7 and above). It is bundled as part of the Advanced editions of DB2 for LUW, so if you have Advanced Enterprise or Advanced Workgroup licenses it should already be available to you via Passport Advantage. For other editions it is available as a separate extra-cost option called the Performance Management Offering (PMO). A free version, providing a subset of the functionality (it does not allow the use of a repository database for historical reporting), is also available for all DB2 users, even users of the free DB2 Express-C. One strange thing about the licensing is that it is based on the license of the database server being monitored, and there is no enforcement of this on the individual servers. So if you have a mixed environment of Advanced and other editions of DB2 you really need to set up two copies of DSM to stay compliant, or purchase the PMO for the non-Advanced servers.
Note that this is very much a first delivery, and we can expect new functionality to appear at regular intervals going forward. I am pleased to report that IBM has consulted extensively with key customers during the development of DSM, and continues to do so to prioritise new features. If you feel strongly about anything that is currently not in DSM then you should engage with your IBM contacts to get involved in this feedback process.
It is built upon WebSphere Application Server Liberty Core (http://www-03.ibm.com/software/products/en/appserv-was-liberty-core), IBM's lightweight and easy-to-deploy implementation of the Java EE 6 Web Profile. The use of the Liberty platform is actually critical to the product: it makes installation easy and, if no other option is available, allows DSM to be run on a wide range of existing hardware without causing excessive overhead. This is in direct response to feedback from many customers that earlier tools were both difficult to install and resource hungry due to their requirement for a full WAS stack.
Connectivity to gather information about the databases is over a standard JDBC connection: there is no agent to install on servers being monitored. This also makes setup easy, but does give some challenges for managing components of the DB2 infrastructure above the database level in the deployment hierarchy (instances, clusters and install images for example). There has been a lot of discussion between IBM and the Technical Advisory Board (TAB), made up of leading customers and consultants, about different ways to address this.
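To illustrate, the connection details DSM needs for each monitored database are just those of any ordinary DB2 type-4 JDBC client; the host, port and database name below are made-up examples, not values from the product:

```shell
# Standard DB2 type-4 JDBC URL format (illustrative values only):
#   jdbc:db2://dbserver.example.com:50000/SALESDB
# DSM connects with a user ID and password over a URL of this form; nothing
# needs to be installed on the monitored server beyond DB2 itself.
```

This agentless approach is what keeps setup simple, but it also explains why anything above the database level (instances, clusters, install images) is harder for the tool to see.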
Installation and Execution
One of the best things about DSM is how easy it is to get up and running without having to acquire extensive hardware. It is currently available for deployment on Windows, Linux and AIX, with fairly generous version support although it should be noted that the operating system has to be 64-bit (this isn't normally a problem any more).
The software is shipped as a compressed tarball or zip file. Choose an installation location and unpack the file into that location. A directory will be created holding all the software (on AIX and Linux this is called ibm-datasrvrmgr). It isn't necessary to install the software as root, although for a production deployment it would make sense to do so to protect the integrity of the binaries. I noticed that the unpacked software may retain the original file ownership from the tarball if you use the standard “tar -xvf” recipe; depending on your version of tar there may be options to override this. Once the software is unpacked simply run the setup script (setup.sh on Linux or AIX, setup.bat on Windows) to configure it.
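As a sketch of the ownership issue (the archive and directory names here are stand-ins for the real DSM tarball), GNU tar's --no-same-owner option makes the extracted files belong to the invoking user rather than the UIDs recorded in the archive:

```shell
# Build a dummy tarball to stand in for the DSM archive (names illustrative).
mkdir -p /tmp/dsm-demo/src /tmp/dsm-demo/install
printf '#!/bin/sh\n' > /tmp/dsm-demo/src/setup.sh
tar -cf /tmp/dsm-demo/ibm-datasrvrmgr.tar -C /tmp/dsm-demo/src .

# GNU tar: --no-same-owner assigns extracted files to the invoking user
# instead of the UID/GID stored in the archive (relevant when extracting
# as root, which otherwise preserves the archived ownership).
tar -xf /tmp/dsm-demo/ibm-datasrvrmgr.tar -C /tmp/dsm-demo/install --no-same-owner
ls -l /tmp/dsm-demo/install/setup.sh
```

Other tar implementations (for example on AIX) may spell this option differently or not offer it at all, in which case a chown after extraction achieves the same end.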
It is possible to specify configuration parameters in a setup.conf file. This would, for example, allow you to prepare the repository database during the initial setup. However, I found it easier to just run the standard install and set up the repository database afterwards (even if you configure the repository database details in setup.conf you still need to create the database manually first). The setup script asks you to accept the license agreement and then requests a user and password (twice) for logging into the application. It then starts up the application server on the default ports (11080 for standard connection and 11081 for SSL/TLS encrypted access); these ports can also be changed from inside the console once you log in.
One thing which isn't configured for you on Linux and UNIX is automatic starting, stopping or status checking of the server. However, under the bin subdirectory you will find start, stop and status scripts which can easily be incorporated into a standard automation scheme such as /etc/inittab. On Windows, setup.bat registers DSM as a standard Windows service that starts automatically when the machine boots.
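On Linux, one minimal way to hook the supplied scripts into boot-time automation is an /etc/inittab entry; the installation path and script name below are assumptions for illustration, so check the bin subdirectory of your own installation for the actual names:

```shell
# Hypothetical /etc/inittab entry: "once" runs the command a single time at
# boot without respawning it; the id "dsm" and the path are assumptions.
#   dsm:2345:once:/opt/ibm-datasrvrmgr/bin/start.sh
```

Systems that have moved to systemd or upstart would wrap the same start/stop/status scripts in a unit or job file instead.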
Once the server is started, open a web browser and point it at port 11080 (http) or 11081 (https). For example, if you are running DSM on the same machine as the browser you could point it at http://localhost:11080. You will be presented with a login screen: use the login credentials you provided during setup. After logging in it is possible to add additional users to the system.
A typical initial screen for DSM is shown below. We have already added a number of databases (more on this later). What concerns us at the moment is the toolbar in grey on the left hand side of the screen, which lists the main options. The three main functional areas are Monitor, Administer and Optimize; we will deal with each of these in turn shortly.
But first we'll consider some of the “Set Up” options. The most basic of these allow you to change the ports which the web server listens on, administer users and define email (SMTP, including SSL/TLS if required) and SNMP servers to send alerts to. You can also define the repository database to use (only for the non-free version). Note that the repository database has to actually exist before the product can be configured to use it. As well as these basic options there are also controls for how often DSM will gather information and retention policies for the repository database (based on either database size limits or time periods). Finally there is a facility to set up monitoring profiles, controlling a number of aspects of how databases are monitored. Initially there is just a single default profile and this is applied to each database. Note that the default profile does not have the collection of historical information or transmission of alerts via email or SNMP enabled, so if you wish to use these facilities you will have to set up at least one more profile or modify the default one to include these facilities. It is likely that you will create a number of monitoring profiles based on the criticality of the various databases being monitored.
Setting Up a Repository
Unless you have installed only the free basic version, you should set up a repository to hold historical information immediately. The database for the repository must be created before attempting to configure it within DSM: it is just a standard DB2 database and can be created on any existing DB2 server. After the database is created, you can go into “Set Up” / “Product Setup” / “Repository Database Configuration” to define the database to use for the repository. This method of setup will always be there, but you may also be prompted to configure the repository database from a number of other places in the product if you haven’t already done so. When setting up the database, make sure to use an ID with sufficient privileges; these are documented in the DSM Knowledge Center. The process for setting up the repository takes quite a bit of time to run (in my tests up to about 10 minutes). There also appears to be no way to influence how the objects are set up inside the database at configuration time, so if you have standards for tablespace names or buffer pools (for example) you will need to make modifications after the configuration has finished.
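The repository creation itself is just ordinary DB2 administration. A minimal sketch, run as the instance owner on whichever server will host it (the database name and code set are my choices, and the privileges the configuring ID needs are those listed in the DSM Knowledge Center, not shown here):

```shell
# Create an empty database for DSM to populate during repository setup.
# DSMREPO is an illustrative name; UTF-8 is a sensible default code set.
db2 "CREATE DATABASE DSMREPO USING CODESET UTF-8 TERRITORY US"
# Verify it is reachable before pointing DSM at it.
db2 "CONNECT TO DSMREPO"
db2 "CONNECT RESET"
```

Only after this database exists can it be entered under “Repository Database Configuration”; DSM then creates its own objects inside it during the (lengthy) setup run.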
There are three ways to add databases to DSM: individually, by discovery at a database instance level and by importing a set of database configurations. The first two of these work well, but I've struggled to get the third option to work. I found the discovery in particular very useful, although there were a couple of things which I thought were unusual. The discovery provides a list of all installed DB2 software rather than investigating which of these install locations have DB2 instances associated with them. If you do server fixpack upgrades and keep the older installation around for a while, this will turn up install locations which don't have any instances associated with them, which can cause confusion. Also, when it goes to find databases it does not differentiate between local (“Indirect”) and remote databases, and indeed even finds DB2 for z/OS subsystems if you are using DB2 Connect. It will add these to the main panel, find it cannot monitor them, and then you will have to go through and remove them. However, these are minor complaints, especially from someone who could never get database discovery to work on the old DB2 client configuration.
Once databases are added to DSM they appear on the home panel with graphical indicators of CPU, I/O and memory usage and a colour-coded indicator of the alerts generated for each database. Additional indicators can be added to this display (connections, deadlocks, lock timeouts, storage accesses and storage savings), but each new indicator takes quite a bit of real estate and reduces the number of databases visible on the panel without scrolling. It is possible to group databases together for display. However, it does not currently seem possible to collapse the view to show only the groups rather than all the databases in each group, which means that grouping doesn't help to maximize the information visible in one place.
As a general comment about the user interface, it does look very nice but there is a lot of wasted real estate. All the graphics appear to be very large, with lots of white space around them, and are not resizable, meaning that on a regular screen with only the default three indicators there are only between 6 and 9 databases visible at a time. This would be a real manageability issue for large sites.
To monitor an individual database, select it from the home page or choose Monitor / Database from the menu bar at the left hand side. If going down the former route, it doesn't seem to matter which of the graphics you click on: you always end up at the same place in the database monitoring screens. Once there you will find a wide variety of options examining many aspects of database performance and health.
The monitoring and alerting seems to be the most mature component of DSM. The alerting in particular seems to have been well thought out, including adding features that are going to be required for many sites such as emailing over secure connections (something that has been needed within the existing DB2 alerting and internal emailing facilities for a very long time). I found most of the alerting to be useful and accurate, although there was one alert (on percentage of rollbacks) which I wasn’t sure about and which seemed to appear all the time.
The breadth of information presented on the database monitor screens is extensive. However I have some reservations that the tool is actually catching everything going through the system, based on a comparison of the results from DSM against those from db2top running against the same database. This requires further investigation.
The fact that the tool, unless you are using the free basic edition, can maintain a repository of monitoring information is a great benefit, and it helps that this repository is very easy to set up and can be created on any existing DB2 server.
Under the “Administer” tab there are a number of options available.
The “Explore Databases” tab provides a facility to browse and edit database objects, using a fairly typical hierarchical listing of object types. The display differs a bit from that in Data Studio, in that more objects have been promoted to the first level rather than being found under their schema (personally I think this is a step backwards). One other thing to be careful about when editing objects is that by default the tool works in case-sensitive mode, which I’ve rarely seen used in practice, so if you are not careful you are likely to create case-sensitive objects.
The SQL editor is fairly typical in its capabilities. However, it is not nearly as dynamic in its operation as the SQL editor in Data Studio. Having to click “Validate” before it performs any sort of syntax checking is really not good enough in the current era. Strangely enough, the keyword highlighting works in real time, although it is not context sensitive (you can write something like “SELECT SELECT FROM SELECT” and get all three occurrences of “SELECT” highlighted). It doesn’t get any better with the explain and tuning options within the editor: if you have more than one SQL statement in your editor pane these options won’t work (you can’t simply highlight the statement you want to explain and have it work).
The tool has its own scheduler (yes, yet another scheduler in a tool). It doesn’t seem to interact at all with ATS (DB2’s Automated Task Scheduler), but instead seems to have its own facilities. Since IBM has just recently got rid of a similar externally hosted scheduler in favour of ATS, built into the DB2 server, I’m not sure what the thinking is.
There are a few other tools bundled in this section: “Track Changes”, “Compare Configurations” and “Diagnostic Logs”. Of these, the graphical display of the diagnostic log seems to be the one which functions the best.
Overall I found this part of the tool very “underwhelming”. I realise this is the initial release, but there is plenty of functionality better than this in both Data Studio and the various Optim tools that should really have found its way into the product. Some of the limitations are a result of trying to use a browser-based platform rather than a thick client. These facilities may be suitable for occasional use by administrators, but really would not be up to scratch for everyday use by either administrators or developers.
There are two facilities providing for optimisation: “Query Tuning” and “Control Clients”.
Query Tuning again appears to be fairly basic, and has some user interface problems. For example, after deleting a job I could still carry out actions such as looking at the tuning results (it was as if the deletion hadn’t taken place fully or some of the details were remaining in a cache somewhere).
The option to control clients appears to be an attempt to generate rulesets for controlling workload and client configurations based on a variety of characteristics. It appears to need some software installed on the client side (e.g. for JDBC drivers an additional JAR file called the “Data Tools Runtime Client” must be deployed), which goes against the “easy to deploy and use” ethos of the rest of the tool. This functionality may have been ported across from the End to End reporting option within Optim Performance Manager Extended Edition.
It looks like this functionality is also very much a “work in progress”.
As a first version of a totally new concept in administration tools from IBM this is a fairly good attempt. I particularly like the ease of installation and of setting up the repository. Some of the components are more mature than others. The development team will readily acknowledge that this is very much a “work in progress” but have said that we can expect a lot of effort to go into this. My experience of providing feedback to the developers during the beta program has been good, and I’d encourage as many people as possible to use the tool and raise enhancement requests or PMRs (as appropriate) to steer the development of the tool in the right direction.