NOTE: These are only open to members of IDUG. If you are not a member, please CLICK HERE for more information.
IDUG AU 2016
DB2 for z/OS makes more and more use of REAL memory to improve performance and reduce cost. But if you don't carefully budget and monitor the use of REAL memory on your system, you could be putting your applications at risk. This presentation will go back to basics and answer the most common questions about REAL memory management, including: How does DB2 use virtual and REAL memory? How do you build a budget based on system settings and buffer pool sizes? How do you size the LFAREA? What are the key performance indicators, and how do you know you are running 'safely'? What can be done to protect the system?
Is your Disaster Recovery solution based on DASD replication functions? This presentation will tell you everything you need to know about the Remote Copy Services for IBM z Systems: Metro Mirror (PPRC), z/OS Global Mirror (XRC), Global Copy and Global Mirror. It will address the most common myths and misconceptions about these DASD replication solutions. Important concepts and functions such as Rolling Disaster, Consistency Groups, HyperSwap and GDPS policies (GO|COND|STOP) will be discussed. It will also provide hints and tips on how to tune your DB2 for z/OS subsystems to achieve the most aggressive Recovery Time Objectives.
A new era of applications focused on mobile devices is becoming increasingly important in large enterprises. RESTful APIs play an important role in this area. Learn how you can expose DB2 functionality through RESTful URLs using the DB2 Adapter of z/OS Connect, and how you can consume a DB2 Adapter service to build a DB2 for z/OS mobile application using IBM MobileFirst. Using an easy-to-adapt example, we show what needs to be done from the front end to the back-end database server to make this happen.
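As a rough illustration of what consuming such a service could look like (the host, port, service name, credentials and payload below are hypothetical placeholders, not details from this session), a z/OS Connect service is typically invoked over HTTPS:

```shell
# Hedged sketch: invoke a hypothetical DB2 Adapter service exposed
# through z/OS Connect on WAS Liberty. All names and values here are
# placeholders for illustration only.
curl -X POST \
  -H "Content-Type: application/json" \
  -u dbuser:password \
  -d '{"EMPNO": "000010"}' \
  "https://zosconnect.example.com:9443/zosConnect/services/getEmployee?action=invoke"
```

The mobile application then consumes the returned JSON payload in the same way as any other REST endpoint.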
Spark technology has proved to be an efficient way of doing analytics. We will illustrate how to do Spark analytics on XML data. With a new release of DB2 for z/OS available soon, we will discuss the new XML features and enhancements in the coming release, as well as XML features delivered to the service stream. We will demonstrate how to use some of these leading-edge features and enhancements to improve performance and development productivity. We will also share a success story of a major client who recently adopted the DB2z-pureXML solution.
IBM® Data Server Manager is an integrated tool that offers a simple web console to administer, monitor, manage, and optimize DB2 for Linux, UNIX and Windows databases across the enterprise. It is easy to deploy and manage, helping you tackle everyday database challenges. It provides database administrators (DBAs) and other IT staff with the information they need to manage performance proactively and help prevent problems before they impact the business. It offers a lightweight install option for real-time monitoring and administration, reducing storage costs. Automated, expert advice from a complete, holistic view of the database, and intelligent alerts based on trends and statistics, help optimize application performance, reduce the need for specialized skills and lower total cost of ownership.
Applications around mobile devices are becoming increasingly important in large enterprises, bringing with them the requirement to store JSON in the database. This presentation will illustrate the JSON storage model and the SQL APIs used to manipulate JSON data. We will also present the JSON enhancements we delivered in the service stream. With the rapid adoption of JSON, we will share how customers use JSON in real-world scenarios. Tips and pitfalls will be given to ensure a successful JSON solution in an agile application development model.
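As a hedged sketch of the storage model and SQL APIs mentioned above (the table and column names are hypothetical, and the exact functions available depend on your DB2 release), JSON documents are commonly stored as BSON in a BLOB column and extracted with `JSON_VAL`:

```sql
-- Hedged sketch: store a JSON document as BSON and query a field.
-- Table/column names and sizes are illustrative placeholders.
CREATE TABLE ORDERS (ID INTEGER NOT NULL, DOC BLOB(16M));

INSERT INTO ORDERS
  VALUES (1, SYSTOOLS.JSON2BSON('{"customer":"Acme","total":125.50}'));

SELECT JSON_VAL(DOC, 'customer', 's:32') AS CUSTOMER,
       JSON_VAL(DOC, 'total', 'f')       AS TOTAL
FROM ORDERS;
```

The third argument of `JSON_VAL` names the result type (e.g. `s:32` for a VARCHAR(32), `f` for floating point), which is what lets ordinary SQL consume fields from a schemaless document.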
The “good old days” where we simply could copy production data to test/QA as an easy method to test the applications are almost history. Once you start to look into the regulatory and compliance rules growing across the entire world – you really need to be careful what you are doing to help the application groups meeting their goals. Providing REAL production data for testing purposes can be a career limiting decision - let’s have a look at what’s going on and which options are available from a DB2 perspective in terms of making sure you meet the “rules”.
A dive into the DB2 BLU technologies: how they have been applied in Version 11, and how they form a common analytical engine linked to the dashDB and dashDB Local technologies. The presentation will take you through what is new in Version 11 and how you can apply it, introduce you to dashDB and dashDB Local, and explain when to use which technology in your analytics deployment.
Instead of tuning your SQL one statement at a time, why not look to the advantages of tuning an entire workload of SQL instead? Workload SQL tuning can not only provide faster returns on the investment in tuning time and effort, but will usually provide bigger savings of CPU and other resources. Workload tuning can also avoid the unfortunate “Silly Putty” situation where improving the performance of one SQL statement can have unexpected detrimental effects elsewhere! Apart from tactical emergency corrections to individual SQL statements, workload SQL tuning should be the preferred mechanism to ensure that application SQL and the underlying database design work together in the most efficient way possible. This presentation walks through a typical workload tuning methodology.
Runstats has always been a challenge in terms of how to code the best possible syntax based on the SQL hitting the table, and this task has not become any easier with each release of DB2. DB2 11 provides some great tools to assist with this task, and the future looks even more promising.
Optimizer Hints have been available since DB2 Version 6 in 1999. This old method was hard to use and prone to error, as it relied on the QUERYNO for each statement to link the SQL to the hint. If your program changed, causing the QUERYNO to be different, the hint would no longer operate correctly. To resolve these problems, DB2 10 for z/OS introduced a completely new method of providing hints based on the SQL text and a repository built into the DB2 Catalog. These hints can be applied to an SQL statement wherever it appears in the DB2 subsystem. Other advantages the new method provides include the ability to override certain Bind Options at the Statement rather than at the Package level, such as controls over Star joins and Parallelism options. DB2 11 has built on the initial implementation by allowing you to provide Filter Factors for predicates where the Optimizer is not able to calculate them, such as where Host Variables, Parameter Markers or Expressions are being used. This allows the Optimizer to make a more informed decision on the correct access path to use based on your knowledge of the application. This session will describe how to use Statement Level Access path hints as introduced in DB2 10 for z/OS, along with the additional features that have been added with DB2 11 for z/OS.
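As a hedged sketch of the mechanism (the schema name, SQL text and option value below are illustrative placeholders, and the column list is abbreviated - check the DDL of `DSN_USERQUERY_TABLE` on your system), a text-based, statement-level hint is built by populating the input table and then loading it into the catalog repository with BIND QUERY:

```sql
-- Hedged sketch: a DB2 10/11 statement-level option matched on SQL
-- text rather than QUERYNO. Values here are illustrative only.
INSERT INTO DSN_USERQUERY_TABLE
       (QUERYNO, SCHEMA, HINT_SCOPE, QUERY_TEXT, DEF_CURR_DEGREE)
VALUES (1, 'PRODSCHEMA', 1,
        'SELECT * FROM EMP WHERE DEPTNO = ?', '1');

-- Then, from a DSN command processor or batch job:
-- BIND QUERY LOOKUP(NO)
```

Because matching is on the statement text, the hint survives program changes that would have invalidated an old QUERYNO-based hint.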
It isn't easy to find performance bottlenecks and pinpoint the root causes from complex database systems with multiple applications and concurrent users. In this session, we'll go through a step-by-step, decision tree-based approach for tackling performance problems. We start by identifying the core area of the bottleneck - is it time spent processing, or time spent just waiting? Then, by using DB2's powerful monitoring interfaces, we refine our diagnosis, drilling down layer by layer. This logical, efficient approach not only helps us get to root cause, but it does so in a way that avoids wasting time by "trying random stuff" which isn't likely to help.
There are various ways that you can deploy DB2 in the cloud – including hosted and fully-managed offerings. dashDB is a managed, self-service data warehouse in the cloud offering that puts powerful analytics at your fingertips. DB2 on Cloud is a flexible hosted offering that keeps management of the database environment in the hands of the DBA. Regardless of the cloud data service being used, one of the first tasks you’ll have is getting your data into the database and there are various different options available depending on the service. In this session you’ll learn about the different options for data loading and movement, when to use them, and best practices. You may also have a need to stage your data in a cloud-based object store like SoftLayer Object Storage (Swift) or Amazon S3 and guidance on how to store and retrieve data from there will also be covered.
Come and get introduced to Data Studio, the replacement for DB2 Control Center, Visual Explain and Optimization Service Center. An IBM Data Studio expert will give a session that provides an overview of Data Studio. This session will cover all no-charge features, and is appropriate for both DB2 for z/OS and DB2 for LUW database administrators.
This presentation will look at the whys and wherefores of the various methods of joining data within DB2, specifically looking at all the ways data can be joined together, as well as reviewing where the various techniques tend to work out best from a performance point of view.
Putting the pieces of DB2 pureScale together.
This presentation describes how the DB2 cluster services are the brain behind pureScale.
The greatest opportunity to interact with your clients on a personal level, using analytics to predict their needs and motivations, occurs each and every time they initiate a transaction: on every order placed, every payment requested, every change posted. Every transaction is an opportunity to put an intelligent business decision into action. How can you inject predictive analytics into a real-time operational transaction without degrading the speed or integrity of that transaction? And what is Real Time Analytics anyway?
Since most of the world’s transactional data continues to reside on the mainframe, technologies have been incorporated into z Systems that enable you to do just that. This session will outline some of those technologies based around DB2 on z/OS and how they integrate together to provide you with the means to potentially influence the outcome of a transaction at the time of running it. Should that credit card transaction go through? Should that insurance claim be paid? How can I help this customer while they are on the phone? And this solution can either work in isolation or can complement existing warehouse solutions you may already have in place.
Make the maximum use of your data on the mainframe. Get real (time).
BigInsights and Big SQL leverage the DB2 BLU engine and are an important part of IBM's Common Analytic Engine strategy. In this session we will drill down on Big SQL and look at how we can leverage SQL to work with data stored in Hadoop, and at the role that Spark will play as a growing in-memory infrastructure for our analytic solutions.
As clients take their DB2 environments into a hybrid cloud and big data ecosystem, they will be looking at a variety of options, such as when to use traditional relational vs. Hadoop, when to use SQL vs. NoSQL, and when to use open source components. DB2 is the cornerstone of many such offerings from IBM, which help clients make the right deployment decisions with investment protection and application portability. In this session we will go through IBM's Next Generation Platform, our Open for Data strategy, and how our Common Analytic Engine can make these transitions very easy for clients. In particular, we'll drill down on how well positioned DB2 clients are to leverage our strategy.
We always assume that tuning, and thus saving CPU, will result in actual monetary savings.
Well, with IBM's Measured Workload Pricing this is not the case - we need to target our tuning at specific times of the month to see real value.
This presentation starts by explaining how IBM's MLC works in practice and what this means to our DB2 world, before looking at some methodologies for SQL tuning.
One of the key differentiators for enterprises across industries has been how fast and agile they are in integrating the systems of record in legacy applications with other platforms. DB2 data on z/OS constitutes a huge treasure trove of business data that often needs to be accessed by customers through various new-age digital channels such as web, mobile or cloud-based applications. Publishing DB2 data as web services is therefore key to data consumption across the board. This presentation provides insight into how the DB2 Adapter, configured on the WAS Liberty profile for z/OS, can act as a good medium for establishing connectivity between DB2 and the outside world. It also discusses the Data Studio tool, which eases developing web services from SQL statements and deploying them to WAS in one go. It takes you through, end to end, the steps involved in developing, deploying and consuming web services for DB2.
Your guide to hybrid cloud approaches and how to leverage them as part of your data and application strategies. This presentation will introduce you to common practices and directions in leveraging cloud across IT, and how you can use DB2 with and in the cloud.
The db2pd command is a powerful tool for monitoring and administering DB2 for Linux, UNIX and Windows databases and instances. You can use this tool to keep track of transactions, table spaces, table statistics, dynamic SQL, configuration settings and much, much more. Melanie will compare using SQL statements versus db2pd commands for troubleshooting, problem determination, performance tuning, environment analysis and more.
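As a hedged sketch of the comparison (the database name SAMPLE is a placeholder), the same question - "what transactions are running right now?" - can be answered either way:

```shell
# db2pd reads instance memory directly, so it works even without a
# database connection and with minimal overhead:
db2pd -db SAMPLE -transactions

# The SQL monitoring interface requires a connection, but its output
# can be filtered, joined and aggregated like any other query:
db2 "SELECT application_handle, uow_start_time
     FROM TABLE(MON_GET_UNIT_OF_WORK(NULL, -2))"
```

Which to reach for depends on the situation: db2pd shines when the database is hung or unreachable, while the SQL interfaces shine for scripted, repeatable analysis.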
A new version of DB2 LUW was just born this past June! This new release includes many enhancements for BLU, scale-out, analytics, OLTP, pureScale, security, SQL, hardware acceleration, and more! Matt Huras, DB2's Chief Architect, will take you through an insider's technical view of the release, the value it provides for your organization, and, where useful, key internals information.
DB2 Version 11.1 was announced in April, 2016 and eGA'ed in June. New features include BLU support on MPP, OLAP pushdown and performance enhancements in BLU, pureScale simplification, and plenty of new SQL to keep your developers confused for months! Come to this session to get a general overview of this release.
IBM’s DB2 on Cloud offering gives you everything you know and love about DB2 for LUW in a cloud-based packaging of infrastructure and software. Benefits include rapid provisioning for instant productivity and monthly subscription-based licensing. As is the case for on-premises deployments of DB2, it’s very likely that you’ll be thinking about your high availability and disaster recovery needs for your DB2 on Cloud databases. The great news is that HADR is available to you to meet those needs. In this session you’ll learn how to setup HADR for both HA and DR purposes between DB2 on Cloud servers. DB2 on Cloud is also a great DR option for your on-premises DB2 databases and this configuration will also be discussed in detail. Other strategies, including how to backup and restore databases using cloud-based storage (SoftLayer Object Storage and Amazon S3) will also be covered.
DB2 LUW V10 expanded HADR support to allow an HA synchronous and a DR asynchronous standby database copy at the same time. This case study covers a recent implementation that included auxiliary standby databases and HA automation with TSA, all implemented in an infrastructure-as-a-service cloud environment. We will look at the design used and the lessons learned throughout the implementation.
DB2 Version 11.1 was announced in April, 2016 and eGA'ed in June. This release contains a wealth of new SQL features, including a number of compatibility features. This session will take you through the list of SQL enhancements and how you can use them. Perhaps you will even be able to figure out what these features are useful for!
DB2 Version 11.1 was announced in April, 2016 and eGA'ed in June. One of the major features of this release was improved BLU performance and the support for BLU in an MPP environment. This session will go into details on what the new BLU features are and how you can take advantage of them in your environment.
It’s been a very exciting year for DB2 in the cloud. There have been new deployment options for both the dashDB and DB2 on Cloud data services and there is a new dashDB Transactional offering that is optimized for transactional workloads. So regardless of your type of workload, requirement for hosted vs. managed, or cloud vendor preference, we’ve got you covered. Also, dashDB is now available for fast and flexible on-premises and private cloud deployments in the form of dashDB Local, which is based on Docker container technology. And for those of you looking to deploy DB2 using a monthly subscription based model (perfect for getting up and running quickly for new projects) there are new packaging options available. Come and learn about all of these great new offerings and capabilities.
Dig into the nuts and bolts of Memory Management. Melanie will describe the details of management of instance-level memory, database-level shared memory and application memory for DB2 databases. Automated Memory Management using Self Tuning Memory Manager will also be addressed. The configuration and use of each memory heap will be described. The monitoring of memory usage using SQL statements, db2mtrk and db2pd command reports will be presented.
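As a hedged sketch of the monitoring commands mentioned above (the database name SAMPLE is a placeholder), memory usage can be inspected from three angles:

```shell
# Instance- and database-level memory tracking, verbose output:
db2mtrk -i -d -v

# Per-memory-set and per-pool detail via db2pd:
db2pd -db SAMPLE -memsets -mempools

# The SQL monitoring interface, usable from any SQL client:
db2 "SELECT memory_set_type, memory_set_used
     FROM TABLE(MON_GET_MEMORY_SET(NULL, NULL, -2))"
```

Each view reports the same underlying memory sets and heaps, so the results can be cross-checked against one another when chasing a leak or sizing STMM limits.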
Whenever a DBA has an index they want to create to solve a performance issue, there is often someone raising an objection “But will this new index cause the database/application any harm?” For those that desire to successfully create indexes with confidence, this session will present a new method for predictively measuring the impacts of any new indexes so that informed decisions can be fearlessly made. Sometimes, too, this analysis method will reveal that the positive performance impacts of one or more new indexes might be BETTER than expected! #Winning! Example commands and SQL will be provided.
Are you responsible for upgrading your database to DB2 LUW 11? Take advantage of DB2's great new functionality. Unlock the secrets. If you need a detailed step-by-step approach to implementing the newest version of DB2 LUW, come learn about often-overlooked but very important best practices to understand before and after upgrading. Satisfy your ever-increasing demands to analyze business data quickly and get insights into your data.
DB2 with BLU Acceleration can provide order-of-magnitude improvements in performance, compression and time-to-value. This presentation gives DBAs what they need to know to be successful with BLU. It provides a concise technical look at the best practices the lab has established for BLU and, at the same time, explains the concepts and internals that give DBAs an appreciation for the underlying reasons behind those best practices. This presentation is updated with the latest BLU details in V11, including the new massively parallel processing support.
DB2 for z/OS has a long history of exploiting z/OS and System z enhancements. Examples of these include Parallel Sysplex (DB2 Datasharing), and more recent enhancements like 64 Bit CSA (HCSA).
This session will focus on the present and the future. We will discuss how DB2 is taking advantage of the latest changes in IBM z Systems hardware and software to bring you a more robust, scalable, and affordable computing platform today. z/OS 2.2, the latest in I/O infrastructure, and the z13 will all be discussed.
Double (two-part) presentation or half-day seminar providing a technical overview of DB2 12 for z/OS, which will cover the following: goals and highlights, performance enhancements for traditional workloads, performance enhancements for modern workloads, application enablement enhancements, reliability, availability, scalability, security, and migration.
In this presentation John will introduce and discuss some customer war stories - greatest hits in terms of positive and negative experiences - and share recommended best practice on a number of hot topics. This will include discussion of the DEL_CFSTRUCTS_ON_RESTART zparm for deleting the structures, and the implications of stretching the sysplex from a performance perspective (the value of SMD and duplexing over distance),
housekeeping (why index REORG and RUNSTATS matter, and the value of REBIND), ALTER BUFFERPOOL, insert space search (short) and UTS PBG,
IRLM CPU spikes in data sharing, converting over to table-controlled partitioning with leading keys, and customer performance experience with UTS and changing limit keys.
Most sites understand the importance of having a performance measurement/assurance environment to validate the performance of new or changed SQL.
The snag is that, as DB2 gets smarter about its environment, it gets increasingly difficult to guarantee that the performance DB2 subsystem really does behave the same as the real production subsystem(s).
This is why IBM has introduced production virtualization, to "fool" a DB2 subsystem into thinking that certain aspects of its environment are different from reality - making it possible to model a production subsystem in a completely separate LPAR.
This session looks at how this has been implemented - with a couple of DSNZPARM changes and an addition to the capabilities of DB2 Profiles. As always, things are never quite as simple as they might have been.
With every new release of DB2 we look to see what features will allow us to improve the capabilities and performance of our existing applications as well as the availability of our data. We also have to plan to utilize new features in our development efforts. This presentation takes a look at the features in DB2 11, and the next future release, that will improve our performance and provide us with maximum data availability as well as advanced application development. We will be focusing on features that can be utilized by DBAs and application programmers.
This session will provide an introduction to how data integrity is achieved within DB2 for z/OS by a variety of methods including Locks and Latches. Once you understand how this works we will describe how you can use this knowledge to improve performance and data availability in your applications while maintaining data integrity.
While being an important topic for all users of DB2, the session is primarily aimed at Application Developers and others who may not have been exposed to the detailed internals of DB2 operations. The intention is to explain how Locking works in simple terms and describe how you can exploit this without confusing you with all the gory details of the internals.
As a former DL/1 DBA (remember DL/1?), I was particularly excited to see Hash Access to DB2 data added in DB2 10.
So, for the people who have only ever worked on a relational database, just what IS Hash Access, and why do I think it's a "good thing"?
Well, it gives SQL access to a single row from a DB2 table with ONE I/O - no messing around with indexes or table space scans - just go and grab the row.
How does it work and where should it be used? Well, you'll just have to come to this session to find out.
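As a hedged taste of the DDL involved (table, column and space values below are illustrative placeholders), a hash-organized table in DB2 10 for z/OS is declared with an ORGANIZE BY HASH clause:

```sql
-- Hedged sketch: hash-organized table DDL, DB2 10 for z/OS.
-- Names and the HASH SPACE size are illustrative only.
CREATE TABLE ACCOUNT
      (ACCT_ID  CHAR(10) NOT NULL,
       BALANCE  DECIMAL(11,2))
  ORGANIZE BY HASH UNIQUE (ACCT_ID)
  HASH SPACE 64 M;

-- An equality predicate on the full hash key is the access pattern
-- that qualifies for the single-I/O fetch:
SELECT BALANCE FROM ACCOUNT WHERE ACCT_ID = 'A123456789';
```

The key trade-off - a fixed hash space sized up front versus index-free single-row access - is exactly the kind of design decision the session explores.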
What are the types of analytics workloads? And where should they reside? Analyzing your DB2 for z/OS operational data in real-time had obvious appeal to your business, but what are the query optimization and design implications? Hear from DB2 development about what they have learned working with customers to ensure successful query performance with real time analytics on DB2 for z/OS.
In March 2016, 13 volunteers bravely set out to explore the new features within DB2 12 for z/OS, so that they could write a detailed technical white paper for the IDUG community. Working alongside IBM's formal ESP programme, over the following 5 months the Editorial Committee gained valuable insight into exactly what makes the new release tick. This presentation will summarise the major findings contained within the IDUG DB2 12 for z/OS Technical White Paper, including practical experiences and independent evaluations of the new features.
DB2 12 for z/OS has been announced - so what's coming from the optimizer team regarding performance enhancements? It's too early to provide details - so you'll have to attend to learn how DB2 12 for z/OS will benefit the performance of your applications.