AD Hoc Access

Brunner Don J

AD Hoc Access
The subject of allowing Ad Hoc access in production was discussed earlier
this year. Unfortunately, I discovered I only saved one post on the subject
in my save box. I believe the responses were mixed: some shops allowed
complete access, some allowed it with restrictions, and others allowed no
access at all.

We currently have developers, and IBM contractors who are assisting
them, who have SELECT authority in our production environment. (Please
don't turn this into a conversation about the type of work we do and the
fact that contractors have access to our data; all the standard
disclosure forms have been properly signed.)

What has been troubling me for some time is the fact that Ad Hoc access
(SPUFI, DSNTEP2, DSNTIAUL & QMF) is occurring very frequently during prime
hours. The one post I do have on this subject stated that they limit their
Ad Hoc CPU usage to 2 seconds. Well over half of these accesses complete
with less than 1 second of CPU usage. However, it's not uncommon to see
batch Ad Hoc jobs (DSNTEP2 & DSNTIAUL) rack up 20 minutes of CPU time!
Non-production users on our production machines can only use certain job
classes, and the highest CPU allowed in any of those job classes is 20
minutes, so a lot of these jobs are going out on S322 abends. Often, when
reviewing the Dynamic SQL Access report that's run daily for the previous
day, I see that the same job name was executed later, but under the UserID
of one of our local DBAs, who can use job classes that allow much greater
CPU usage.

Besides the system resources being consumed, I'm concerned about DB2 CPU
utilization, bufferpool and IRLM usage. All the resources consumed by these
non-production accesses are being stolen from real production work. I've
sized DSNDB07 for our production workload, and only the last data set in
DSNDB07 is allowed to go into secondary extents. That's my way of
monitoring whether the data sets need to be increased in size or additional
data sets defined. I have a Control-M job that executes nightly and
notifies me if the last DSNDB07 data set has exceeded a set number of
extents. I estimate that over 90% of the time this does occur, the last
data set has gone to the extent limit. And about 100% of the time, guess
what caused it? If you guessed an Ad Hoc access, treat yourself to this
Saturday off. On a few of those occasions, reviewing the DB2 message log
in TMON/DB2 showed that production CICS transactions were impacted.

I want to find the right balance: allowing Ad Hoc access to the developers
without impacting production work. I should point out that our development
group and database designers, with the assistance of the IBM contractors,
are in fact one and the same. That being said, I understand the need to
access production data. Production work, however, should not be adversely
impacted by this access.

Developers sometimes have tunnel vision concerning their work. What
they're responsible for is the most important thing in their eyes, and
there's nothing wrong with that per se. The problem arises when there are
multiple projects, each thinking theirs is the most important, all using a
shared resource, in this case DB2.

The ideal solution would be to have WLM manage the plans associated with
these Ad Hoc accesses and put them in a discretionary category. That is,
allow them to use as much of the system's resources as they need, but only
when the resources are available (the batch jobs would additionally still
be covered by the total CPU allowed by the job class they're executing
in). In working with our WLM person over the past few days, it appears
this isn't possible. It seems that WLM, in reference to DB2, only comes
into play for DDF transactions and the multiple stored procedure address
spaces.

It appears the only solution available that will allow Ad Hoc access but
limit usage is the RLST. My initial scenario is to create two RLSTs, one
for prime hours and the other for off hours. For those who are currently
using the RLST, are you using a similar setup? If so, what type of CPU
limitation are you placing on prime-hours access?
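
To make the scenario concrete, here is roughly what I'm picturing: two
RLSTs, one activated for prime shift and one for off hours, switched by a
scheduled -START RLIMIT command. This is only a sketch; the table
qualifier, plan names, and ASUTIME numbers below are placeholders, not
values I've settled on (ASUTIME is in CPU service units, not CPU seconds,
so the real numbers depend on the processor's service-unit rate).

  -- DSNRLST01: prime-shift limits for dynamic SQL from the Ad Hoc plans.
  -- Qualifier is whatever the RLFAUTH install parm points to (SYSIBM by
  -- default); blank AUTHID means the row applies to any primary auth ID.
  INSERT INTO SYSIBM.DSNRLST01
         (AUTHID, PLANNAME, ASUTIME, LUNAME, RLFFUNC)
  VALUES (' ', 'DSNTEP2', 50000, ' ', ' ');
  INSERT INTO SYSIBM.DSNRLST01
         (AUTHID, PLANNAME, ASUTIME, LUNAME, RLFFUNC)
  VALUES (' ', 'DSNTIAUL', 50000, ' ', ' ');

  -- DSNRLST02: off-hours limits for the same plans, much higher ceiling.
  INSERT INTO SYSIBM.DSNRLST02
         (AUTHID, PLANNAME, ASUTIME, LUNAME, RLFFUNC)
  VALUES (' ', 'DSNTEP2', 2000000, ' ', ' ');
  INSERT INTO SYSIBM.DSNRLST02
         (AUTHID, PLANNAME, ASUTIME, LUNAME, RLFFUNC)
  VALUES (' ', 'DSNTIAUL', 2000000, ' ', ' ');

  -- Switched by the scheduler at the shift boundaries:
  --   -START RLIMIT ID=01    (prime shift)
  --   -START RLIMIT ID=02    (off hours)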

We're not a big DB2 shop by any means at the moment, but in the not too
distant future we could be housing well over 12 terabytes of data. I'm
afraid that in that type of environment these non-production Ad Hoc
accesses will eat our lunch.

Thanks

[login to unmask email]

Re: AD Hoc Access
(in response to Brunner Don J)

[login to unmask email]

Re: AD Hoc Access
(in response to Mike_Levine@TEKHELP.NET)
Hi Don,

I did not intend to imply that we give developers unrestricted read
access to production data; we don't. We periodically take 'snapshots'
of production data and load them into development and test tables. I
was referring to the users/data owners.

I do not see a need for developers to be reading live production
data to do development work. You may want to consider creating a
secure devl/test environment into which production data can be loaded,
or unloading production data and scrambling the sensitive information
before loading it into your normal, less secure, devl/test environment.

Regards,

Michael Levine
Premier Data Services, Inc



From: Brunner Don J <[login to unmask email]>
To: "'[login to unmask email]'" <[login to unmask email]>
Subject: RE: AD Hoc Access
Date: Wed, 22 Dec 1999 13:20:05 -0500

Mike,

Thanks for the reply and the observation about Data Sharing. We're probably
about a year away from implementing it. The one point I disagree with, in
reference to our situation, is allowing users to have unrestricted access
to THEIR data. The users I have a problem with in our situation are the
developers/database designers. Quite simply, the data doesn't belong to
them, and therefore they shouldn't be running amok in our production
environment.

Thanks, and Merry Christmas to you.

-----Original Message-----
From: [login to unmask email] [mailto:[login to unmask email]
Sent: Wednesday, December 22, 1999 10:38 AM
To: [login to unmask email]
Subject: Re: AD Hoc Access


Hi,

We were one of the shops that allowed unrestricted read access to
production data (if the user has the required authority). Consider
implementing data sharing and running all of your ad hoc work from another
LPAR/DB2 data sharing member with separate bufferpools, DSNDB07, etc.
It seems a waste of resources to run a query, have it time out, and lose
all of that work. If implementing data sharing is not a short-term
option, consider tailoring the QMF governor and setting up RLF to permit
higher limits during non-prime shifts, as sketched below. If your system
is not CPU bound and you are not having response-time problems, then
consider raising the CPU limits (make sure you have chargeback!). Our
position was to let the users have at it until we have a problem. We very
rarely have problems due to this, and the users are very happy to have
access to THEIR data. When the system is very busy, the CICS regions have
higher priority and the ad hoc work waits. We are implementing data
sharing, and the ad hoc work will be moved there first.
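
If I recall correctly, with the IBM-supplied QMF governor the limits come
from rows in Q.RESOURCE_TABLE, keyed by the resource group named in each
user's Q.PROFILES row. A rough sketch, with a made-up group name and
purely illustrative numbers (check the option names and units against
your QMF level):

  -- Activate governing for the 'ADHOC' resource group and set limits.
  -- TIMEPROMPT/TIMELIMIT are processor seconds; ROWLIMIT is rows
  -- fetched. The values are examples only, not recommendations.
  INSERT INTO Q.RESOURCE_TABLE
         (RESOURCE_GROUP, RESOURCE_OPTION, INTVAL)
  VALUES ('ADHOC', 'SCOPE', 0);
  INSERT INTO Q.RESOURCE_TABLE
         (RESOURCE_GROUP, RESOURCE_OPTION, INTVAL)
  VALUES ('ADHOC', 'TIMEPROMPT', 30);
  INSERT INTO Q.RESOURCE_TABLE
         (RESOURCE_GROUP, RESOURCE_OPTION, INTVAL)
  VALUES ('ADHOC', 'TIMELIMIT', 120);
  INSERT INTO Q.RESOURCE_TABLE
         (RESOURCE_GROUP, RESOURCE_OPTION, INTVAL)
  VALUES ('ADHOC', 'ROWLIMIT', 100000);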

Regards,

Michael Levine
Premier Data Services, Inc.



RICK (SWBT) DAVIS

Re: AD Hoc Access
(in response to Mike_Levine@TEKHELP.NET)
Don,
I might be barking up a tree you may not wish me to, or be way off
track, but I think two issues may be present in the situation as you
described it. One, this tremendous demand for ad hoc query against
production data by the developers and database designers could be hiding
latent demand for actual production work, masquerading as test/devl work
as a work-around to an applications backlog. Two, it appears there may be
no test and development subsystem/LPAR/machine (or a severe shortage of
one) where cleansed portions of production data could be stored.
You cannot fix these problems with RLF/WLM/QMF governors.

HTH,
Rick Davis
"This e-mail and any files transmitted with it are the property of SBC,
are confidential, and are intended solely for the use of the individual
or entity to whom this e-mail is addressed. If you are not one of the
named recipient(s) or otherwise have reason to believe that you have
received this message in error, please notify the sender at 314-235-6854
and delete this message immediately from your computer. Any other use,
retention, dissemination, forwarding, printing, or copying of this
e-mail is strictly prohibited."


