QMF/Commit question

Dave Nance

QMF/Commit question
Any ideas on how to get QMF to commit every so often? One of the things we've always encouraged our developers to do is commit in long-running processes, and we've made some headway on that front. Here comes the but: we have also encouraged them to use QMF procs, queries and forms for a lot of their reporting needs, since development time is quicker, and some of those are very long running due to the amount of data. Any thoughts/tips would be appreciated.

Dave Nance
First Health Services, Corp.
(804)527-6841

This message, including any attachments, is intended solely for the use
of the named recipient(s) and may contain confidential and/or
privileged information. Any unauthorized review, use, disclosure or
distribution of this communication(s) is expressly prohibited.
If you are not the intended recipient, please contact the sender by
reply e-mail and destroy any and all copies of the original message.
Thank you.



Walter Janißen

Re: QMF/Commit question
(in response to Dave Nance)
Dave

For the queries, if they are only SELECTs, you can use the clause WITH UR.
As far as I know, there is no way to commit in the middle of a query. If a
user selects data, he has to scroll to the end of the result set to release
all the locks.
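As a minimal sketch of Walter's suggestion (the table and column names here are hypothetical, not from the original posts): appending WITH UR to a read-only query asks DB2 for uncommitted read, so the SELECT takes no share locks on the pages it scans:

```sql
-- Hypothetical reporting query. WITH UR requests isolation level
-- "uncommitted read", so DB2 takes no page/row share locks for
-- this SELECT -- at the cost of possibly seeing uncommitted data.
SELECT ORDER_NO, ORDER_DATE, TOTAL_AMT
FROM   PROD.ORDERS
WHERE  ORDER_DATE >= '2002-01-01'
WITH UR;
```

The trade-off, as noted later in the thread, is that rows changed by in-flight transactions may show up in the report before they are committed.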



Nick Cianci

Re: QMF/Commit question
(in response to Walter Janißen)
There is also "Reset Data" at the command line, but then you lose the result
set as well, if my memory of QMF V3 serves me well. Still, it might be handy
in limited situations.

Cheers,
Nick F. Cianci
IBM Certified Solutions Expert
- DB2 UDB V7.1 Database Administration for OS/390

EDS Canberra Solution Centre
DB2 DataBase Administrator
Phone: +61 (2) 6275-5863
+61 (0)408-64 06 01



David S Waugh

Re: QMF/Commit question
(in response to Nick Cianci)
David:

I'm wondering a couple of things:
- Are these QMF procs & queries you're talking about SELECT-only, or
are they also doing UPDATEs, DELETEs, or INSERTs (or SAVE DATA)?
- What is your FETCH (F=) parameter set to at initial QMF invocation?
(i.e. are you taking the default, or did you bump it up?)
- Is it locking problems you're experiencing, or something else?

Situations where I've seen QMF cause/experience performance/lockout problems:

1. User is doing SAVE DATA after a query, and saving into the default QMF tablespace. They really should be set up with their own tablespace to save into, appropriately sized for their needs. Otherwise, they'll be (a) blowing out the extents for the default tablespace, (b) possibly getting into lock conflicts with each other on this tablespace, and (c) crawling along.

2. QMF SELECT queries ordinarily release page locks as they move from page to page in the table(s), so it's not like they're holding onto thousands of locks while the query is running (so frequent COMMITs shouldn't be necessary). However:

(A) The QMF query can run into a page that is locked by someone else for update. It patiently sits there waiting, but is holding an IS lock on the previously-read page while it's waiting. That can cause problems for other updaters. The problem may not be QMF per se, but some other update program that isn't committing frequently enough.

(B) If I remember correctly (and if it hasn't changed), unless there is materialization, QMF fetches enough rows to satisfy the F= invocation parameter before showing the user the first page of results. It then sits patiently waiting for the user to scroll thru all that result data before it begins to fetch another nnn rows to satisfy the next F= fetch limit. In the meantime, it holds a lock on the last page read. I've seen situations where the lock on that single page is held a very long time, as the user slowly scrolls thru the results (or fires off the query and leaves for lunch). Meanwhile, other processes get hung up waiting for QMF to release its lock on that page. You have two possible 'solutions' to this problem:
(1) There's a way to force a "max scroll down" at the end of every query to force QMF to read thru all pages and release all its locks (I don't remember specifically how I did this, but I know it's possible). [You generally can't count on users doing this on their own.]
(2) Bump up the F= invocation parameter sufficiently to make it more probable that all pages in the result set are read & all locks released before the user sees the first page of results. I used to set it to 150,000 rows to start (the default is far, far lower than that).
Online users hate these two solutions, as it takes longer for results to show up on their screen. As an alternative, you could "encourage" users to run their really-long-running queries in batch (again with a very large value for F= specified). Different QMF Governor limits for online and batch help "encourage" users to run in batch. Batch QMF runs quite a bit faster than online QMF, mostly because it's not constantly getting swapped in & out. But you're probably already doing that.
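A rough sketch of what solution (2) looks like at invocation time. In QMF for TSO the fetch-ahead row count is a program parameter passed when QMF starts; the keyword has been DSQSIROW in the releases I recall, but treat the exact parameter names and values below as assumptions to verify against your QMF version's documentation:

```
/* Hypothetical ISPF invocation of QMF under TSO.  DSQSIROW (the   */
/* "F=" fetch-ahead count discussed above) is bumped to 150,000 so */
/* most result sets are fully fetched -- and their locks released  */
/* -- before the user sees the first screen.                       */
ISPSTART PGM(DSQQMFE) NEWAPPL(DSQE) PARM(DSQSMODE=I,DSQSIROW=150000)
```

For batch QMF the same parameter applies; the larger the value, the more likely the whole answer set is drained in one gulp.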

I heard a long time ago that IBM was going to (under the covers) add "FOR FETCH ONLY" onto the end of every QMF query that did SELECTs. Don't know what ever happened to that (or if it was just another unsubstantiated rumor). Anybody else know? That would be another thing to look at. If you can't get your users to add FOR FETCH ONLY to the end of their queries, I believe there's a way to "capture" the SQL statement before it gets sent off to DB2 so that you could add FOR FETCH ONLY auto-magically for them. The hard part would be (a) telling the difference between SELECT queries and UPDATE, INSERT and DELETE "queries" and (b) not putting a second FOR FETCH ONLY on the end if the user already did it themselves.
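For reference, the clause being discussed looks like this (the table and columns are hypothetical). FOR FETCH ONLY declares the cursor read-only, which removes any ambiguity about whether the rows might be updated through the cursor and lets DB2 use block fetch and lighter locking:

```sql
-- Hypothetical query. FOR FETCH ONLY marks the result set
-- read-only, enabling block fetch and avoiding the locking
-- treatment DB2 gives an ambiguous (possibly updatable) cursor.
SELECT CLAIM_NO, CLAIM_STATUS, PAID_AMT
FROM   PROD.CLAIMS
WHERE  CLAIM_STATUS = 'OPEN'
FOR FETCH ONLY;
```

FOR READ ONLY is the equivalent standard spelling of the same clause, and it can be combined with WITH UR where a dirty read is acceptable.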

Adding WITH UR would help, but you wouldn't want to do it for every query, because some (most?) users don't want "dirty read" data in their results. The user would have to add this clause themselves where appropriate.

IMO, you shouldn't allow QMF users to UPDATE, INSERT & DELETE in a production environment -- that kind of stuff should be done by an application program (that CAN do regular commits). If QMF is doing mass updates on hundreds of thousands of rows, it's going to hold locks on thousands of pages before it finishes, and that's not a good thing. [Besides, QMF is supposed to be a "Reporting" tool, not an application programming language]

I don't know of any way of getting QMF to do commits during an insert/update/delete process - I doubt if it's even possible. There's no need to do commits during a select query.

Hope this helps!

David Waugh, NCW
DSW Consulting & Services
Former DB2 Sysprog, now clueless DB2 UDB Win/NT DBA
===
Creative BUMPER STICKERS (from www.dumbbumpers.com)
===
Maybe the Hokey-Pokey IS what it's all about.
