In Reply to John Kliewe:
In this particular case, the failing SQL is:
INSERT INTO TABLE1 SELECT * FROM TABLE2
Both tables are in tablespaces with LOCKSIZE ANY LOCKMAX SYSTEM,
and neither table has any RI or Triggers on it.
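For reference, tablespaces with those attributes would have been defined with DDL along these lines (database, tablespace, and buffer pool names here are hypothetical):

  CREATE TABLESPACE TS1 IN DB1
    BUFFERPOOL BP0
    LOCKSIZE ANY
    LOCKMAX SYSTEM;

With LOCKSIZE ANY, DB2 normally starts with page locks, and LOCKMAX SYSTEM means escalation to a table space lock is governed by the system NUMLKTS threshold.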
I am a bit puzzled too, if your max user locks (NUMLKUS) really is
ten times the table space lock threshold (NUMLKTS). Outside of the
failing SQL, what other Inserts/Updates/Deletes happened in the same
UOW, since the X page locks from changed pages are retained until
Commit? (See Brian's earlier reply.)
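If earlier changes in the UOW are part of the problem, committing them first releases their retained locks before the big insert starts. A minimal sketch, assuming an intermediate commit is acceptable for your restart logic (TABLE3 and its columns are hypothetical):

  UPDATE TABLE3 SET STATUS = 'R' WHERE BATCH_ID = 42;
  COMMIT;  -- releases the X page locks held by the update
  INSERT INTO TABLE1 SELECT * FROM TABLE2;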
Once Max User Locks is hit, that is a fail rather than a further
escalation (a little difficult for IBM to implement any other way).
TABLE2 should not account for many locks, unless the package is
bound with Repeatable Read or something silly like that.
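If the bind does turn out to be the culprit, the fix is cheap; a sketch with a hypothetical collection and package name:

  REBIND PACKAGE(COLL1.MYPKG) ISOLATION(CS)

With Cursor Stability the SELECT side holds at most the S lock on the page it is currently reading, so it contributes next to nothing to the lock count.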
TABLE1 could have as many pages locked as rows inserted, if they
are inserted in scattered clustering sequence. Inserts also use
conditional lock requests: if the attempted page lock is not granted
because another process already holds that page, the Inserter looks
for another nearby page that is not already locked.
Failed conditional lock requests should not, I would hope, count
towards the max lock thresholds. I have seen very heavy CPU
consumption from large numbers of failed conditional lock requests.
Do you have any monitoring tools that can show the maximum number
of locks the thread accumulated? Many sites do. Failing that, SMF
accounting data for the thread might help, provided the thread
detail data captures just this transaction without summarization.
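Even without a monitor, you can snapshot lock activity while the job is running with a display command (database and space names hypothetical):

  -DISPLAY DATABASE(DB1) SPACENAM(TS1) LOCKS

That shows who holds or is waiting for locks on the table space at that instant, though unlike accounting data it will not give you the high-water mark.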
If you don't mind locking up the whole table, a LOCK TABLE before
the Inserts might help; it will minimize the count towards Max User
Locks, as in the sketch below. It is usually avoided because it
stops other processes from sharing TABLE1 for the duration of a
long UOW.
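A minimal sketch of that approach:

  LOCK TABLE TABLE1 IN EXCLUSIVE MODE;
  INSERT INTO TABLE1 SELECT * FROM TABLE2;
  COMMIT;  -- the table lock is released at Commit

With the gross X lock held on TABLE1, DB2 does not need individual page locks for the inserted pages, so the statement stays well clear of the Max User Locks threshold.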
DB2 Application Performance Specialist
CPT Global Ltd