Dynamic SQL is *always* locking the DBD, and it is a real pain point
at every customer site I go to...
I don't think IBM will actually rewrite their DBD design, but
hope springs eternal! Naturally you can go the one-tablespace-with-one-table-in-one-database
route, and that would alleviate the problem, but at the
cost of drastically changing the design of every database.
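That workaround would look something like this (a sketch only; all database, tablespace, and table names here are invented):

```sql
-- One database per table: DDL against this database's DBD
-- only ever touches the one table that lives in it.
CREATE DATABASE DBORDERS
       BUFFERPOOL BP1;

CREATE TABLESPACE TSORDERS
       IN DBORDERS;

CREATE TABLE ORDERS
      (ORDER_ID   INTEGER NOT NULL,
       CUST_ID    INTEGER NOT NULL,
       ORDER_DATE DATE)
       IN DBORDERS.TSORDERS;

-- Dropping some unrelated table now locks a *different* DBD,
-- so dynamic SQL against ORDERS is not affected.
```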
Glad it is Friday!
SOFTWARE ENGINEERING GmbH and SEGUS Inc.
Tel. +49 (0)211 96149-675
Fax +49 (0)211 96149-32
Web http://www.seg.de
Software Engineering GmbH
Düsseldorf Local Court (Amtsgericht), HRB 37894
Managing Directors: Gerhard Schubert, Ulf Heinrich
From: Michael Hannan
Sent: Friday, March 6, 2020 1:49 PM
Subject: [DB2-L] - RE: Interesting Aha Idea to vote for (Db2 for
In Reply to Roy Boxwell:
I think we all know the pain of changing a DBD during dynamic SQL.
In the past this was not soooo important, as dynamic SQL was rare;
nowadays it is the de facto standard, and so simply DROPping a dead
table that happens to be in the same database as your central, most
used, table is a real pain point. I have seen a very nice idea:
That raises the desire to somehow change the way the DBD is locked
and updated in cases like a DROP of a table. I can see this as a kind of
Phase-In Rebind. Why not slide the DBD change out gradually?
Whaddya all think???
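A minimal sketch of the contention being described (object names invented): a dynamic statement holds a share lock on the DBD, so a DROP of an unrelated table in the same database has to wait:

```sql
-- Thread 1: dynamic SQL against the busy central table.
-- Without statement caching, the PREPARE takes an S-lock on
-- the DBD of database DBPROD, held until commit.
SELECT CUST_NAME
  FROM DBPROD_SCHEMA.CUSTOMER
 WHERE CUST_ID = 42;

-- Thread 2: housekeeping tries to remove a dead table that
-- merely lives in the same database. The DROP needs an
-- exclusive lock on that same DBD, so it queues behind the
-- dynamic SQL readers.
DROP TABLE DBPROD_SCHEMA.OLD_DEAD_TABLE;
COMMIT;
```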
Not sure I fully understand the problem. In some earlier version of
DB2, maybe as early as V5, I recall Jim Teng presenting on improved
DBD concurrency (maybe not relevant for this one - unsure); anyway, at
some point share locking on the DBD by dynamic SQL was avoided when
using the statement cache.
Quoting the DB2 V9 Redbook on Resource Serialisation:
"Without statement caching, dynamic DML SQL statements acquire an
S-lock on the database descriptor (DBD), when the statement is
prepared. This prevents DDL from executing in the database that has
been referenced by the dynamic DML statement. The lock is held
until the thread reaches a commit point.
When using global statement caching (CACHEDYN=YES), no DBD lock is
acquired for short prepares (retrieving a statement from the cache)
or for full prepares (inserts into the cache) for SELECT, INSERT,
UPDATE, MERGE, or DELETE statements, provided statement caching has
not been prohibited for another reason. Those other reasons include
the use of the REOPT(ALWAYS) bind option, or if DDL was performed
by the application process in the same unit of work. TRUNCATE and
EXCHANGE both take a DBD lock."
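To make the quoted rules concrete, a sketch (table name invented) contrasting two ways of emptying a table when CACHEDYN=YES is in effect:

```sql
-- With CACHEDYN=YES, this dynamic DELETE can be prepared
-- (short or full prepare) and run without acquiring a DBD
-- lock, per the quote above.
DELETE FROM DBPROD_SCHEMA.WORK_QUEUE;

-- TRUNCATE, per the quote above, takes a DBD lock even with
-- the global statement cache active.
TRUNCATE TABLE DBPROD_SCHEMA.WORK_QUEUE;
```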
Maybe someone could explain the exact scenario in a little more
detail, where dynamic SQL is a problem as a share locker of the DBD.
DB2 Application Performance Specialist
CPT Global Ltd
-----End Original Message-----