Anything to optimise from this SQL ?

Nick Cianci

Chris,
	I'm assuming that 9' means 9 minutes. My guess on the merge scan is
that DB2 doesn't believe that the predicates on A are doing a lot of
filtering. It would help to know the cardinality stats for the tables and
what indices are available. For instance, is there an index on "I_VAL_SICO"
& "Y_STA_COD" on "A", and are there indices on "A" & "B" for "C_EMET"? You
will definitely need the latter to avoid an MS-join.

But briefly, to the point: no, I don't believe you'd make any huge gains,
because the bulk of the CPU would be spent in the sort part of the MS-join.
I would anticipate only a small gain, circa 10%, from COMP=NO. Decompressing
would help avoid some of the CPU associated w/ compression, but the flip
side is that you would increase the CPU associated with I/O (still a gain,
mind you), increase the elapsed time by the extra I/Os, and incur more BPool
traffic.
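If you do decide to test COMP=NO, a minimal sketch (the database and
tablespace names here are invented; existing rows are only stored
uncompressed after the tablespace is next REORGed or reloaded):

-- Hypothetical database.tablespace holding ER.VVA508
ALTER TABLESPACE DVVA508.SVVA508 COMPRESS NO;
-- ...followed by a REORG TABLESPACE DVVA508.SVVA508 so that the
-- existing rows are actually decompressed.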

My advice is to concentrate on the indexes, and to ask yourself: given the
cardinality & filter criteria from table "B", or the 2 columns on "A",
should I be able to process this with a nested-loop join? If the answer is
yes, then work to that end with indices and, IF APPROPRIATE, play with the
stats (bearing in mind the ramifications of playing w/ stats). If an MS-join
is unavoidable, or indeed desirable, then ask how you can organise the data
(is the clustering sequence correct, and has the data been re-org'd?) to
minimise the effort of sorting it into the "C_EMET" join sequence.
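To see what the optimiser is actually working with, a sketch of the catalog
queries I'd start from (standard DB2 for OS/390 catalog tables):

-- Table cardinalities and page counts
SELECT NAME, CARDF, NPAGES
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'ER'
   AND NAME IN ('VVA508', 'VVA532');

-- Column cardinalities for the join and filter columns
SELECT TBNAME, NAME, COLCARDF
  FROM SYSIBM.SYSCOLUMNS
 WHERE TBCREATOR = 'ER'
   AND TBNAME IN ('VVA508', 'VVA532')
   AND NAME IN ('C_EMET', 'I_VAL_SICO', 'Y_STA_COD');

-- Existing indexes, clustering flag and cluster ratio
SELECT TBNAME, NAME, CLUSTERING, CLUSTERRATIOF
  FROM SYSIBM.SYSINDEXES
 WHERE TBCREATOR = 'ER'
   AND TBNAME IN ('VVA508', 'VVA532');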

All the best.

Cheers,
Nick F. Cianci
EDS Canberra Solution Centre
DB2 DataBase Administrator
Phone: +61 (2) 6275-5863
+61 (0)408-64 06 01


-----Original Message-----
From: Christophe Radier [mailto:[login to unmask email]
Sent: Thursday, 9 January 2003 4:20 AM
To: [login to unmask email]
Subject: Anything to optimize from this SQL ?


Hi all,

I have an SQL statement which takes 9' to execute at normal times.
Unfortunately it is executed at night (in a test environment) during a peak
CPU usage period, and it then takes nearly 2 hours to execute (the job is
90% in CPU delay). I've tried to add indexes on the columns, but they do not
filter enough and are not used.

The SQL is:

SELECT B.MOT_DIR_R , B.MOT_DIR , A.C_VAL_AFC , B.D_DEB_VALI_2
FROM ER.VVA508 A , ER.VVA532 B
WHERE A.C_EMET = B.C_EMET
AND A.I_VAL_SICO = 'O'
AND A.Y_STA_COD <> 'A'

The optimizer decides to use a merge scan join (so a tablespace scan on
both tablespaces).
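For reference, the chosen access path can be confirmed by EXPLAINing the
statement into a PLAN_TABLE; a sketch, assuming a PLAN_TABLE already exists
under the current SQLID:

EXPLAIN PLAN SET QUERYNO = 508 FOR
SELECT B.MOT_DIR_R , B.MOT_DIR , A.C_VAL_AFC , B.D_DEB_VALI_2
  FROM ER.VVA508 A , ER.VVA532 B
 WHERE A.C_EMET = B.C_EMET
   AND A.I_VAL_SICO = 'O'
   AND A.Y_STA_COD <> 'A' ;

-- METHOD = 2 indicates a merge scan join, ACCESSTYPE = 'R' a tablespace scan
SELECT QBLOCKNO, PLANNO, TNAME, METHOD, ACCESSTYPE, ACCESSNAME,
       SORTN_JOIN, SORTC_JOIN
  FROM PLAN_TABLE
 WHERE QUERYNO = 508
 ORDER BY QBLOCKNO, PLANNO ;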

VVA508 is compressed, so I think that I should try to uncompress it
(COMP=NO) to avoid the extra CPU consumption.
Is there anything I should do?

Thanks


[login to unmask email]

Re: Anything to optimise from this SQL ?
(in response to Nick Cianci)
Hi,
I agree with Nick that more information would be helpful. Once you know
more details, you can write the query so that it selects a favored access
path.

Some more things to consider are:
A.C_EMET = B.C_EMET - are both these columns defined identically?
Do you have indexes on these columns?
Do you have column statistics on the columns in the WHERE clause?
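If the column statistics are missing or stale, a sketch of the RUNSTATS I
would collect first (the database and tablespace names are placeholders):

RUNSTATS TABLESPACE DVVA508.SVVA508
         TABLE(ER.VVA508) COLUMN(C_EMET, I_VAL_SICO, Y_STA_COD)
         INDEX(ALL)

RUNSTATS TABLESPACE DVVA532.SVVA532
         TABLE(ER.VVA532) COLUMN(C_EMET)
         INDEX(ALL)

For static SQL, rebind the plan or package afterwards so that the optimizer
can pick up the new statistics.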

Cheers,
Jaiwant
IBM Certified Solutions Expert
DB2 V7.1 Database Administration for OS/390
DB2 UDB V7.1 Database Administration for UNIX, Windows and OS/2
