Mad MAX goes again ....correction

Terry Purcell

Mad MAX goes again ....correction
After a discussion with Arli off list, I must post a correction to my
previous post:

I misread the original query: it had additional GROUP BYs whose purpose was to remove the
duplicate K_NUMBER values. Therefore the statement:
"With the combination of the DISTINCT and GROUP BY, the DISTINCT is
redundant and therefore discarded by the optimizer."
is not correct in this context.

The other addition is that any references to GROUP BY should read "double
GROUP BY" to distinguish between the two original queries.
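To make the retracted point concrete, here is a quick sketch using Python's sqlite3 (not DB2, and the table name, columns and sample data are invented for illustration): the DISTINCT on top of a GROUP BY YEAR is not redundant, because two years can share the same MAX value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (k_number INTEGER, year INTEGER)")
# Two years (1998 and 1999) share the same maximum k_number, 5.
cur.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [(5, 1998), (3, 1998), (5, 1999), (2, 1999), (7, 2000), (5, 2000)])

without_distinct = sorted(r[0] for r in cur.execute(
    "SELECT MAX(k_number) FROM MyTable GROUP BY year"))
with_distinct = sorted(r[0] for r in cur.execute(
    "SELECT DISTINCT MAX(k_number) FROM MyTable GROUP BY year"))

print(without_distinct)  # [5, 5, 7] -- one max per year, duplicates kept
print(with_distinct)     # [5, 7]    -- DISTINCT removes the duplicate maxes
```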

The rest of my remarks are free to be challenged or clarified :-)

That'll teach me to reply to emails at 2am!!

Terry
-----Original Message-----
From: Terry Purcell [mailto:[login to unmask email]
Sent: Thursday, December 14, 2000 12:46 AM
To: DB2 Data Base Discussion List
Subject: RE: Mad MAX goes again ....


Arli & Venkat,

I had been off the list and unfortunately missed the trail on this.

The 4K cache is the limit and there is no way to alter the size.

With the combination of the DISTINCT and GROUP BY, the DISTINCT is redundant
and therefore discarded by the optimizer.

Also, regarding the comparison of the original two queries (I know it was a logical
comparison), the main difference between the GROUP BY expression and the
correlated subquery is as follows:

Both examples require the entire MYTABLE table to be processed, and the
most efficient method is a sequential access method. The correlated subquery
requires all rows of the outer query to be read, and every row passed to the
inner subquery (of course, repeated executions can be reduced by the 4K
correlation cache, dependent on the random nature of the inner table sequence).
The inner table accesses MAY benefit from index lookaside and dynamic prefetch,
but even index lookaside requires access to every index non-leaf page once
the end of each leaf page is reached.

Alternatively, the GROUP BY will utilize pure sequential prefetch (rather
than the less efficient dynamic prefetch, which is still more efficient than
sync I/O), and will not require getpages on any non-leaf pages, as the
non-matching index scan (assuming this is the access path chosen) scans leaf
pages only. The GROUP BY may avoid a sort due to an available index, but
will materialize the NTE. The inner join to the other NTE will perform an NTE
merge so that there is only one materialization (even though the query
visually contains 2 NTEs).

The upshot is that when processing the entire result set, a pure sequential
approach involving a GROUP BY encapsulated within an NTE can easily
outperform a correlated subquery, which must pass every row to stage 2
either to attempt the subquery or to match the correlation predicates
against the TRUE/FALSE result in the correlation cache.

In addition to these two methods, there is a further coding method which
can retrieve both summary and detail columns within a single query block
(provided an index can avoid a sort), unlike the two examples shown,
which require two accesses of the table.

You can check this out at YL&A's Advanced & Complex SQL courses, or I can
pass this on to the list if required.

Regards
Terry Purcell
[login to unmask email]
Yevich Lawson & Associates
WWW.YLASSOC.COM
WWW.DB2-PORTAL.COM
WWW.DB2-SYMPOSIUM.COM



Arli (SG-SEMA) Salim

Re: Mad MAX goes again ....
(in response to Terry Purcell)
Hi Sanjeev,

I think the distinct in the third SQL would solve that
problem.

However, I agree with you that the result sets "may" be
different (because no ORDER BYs are specified in any of
the SQLs). Any opinion on this?

Cheers,
arli

-----Original Message-----
From: S, Sanjeev (CTS) [mailto:[login to unmask email]
Sent: Thursday, December 14, 2000 3:31 PM
To: [login to unmask email]
Subject: Re: Mad MAX goes again ....


Hi Arli,
What if I point out something which may be wrong? "Someone will point me
in the right direction, and that's good." I sent this mail yesterday as well,
but it bounced back because of a change in my e-mail address. Thanks to the
list owner for changing the address.

I think that the result sets "may" be different due to the grouping you have
done on year. With the SQLs mentioned in Isaac's mail, you can miss the
records of some years if MAX(K_NUMBER) is the same for some of the years.
With your SQL, you will be getting those as well. I hope I have understood
things correctly.
Regarding performance, I would say check all the possibilities against
the stats and choose the better one.

HTH
Regards
Sanjeev



> -----Original Message-----
> From: Salim, Arli (SG-SEMA) [SMTP:[login to unmask email]
> Sent: Wednesday, December 13, 2000 9:34 AM
> To: [login to unmask email]
> Subject: Re: Mad MAX goes again ....
>
> Hi Isaac & other Mad MAXes,
>
> I'm a new dba but am prepared to be shot down by you all,
> on or off the list.
>
> I feel that there's a better SQL to do the same job.
> Could any of you confirm the equivalence and relative
> performance of this one:
>
> select distinct max(k_number)
> from MyTable
> group by year;
>
> Cheers,
> arli
>
> > > SELECT A.K_NUMBER
> > > FROM MyTable A
> > > WHERE A.K_NUMBER =
> > > (SELECT MAX(B.K_NUMBER)
> > > FROM MyTable B
> > > WHERE A.YEAR=B.YEAR)
> > > GROUP BY A.K_NUMBER ;
> > >
> > > SELECT A.K_NUMBER
> > > FROM
> > > (SELECT K_NUMBER,YEAR
> > > FROM MyTable) AS A
> > > INNER JOIN
> > > (SELECT MAX(K_NUMBER) AS K_NUMBER,YEAR
> > > FROM MyTable
> > > GROUP BY YEAR) AS B
> > > ON A.YEAR = B.YEAR
> > > AND A.K_NUMBER=B.K_NUMBER
> > > GROUP BY A.K_NUMBER ;
> > >
> > >
>
This e-mail and any files transmitted with it are for the sole use
of the intended recipient(s) and may contain confidential and privileged information.
If you are not the intended recipient, please contact the sender by reply e-mail and
destroy all copies of the original message. Any unauthorised review, use, disclosure,
dissemination, forwarding, printing or copying of this email or any action taken in
reliance on this e-mail is strictly prohibited and may be unlawful.

Visit us at http://www.cognizant.com





Isaac Yassin

MAD MAX goes again
(in response to Arli (SG-SEMA) Salim)
Hi,
As my PC went down with all my mail, I have not kept up with everything yet.
Even now the "reply" does not work...
Anyway, the query was meant to check the price one pays for the limited
4KB table; this should be changed, either by enlarging it or by allowing
syntax like that allowed in the PC/UNIX DB2 version.
As for the "distinct...group by", it takes 2 minutes and 4 seconds of CPU
time, which is more than the "join" method.
Thanks, y'all, for participating in the "game".

Merry Xmas / Happy Hanukkah / blessed Ramadan - pick your choice ...

Isaac Yassin
DBMS & IT Consultant
[login to unmask email]



Arli (SG-SEMA) Salim

Re: MAD MAX goes again
(in response to Isaac Yassin)
Hi Isaac,

Just came back from leave. Thanks very much for taking the
trouble to run the SQL.

Did anybody notice that Query 3 (Q3) is quite similar to the
right hand side of the inner join of Q2, with the only major
difference being the distinct clause?

I'm quite puzzled that Q3 didn't outperform Q2. I had estimated
the result set of 'max(k_number) group by year' to be quite
small (some hundred rows at most, I believe). In such a
case, can I more or less say the following (of course, just
wild guesses without access plans & stats):

1. sortc_uniq (for the distinct) processing was the cause for
the slower performance?; and/or
2. Q2 was run after Q1, so it probably took advantage of
some bufferpool hits?

For those who missed the earlier parts, here are the CPU times
taken by Isaac:

Query No   CPU time (min:sec.centisec)
Q1         3:51.43
Q2         1:58.24
Q3         2:04

Q1:
SELECT A.K_NUMBER
FROM MyTable A
WHERE A.K_NUMBER =
(SELECT MAX(B.K_NUMBER)
FROM MyTable B
WHERE A.YEAR=B.YEAR)
GROUP BY A.K_NUMBER ;

Q2:
SELECT A.K_NUMBER
FROM
(SELECT K_NUMBER,YEAR
FROM MyTable) AS A
INNER JOIN
(SELECT MAX(K_NUMBER) AS K_NUMBER,YEAR
FROM MyTable
GROUP BY YEAR) AS B
ON A.YEAR = B.YEAR
AND A.K_NUMBER=B.K_NUMBER
GROUP BY A.K_NUMBER ;

Q3:
SELECT DISTINCT MAX(K_NUMBER)
FROM MyTable
GROUP BY YEAR;
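For anyone who wants to replay the comparison, here is a small sketch using Python's sqlite3 (not DB2, so it says nothing about access paths or CPU; the sample data is invented) confirming that Q1, Q2 and Q3 return the same result set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (k_number INTEGER, year INTEGER)")
# Per-year maxima: 1998 -> 5, 1999 -> 5, 2000 -> 7 (two years share a max).
cur.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [(5, 1998), (3, 1998), (5, 1999), (2, 1999), (7, 2000), (5, 2000)])

q1 = """SELECT A.k_number FROM MyTable A
        WHERE A.k_number = (SELECT MAX(B.k_number) FROM MyTable B
                            WHERE A.year = B.year)
        GROUP BY A.k_number"""          # correlated subquery
q2 = """SELECT A.k_number FROM
        (SELECT k_number, year FROM MyTable) AS A
        INNER JOIN
        (SELECT MAX(k_number) AS k_number, year FROM MyTable
         GROUP BY year) AS B
        ON A.year = B.year AND A.k_number = B.k_number
        GROUP BY A.k_number"""          # join of two table expressions
q3 = "SELECT DISTINCT MAX(k_number) FROM MyTable GROUP BY year"

results = [sorted(r[0] for r in cur.execute(q)) for q in (q1, q2, q3)]
print(results)  # [[5, 7], [5, 7], [5, 7]] -- identical (unordered) result sets
```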

Cheers,
arli






Sanjeev (CTS) S

Re: MAD MAX goes again
(in response to Arli (SG-SEMA) Salim)
Hi Salim,
Thanks for summarizing the queries and CPU times. Sometimes it is very
difficult to go back and check these things.
To open some more discussion, and giving it some thought based on your
details, I would say I expected Q3 "not" to outperform Q2, because of
Venkat's mail and his explanation of the join performed in the case of Q2.
Venkat said: one scan of the table, and then making the result set smaller
after joining. I would add that your GROUP BY on K_NUMBER (in Q2) is on a
smaller result set than the DISTINCT on K_NUMBER (in Q3), so there are
chances that Q2 will outperform Q3. SORTC_UNIQ was definitely the cause of
the poorer performance, because of the DISTINCT on K_NUMBER; a GROUP BY on
K_NUMBER was also there, so the only differentiating point is sorting the
smaller result set.
As for your second point, Q2 might have taken advantage of Q1, but when
was Q3 run? After Q2 or Q1? No, it was run independently by Isaac.
I believe the CPU time would not vary so significantly; it would
definitely affect the I/Os and waits (if any), and as a whole the elapsed
time.

Let us see what others have to say. I think I just opened a serious
discussion for everyone in the holiday mood.

Happy New Year to all !!

HTH
Regards,
Sanjeev





Sanjeev (CTS) S

Re: MAD MAX goes again
(in response to Isaac Yassin)
Hi Isaac,

Thanks for the results. Despite the difficulty of the EXPLAIN output, it
was a useful analysis.

Looking at the EXPLAIN output, I would always say query no. 999992 (i.e.
the last query below) is better. If someone had not told me the CPU time
taken, I would have gone by EXPLAIN, and the correlated subquery looks like
the best choice. If I have read the EXPLAIN output correctly, there is no
sort for query no. 999992, and that is the most important thing we can look
at. Another difference I have seen is that COLUMN_FN_EVAL is "R" in the
case of the last query (999992). Is this causing the extra CPU, or is it
also useful? As I understand it, for a small result set, column function
evaluation is better if done after everything else, i.e. when the value is
blank.

Now I think it is the data which is responsible for this difference in CPU
time. But how? In the first few mails of this thread we discussed whether
the 4K memory table is utilized properly, but of all the reasons mentioned
by Venkat and you for the 4K memory table not being used, nothing looks
true to me in this case. Let us look back at a few of the mails:

)):- If the ratio (COLCARD K_NUMBER / COLCARD YEAR) is high and the table is
clustered by YEAR, then the correlated subquery is a very good choice
because the results are kept in the memory table and no extra access is
necessary.
DB2 also does not use the memory table under the following circumstances:
- The sum of the lengths of the correlated columns is more than 256 bytes.
- There is a unique index on a subset of the correlated columns of a table
from the outer query, or there are more than 16 columns used for
correlation.

Obviously, in Isaac's case, either there is a unique index or the 4K table
is not used effectively, causing grief. Therefore the join appears to be a
better choice. ((:-

You wrote something like this:
)):- Let's take a look at the 4KB memory table. Assuming an INT field for
the GROUP BY and another INT for the SUM, you get 8 bytes. 4000/8 = 500
rows (OK, I cut a corner here), and if you have more fields in the group,
it's even less. So, if you have more "groups" than that, you start to lose
when you overlay them. ((:-
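Isaac's back-of-the-envelope arithmetic generalizes; a tiny sketch (the 4000-byte usable size and the fixed-length-entry layout are his simplifying assumptions, not a documented formula):

```python
def correlation_cache_rows(entry_bytes, cache_bytes=4000):
    """Rows that fit in the cache, assuming fixed-size entries."""
    return cache_bytes // entry_bytes

print(correlation_cache_rows(8))   # 500: one INT group key + one INT value
print(correlation_cache_rows(12))  # 333: one extra 4-byte column per entry
```

As Isaac notes, once the number of distinct "groups" exceeds this capacity, entries start being overlaid and the cache stops paying off.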

So, how many SQLs are we going to check? And of course, it always depends.
I think we are finding alternative ways of coding the queries for when we
hit performance problems.

Thanks for opening this nice, informative thread. Let's see someone else's
analysis of the EXPLAIN report.

Regards,
Sanjeev







Isaac Yassin

MAD MAX goes again
(in response to Sanjeev (CTS) S)
Hi,

I hope you can suffer the explain output ;-)

All measurement runs were conducted separately on a separate BP, which was
contracted and expanded (1000) for each run; a few runs were done, the
times are averages, and the fluctuations were not high.
Use an editor like PFE which allows a wide view and it's OK.
The answer set had 11 rows before the distinct and 3 after it.
The price of a sort is very high, and for the distinct/group-by query it
was done twice; this can explain the high time for it.

Again, sorry for the output.


---------+---------+---------+---------+---------+---------
DELETE FROM PLAN_TABLE WHERE QUERYNO >= 999990 ;
---------+---------+---------+---------+---------+---------
DSNE615I NUMBER OF ROWS AFFECTED IS 9
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------
COMMIT;
---------+---------+---------+---------+---------+---------
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------

-----------------------------------------------------------

EXPLAIN PLAN SET QUERYNO = 999990
FOR

SELECT DISTINCT MAX(k_number)
FROM my.table
GROUP BY YEAR ;
---------+---------+---------+---------+---------+---------
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------

EXPLAIN PLAN SET QUERYNO = 999991
FOR

SELECT A.k_number
FROM
(SELECT k_number,YEAR
FROM my.table ) AS A
INNER JOIN
(SELECT MAX(k_number) AS k_number,YEAR
FROM my.table
GROUP BY YEAR) AS B
ON A.YEAR = B.YEAR
AND A.k_number=B.k_number
GROUP BY A.k_number ;
---------+---------+---------+---------+---------+---------
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------

EXPLAIN PLAN SET QUERYNO = 999992
FOR

SELECT A.k_number
FROM my.table A
WHERE A.k_number =
(SELECT MAX(B.k_number)
FROM my.table B
WHERE A.YEAR=B.YEAR)
GROUP BY A.k_number ;
---------+---------+---------+---------+---------+---------
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------


-----------------------------------------------------------

-----------------------------------------------------------

COMMIT;
---------+---------+---------+---------+---------+---------
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------
SELECT * FROM PLAN_TABLE WHERE QUERYNO >= 999990
ORDER BY QUERYNO,PLANNO,QBLOCKNO;
---------+---------+---------+---------+---------+---------
(Condensed to the non-blank columns. All SORTN_* flags were N on every row;
SORTC flags are shown in the order UNIQ/JOIN/ORDERBY/GROUPBY. PROGNAME was
DSNESM68 and COLLID DSNESPCS throughout; the EXPLAIN timestamps were
2000122709330652, ...0654 and ...0658 for queries 999990, 999991 and 999992.)

QUERYNO QBLOCKNO PLANNO METHOD CREATOR TNAME TABNO ACCESS MATCH ACCESSNAME IXONLY SORTC    TSLOCK PREF FN_EVAL CORR
                                                         TYPE  COLS                       U/J/O/G MODE ETCH
999990  1        1      0      my      table 1     I      0     iindex04   Y      N/N/N/N  IS     S
999990  1        2      3                    0            0                N      N/N/N/Y         S
999990  1        3      3                    0            0                N      Y/N/N/N
999991  1        1      0      ZZZ     B     2     R      0                N      N/N/N/N         S
999991  3        1      0      my      table 3     I      0     iindex04   Y      N/N/N/N  IS     S
999991  1        2      1      my      table 1     I      2     iindex04   Y      N/Y/N/N  IS
999991  3        2      3                    0            0                N      N/N/N/Y         S
999992  1        1      0      my      table 1     I      0     iindex04   Y      N/N/N/N  IS     S            A
999992  2        1      0      my      table 2     I      0     iindex04   Y      N/N/N/N  IS     S    R       B
DSNE610I NUMBER OF ROWS DISPLAYED IS 9
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 100
---------+---------+---------+---------+---------+---------
---------+---------+---------+---------+---------+---------
DSNE617I COMMIT PERFORMED, SQLCODE IS 0
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------
DSNE601I SQL STATEMENTS ASSUMED TO BE BETWEEN COLUMNS 1 AND 72
DSNE620I NUMBER OF SQL STATEMENTS PROCESSED IS 7
DSNE621I NUMBER OF INPUT RECORDS READ IS 46
DSNE622I NUMBER OF OUTPUT RECORDS WRITTEN IS 89


Isaac Yassin
DBMS & IT Consultant
[login to unmask email]



Isaac Yassin

MAD MAX goes again
(in response to Sanjeev (CTS) S)
Hi,
I'm really sorry for the explain format; I will work on something better
next time.
I have no full answers for this case. What it teaches is that EXPLAIN
should be handled with care, that queries can be written in different ways,
and that you are never sure which one is best until you really time it
yourself.
Which means that "it depends..." should be the motto...

That was a good and informative discussion, and many people contributed to
it. I would like to thank all those who gave their input and wish you all a
happy new year.

Isaac Yassin
DBMS & IT Consultant
[login to unmask email]



Arli (SG-SEMA) Salim

Re: MAD MAX goes again
(in response to Isaac Yassin)
I agree with Isaac that the answer is (always) "it depends."

Previously, I mentioned that I was puzzled that the distinct-
group-by query did not outperform the inner join query.

I think I can now make the following observations based on the
explain output:

- All three queries used iindex04, so there were most likely no
other, better indexes that had YEAR and K_NUMBER as their key
columns.
- iindex04 was probably based on (K_NUMBER, YEAR), in that order,
because the distinct-group-by query had to perform a SORTC_GROUPBY.
- Had there been an index based on (YEAR, K_NUMBER), the distinct-
group-by query would most likely have been the fastest of all,
because it would only need to SORTC_UNIQ 11 rows.

If I'm mistaken, I hope somebody will correct me, since this is
my first time explaining EXPLAIN. :-)
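The point about index column order can be illustrated outside DB2. In SQLite (the index name is invented, and SQLite plans are of course not DB2 access paths), EXPLAIN QUERY PLAN shows the grouping sort disappearing once an index leads on the GROUP BY column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (k_number INTEGER, year INTEGER)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

q3 = "SELECT DISTINCT MAX(k_number) FROM MyTable GROUP BY year"

before = plan(q3)  # no helpful index yet
cur.execute("CREATE INDEX ix_year_k ON MyTable (year, k_number)")
after = plan(q3)   # index leads on year, the GROUP BY column

print("TEMP B-TREE FOR GROUP BY" in before)  # True: grouping needs a sort
print("ix_year_k" in after)                  # True: rows arrive in year order
print("TEMP B-TREE FOR GROUP BY" in after)   # False: grouping sort avoided
```

(The DISTINCT step may still sort in both plans; the index only removes the GROUP BY sort, which mirrors Arli's SORTC_GROUPBY vs. SORTC_UNIQ reasoning.)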

Happy New Year,
arli
