Reply: Re: How to improve the operation of insert and update

Eric Pearson

Reply: Re: How to improve the operation of insert and update
Unless you have *lots* (>10?) of duplicate key situations,
you would probably improve things by *not* checking for duplicates
on every insert but just handling the cases where the insert fails due to
uniqueness constraints.
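In application terms, the pattern Eric describes looks roughly like this (a minimal sketch using Python's sqlite3 as a stand-in for DB2; the table, key, and exception handling are illustrative — in DB2 the failed insert would come back with SQLCODE -803 rather than an IntegrityError):

```python
import sqlite3

# In-memory stand-in for the 40M-row DB2 table; the unique key plays
# the role of the uniqueness constraint on the real table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")

def insert_row(k, v):
    """Insert WITHOUT a prior duplicate-check SELECT; let the unique
    constraint reject duplicates (DB2 would return SQLCODE -803)."""
    try:
        conn.execute("INSERT INTO t (k, v) VALUES (?, ?)", (k, v))
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate key: skip the row instead of abending

# Keys 2 and 1 repeat, so only three of the five inserts succeed.
inserted = sum(insert_row(k, "x") for k in [1, 2, 2, 3, 1])
```

This saves one SELECT per row in the common (non-duplicate) case, which is why Eric hedges it on having few duplicate-key situations.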

Also, consider adjusting PCTFREE so that you minimize page splits.

Do you have some sort of monitor which tells you how the time on the
inserts is spent? How much time in I/O, how much CPU, how much lock wait,
etc?
This info may give you hints about what to tweak first.
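Lacking a real DB2 monitor, even a crude application-side timer can hint at where the time goes. A minimal sketch, assuming a two-phase breakdown (insert vs. commit) and again using sqlite3 as a stand-in — a real monitor would also break out I/O and lock wait, which this cannot see:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")

# Accumulate wall-clock time per phase of the batch job.
timings = {"insert": 0.0, "commit": 0.0}

def timed(phase, fn, *args):
    """Run fn(*args) and add its elapsed time to the named phase."""
    start = time.perf_counter()
    result = fn(*args)
    timings[phase] += time.perf_counter() - start
    return result

for k in range(1000):
    timed("insert", conn.execute, "INSERT INTO t VALUES (?)", (k,))
timed("commit", conn.commit)
```

Comparing the phase totals tells you which side to tune first, which is the spirit of Eric's question.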

regards,

eric pearson
NS ITO Database Support


-----Original Message-----
From: elibs [mailto:[login to unmask email]
Sent: Sunday, January 07, 2001 11:06 AM
To: [login to unmask email]
Subject: Reply: Re: How to improve the operation of insert and update


Hi, John:

I am sorry that I left out some information about my question. In fact, the
table is used for both insert and read: one batch program inserts about 200K
rows into it every night, and one CICS program reads the table randomly every
day, so I have to consider both situations. Adding an index and partitioning
the table helped the CICS program, but I have no idea how to improve the
batch program's performance beyond enlarging the bufferpools, assigning
different bufferpools and DASD to the index and the table, assigning enough
cylinders to the tablespace, routinely reorging the tablespace, and sorting
the input data according to the index.
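That last point — sorting the input data according to the (clustering) index before the nightly run — is just a pre-sort step. A minimal sketch, where the key position in each input row is an assumption for illustration:

```python
# Input rows arrive in random order; sorting them by the clustering
# key before the batch insert makes page access roughly sequential
# instead of random across the 40M-row table.
input_rows = [(42, "c"), (7, "a"), (19, "b"), (3, "d")]

# Assumption: column 0 of each row is the clustering key.
CLUSTER_KEY_POS = 0
sorted_rows = sorted(input_rows, key=lambda row: row[CLUSTER_KEY_POS])
```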
Before inserting a row into the table, the batch program always checks
whether there is a duplicate record, which is verified by the index and
another field.
One other thing about the question: I tried the batch program on the test
system, and it took only 3 minutes. That is very surprising.

Meanwhile, I am very sorry to have omitted so many things, for I am very new
to the LIST and to DB2. I hope I am not bothering you too much.

Looking forward to your further instructions.

Thanks

Larry


-----Original Message-----
From: John Hardy <[login to unmask email]>
Newsgroups: bit.listserv.db2-l
To: [login to unmask email] <[login to unmask email]>
Date: January 8, 2001 6:53
Subject: Re: How to improve the operation of insert and update


>Does your 'insert' process simply consist of 200,000 iterations of a random
>SQL Insert? If so, then why do you think that adding an extra index and
>partitioning your table will help? (I say an extra index as I assume that
>your 40-million-row table was already indexed!?).
>
>If your process does consist simply of a series of random Inserts, then
>inserting the rows in clustering sequence often dramatically improves
>performance.
>
>If on the other hand your 'insert' process consists of a read of the table
>(with or without a cursor) followed by an insert (hence your new index)
>then others on the list might be able to give you some tips/further lines
>of investigation.
>
>I am afraid that you will need to supply more information.
>
>
>
>
>On Fri, 5 Jan 2001 08:50:45 -0500, elibs <[login to unmask email]> wrote:
>
>>Hello, All:
>>
>> I have a serious performance problem with my large table, which has
>40M rows and continues to grow.
>>
>> The case is that I have a routine batch job developed by AP to insert
>200K rows into this table daily. I am very frustrated that this job always
>takes about 20 minutes to complete; although I have added an index,
>partitioned the table into 4 parts, and run REORG, RUNSTATS, and REBIND
>every week, none of it has helped.
>>
>> Could someone kindly help me?
>>
>>Thanks in advance
>>
>>Larry
>>
Mohammed Nayeem

Re: Reply: Re: How to improve the operation of insert and update
(in response to Eric Pearson)



Set FREEPAGE 5 and PCTFREE 10 or 15 and try it.

Sanjeev (CTS) S

Re: Reply: Re: How to improve the operation of insert and update
(in response to Mohammed Nayeem)
Everything has been clearly explained by John.

The best thing is to run the job under an online monitor and look for the
reason for the poor performance. You will probably find that the SELECT
statement checking for duplicates is the culprit; remove it, let the insert
get SQLCODE -803, and remove the abend logic for that SQLCODE. Also look at
the buffer pool activity for the INSERT and the SELECT; probably the SELECT
is finding nothing in the buffer pool.
One more important thing is commit frequency. Sometimes too many commits
cause problems.
I would ask the list to go back to 1998 and look at some great comments in
the article by Bonnie Baker, which explains:

"If a job is doing iterative, looping inserts against the same table,
DB2 monitors this activity at run time. After the third insert, DB2 gets
tired of repetitively building the code needed to do the insert, assumes
that you are going to continue doing this insert a reasonable number of
times, and decides to create an executable procedure. The procedure reduces
CPU overhead and improves performance. However, you must use the insert
procedure (IPROC) at least five times to pay for the build overhead. After
the eighth insert (the fifth using the procedure), the SQL is faster and
cheaper. If a job (bound with RELEASE COMMIT) commits after every insert,
the IPROC is never built. If the job commits after every three or four
inserts, the procedure barely gets built before its untimely demise. All
cost, no payback. Not good."
Can we do something with this?
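The commit-frequency trade-off the quote describes can be sketched as follows (sqlite3 stands in for DB2, and COMMIT_INTERVAL is a hypothetical tuning knob; DB2's IPROC behavior is internal and not visible from the application, so only the batching itself is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")

# Assumption: commit every N inserts rather than every insert, so a
# built insert procedure (IPROC) survives long enough to pay for itself.
# Tune N against your restart and lock-holding requirements.
COMMIT_INTERVAL = 500

commits = 0
for i, k in enumerate(range(2000), start=1):
    conn.execute("INSERT INTO t VALUES (?)", (k,))
    if i % COMMIT_INTERVAL == 0:
        conn.commit()
        commits += 1
conn.commit()  # final commit for any trailing rows
```

With 2000 rows and an interval of 500 this commits four times in the loop instead of 2000 times, which is the regime the quote says lets the IPROC earn back its build cost.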
HTH
Regards,
Sanjeev

