Reply: Re: How to improve the operation of insert and update

elibs

Reply: Re: How to improve the operation of insert and update
Hi John,

I am sorry that I left out some information about my question. In fact, the
table is used for both inserts and reads: one batch program inserts about 200K
rows into it every night, and one CICS program reads the table randomly
throughout the day, so I have to consider both situations. Adding the index
and partitioning the table produced a visible improvement for the CICS
program, but I have no idea how to improve the batch program's performance
beyond enlarging the bufferpools, assigning separate bufferpools and DASD to
the index and the table, allocating enough cylinders to the tablespace,
routinely reorganizing the tablespace, and sorting the input data according
to the index.
Also, before inserting each row into the table, the batch program always
checks whether a duplicate record exists; the check uses the index plus one
other field.
One more thing about the question: I tried the batch program on our testing
system, and it took only 3 minutes, which is quite surprising.

Meanwhile, I am very sorry to have omitted so many things, as I am very new
to the LIST and to DB2. I hope I am not bothering you too much.

Looking forward to your further instructions.

Thanks

Larry


-----Original Message-----
From: John Hardy <[login to unmask email]>
Newsgroups: bit.listserv.db2-l
To: [login to unmask email] <[login to unmask email]>
Date: 8 January 2001 6:53
Subject: Re: How to improve the operation of insert and update


>Does your 'insert' process simply consist of 200,000 iterations of a random
>SQL Insert? If so, then why do you think that adding an extra index and
>partitioning your table will help? (I say an extra index as I assume that
>your 40-million-row table was already indexed!?).
>
>If your process does consist simply of a series of random Inserts, then
>inserting the rows in clustering sequence often dramatically improves
>performance.
>
>If on the other hand your 'insert' process consists of a read of the table
>(with or without a cursor) followed by an insert (hence your new index)
>then others on the list might be able to give you some tips/further lines
>of investigation.
>
>I am afraid that you will need to supply more information.
>
>
>
>
>On Fri, 5 Jan 2001 08:50:45 -0500, elibs <[login to unmask email]> wrote:
>
>>Hello, All:
>>
>> I have a serious performance problem with my large table, which has
>>40M rows and continues to grow.
>>
>> The case is that I have a routine batch job, developed by AP, that
>>inserts 200K rows into this table daily. I am very frustrated that this
>>job always takes about 20 minutes to complete; although I have added an
>>index to the table, partitioned it into 4 parts, and reorg, runstats, and
>>rebind it every week, it still means nothing.
>>
>> Could someone kindly help me?
>>
>>Thanks in advance
>>
>>Larry
>>
>
>
>





John Hardy

Re: Reply: Re: How to improve the operation of insert and update
(in response to elibs)
Hello Larry,

No need to apologise for asking questions. That's what the list is here
for. You weren't to know, and it's easy to take some things for granted
when posing a question. You also might not yet know exactly what
information others need to help you. You'll soon find out. In the
meanwhile, it's always a good idea to start your question by stating the
platform and release of DB2 you are using, e.g. 'DB2 for OS/390 Version 5'.

If I understand you correctly, then you are attempting a SELECT with the
key of the row you are about to insert. I think that perhaps this SELECT
might well be the costly part of your process, or in any case, it certainly
won't help. If your index is unique, then perhaps you could try to skip
this read altogether and allow for a -803 SQLCODE from your insert. You
might like to consider including the non-key column you refer to in the
index to enable this. If for any reason you cannot do this, then please
look more closely at the performance of your pre-Insert Select query.
Sorting your input in Clustering Sequence will most probably help.
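To make the suggestion concrete, here is a sketch of the pattern John
describes, with hypothetical table, column, and host-variable names (your
actual statement will differ):

```sql
-- Hypothetical names throughout. Instead of a SELECT to check for a
-- duplicate followed by an INSERT, issue the INSERT directly:
INSERT INTO BATCH.MY_TABLE (KEY_COL, OTHER_COL)
       VALUES (:HV-KEY, :HV-OTHER);
-- Then, in the application program, test SQLCODE after the INSERT:
--   SQLCODE =    0  ->  row inserted successfully
--   SQLCODE = -803  ->  duplicate key on a unique index: the row
--                       already exists, so skip it and move on to
--                       the next input record
```

This halves the SQL calls per input row, at the cost of treating -803 as an
expected condition rather than an error.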

So far as the performance of the program in your testing system goes: The
volumes of data in your testing system are likely to be much smaller than
your production volumes. If however, the table in your testing DB also has
40 million rows, and your job is inserting the same number of rows, then
please inform the list, as this would shed a completely different light on
your problem.

By the way, one of the respondents to your query mentioned that you would
need to check that your free space was sufficient. Please ensure that you
have enough PCTFREE in your Pagesets to allow for growth between REORGs.
Note that this does not apply to your Tablespace if it is clustered on an
ascending key, or to any ascending key indexes (e.g. those based on a
defaulted timestamp).
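For example, with hypothetical object names, the free space could be adjusted
like this (tune the numbers to the growth you expect between REORGs):

```sql
-- Hypothetical database/tablespace/index names.
ALTER TABLESPACE MYDB.MYTS PCTFREE 10 FREEPAGE 0;
ALTER INDEX MYUSER.MYINDEX PCTFREE 10;
-- The new values only take effect when the pageset is next
-- reorganized or loaded (REORG or LOAD).
```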

John


On Sun, 7 Jan 2001 11:06:29 -0500, elibs <[login to unmask email]> wrote:
