Disaster Recovery

[login to unmask email]

Disaster Recovery
I was wondering if anyone would be willing to share some of their current
practices as related to offsite Disaster Recovery. Some questions I have are:

1) What are people doing to ensure that recovery times at the offsite
location are kept to an absolute minimum? For example, copying archive logs
to disk before recovering application databases.

2) How do you handle synchronizing DB2 and non-DB2 resources when you
can't get a system-wide quiesce point?

3) For minimizing recovery times and reducing the synchronization
issues, is anyone doing mirroring (hot site connections) to their disaster
recovery site? What software are you using? What are some of the
considerations for this kind of setup? Is there any good reference for this?

Any insights would be greatly appreciated.

Thanks.
Dave.



Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to DPetro@AOL.COM)
Hi Dave .

In reply to your request for Information see information below after
each of your questions.

Hope this helps

Leslie Pendlebury-Bowe
DB2 DBA OS390 SAP etc


______________________________ Reply Separator _________________________________
Subject: Disaster Recovery
Author: [login to unmask email] at Internet
Date: 1/3/00 10:10 PM


I was wondering if anyone would be willing to share some of their current
practices as related to offsite Disaster Recovery. Some questions I have are:

1) What are people doing to ensure that recovery times at the offsite
location are kept to an absolute minimum? For example, copying archive logs
to disk before recovering application databases.

> The largest part of any recovery is the data recovery (well, it should be,
unless you are a very small site). In our case the catalog and directory take
about an hour to get back, and that includes recovering our third-party
software database as well (Platinum). We use tape drives for our backups at
the moment: we run 8 streams, and this allows us to back up our 500GB (only
360GB application data) database in about 3 hours. We take onsite and offsite
copies and only have 8 drives, which means we can only run 4 streams at a
time. BUT at our DR site we run 8 recovery streams and get the data back in
3 to 4 hours. We also start index recoveries for those streams that finish
early at DR, which means we are well on the way to getting DB2 back inside
8 to 10 hours (well, that is the theory anyway).

I might add that we put our archive logs onto DASD before the DB2 recovery
begins (our OS/390 recovery takes care of this for us). I would like to take
this a stage further and copy the archive logs into the actives, which would
let DB2 access them that much more quickly. Note that DB2 can have up to 32
active logs, so one approach is to alter the ZPARM to allow 32 active logs,
copy up to 32 archives (if they are required in the recovery process) into
the active log datasets, do your recovery, and then resize the actives back
down to your site standards. I have never done this, but I am told by an
expert that if you have significant log apply in your recovery (we do; at
the recovery site we can see up to an hour of log apply alone) then you will
see a significant reduction in recovery time. Has anybody tried this?
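As an illustration of the staging step described above, a small script could pair each needed archive log with a spare active log dataset and emit the IDCAMS REPRO commands to copy them in. This is a hedged sketch only; the dataset name patterns are invented for the example, not the poster's actual procedure.

```python
# Hypothetical sketch: generate IDCAMS REPRO commands to stage required
# archive logs into spare active log datasets before a DR recovery.
# Dataset name patterns below are examples, not a site standard.

def stage_archives(archives, active_prefix="DSNC.LOGCOPY1.DS", max_active=32):
    """Pair each needed archive log with a spare active log dataset and
    emit one REPRO command per pair (DB2 allows at most 32 active logs)."""
    if len(archives) > max_active:
        raise ValueError("DB2 supports at most %d active logs" % max_active)
    cmds = []
    for i, arch in enumerate(archives, start=1):
        active = "%s%02d" % (active_prefix, i)
        cmds.append(" REPRO INDATASET(%s) OUTDATASET(%s)" % (arch, active))
    return cmds

# Example: two archives needed for the log-apply range.
for cmd in stage_archives(["DSNC.ARCHLOG1.A0001234", "DSNC.ARCHLOG1.A0001235"]):
    print(cmd)
```

The generated commands would be fed to IDCAMS in a batch job; the matching BSDS changes (deleting the archive entries, registering the actives) would still be needed alongside this.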

2) How do you handle synchronizing DB2 and non-DB2 resources when you
can't get a system-wide quiesce point?

>We do not have this problem, but I can see your point when you cannot get a
system-wide quiesce point. We quiesce a dummy table every 15 minutes to get
an RBA recorded in SYSCOPY, which we would use as a restart RBA if required.
But we are a SAP site, and the recovery approach for a SAP site is to recover
the whole database back, not just a subset of pagesets.
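The dummy-table quiesce described above lends itself to scheduling; here is a minimal sketch, assuming hypothetical object names DRDUMMY.TSDUMMY, that builds the QUIESCE utility control statement a scheduler would submit every 15 minutes.

```python
# Hedged sketch: build the utility control card for the periodic
# "dummy table" QUIESCE described above. Object names are hypothetical;
# the card would be submitted on a timer via the DB2 utility job.

def quiesce_card(database="DRDUMMY", tablespace="TSDUMMY"):
    # QUIESCE records the current log point in SYSIBM.SYSCOPY,
    # giving a candidate restart RBA for syncing with non-DB2 data.
    return "QUIESCE TABLESPACE %s.%s" % (database, tablespace)

print(quiesce_card())
```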

3) For minimizing recovery times and reducing the synchronization
issues, is anyone doing mirroring (hot site connections) to their disaster
recovery site? What software are you using? What are some of the
considerations for this kind of setup? Is there any good reference for this?

>We are looking at this now. As we haven't moved it forward a huge amount
I would not like to comment too much, but we have been looking at several
options: PPRC (Peer-to-Peer Remote Copy) is one, third-party products are
another, and there are others as well.

Any insights would be greatly appreciated.

Thanks.
Dave.








[login to unmask email]

Re: Disaster Recovery
(in response to Leslie Pendlebury-Bowe)
At my current site we are lucky in that we do not have a 24x7 requirement.
Also, the business requirement for Disaster Recovery is to recover to the
beginning of the processing day (around 6am).

Therefore we can take advantage of the snapshot technology on our RVA disk
system.

We have a disaster recovery slot each day at 5.30am, when DB2 is shut down
and all other batch update processing is curtailed. A snapshot is taken of
all disks within the RVA box; this takes about 2 seconds per disk volume.
The disks are then copied off to tape in the background, meanwhile DB2 is
restarted and normal processing continues. The total downtime of the system
ranges from 10 to 20 minutes.

Therefore our only procedure at DR is to perform volume restores for all
our disks. Due to paranoia and a slight concern about performance, one set
of logs and a bootstrap dataset reside on RAMAC rather than the RVA. So the
only step in our DB2 recovery is to copy the RVA logs and bootstrap dataset
to the RAMAC copy.

The DB2 DR recovery time used to range from 12 to 16 hours; it now takes
less than 15 minutes. Also, a DBA no longer needs to be present at DR tests.

Jim.



RICK (SWBT) DAVIS

Re: Disaster Recovery
(in response to Jim.Leask@RS-COMPONENTS.COM)
Dave,
For Q#2 . . . in addition to Leslie's suggestion, you can always
recover to the DB2 checkpoint RBA/LRSN closest to the time/point the other
data stores are using, and handle any inflights/indoubts. Without a
system/enterprise-wide quiesce, which syncs all data stores, this is about
the best you can do.
Note: most shops simply restore/recover all data stores as far
forward as they can and deal with any out-of-sync conditions, unless, of
course, there are large gaps.
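Recovering to a chosen checkpoint RBA ends up as a conditional restart record in the BSDS. As a hedged illustration (the RBA value and helper function are invented, not from this thread), a small script could format the DSNJU003 conditional-restart statement:

```python
# Hedged sketch: format the DSNJU003 conditional-restart statement for
# recovering to a chosen checkpoint RBA. The RBA value in the example is
# made up; a real one comes from the BSDS log map / checkpoint queue.

def crestart_card(end_rba):
    rba = end_rba.upper().rjust(12, "0")   # RBA is expressed as 12 hex digits
    if len(rba) != 12 or any(c not in "0123456789ABCDEF" for c in rba):
        raise ValueError("RBA must be at most 12 hex digits")
    return "CRESTART CREATE,ENDRBA=%s" % rba

print(crestart_card("A1B2C3D4E5F6"))
```

Any inflight/indoubt units of recovery at that RBA would still need handling after restart, as noted above.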

Rick Davis
"This e-mail and any files transmitted with it are the property of SBC,
are confidential, and are intended solely for the use of the individual
or entity to whom this e-mail is addressed. If you are not one of the
named recipient(s) or otherwise have reason to believe that you have
received this message in error, please notify the sender at 314-235-6854
and delete this message immediately from your computer. Any other use,
retention, dissemination, forwarding, printing, or copying of this
e-mail is strictly prohibited."





Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to RICK (SWBT) DAVIS)
Rick
- yep, I forgot about that one :-) It's so easy to overlook the obvious
things like system checkpoints ...

cheers

Leslie



Suzan Kramp

Re: Disaster Recovery
(in response to Leslie Pendlebury-Bowe)
You've gotten a number of good responses so far, but one thing I haven't
seen addressed yet is the very labor-intensive process of updating the
bootstrap dataset. We have automated almost all of this. We use reports
from our tape management system as input to some File Aid jobs, which
create the delete and newlog statements for the archive logs. The only
manual work we do now on the bootstrap dataset is to delete and add the
new active logs, add the last archive log, update the high RBA, and create
the conditional restart record. Everything else is automated. The most
recent piece of this was implemented during our last disaster recovery
exercise in November: we were able to update our bootstrap datasets in
less than half an hour, and there were no keying errors at all, which saved
us even more time.

If you would like more details on this, please let me know and I will
forward what I can.
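The statement generation described above can be sketched in a few lines. The field layout, dataset names, and NEWLOG keywords here are illustrative assumptions, not the actual File Aid jobs:

```python
# Hedged sketch of the automation described above: turn tape-management
# report rows into DSNJU003 change-log-inventory statements for the DR BSDS.
# The row layout and dataset names are invented for illustration.

def bsds_statements(rows):
    """rows: (archive_dsname, volser, start_rba, end_rba) tuples, oldest
    first. Emits a DELETE for the stale onsite entry and a NEWLOG
    registering the offsite copy of each archive log."""
    stmts = []
    for dsn, vol, start, end in rows:
        stmts.append("  DELETE DSNAME=%s" % dsn)
        stmts.append("  NEWLOG DSNAME=%s,COPY1VOL=%s,"
                     "STARTRBA=%s,ENDRBA=%s,UNIT=TAPE" % (dsn, vol, start, end))
    return stmts

for s in bsds_statements([("DSNC.ARCHLOG1.A0000042", "DR1001",
                           "000000A00000", "000000B00000")]):
    print(s)
```

The remaining manual steps (active logs, high RBA, conditional restart record) would still be done by hand, as in the process described above.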





Andy Hunt

Re: Disaster Recovery
(in response to Suzan Kramp)
Taking the 'automated' BSDS update a stage further.....

I worked at a previous site where we created a 'Disaster Recovery BSDS' after
each archive log was produced (invoked after each issue of message DSNJ003I).
The current BSDS and the new archive log were immediately copied as files 1
and 2 of an offsite version of the archive log tape, and the 'Disaster
Recovery BSDS' was written as the 3rd file on the same tape.

Our Disaster Recovery procedures then involved creating the BSDS datasets and
copying the 'Disaster Recovery BSDS' (the 3rd file on the tape) into them. No
other BSDS updates were required; these had already been made onsite,
pre-disaster. This made the disaster recovery procedures much faster, simpler
and less stressful.

Kind Regards,
Andy Hunt - Scottish And Southern Energy







**********************************************************************
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy plc.
It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient,
any disclosure, copying, distribution or any action taken or omitted
to be taken in reliance on it, is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of
the error in transmission.

Scottish Hydro-Electric and Southern Electric are trading names of
Scottish and Southern Energy Group
**********************************************************************



Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to Andy Hunt)
Hi,
we too have automation for all aspects of the DR build and at the DR
site - we use Platinum Recovery Analyser.

This works great for us.

Incidentally, I now think that with PRA we can put our DR jobs into
Control-M and not even need a DB2 DBA at DR ... wonderful.

regards

Leslie



Robert A. (Cameron of Pittsburgh at Alcoa) Knight

Re: Disaster Recovery
(in response to Leslie Pendlebury-Bowe)
I set up BMC Recovery Manager and it took care of all of this.
Why do this manually?







Max Scarpa

Re: Disaster Recovery
(in response to Robert A. (Cameron of Pittsburgh at Alcoa) Knight)
Because some companies have a very poor budget (or no budget at all) for
disaster recovery.

It's an old story: "A recovery tool costs too much, it's not strategic,
there are no earthquakes or floods here", etc. etc.

So you do it manually: create a recovery strategy, type a lot of REXX and
programs... If everything goes well, my manager will be a genius; if
something goes wrong, I'll be a stupid system admin...

Dilbert docet.


Greetings

Max Scarpa
Data & System Admin
CESVE SpA - Italy













Steve Thomas

Re: Disaster Recovery
(in response to Max Scarpa)
Leslie,

Yes, I've tried this and it works well. I didn't run the recovery both ways,
so I can't comment on relative speed. I was told (probably by the same expert
you are referring to!) that doing this allows DB2 to make direct access to
the log range it needs to recover, rather than reading the archive log
sequentially. Unless you have a huge number of objects to recover, such as in
your SAP implementation, it probably won't make a huge difference in the
great scheme of things, but every little helps, I suppose. The most vital
thing, as you say, is to copy your archive logs to disk and make the
appropriate changes to the BSDS so that you can run the log-apply parts of
the recovery in parallel.

Steve Thomas




Mike O'Brien

Re: Disaster Recovery
(in response to Steve Thomas)
Steve, what did you use to copy the archives to the log datasets, and how
did you determine which datasets to copy to?




Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to Mike O'Brien)
REPRO, I would assume? Steve ...?
Leslie
ps how are things at BMC these days?



Mike O'Brien

Re: Disaster Recovery
(in response to Leslie Pendlebury-Bowe)
Never worked at BMC, so I don't know; it must be another
Mike O'Brien. I used to work at Ford and used BMC
products there, though.
--- Leslie Pendlebury-Bowe
<[login to unmask email]> wrote:
> REPRO, I would assume? Steve...?
> Leslie
> ps how are things at BMC these days?
>
>
> ______________________________ Reply Separator
> _________________________________
> Subject: Re: Disaster Recovery
> Author: "Mike O'Brien"
> <[login to unmask email]> at Internet
> Date: 1/5/00 5:57 AM
>
>
> Steve, what did you use to copy the archives to the
> log datasets and how did you determine which
> datasets
> to copy to.




Steve Thomas

Re: Disaster Recovery
(in response to Mike O'Brien)
Mike,

We were using a third party tool to generate the DR jobs so it did it for
us, but I believe it used Repro to copy the archives back to the active
logs. The jobs defined a new set of active log datasets and then just
repro'd the data in with the appropriate RBA/LRSNs, and modified the BSDS
with the information.

As far as working out which ones we needed, the accurate way would be to
find the oldest backup you were going to restore and work forward from
there. We just used an easier approximation: we looked at the BSDS on the
last archive at the DR site and went back a day and a half (assuming you
take daily backups).

You do need to be careful which active log datasets you restore them to. For
example, suppose you have 4 active log pairs and number 3 is being used
now, and you want to add 4 more. Then you need to copy old number 4 to your
new number 8 and copy the archives to the new logs 4-7 to maintain the log
order. I always tried to leave a few spare empty ones so the recovery itself
didn't overwrite the ones I'd just restored.
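That slot arithmetic can be sketched in a few lines of Python (a purely
hypothetical illustration: the function and the number-only representation
are invented here, and the actual copy would be done with IDCAMS REPRO plus
BSDS updates):

```python
def plan_log_restore(pairs, current, extra):
    """Plan active log slots when extending `pairs` log pairs by `extra`
    new pairs, with pair `current` in use: the old pair just after the
    current one moves to the highest new slot, and the freed range takes
    the restored archives, oldest first, so log order is preserved.
    (Assumes the current pair is not the last one, as in the example.)"""
    top = pairs + extra
    displaced = current + 1              # old pair whose slot the archives need
    plan = {top: ("active", displaced)}  # e.g. old number 4 -> new number 8
    for n, slot in enumerate(range(displaced, top), start=1):
        plan[slot] = ("archive", n)      # e.g. archives 1-4 into slots 4-7
    return plan

# The example above: 4 pairs, number 3 in use, adding 4 more.
print(plan_log_restore(4, 3, 4))
```

With those inputs the plan puts old active 4 into new slot 8 and the four
archives into slots 4 through 7, matching the description above.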

Obviously, you're going to be restricted by number of logs, disk space and
other boring stuff, but I've found disk space magically becomes available in
a DR situation.

Steve

----- Original Message -----
From: "Mike O'Brien" <[login to unmask email]>
Newsgroups: bit.listserv.db2-l
To: <[login to unmask email]>
Sent: 05 January 2000 13:57
Subject: Re: Disaster Recovery


> Steve, what did you use to copy the archives to the
> log datasets and how did you determine which datasets
> to copy to.



Mike O'Brien

Re: Disaster Recovery
(in response to Steve Thomas)
Steve, thanks for the info. That makes it pretty
clear. Glad you mentioned being careful about which
log dataset to put them on.





Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to Mike O'Brien)
Mike
sorry - ps was for somebody else. :-)

I guessed that REPRO must be what is used.

If you get the archives into the actives and test this before I do
(assuming you plan to), can you let me know what performance improvement
you see?

speak to you soon

Leslie




______________________________ Reply Separator _________________________________
Subject: Re: Disaster Recovery
Author: "Mike O'Brien" <[login to unmask email]> at Internet
Date: 1/5/00 7:26 AM










Dayna Brennan

Re: Disaster Recovery
(in response to Leslie Pendlebury-Bowe)
We use BMC also and have been very pleased with it. We are running
PeopleSoft HRMS (a really old version) and are working on a re-install of PS
HRMS 7.5; we have too many customizations to allow an upgrade. Because of
the number of objects, and the fact that all our logs are on tape, we only
recover to a fixed point in time, never to current, because of the
gazillions of scan passes through the logs. The new version of PS has about
4 times as many objects as the one we're running now, so this is a big
concern to us. We can currently recover everything (like SAP, you do it ALL
or you do NONE of it) in roughly 8 hours; due to stacked tapes and the
number of available tape drives, we can only run 2 jobstreams in parallel.

What do other ERP shops do? Just curious. Also, does anyone have an idea of
how to reduce image copy time? We've tried going straight to DASD and
having DMS archive the copies to tape right away; it helped some, but they
still run a very long time. We're running DB2 for OS/390 V5.1 on OS/390 2.6.

Thanks,

Dayna Brennan
DB2 DBA
Intertechnologies Group


The views expressed in this email are strictly those of the author, they do
not necessarily reflect the views of Intertech or the DBA group. Bouquets
and/or brickbats should be addressed to the author.

-----Original Message-----
From: Knight, Robert A. (Cameron of Pittsburgh at Alcoa)
[mailto:[login to unmask email]
Sent: Tuesday, January 04, 2000 12:10 PM
Subject: Re: Disaster Recovery


I set up BMC recovery manager and it took care of all of this.
Why do this manually?




> ----------
> From: [login to unmask email]:[login to unmask email]
> Reply To: DB2 Data Base Discussion List
> Sent: Tuesday, January 04, 2000 11:22 AM
> To: [login to unmask email]
> Subject: Re: Disaster Recovery
>
> You've gotten a number of good responses so far, but one thing I haven't
> seen addressed yet is the very labor-intensive process of updating the
> bootstrap dataset. We have automated almost all of this. We use reports
> from our tape management system as input to some File Aid jobs. These
> jobs
> create the delete and newlog statements for the archive logs. The only
> manual work that we do now to the bootstrap dataset is to delete and add
> the new active logs, add the last archive log, update the high rba, and
> create the conditional restart record. Everything else is automated. The
> most recent piece of this was implemented during our last disaster
> recovery
> exercise in November. We were able to update our bootstrap datasets in
> less than half an hour, and there were no keying errors at all, which
> saved
> us even more time.
>
> If you would like more details on this, please let me know and I will
> forward what I can.
>
>
>
>
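The automation quoted above could be approximated in any scripting
language. Here is a hypothetical Python sketch that turns tape management
report entries (dataset name, volume, start/end RBA) into change log
inventory (DSNJU003) statements; the DELETE/NEWLOG keywords are from
memory and the input layout is invented, so check both against your
Utility Guide before relying on this:

```python
def bsds_statements(archives):
    """Generate DSNJU003 control statements that replace the BSDS
    archive log entries with the datasets actually at the DR site.
    `archives` is a list of dicts with dsn, vol, startrba, endrba."""
    stmts = []
    for a in archives:
        # Drop any stale entry for this dataset, then re-add it with
        # the volume and RBA range from the tape management report.
        stmts.append(f"  DELETE DSNAME={a['dsn']}")
        stmts.append(
            f"  NEWLOG DSNAME={a['dsn']},COPY1VOL={a['vol']},"
            f"UNIT=TAPE,STARTRBA={a['startrba']},ENDRBA={a['endrba']}"
        )
    return "\n".join(stmts)

print(bsds_statements([{"dsn": "DB2.ARCHLOG1.A0000001", "vol": "T00001",
                        "startrba": "000000000000",
                        "endrba": "0000000FFFFF"}]))
```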



andrea milligan

Re: Disaster Recovery
(in response to Dayna Brennan)
We run PeopleSoft GL 5.1 and are now running
AP/Purchasing/Inventory/HRMS/Payroll at Release 7.5. I have never had to do
a full recovery of all objects to a given point in time, but am curious as
to why you would need to do that, except for disaster recovery.

Andrea Milligan
DataBase Administrator
BlueCross BlueShield of Tennessee

> -----Original Message-----
> From: Dayna Brennan [SMTP:[login to unmask email]
> Sent: Wednesday, January 05, 2000 12:29 PM
> To: [login to unmask email]
> Subject: Re: Disaster Recovery

Anindita Chatterjee

Re: Disaster Recovery
(in response to andrea milligan)
unsubscribe DB2-L




Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to Anindita Chatterjee)
Dayna

As you use BMC products for your DR, can you tell me if the BMC
products are able to handle the daily job we have here of load
balancing our tapes? We are an SAP site, and I am sure your site, like
ours, has the same problem: one night your first backup job will use 1
tape, the next night 2, so you move objects out of that job into one of
your other backup jobs that is still using one tape (we have 50GB
tapes), and then 2 nights later another job will go to 2 tapes, and so
the process goes on.

We use PDA to build our backup jobs. I have asked for PDA to use its
stats database (we collect detailed stats weekly) to do load balancing,
and for PRA (Recovery Analyzer) to be more tightly coupled with PDA so
that it can determine what is on what tape and build the recovery jobs
for our DR site optimized per volume (PRA is getting close to what I
want, but is still a while away from being exactly what I need).

I just wondered if you have some magical way of load balancing that is
fed into your DR jobs build each night?

Thanks

Leslie Pendlebury-Bowe
ps I am seriously considering writing my own REXX to do all this for
me, based on either DB2 catalog stats (which makes for more portable
code) or PDA detailed stats. I may even extend it to a series of REXXs
that can build you a set of DR jobs.
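The balancing itself is straightforward to sketch. Here, purely as a
hypothetical illustration of the kind of logic such a REXX might
implement, is a Python version that packs objects into backup jobs
first-fit-decreasing so no job spills onto a second tape (the object
names and sizes are invented; a real version would read sizes from the
DB2 catalog or PDA stats):

```python
TAPE_GB = 50  # site tape capacity, as mentioned above

def balance(spaces, tape_gb=TAPE_GB):
    """spaces: dict of object name -> size in GB.
    Returns a list of jobs (lists of names), each fitting on one tape,
    using a greedy first-fit-decreasing heuristic."""
    jobs = []  # each entry: [remaining capacity, names]
    for name, size in sorted(spaces.items(), key=lambda kv: -kv[1]):
        for job in jobs:
            if job[0] >= size:          # fits on an existing job's tape
                job[0] -= size
                job[1].append(name)
                break
        else:                           # start a new backup job
            jobs.append([tape_gb - size, [name]])
    return [names for _, names in jobs]

# Invented example: 100GB of objects pack onto two 50GB tapes.
print(balance({"A": 30, "B": 25, "C": 20, "D": 20, "E": 5}))
```

First-fit-decreasing is not optimal, but it keeps job contents stable
from night to night far better than rebuilding the jobs by hand.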


______________________________ Reply Separator _________________________________
Subject: Re: Disaster Recovery
Author: Dayna Brennan <[login to unmask email]> at Internet
Date: 1/5/00 11:28 AM










Dayna Brennan

Re: Disaster Recovery
(in response to Leslie Pendlebury-Bowe)
That would be the reason. The other reason would be if we had some sort of
"software system meltdown". Since in many cases we don't know which tables
are involved (due to the dynamic nature of the SQL), if we had to restore
one table we would restore them all, just in case something needs to be in
synch that we don't know about. We do have a few processes where we create
recoverability points, and luckily we've never had to actually bring the
entire production database back into production. We did, however, once put a
copy of the data for all the tables from 2-week-old image copies into a
development region so the applications folks could try to figure out what
had happened. It took three or four people several days; BMC was a big help
with its ability to unload from an image copy. Hopefully, if we ever
physically lost a pack, there would be a small enough number of objects that
we could restore to a point in time.





-----Original Message-----
From: Milligan, Andrea [mailto:[login to unmask email]
Sent: Wednesday, January 05, 2000 1:00 PM
Subject: Re: Disaster Recovery



We run PeopleSoft GL 5.1 and are now running
AP/Purchasing/Inventory/HRMS/Payroll at Release 7.5. I have never had to do
a full recovery of all objects to a given point in time, but am curious as
to why you would need to do that, except for disaster recovery.

Andrea Milligan
DataBase Administrator
BlueCross BlueShield of Tennessee

DB2-L webpage at http://www.ryci.com/db2-l



James Campbell

Re: Disaster Recovery
(in response to Dayna Brennan)
Leslie,

BMC's Copy Plus has a multi-tasking option. I've never used it, but
according to the manual when each task finishes copying a space it
starts on the next space to be copied. Hence, with a tape to each task
the balancing will be done automatically.

/* standard disclaimer */
James Campbell
DBA
Hansen Corporation, Doncaster
+61 3 9840 3864
[login to unmask email]
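The multi-tasking scheme James describes, where each task takes the next
uncopied space as soon as it finishes its current one, is a standard shared
work-queue pattern. A toy sketch, purely illustrative (no BMC API is
involved; space names are invented):

```python
# Toy illustration of the work-queue scheme described above: each of
# N "tape tasks" pulls the next space off a shared queue as soon as it
# finishes its current one, so the load balances itself automatically.
import queue
import threading

def run_copy_tasks(spaces, n_tasks):
    work = queue.Queue()
    for s in spaces:
        work.put(s)
    done = {i: [] for i in range(n_tasks)}

    def task(i):
        while True:
            try:
                s = work.get_nowait()
            except queue.Empty:
                return          # no more spaces to copy
            done[i].append(s)   # "copy" the space

    threads = [threading.Thread(target=task, args=(i,))
               for i in range(n_tasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

result = run_copy_tasks([f"TS{i:03}" for i in range(10)], n_tasks=3)
```

Each space is copied exactly once, and a task that draws small spaces simply
draws more of them, which is the balancing effect the manual describes.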
-----Original Message-----
From: Leslie Pendlebury-Bowe
[mailto:[login to unmask email]
Sent: Thursday, January 06, 2000 5:48 PM
To: [login to unmask email]
Subject: Re: Disaster Recovery


Dayna

As you use BMC products for your DR, can you tell me if the BMC
products are able to handle the daily job we have here of load
balancing our tapes? We are an SAP site, and I am sure your site,
like ours, has this problem: one night your first backup job will
use 1 tape, the next night 2, so you move objects out of that job
into one of your other backup jobs that is still using one tape
(we have 50gig tapes), and then 2 nights later another job will go
to 2 tapes, and so the process goes on.

We use PDA to build our backup jobs. I have asked for PDA to use its
stats database (we collect detailed stats weekly) to do load
balancing, and for PRA (Recovery Analyzer) to be more tightly coupled
with PDA so that it can determine what is on what tape and build the
recovery jobs for our DR site based on optimization per volume (PRA
is getting close to what I want but is still a while away from being
exactly how I need it).

I just wondered if you have some magical way of load balancing that
is fed into your DR jobs build each night ... ???

Thanks

Leslie Pendlebury-Bowe
ps I am seriously considering writing my own REXX to do all this for
me, based on either DB2 catalog stats (makes for more site-portable
code) or PDA detailed stats. I may even extend it to a series of
REXXs that can build you a set of DR jobs.
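The nightly juggling Leslie describes is essentially a bin-packing problem:
fit each object's image copy into a backup job without spilling over one
tape. A rough sketch using the first-fit decreasing heuristic, with invented
object names and sizes (in practice the sizes would come from DB2 catalog
statistics or PDA's stats database, as he suggests):

```python
# Sketch: pack image copies into backup jobs so that no job exceeds one
# tape.  First-fit decreasing is a simple heuristic, not an optimal
# packing; sizes here are invented sample data.
TAPE_GB = 50

def pack_backup_jobs(object_sizes, tape_gb=TAPE_GB):
    """object_sizes: {object_name: size_in_gb} -> list of jobs, each a
    list of object names whose copies fit on one tape."""
    jobs, loads = [], []
    # Largest objects first, then place each in the first job with room.
    for name, size in sorted(object_sizes.items(), key=lambda kv: -kv[1]):
        for i, load in enumerate(loads):
            if load + size <= tape_gb:
                jobs[i].append(name)
                loads[i] += size
                break
        else:
            jobs.append([name])   # open a new job (new tape)
            loads.append(size)
    return jobs

jobs = pack_backup_jobs({"TSA": 30, "TSB": 25, "TSC": 20, "TSD": 15})
# packs TSA+TSC on one tape and TSB+TSD on another
```

Rerunning the packing each night from fresh size statistics would replace the
manual "move objects between jobs" step with a rebuilt set of jobs.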





Leslie Pendlebury-Bowe

Re: Disaster Recovery
(in response to James Campbell)
James

Yep, I am aware of this. I installed and configured multiple Copy Plus
options files at a previous site, and I used the multi-tasking then.
As the site had 24 tape drives and used very small tapes, we were able
to get the backup time down from 10 hours a night to 40 minutes; it
was all down to the configuration of BMC Copy Plus. A win in my books.

My concern is over the build of the DR jobs. Can your DR program
determine the tapes on which pagesets live (using SYSCOPY etc.) and
build the recovery jobs accordingly, so that 100% optimization of the
tapes is obtained and, in turn, end-to-end recovery times are kept to
an absolute minimum?

Thanks

Leslie
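Building the recovery jobs per tape, as asked for here, could start from the
volume information recorded in SYSIBM.SYSCOPY. A hypothetical sketch that
works on rows assumed to have been fetched already (the tuple layout and
sample data are invented, and real SYSCOPY handling, e.g. of multi-volume
DSVOLSER lists, is more involved):

```python
# Sketch: group pagesets by the tape holding their latest image copy,
# so each DR recovery jobstream mounts one tape and reads it end to end.
# The rows stand in for a query against SYSIBM.SYSCOPY; everything here
# is invented sample data.
from collections import defaultdict

def recovery_jobs_by_tape(syscopy_rows):
    """syscopy_rows: (pageset, tape_volser) for the most recent copy
    of each pageset -> {volser: [pagesets recovered from that tape]}."""
    by_tape = defaultdict(list)
    for pageset, volser in syscopy_rows:
        by_tape[volser].append(pageset)
    return dict(by_tape)

rows = [("DBA.TS1", "T00201"), ("DBA.TS2", "T00201"), ("DBB.TS1", "T00202")]
jobs = recovery_jobs_by_tape(rows)
# one jobstream per tape: T00201 recovers DBA.TS1 and DBA.TS2
```

Generating one RECOVER jobstream per volume this way is the "optimization per
volume" Leslie wants PRA to do: no tape is mounted more than once, and the
jobstreams can run in parallel up to the number of drives.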




______________________________ Reply Separator _________________________________
Subject: Re: Disaster Recovery
Author: Campbell James <[login to unmask email]> at Internet
Date: 1/7/00 9:36 AM


Leslie,

BMC's Copy Plus has a multi-tasking option. I've never used it, but
according to the manual when each task finishes copying a space it
starts on the next space to be copied. Hence, with a tape to each task
the balancing will be done automatically.

/* standard disclaimer */
James Campbell
DBA
Hansen Corporation, Doncaster
+61 3 9840 3864
[login to unmask email]