[AD] DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)

Roy Boxwell

[AD] DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
Hi there!



Changed to AD as we also sell a process called ICE (InstantCloningExpert) which will clone/copy/rename/ddl/etc etc even makes the tea if you ask it...

Anyway, I digress, it does everything you need for copying/moving data around. Supports shared DASD, FTP, TCP/IP transmission etc



Zap me an email to try it out if you like!



Roy Boxwell

SOFTWARE ENGINEERING GmbH and SEGUS Inc.
-Product Development-

Vagedesstrasse 19
40479 Dusseldorf/Germany
Tel. +49 (0)211 96149-675
Fax +49 (0)211 96149-32
Email: <mailto:[login to unmask email]> [login to unmask email]
Web http://www.seg.de http://www.seg.de

https://www.seg.de/corporate/rechtliche-hinweise/datenschutz Link zur Datenschutzerklärung


Software Engineering GmbH
Amtsgericht Düsseldorf, HRB 37894
Geschäftsführung: Gerhard Schubert, Ulf Heinrich



From: Melissa Case [mailto:[login to unmask email]
Sent: Tuesday, February 4, 2020 9:04 PM
To: [login to unmask email]
Subject: [DB2-L] - DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)



Hello,



DB2 V12 z/OS on a z15 (YAY for the cool kids) ��



We’re creating a new DB2 subsystem and want to move a subset of the data to the new system. There is shared DASD among ALL the DB2 regions – this will be the 5th prod…



We have flash copy, but need to rename the HLQ’s & zparm values, etc. Has anyone done something like this with the required changes that is willing to share their process & pitfalls?



Thanks!



Melissa (Missy) Case

DB2 Logical DBA / Team Lead

Ensono

e: <mailto:[login to unmask email]> [login to unmask email]

http://www.ensono.com www.ensono.com



From: Bright, Randy <[login to unmask email] <mailto:[login to unmask email]> >
Sent: Tuesday, February 4, 2020 5:59 AM
To: [login to unmask email] <mailto:[login to unmask email]>
Subject: [DB2-L] - RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)



*** ATTENTION! This message originated from outside of Ensono. Treat hyperlinks and attachments in this email with caution. ***



A few questions: Would “sub-setting” the data help? In other words, instead of unloading “umpteen billion” rows from production, could you get by with “umpteen million” if all RI relationships were maintained? And as Phil asked, should you be “anonymizing” the data before it leaves production?



If the answers to those questions are “yes”, contact me offline. I’d like to have a conversation.



Randy Bright

Solutions Architect

BMC Software, Inc.

[login to unmask email] <mailto:[login to unmask email]>



From: Phil Grainger <[login to unmask email] <mailto:[login to unmask email]> >
Sent: Tuesday, February 4, 2020 4:04 AM
To: [login to unmask email] <mailto:[login to unmask email]>
Subject: [EXTERNAL] [DB2-L] - RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)



Two questions spring to mind



1. Why use unload/reload when dsn1copy or dfdss would be far faster. Providing all structures are equal, of course



2. I hope you are also anonymising the data before it hits development subsystems...



Phil G

Sent from my iPad



On 4 Feb 2020, at 01:20, David Waugh <[login to unmask email] <mailto:[login to unmask email]> > wrote:



Michael:

It's a little difficult to explain why I think unloading from an Image Copy is preferable to unloading from the active table, but I'll give it a shot:

We're unloading around 300 tables' worth of data from our Production subsystem, and then loading the same # of tables in the QA subsystem. There is no shared tape between the LPARs, so we have to unload to a set of DASD volumes that are shared between the LPARs. Space on these shared DASD volumes is limited, and they cannot contain all of the unloaded data at once.

Some of the tables are quite large (up to 13 billion rows), so we have the BETWEEN timestamp unload WHEN criteria set to unload small slices of the data each time. We simply can't (and don't want to) unload all umpteen billion rows of data from Prod and load it into QA. Which means, at times, we're running up to 50 UNLOADs from a single table, each with a 2-week WHEN criteria, trying to do the unloads in "waves" so that we can get the last 6 months' worth of data unloaded from Prod and loaded into QA. (the purpose of taking small time slices of 2 weeks each is apparently to keep the size of each unload dataset under 1000 cylinders, so that we don't have problems finding sufficient DASD space on those shared volumes).

The process of unloading the same Prod table 50 times can take up to 4 hours for the biggest tables. While 50 unloads are running (sequentially), data in the table being unloaded is being updated constantly over that 4 hours, thus it's a pretty "fuzzy" copy. Multiply this process by 300 tables over 24 hours, and you have a REALLY "fuzzy" copy of the Production data to load into QA.

We can't run many of these unloads in parallel, again because of the limited shared DASD space we have to work with. So it's (1) unload a set of tables (or a single big table) from Prod, (2) load that set of tables (or a single big table) into QA, (3) delete the unload datasets from the shared DASD to free up space, (4) unload another set of tables (or another single big table) from Prod, (5) load that set of tables (or a single big table) into QA, (6) delete the unload datasets from the shared DASD to free up space, … Wash, Rinse, Repeat... over & over & over again.

The process of unloading and loading all 300 tables took about 36 hours the last time I did it (a couple of weeks ago), so it's a long, tedious, mind-numbing process that makes you want to tear your hair out and start updating your resume. And yes, I was doing this all by myself - it wasn't like there were 2 or 3 of us taking shifts.

And then when you're done moving all this data from Prod to QA, you get the fun of doing it AGAIN to move the same data from QA to Development. Oh, and they want it done closely on the heels of the Prod to QA move. Makes for a really exhausting week, to say the least.

I'm searching for a way of trimming down the time it takes, and also searching for a way of getting a less "fuzzy" copy of the Production data. Unloading the data from an image copy 50 times solves the "fuzzy" problem (pretty much, anyway, since the only Image Copies available to us are SHRLEVEL CHANGE), and I'm thinking that unloading from an image copy 50 times might be faster than unloading from the active table 50 times.

Does it make more sense now why I'm exploring this new way of moving data from Prod to QA to Development?

David Waugh, Erstwhile DB2 DBA



-----End Original Message-----



-----End Original Message-----



-----End Original Message-----

© 2019 Ensono, LP. All rights reserved. Ensono is a trademark of Ensono, LP. The information contained in this communication is confidential, is intended only for the use of the recipient named above, and may be legally privileged. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please resend this communication to the sender and delete the original message or any copy of it from your computer system.



-----End Original Message-----

Attachments

  • smime.p7s (5.1k)

steen rasmussen

[AD] DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
(in response to Roy Boxwell)
Melissa - you do have access to CA RC/Merger which is a component under RC/Migrator which can clone data in a very efficient way.

Ping me if needed

Steen Rasmussen email [login to unmask email]

Sent from my iPhone

> On Feb 5, 2020, at 10:03 AM, Boxwell, Roy <[login to unmask email]> wrote:
>
> Hi there!
>
> Changed to AD as we also sell a process called ICE (InstantCloningExpert) which will clone/copy/rename/ddl/etc etc even makes the tea if you ask it...
> Anyway, I digress, it does everything you need for copying/moving data around. Supports shared DASD, FTP, TCP/IP transmission etc
>
> Zap me an email to try it out if you like!
>
> Roy Boxwell
>
> SOFTWARE ENGINEERING GmbH and SEGUS Inc.
> -Product Development-
>
> Vagedesstrasse 19
> 40479 Dusseldorf/Germany
> Tel. +49 (0)211 96149-675
> Fax +49 (0)211 96149-32
> Email: [login to unmask email]
> Web http://www.seg.de
> Link zur Datenschutzerklärung
>
> Software Engineering GmbH
> Amtsgericht Düsseldorf, HRB 37894
> Geschäftsführung: Gerhard Schubert, Ulf Heinrich
>
>
> From: Melissa Case [mailto:[login to unmask email]
> Sent: Tuesday, February 4, 2020 9:04 PM
> To: [login to unmask email]
> Subject: [DB2-L] - DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
>
> Hello,
>
> DB2 V12 z/OS on a z15 (YAY for the cool kids) ��
>
> We’re creating a new DB2 subsystem and want to move a subset of the data to the new system. There is shared DASD among ALL the DB2 regions – this will be the 5th prod…
>
> We have flash copy, but need to rename the HLQ’s & zparm values, etc. Has anyone done something like this with the required changes that is willing to share their process & pitfalls?
>
> Thanks!
>
> Melissa (Missy) Case
> DB2 Logical DBA / Team Lead
> Ensono
>
> e: [login to unmask email]
> www.ensono.com
>
> From: Bright, Randy <[login to unmask email]>
> Sent: Tuesday, February 4, 2020 5:59 AM
> To: [login to unmask email]
> Subject: [DB2-L] - RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
>
> *** ATTENTION! This message originated from outside of Ensono. Treat hyperlinks and attachments in this email with caution. ***
>
>
>
> A few questions: Would “sub-setting” the data help? In other words, instead of unloading “umpteen billion” rows from production, could you get by with “umpteen million” if all RI relationships were maintained? And as Phil asked, should you be “anonymizing” the data before it leaves production?
>
> If the answers to those questions are “yes”, contact me offline. I’d like to have a conversation.
>
> Randy Bright
> Solutions Architect
> BMC Software, Inc.
> [login to unmask email]
>
> From: Phil Grainger <[login to unmask email]>
> Sent: Tuesday, February 4, 2020 4:04 AM
> To: [login to unmask email]
> Subject: [EXTERNAL] [DB2-L] - RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
>
> Two questions spring to mind
>
> 1. Why use unload/reload when dsn1copy or dfdss would be far faster. Providing all structures are equal, of course
>
> 2. I hope you are also anonymising the data before it hits development subsystems...
>
> Phil G
>
> Sent from my iPad
>
>
> On 4 Feb 2020, at 01:20, David Waugh <[login to unmask email]> wrote:
>
> 
> Michael:
>
> It's a little difficult to explain why I think unloading from an Image Copy is preferable to unloading from the active table, but I'll give it a shot:
>
> We're unloading around 300 tables' worth of data from our Production subsystem, and then loading the same # of tables in the QA subsystem. There is no shared tape between the LPARs, so we have to unload to a set of DASD volumes that are shared between the LPARs. Space on these shared DASD volumes is limited, and they cannot contain all of the unloaded data at once.
>
> Some of the tables are quite large (up to 13 billion rows), so we have the BETWEEN timestamp unload WHEN criteria set to unload small slices of the data each time. We simply can't (and don't want to) unload all umpteen billion rows of data from Prod and load it into QA. Which means, at times, we're running up to 50 UNLOADs from a single table, each with a 2-week WHEN criteria, trying to do the unloads in "waves" so that we can get the last 6 months' worth of data unloaded from Prod and loaded into QA. (the purpose of taking small time slices of 2 weeks each is apparently to keep the size of each unload dataset under 1000 cylinders, so that we don't have problems finding sufficient DASD space on those shared volumes).
>
> The process of unloading the same Prod table 50 times can take up to 4 hours for the biggest tables. While 50 unloads are running (sequentially), data in the table being unloaded is being updated constantly over that 4 hours, thus it's a pretty "fuzzy" copy. Multiply this process by 300 tables over 24 hours, and you have a REALLY "fuzzy" copy of the Production data to load into QA.
>
> We can't run many of these unloads in parallel, again because of the limited shared DASD space we have to work with. So it's (1) unload a set of tables (or a single big table) from Prod, (2) load that set of tables (or a single big table) into QA, (3) delete the unload datasets from the shared DASD to free up space, (4) unload another set of tables (or another single big table) from Prod, (5) load that set of tables (or a single big table) into QA, (6) delete the unload datasets from the shared DASD to free up space, … Wash, Rinse, Repeat... over & over & over again.
>
> The process of unloading and loading all 300 tables took about 36 hours the last time I did it (a couple of weeks ago), so it's a long, tedious, mind-numbing process that makes you want to tear your hair out and start updating your resume. And yes, I was doing this all by myself - it wasn't like there were 2 or 3 of us taking shifts.
>
> And then when you're done moving all this data from Prod to QA, you get the fun of doing it AGAIN to move the same data from QA to Development. Oh, and they want it done closely on the heels of the Prod to QA move. Makes for a really exhausting week, to say the least.
>
> I'm searching for a way of trimming down the time it takes, and also searching for a way of getting a less "fuzzy" copy of the Production data. Unloading the data from an image copy 50 times solves the "fuzzy" problem (pretty much, anyway, since the only Image Copies available to us are SHRLEVEL CHANGE), and I'm thinking that unloading from an image copy 50 times might be faster than unloading from the active table 50 times.
>
> Does it make more sense now why I'm exploring this new way of moving data from Prod to QA to Development?
>
> David Waugh, Erstwhile DB2 DBA
>
>
> -----End Original Message-----
>
> -----End Original Message-----
>
> -----End Original Message-----
> © 2019 Ensono, LP. All rights reserved. Ensono is a trademark of Ensono, LP. The information contained in this communication is confidential, is intended only for the use of the recipient named above, and may be legally privileged. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please resend this communication to the sender and delete the original message or any copy of it from your computer system.
>
>
> Site Links: View post online View mailing list online Start new thread via email Unsubscribe from this mailing list Manage your subscription
>
> This email has been sent to: [login to unmask email]
> Try BCV5, the BCV5 Masking Tool, & XDM a rapid Refresh/Clone/TDM Suite for Db2 z & distributed.
> DBARS -Audit,record,& block Db2 accesses to sensitive data real-time, NO audit trace or log required
> http://www.ESAIGroup.com/IDUG
>
>
> Use of this email content is governed by the terms of service at:
> http://www.idug.org/p/cm/ld/fid=2
>