DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)

Melissa Case

DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
Hello,

DB2 V12 z/OS on a z15 (YAY for the cool kids) 🙂

We’re creating a new DB2 subsystem and want to move a subset of the data to the new system. There is shared DASD among ALL the DB2 regions – this will be the 5th prod…

We have FlashCopy, but need to rename the HLQs, change zparm values, etc. Has anyone done something like this and is willing to share their process and pitfalls?
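For the dataset-rename step, the kind of job involved is roughly a DFSMSdss logical COPY with RENAMEUNCONDITIONAL. A minimal sketch, with placeholder HLQs DB2PROD and DB2NEW; it deliberately ignores the BSDS, catalog VCAT/STOGROUP, and zparm changes that also have to happen:

//* Copy the Db2 VSAM page sets under a new HLQ (names are examples).
//* Run with the source subsystem stopped, or inside a log-suspend window.
//RENAME   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 COPY DATASET(INCLUDE(DB2PROD.DSNDBC.**)) -
      RENAMEUNCONDITIONAL((DB2PROD.**,DB2NEW.**)) -
      CATALOG SHARE TOLERATE(ENQFAILURE)
/*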

Thanks!

Melissa (Missy) Case
DB2 Logical DBA / Team Lead
Ensono
e: [login to unmask email]
www.ensono.com

From: Bright, Randy <[login to unmask email]>
Sent: Tuesday, February 4, 2020 5:59 AM
To: [login to unmask email]
Subject: [DB2-L] - RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)


A few questions: Would “sub-setting” the data help? In other words, instead of unloading “umpteen billion” rows from production, could you get by with “umpteen million” if all RI relationships were maintained? And as Phil asked, should you be “anonymizing” the data before it leaves production?

If the answers to those questions are “yes”, contact me offline. I’d like to have a conversation.

Randy Bright
Solutions Architect
BMC Software, Inc.
[login to unmask email]

From: Phil Grainger <[login to unmask email]>
Sent: Tuesday, February 4, 2020 4:04 AM
To: [login to unmask email]
Subject: [EXTERNAL] [DB2-L] - RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)

Two questions spring to mind

1. Why use unload/reload when DSN1COPY or DFSMSdss would be far faster? Provided all structures are equal, of course (a minimal sketch follows below).

2. I hope you are also anonymising the data before it hits development subsystems...
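
On the DSN1COPY option, a minimal sketch of a single-page-set copy with OBID translation. The dataset names and the DBID/PSID/OBID values are placeholders; the real values come from SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLES on each subsystem, the target object must already exist with an identical structure, and it should be stopped first:

//* Copy one page set from Prod to QA, translating object IDs and
//* resetting the log RBAs (IDs and dataset names are examples only).
//DSN1COPY EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET'
//STEPLIB  DD DISP=SHR,DSN=DB2PROD.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=DB2PROD.DSNDBC.DBX.TSX.I0001.A001
//SYSUT2   DD DISP=OLD,DSN=DB2QA.DSNDBC.DBX.TSX.I0001.A001
//* SYSXLAT pairs are source,target: first DBIDs, then PSIDs, then OBIDs
//SYSXLAT  DD *
260,310
2,2
5,7
/*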

Phil G
Sent from my iPad

On 4 Feb 2020, at 01:20, David Waugh <[login to unmask email]> wrote:


Michael:

It's a little difficult to explain why I think unloading from an Image Copy is preferable to unloading from the active table, but I'll give it a shot:

We're unloading around 300 tables' worth of data from our Production subsystem, and then loading the same # of tables in the QA subsystem. There is no shared tape between the LPARs, so we have to unload to a set of DASD volumes that are shared between the LPARs. Space on these shared DASD volumes is limited, and they cannot contain all of the unloaded data at once.

Some of the tables are quite large (up to 13 billion rows), so we have a BETWEEN-timestamp WHEN criterion on the unloads to pull small slices of the data each time. We simply can't (and don't want to) unload all umpteen billion rows of data from Prod and load it into QA. That means, at times, we're running up to 50 UNLOADs from a single table, each with a 2-week WHEN criterion, doing the unloads in "waves" so that we can get the last 6 months' worth of data unloaded from Prod and loaded into QA. (The purpose of the 2-week time slices is apparently to keep each unload dataset under 1000 cylinders, so that we don't have problems finding sufficient DASD space on those shared volumes.)
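
For anyone picturing it, one of those two-week slices looks roughly like the job below. The table, timestamp column, dataset names, and space figures are invented for illustration; UNLOAD writes the data to the SYSREC DD and the generated LOAD statement to SYSPUNCH by default:

//UNLD     EXEC DSNUPROC,SYSTEM=PROD,UID='UNLDSLCE'
//* One 2-week slice, sized to stay under ~1000 cylinders
//SYSREC   DD DSN=SHARED.UNLD.BIGTBL.SLICE01,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(900,50),RLSE)
//SYSPUNCH DD DSN=SHARED.UNLD.BIGTBL.PUNCH01,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(15,15),RLSE)
//SYSIN    DD *
  UNLOAD TABLESPACE DBX.TSX
    FROM TABLE CREATOR.BIG_TABLE
    WHEN (UPDATE_TS BETWEEN '2019-08-01-00.00.00.000000'
                        AND '2019-08-14-23.59.59.999999')
/*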

The process of unloading the same Prod table 50 times can take up to 4 hours for the biggest tables. While 50 unloads are running (sequentially), data in the table being unloaded is being updated constantly over that 4 hours, thus it's a pretty "fuzzy" copy. Multiply this process by 300 tables over 24 hours, and you have a REALLY "fuzzy" copy of the Production data to load into QA.

We can't run many of these unloads in parallel, again because of the limited shared DASD space we have to work with. So it's (1) unload a set of tables (or a single big table) from Prod, (2) load that set of tables (or a single big table) into QA, (3) delete the unload datasets from the shared DASD to free up space, (4) unload another set of tables (or another single big table) from Prod, (5) load that set of tables (or a single big table) into QA, (6) delete the unload datasets from the shared DASD to free up space, … Wash, Rinse, Repeat... over & over & over again.

The process of unloading and loading all 300 tables took about 36 hours the last time I did it (a couple of weeks ago), so it's a long, tedious, mind-numbing process that makes you want to tear your hair out and start updating your resume. And yes, I was doing this all by myself - it wasn't like there were 2 or 3 of us taking shifts.

And then when you're done moving all this data from Prod to QA, you get the fun of doing it AGAIN to move the same data from QA to Development. Oh, and they want it done closely on the heels of the Prod to QA move. Makes for a really exhausting week, to say the least.

I'm searching for a way of trimming down the time it takes, and also searching for a way of getting a less "fuzzy" copy of the Production data. Unloading the data from an image copy 50 times solves the "fuzzy" problem (pretty much, anyway, since the only Image Copies available to us are SHRLEVEL CHANGE), and I'm thinking that unloading from an image copy 50 times might be faster than unloading from the active table 50 times.
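
If the source moves to the image copy, the only change per slice is a FROMCOPY clause naming the copy dataset (shown below with a made-up, fully qualified GDG generation); all 50 slices then read the same point-in-time copy, which is what removes most of the fuzziness:

  UNLOAD TABLESPACE DBX.TSX
    FROMCOPY PRODIC.DBX.TSX.ICFULL.G0123V00
    FROM TABLE CREATOR.BIG_TABLE
    WHEN (UPDATE_TS BETWEEN '2019-08-01-00.00.00.000000'
                        AND '2019-08-14-23.59.59.999999')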

Does it make more sense now why I'm exploring this new way of moving data from Prod to QA to Development?

David Waugh, Erstwhile DB2 DBA

-----End Original Message-----


Dave Nance

DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
(in response to Melissa Case)
Missy, the place I am at used to do that on a daily basis. I think the product was from Mainstar (?) and was called VCR. It worked pretty well and was handled in nightly batch processes. I had it dismantled, as it was really a waste of our resources here. Not one of the cool kids with a z15. :)
Dave Nance



Rob Barbour

DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
(in response to Melissa Case)
Hi Missy,

If you wanted to replicate a full Db2 subsystem, changing the HLQ, zparms, etc., the BCV4 Db2 subsystem cloning tool would work (it works with FlashCopy). BCV4 works off of volume copies.

But I get the feeling you want to clone/copy over selected items rather than whole volumes. For that, BCV5 would do the renames, copies, etc., and be a fully automated process.

I am not sure whether your customer is one of the Ensono ones that has BCV5. You can find out more at
http://www.esaigroup.com/products/index.htm or reach me offline.

--
Best Regards,
Rob Barbour
Enterprise Systems Associates, Inc ("ESAi")
UCF Research Park
3259 Progress Drive
Orlando, Florida 32826 USA
Toll Free: 1-866-GO-4-ESAI (1-866-464-3724)
http://www.ESAIGroup.com/products
http://www.twitter.com/ESAiSoftware


Chris Hoelscher

DB2 z/OS DB2 subsystem replication allowing renames (HLQ, Subsystem name, etc.)
(in response to Dave Nance)
We have a requirement to create a FULL read-only clone (for reducing contention, logging, etc.).
We use Mainstar VCR every day, and it indeed clones one DB2 subsystem to another; we maintain the zparms, DSNEXIT, archives, and BSDS separately.
It requires (or we choose to take) a log suspend before and a log resume after the clone: 2 to 3 minutes in total.

Occasionally we come up with LPL table spaces on the target subsystem, but that is acceptable to us.
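
For anyone reproducing this, the window described above is the standard command pair around the volume copy; the subsystem ID DB2P here is a placeholder:

-DB2P SET LOG SUSPEND     (flushes log buffers, takes a system
                           checkpoint, and holds the log-write latch)
    ... run the FlashCopy / VCR volume copy ...
-DB2P SET LOG RESUME      (releases the latch and resumes logging)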

Thank You,
Chris Hoelscher| Lead Database Administrator | IBM Global Technical Services| T 502.476.2538 or 502.407.7266
