DB2 z/OS UNLOAD from an Image Copy GDG

David Waugh

DB2 z/OS UNLOAD from an Image Copy GDG

We have a requirement to unload a bunch of data from our Production DB2 and load it into our QA DB2 subsystem on a periodic basis. 

In the past, we've been unloading all 300+ tablespaces using normal UNLOAD TABLESPACE... utility statements.

  UNLOAD TABLESPACE DBname.TSname     
    FROM TABLE Owner.Tblname         
    SHRLEVEL CHANGE ISOLATION UR       

There are some pretty big tables out there, so the entire process takes in excess of 12 hours.

In order to make the process faster (and less "fuzzy") and less of an interference with online processing, I'd like to unload the tablespace data from the latest set of daily Image Copies instead of unloading from the tablespaces themselves.  The Image Copies are SHRLEVEL CHANGE, and they are written to GDG datasets with a naming convention of HLQ.DSN1COPY.DBname.TSname(+1)

The problem I'm having is that the UNLOAD utility is ignoring the TEMPLATE and keeps insisting that I need a //CPYTEMPL DD in my JCL. 

    - If I don't supply the CPYTEMPL DD, it fails with a message of "DSNU035I... DSNUULCB - UNABLE TO FIND DD CARD FOR DDNAME CPYTEMPL"

    - If I do supply the CPYTEMPL DD, it unloads successfully (though I don't know if it's really unloading from the Image Copy GDG or from the tablespace instead)

Here is one example of the UNLOAD utility JCL I've put together:

//******************************************************************** 
//* UNLOAD THE TABLESPACE FROM AN IMAGE COPY DATASET
//* (RATHER THAN FROM THE TABLESPACE ITSELF)
//********************************************************************
//UNLD10 EXEC DSNUPROC,SYSTEM=DSN1,
// LIB='SYSx.DSN1.SDSNLOAD',
//* UTPROC='PREVIEW', <=== TO PREVIEW THE TEMPLATE & UNLOAD
//* RESTART(PHASE), <=== TO RESTART UTIL IN LAST PHASE
// UID='DBname.TSname'
//SYSPUNCH DD DSN=HLQ.DSN1.DBname.TSname.LOADCARD,
// DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
// SPACE=(TRK,(1,1),RLSE)
//SYSREC DD DSN=HLQ.DSN1.DBname.TSname.UNLDDATA,
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1500,100),RLSE),
// UNIT=SYSDA
//* TEMPLATE GETS IGNORED, THIS DD IS REQUIRED TO GET THIS TO WORK:
//CPYTEMPL DD DISP=SHR,DSN=HLQ.DSN1COPY.DBname.TSname(0)
//SYSIN DD *
TEMPLATE CPYTEMPL
DSN 'HLQ.DSN1COPY.DBname.TSname(0)'
UNLOAD TABLESPACE DBname.TSname
FROMCOPYDDN (CPYTEMPL)
FROM TABLE Owner.Tblname
/*

 

Is there a way to specify something in the TEMPLATE or UNLOAD statements that would keep UNLOAD from insisting on a //CPYTEMPL DD in the JCL?  Or am I stuck with this way of doing it?

Curious minds want to know...

Thanks in advance,

David Waugh, erstwhile DB2 DBA

Horacio Villa

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)
Hi David,

I think the problem is the way you specify the GDG name on the TEMPLATE, not
the name CPYTEMPL, which could be anything else.
Read "How TEMPLATE supports GDG data sets" in the Utility Guide and Reference
and maybe you'll find whether it's supported or not; perhaps you need to specify
the absolute generation name in order to use TEMPLATE.
Regards,
Horacio Villa


Horacio Villa

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Horacio Villa)
I'd try (+0) instead of (0) when using TEMPLATE.
The section "Restarting an online utility" says:
If generation data groups (GDGs) are used and any (+1)
generations were cataloged, ensure that the JCL is changed to GDG (+0) for
such data sets.

Horacio Villa

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Horacio Villa)

Horacio:

I've tried it with everything I can think of - (0), (+0), (+1), (-1) and none of those make any difference, unfortunately.

 

I've tried it with the actual absolute generation number (GxxxxVyy), and that DOES work, but I don't want to have to supply the absolute generation number for 300+ IC datasets every time I want to run this job - I'd rather use the most current IC with (0).

Thanks, David Waugh

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Horacio Villa)

WITH (+0) and *with* //CPYTEMPL DD in the JCL, the unload is successful

  DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DBname.TSname                  
  DSNUGTIS - PROCESSING SYSIN AS EBCDIC                                          
  DSNUGUTC -  TEMPLATE CPYTEMPL DSN 'HLQ.DSN1COPY.DBname.TSname(+0)'     
  DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY                           
  DSNUGUTC -  UNLOAD TABLESPACE DBname.TSname                                   
    DSNUUGMS -   FROMCOPYDDN(CPYTEMPL)                                            
    DSNUUGMS -   FROM TABLE DBname.Tblname                                     
  DSNUUNLD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=66977 FOR TABLE 
  DSNUUNLD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=66977 FOR TABLES
  DSNUUNLD - UNLOAD PHASE COMPLETE, ELAPSED TIME=00:00:00                        
  DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0                   

WITH (+0) but *without* //CPYTEMPL DD in the JCL, the unload is unsuccessful. 

  DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DBname.TSname                
  DSNUGTIS - PROCESSING SYSIN AS EBCDIC                                        
  DSNUGUTC -  TEMPLATE CPYTEMPL DSN 'HLQ.DSN1COPY.DBname.TSname(+0)'   
  DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY                         
  DSNUGUTC -  UNLOAD TABLESPACE DBname.TSname                                 
    DSNUUGMS -   FROMCOPYDDN(CPYTEMPL)                                          
    DSNUUGMS -   FROM TABLE DBname.Tblname                                   
  DSNUULCB - UNABLE TO FIND DD CARD FOR DDNAME CPYTEMPL                        
  DSNUGBAC - UTILITY EXECUTION TERMINATED, HIGHEST RETURN CODE=8     

Oddly enough, in both cases, the unload output says that the TEMPLATE was processed successfully.

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)

BTW, this is running on DB2 V11R1M0 NFM

Larry Jardine

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)
Do you need DISP (SHR) in your template?

Larry Jardine
Database Advisor, Aetna, a CVS Health Company


Larry Jardine

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)
Actually, I don't see the option to specify a template for FROMCOPYDDN, only a DDNAME.

Larry Jardine
Database Advisor, Aetna, a CVS Health Company


Horacio Villa

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Larry Jardine)
The Utility Guide and Reference says, referring to UNLOAD's FROMCOPYDDN option:

FROMCOPYDDN
Use the FROMCOPYDDN option to unload data from one or more image
copy data sets that are associated with the specified DD name.

It says nothing about TEMPLATE, which would have made this clearer.
So I think Larry is right.

Horacio

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Larry Jardine)

Larry:

I added DISP SHR to the CPYTEMPL template, and DB2 complained that SHR was not a valid value.  So I took it off.
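(If anyone wants to retry that: going by the TEMPLATE syntax diagram, DISP takes all three subparameters in parentheses rather than a single value, so the statement would presumably have to look something like the untested sketch below - the dataset name is the placeholder from my earlier JCL.)

  TEMPLATE CPYTEMPL
    DSN 'HLQ.DSN1COPY.DBname.TSname(0)'
    DISP (SHR,KEEP,KEEP)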

David

Philip Sevetson

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)
David,

If this keeps being a problem for you, I have the following query, which generates UNLOAD statements with explicit dataset names on FROMCOPY. It needs templates for UNLDDN and PUNCHDDN [ 'UNLDDN(TMPLSYSR) PUNCHDDN(TMPLSYSP)' ]:


/* GENERATE AN UNLOAD STATEMENT WHICH UNLOADS FROM
   THE MOST RECENT FULL COPY OF A TABLESPACE */
WITH TS_TO_UNLOAD (DBNAME, TSNAME) AS (
    /* These are the tablespaces you want to unload */
    SELECT 'DB1','TS1' FROM SYSIBM.SYSDUMMY1 UNION
    SELECT 'DB2','TS2' FROM SYSIBM.SYSDUMMY1
)
, TS_AND_DSNAME (DBNAME, TSNAME, DSNAME) AS (
    /* FIND THE DSNAME FOR THE MOST RECENT
       FULL COPY OF THE TABLESPACE */
    SELECT CP.DBNAME, CP.TSNAME, CP.DSNAME
    FROM TS_TO_UNLOAD SCOPE
    INNER JOIN SYSIBM.SYSCOPY CP
       ON (SCOPE.DBNAME = CP.DBNAME
           AND SCOPE.TSNAME = CP.TSNAME)
    WHERE CP.ICTYPE = 'F'     /* FULL COPY */
      AND CP.ICBACKUP = ' '   /* LOCAL, PRIMARY COPY */
      AND CP.DSNUM = 0        /* NOT A PARTITION-LEVEL COPY */
      AND CP.TIMESTAMP =
          ( /* TIMESTAMP OF MOST RECENT LOCAL-PRIMARY
               FULL COPY FOR THIS TABLESPACE */
            SELECT MAX(TIMESTAMP)
            FROM SYSIBM.SYSCOPY TIMER
            WHERE TIMER.DBNAME = CP.DBNAME
              AND TIMER.TSNAME = CP.TSNAME
              AND TIMER.DSNUM = 0  /* IGNORE PARTITION-LEVEL COPIES */
              AND TIMER.ICTYPE = 'F'
              AND TIMER.ICBACKUP = ' '
          )
)
, RSLTS_W_SORTCOLS (UTILSTMT, DBNAME, TSNAME, STMT_LINENUM) AS (
    /* BUILD MULTIPLE-LINE UTILITY STATEMENT */
    SELECT /* FIRST LINE OF UNLOAD */
           CHAR('UNLOAD TABLESPACE '
                CONCAT RTRIM(DBNAME)
                CONCAT '.'
                CONCAT TSNAME
               ,80) AS UTILSTMT
          ,DBNAME
          ,TSNAME
          ,1 AS STMT_LINENUM
    FROM TS_AND_DSNAME
    UNION ALL
    SELECT /* SECOND LINE OF UNLOAD */
           CHAR(' UNLDDN(TMPLSYSR) PUNCHDDN(TMPLSYSP) '
               ,80) AS UTILSTMT
          ,DBNAME
          ,TSNAME
          ,2 AS STMT_LINENUM
    FROM TS_AND_DSNAME
    UNION ALL
    SELECT /* THIRD LINE OF UNLOAD */
           CHAR(' FROMCOPY '
                CONCAT RTRIM(DSNAME)
               ,80) AS UTILSTMT
          ,DBNAME
          ,TSNAME
          ,3 AS STMT_LINENUM
    FROM TS_AND_DSNAME
)
/* WRITE OUT ORDERED UTILITY STATEMENT */
SELECT UTILSTMT
FROM RSLTS_W_SORTCOLS
ORDER BY DBNAME, TSNAME, STMT_LINENUM
;
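One way to run this generator in batch (a sketch only - the dataset names, subsystem ID, and plan name are assumptions for your site) is DSNTIAUL with the SQL parameter, writing the generated 80-byte statements to a dataset that a subsequent DSNUPROC step reads as SYSIN:

  //GENUNLD  EXEC PGM=IKJEFT01,DYNAMNBR=20
  //SYSTSPRT DD SYSOUT=*
  //SYSPRINT DD SYSOUT=*
  //SYSPUNCH DD DUMMY
  //* GENERATED UNLOAD STATEMENTS LAND HERE (ONE 80-BYTE COLUMN)
  //SYSREC00 DD DSN=HLQ.GEN.UNLDSTMT,DISP=(NEW,CATLG,DELETE),
  //            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
  //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
  //SYSIN    DD DSN=HLQ.GEN.SQL,DISP=SHR      THE QUERY ABOVE
  //SYSTSIN  DD *
    DSN SYSTEM(DSN1)
    RUN PROGRAM(DSNTIAUL) PLAN(DSNTIAUL) -
        PARMS('SQL')
    END
  /*
  //* THEN FEED THE GENERATED STATEMENTS TO THE UTILITY:
  //UNLD10   EXEC DSNUPROC,SYSTEM=DSN1,UID='GENUNLD'
  //SYSIN    DD DSN=HLQ.GEN.UNLDSTMT,DISP=SHR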




Philip Sevetson
Computer Systems Manager
FISA-OPA
5 Manhattan West
New York, NY 10001
[login to unmask email]
212-857-1688 w
917-991-7052 m
212-857-1659 f

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to David Waugh)

I have a follow-up question about this:

Are there any "gotchas" I need to take into account when using UNLOAD from Image COPY datasets for ICs that were done with SHRLEVEL CHANGE instead of SHRLEVEL REFERENCE? 

SHRLEVEL CHANGE is a "dirty/fuzzy" copy of the tablespace data because it allows users & applications to INSERT, UPDATE & DELETE while the image copy is being taken.  There is the possibility that uncommitted data might be copied. 

Given the way that data has been copied from Prod to QA in the past:

  UNLOAD TABLESPACE dbname.tsname       
    FROM TABLE owner.tblname            
         WHEN (timestampcol BETWEEN 'yyyy-mm-dd-00.00.00.000000'      
            AND 'yyyy-mm-dd-23.59.59.999999')               
    SHRLEVEL CHANGE ISOLATION UR           

followed by LOAD on the QA system:

    LOAD REPLACE LOG NO INDDN SYSREC                        
         ENFORCE NO NOCOPYPEND                               
         DSN('HLQ.PROD.ssid.dbname.tsname.UNLDDATA')         
         DISP(OLD,KEEP,KEEP)              

the UNLOAD process alone could take hours for a very large table (hundreds of millions of rows).  So between the amount of time it takes to unload the data from an active table, plus SHRLEVEL CHANGE, plus ISOLATION UR, this amounts to a VERY dirty/fuzzy copy of the production data that will be loaded into the QA subsystem's table.

So while an UNLOAD from an Image Copy taken with SHRLEVEL CHANGE will probably still be fuzzy, it won't be anywhere near as fuzzy as an UNLOAD from the active table.

The client is fine with "fuzzy" copies of their data being loaded into QA.  They're fine with uncommitted data being loaded into QA.  I think less fuzzy is better than more fuzzy, so I'm inclined to go with UNLOAD from image copies instead of UNLOAD from active tables.

But I don't want to switch to unloading from ICs if there are any "gotchas" I don't know about.  Anybody know of anything I need to be careful of?

Thanks once again,

David Waugh, erstwhile DB2 DBA

                   

Michael Hannan

RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to David Waugh)

David,

I don't understand why you need to use Unload from Imagecopy. The Unload utility on a table can use PARALLEL for partitions, and a WHEN clause with predicates. You could also use an SQL-type unload (much higher CPU) with a parallel access path offloaded to zIIP, or run multiple Unload jobs with different partition/cluster key ranges.

  UNLOAD TABLESPACE dbname.tsname       
    FROM TABLE owner.tblname            
         WHEN (timestampcol BETWEEN 'yyyy-mm-dd-00.00.00.000000'      
            AND 'yyyy-mm-dd-23.59.59.999999')               
    SHRLEVEL CHANGE ISOLATION UR        

I hope your timestamp is updated for any Insert or Update.  If not, you might need to add a proper change timestamp, or fix those Updates that did not update it. Propagating Deletes may be more tricky. Decide if Deletes need to fire triggers to obtain the keys to be removed or be recorded in a temporal table, or use a logical delete instead of a real delete, initially.

If the timestamp range predicate selects a delta that is well behind the current timestamp, then the chances of picking up uncommitted data become very small, and you may not need special measures to deal with it.

Yes, an Image Copy taken SHRLEVEL CHANGE could contain row changes that are subsequently backed out, just like the table pages themselves, if the changes happened very recently.

Perhaps someone can list special advantages of Unload from Imagecopy that escape me right now. Seems to be an unusual solution. Is there a real problem that requires it?

DFSORT/ICETOOL can be used to merge a previous baseline flatfile extract with delta extracts, if needed, to become the next baseline extract.
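A sketch of that merge (dataset names and the key position/length are placeholders): concatenate the delta ahead of the baseline and keep the first record for each key, so a changed row from the delta wins over its baseline version:

  //MERGE    EXEC PGM=ICETOOL
  //TOOLMSG  DD SYSOUT=*
  //DFSMSG   DD SYSOUT=*
  //* DELTA FIRST IN THE CONCATENATION, SO ITS RECORDS WIN
  //IN       DD DISP=SHR,DSN=HLQ.DELTA.EXTRACT
  //         DD DISP=SHR,DSN=HLQ.BASELINE.EXTRACT
  //OUT      DD DSN=HLQ.NEWBASE.EXTRACT,DISP=(NEW,CATLG,DELETE),
  //            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
  //TOOLIN   DD *
  * KEEP FIRST RECORD PER KEY (KEY AT POS 1, LEN 10 IS A PLACEHOLDER)
    SELECT FROM(IN) TO(OUT) ON(1,10,CH) FIRST
  /*

SELECT runs with EQUALS behaviour, so records with equal keys retain input order and FIRST keeps the delta copy.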

It's hard to say what gotchas are possible with uncommitted data without knowing the design of the tables and their relationships to other tables. Just using the timestamp predicate to minimise the chance of uncommitted data is probably good enough. Uncommitted data is not totally corrupt, just potentially inconsistent with other data.

Michael Hannan,

DB2 Application Performance Specialist
CPT Global Ltd

Edited By:
Michael Hannan[Organization Members] @ Jan 26, 2020 - 05:15 AM (Europe/Berlin)

Larry Jardine

DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to Michael Hannan)
You may want to unload a large set of tables "as of" a near-specific point-in-time. Like on Feb. 5, you may want to have unloads for 100 tables as of Jan 31.

Larry Jardine
Database Advisor, Aetna, a CVS Health Company


Michael Hannan

RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to Larry Jardine)

Larry,

Thanks, that is really quite obvious, now that you mention it. Unloading as of some point in time, because the point-in-time value of a row can't be seen if it was subsequently updated. FLASHCOPY might be useful.

I think I can recall unloading tables in hierarchical bottom up sequence before, child table before parent, to make sure parents have been found for child table rows. That assumes that orphans matter in the extract? Might need Inserted and Updated timestamps on most tables.

That makes another "gotcha" occur to me. Where the database contains denormalised derived data, the calculated values could be slightly out of sync for rows extracted. Sites should have SQLs to check that derived data is in sync, and when it's not, generate a dummy update statement (update a row to the same value it already has) that will fire triggers to resync the derived data.

If needing an absolutely consistent copy of the data, triggers on Insert/Update/Delete could be fired during the data extract process to capture before (and after) rows for changes occurring during the extract runs.

So it is possible to design a more complex process to get all the copied data back into sync perfectly. Hopefully that is not needed, and some inconsistency of the extract can be tolerated. Try to run extracts at a quiet time in terms of Inserts/Updates etc.

Michael Hannan,
DB2 Application Performance Specialist
CPT Global Ltd

Edited By:
Michael Hannan[Organization Members] @ Jan 27, 2020 - 12:07 AM (Europe/Berlin)
Michael Hannan[Organization Members] @ Jan 27, 2020 - 12:11 AM (Europe/Berlin)

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to Michael Hannan)

Michael:

It's a little difficult to explain why I think unloading from an Image Copy is preferable to unloading from the active table, but I'll give it a shot:

We're unloading around 300 tables' worth of data from our Production subsystem, and then loading the same # of tables in the QA subsystem.  There is no shared tape between the LPARs, so we have to unload to a set of DASD volumes that are shared between the LPARs.  Space on these shared DASD volumes is limited, and they cannot contain all of the unloaded data at once. 

Some of the tables are quite large (up to 13 billion rows), so we have the BETWEEN timestamp unload WHEN criteria set to unload small slices of the data each time.  We simply can't (and don't want to) unload all umpteen billion rows of data from Prod and load it into QA.  Which means, at times, we're running up to 50 UNLOADs from a single table, each with a 2-week WHEN criteria, trying to do the unloads in "waves" so that we can get the last 6 months' worth of data unloaded from Prod and loaded into QA.  (the purpose of taking small time slices of 2 weeks each is apparently to keep the size of each unload dataset under 1000 cylinders, so that we don't have problems finding sufficient DASD space on those shared volumes). 

The process of unloading the same Prod table 50 times can take up to 4 hours for the biggest tables.  While 50 unloads are running (sequentially), data in the table being unloaded is being updated constantly over that 4 hours, thus it's a pretty "fuzzy" copy.   Multiply this process by 300 tables over 24 hours, and you have a REALLY "fuzzy" copy of the Production data to load into QA. 

We can't run many of these unloads in parallel, again because of the limited shared DASD space we have to work with.  So it's (1) unload a set of tables (or a single big table) from Prod, (2) load that set of tables (or a single big table) into QA, (3) delete the unload datasets from the shared DASD to free up space, (4) unload another set of tables (or another single big table) from Prod, (5) load that set of tables (or a single big table) into QA, (6) delete the unload datasets from the shared DASD to free up space, … Wash, Rinse, Repeat... over & over & over again.

The process of unloading and loading all 300 tables took about 36 hours the last time I did it (a couple of weeks ago), so it's a long, tedious, mind-numbing process that makes you want to tear your hair out and start updating your resume.  And yes, I was doing this all by myself - it wasn't like there were 2 or 3 of us taking shifts.

And then when you're done moving all this data from Prod to QA, you get the fun of doing it AGAIN to move the same data from QA to Development.  Oh, and they want it done closely on the heels of the Prod to QA move.  Makes for a really exhausting week, to say the least.

I'm searching for a way of trimming down the time it takes, and also searching for a way of getting a less "fuzzy" copy of the Production data.  Unloading the data from an image copy 50 times solves the "fuzzy" problem (pretty much, anyway, since the only Image Copies available to us are SHRLEVEL CHANGE), and I'm thinking that unloading from an image copy 50 times might be faster than unloading from the active table 50 times.  

Does it make more sense now why I'm exploring this new way of moving data from Prod to QA to Development?

David Waugh, Erstwhile DB2 DBA

David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Horacio Villa)

I've tried about a billion different ways of doing this, and I cannot get FROMCOPYDDN to work with an Image Copy TEMPLATE. It looks like UNLOAD just wasn't designed to work with an Image Copy TEMPLATE: it will SAY it successfully processed the template, but it will NOT use the template. It insists you have a //IMGCOPY DD DSN= statement for the Image Copy dataset in your JCL in order to UNLOAD from said Image Copy. Bummer.
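For anyone landing here later, the combination that does read the copy appears to be a plain DD plus FROMCOPYDDN, with the relative GDG generation resolved by normal allocation. A sketch, using the placeholder names from this thread:

```
//CPYTEMPL DD DISP=SHR,DSN=HLQ.DSN1COPY.DBname.TSname(0)
//SYSIN    DD *
  UNLOAD TABLESPACE DBname.TSname
    FROMCOPYDDN CPYTEMPL
    FROM TABLE Owner.Tblname
/*
```

A simple way to tell whether the copy was actually read is to check whether the image copy dataset's last-referenced date updates after the job runs.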

Phil Grainger

DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to David Waugh)
Two questions spring to mind

1. Why use unload/reload when DSN1COPY or DFSMSdss would be far faster? Provided all structures are equal, of course

2. I hope you are also anonymising the data before it hits development subsystems...

Phil G

Sent from my iPad

On 4 Feb 2020, at 01:20, David Waugh <[login to unmask email]> wrote:



Michael:

It's a little difficult to explain why I think unloading from an Image Copy is preferable to unloading from the active table, but I'll give it a shot:

We're unloading around 300 tables' worth of data from our Production subsystem, and then loading the same # of tables in the QA subsystem. There is no shared tape between the LPARs, so we have to unload to a set of DASD volumes that are shared between the LPARs. Space on these shared DASD volumes is limited, and they cannot contain all of the unloaded data at once.

Some of the tables are quite large (up to 13 billion rows), so we have a BETWEEN-timestamp WHEN criterion set to unload a small slice of the data each time. We simply can't (and don't want to) unload all umpteen billion rows of data from Prod and load it into QA. Which means, at times, we're running up to 50 UNLOADs from a single table, each with a 2-week WHEN criterion, trying to do the unloads in "waves" so that we can get the last 6 months' worth of data unloaded from Prod and loaded into QA. (The purpose of taking small 2-week time slices is to keep the size of each unload dataset under 1,000 cylinders, so that we don't have problems finding sufficient DASD space on those shared volumes.)
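Each slice is just a WHEN predicate in the FROM TABLE spec. A sketch, where UPDT_TS stands in for whatever timestamp column the real tables carry:

```
UNLOAD TABLESPACE DBname.TSname
  SHRLEVEL CHANGE ISOLATION UR
  FROM TABLE Owner.Tblname
    WHEN (UPDT_TS BETWEEN '2019-11-01-00.00.00.000000'
                      AND '2019-11-14-23.59.59.999999')
```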

-----End Original Message-----

Randy Bright

DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to Phil Grainger)
A few questions: Would “sub-setting” the data help? In other words, instead of unloading “umpteen billion” rows from production, could you get by with “umpteen million” if all RI relationships were maintained? And as Phil asked, should you be “anonymizing” the data before it leaves production?

If the answers to those questions are “yes”, contact me offline. I’d like to have a conversation.

Randy Bright
Solutions Architect
BMC Software, Inc.
[login to unmask email]


David Waugh

RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to Randy Bright)

Phil and Randy and others:

To answer your questions:

1. Why use unload/reload when dsn1copy or dfdss would be far faster. Providing all structures are equal, of course
     Yes, it would be faster & easier, but DSN1COPY/DFDSS won't work, for a number of reasons:

  • There is simply too much data in Prod: billions of rows. We only want a sub-set (the most recent ~300 days' worth) moved to QA and Dev, not all of it, which is what DSN1COPY and DFDSS would give us
  • There is not nearly enough contiguous shared DASD space between the Prod and QA LPARs for the "unload" datasets from DSN1COPY/DFDSS (one of them alone would be > 250,000 tracks)
  • There is the potential for differences between the table definitions (like a new column defined on a QA table that is not yet implemented on Prod) that would make DSN1COPY and DFDSS unusable as a solution for those particular tables.  We'd have to fall back to the UNLOAD/RELOAD process for them
  • The OBID/DBID XLAT process would be tedious to set up for 300+ tablespaces, and since there's no DDF communication between the DB2 subsystems on Prod & QA, it would be a manual process.
  • If we ever decided to go from Segmented tablespaces to PBG, we would start on Dev and work our way through QA, then Prod.  DSN1COPY & DFDSS would become useless while this was going on.
  • Indexes would need to be re-built after the DSN1COPY or DFDSS, which would take time
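(For anyone with matching structures, the OBIDXLAT bookkeeping being avoided looks roughly like the sketch below. Each SYSXLAT record is a "source,target" pair: first the DBIDs, then the PSIDs, then each table's OBID. All names and IDs here are made up for illustration.)

```
//XLAT     EXEC PGM=DSN1COPY,PARM='FULLCOPY,OBIDXLAT,RESET'
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=HLQ.DSN1COPY.DBname.TSname(0)
//SYSUT2   DD DISP=OLD,DSN=QACAT.DSNDBC.DBname.TSname.I0001.A001
//SYSXLAT  DD *
260,310
2,2
5,5
/*
```

Multiply that SYSXLAT legwork by 300+ tablespaces with no DDF link to query the target catalog, and the objection above is easy to appreciate.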


2. I hope you are also anonymising the data before it hits development subsystems...

    Sadly, no.  There should be, but (as far as I know) there's not.

 

3. Would “sub-setting” the data help?

    We're already doing that - like I said in my recent diatribe, we're doing up to 50 unloads for the larger tables, unloading 2-week date-range slices of data at a time, giving us unload datasets (in total) containing only the last ~300 days' worth of data

 

4.  How about unloading from a FlashCopy?  Unfortunately, you cannot use DB2 UNLOAD against a FlashCopy, only a "traditional" Image Copy. That's a shame, because a FlashCopy is a near-instantaneous snapshot of the data, which would be far better than a SHRLEVEL CHANGE image copy.

 

Yes, I know that several vendors have tools we could make use of. It sure would make my life easier. But I'm not the decider, just the DBA. I've passed along the idea of using a vendor's tool to accomplish this, but I'm not sure management is willing to spring for the cost of buying & implementing one.

Thanks,

David Waugh, Erstwhile (& exhausted) DB2 DBA. 

Michael Hannan

RE: DB2 z/OS UNLOAD from an Image Copy GDG (DB2 for z/OS V11 NFM)
(in response to David Waugh)

David,

I read your long explanation. A couple of things occur to me.

Your unloads don't have to be limited by the shared DASD. Unload to tape or to any DASD (subject to HSM) first up; it does not have to be the shared DASD. During the load process, you then have to stage-copy the data to the shared DASD to move it to the other system. However, the unloads could then have more parallelism. I hope the unloads are set to work on limited sets of partitions. Staging data to the other LPAR might well be designed to be somewhat independent of the unloads and the loads, and as highly parallel as the common DASD allows.

I don't know how often you are doing this extract of data to copy it to QA. However, if it's frequent, it seems to me a lot of time could be saved by maintaining a baseline extract of certain table subsets (timestamp ranges) at the target system, and then subsequently extracting deltas of only the rows inserted or updated since the last extract (this needs a timestamp column showing when a row was last updated/inserted). ICETOOL with the right control cards is capable of merging deltas with a baseline (very fast) to create a new baseline. The whole process could become more complex, but it would save repeatedly extracting much of the same data on each run. This assumes the table formats did not change; it's more complex if a table change has occurred.
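The baseline-plus-delta merge can be sketched with ICETOOL's SELECT operator. Concatenate the delta ahead of the baseline and keep the FIRST record for each key; SELECT preserves input order within a key, so an updated row replaces its older baseline version. The key position/length (1,26) and all dataset names here are hypothetical, and deleted rows would need separate handling:

```
//MERGE    EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//* Deltas concatenated ahead of the baseline, so FIRST keeps the newer row
//IN       DD DISP=SHR,DSN=HLQ.DELTA.UNLDDATA
//         DD DISP=SHR,DSN=HLQ.BASELINE.UNLDDATA
//OUT      DD DSN=HLQ.NEWBASE.UNLDDATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//TOOLIN   DD *
  SELECT FROM(IN) TO(OUT) ON(1,26,CH) FIRST
/*
```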

On occasion, SQL-driven unloads can be fast when only small portions of the partitions are extracted and a good access path is obtained. However, this is not usually likely.

Overall, the case for unloading from an Image Copy is a little unconvincing to me (being a skeptic). It might actually take longer, by causing more scanning of the image copies. I cannot be really sure without looking in depth.

Michael Hannan,
DB2 Application Performance Specialist
CPT Global Ltd 

Bharath Nunepalli

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)

We have a similar situation.
I copy data from one database to another frequently for an ERP application. We have 1000+ tables per database.
I developed a REXX process for building the UNLOAD & LOAD JCL. That process builds UNLOAD JCL that unloads the data from a specific date's image copies. We also use IBM Recovery Expert for data migration.

I'm not sure how you are building the UNLOAD/LOAD JCL. Also, I'm not clear on why you want to use a template.
Not all image copy datasets for a database will have the same G0000V00 generation number, so it's better to build a process that gets the dataset names from SYSCOPY.
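The catalog lookup such a process does can be sketched in SQL. DBname/TSname are placeholders, and the ICTYPE/SHRLEVEL codes should be verified against your Db2 level:

```sql
-- Most recent full SHRLEVEL CHANGE image copy for one tablespace
SELECT DSNAME, DSNUM, TIMESTAMP
  FROM SYSIBM.SYSCOPY
 WHERE DBNAME   = 'DBname'
   AND TSNAME   = 'TSname'
   AND ICTYPE   = 'F'        -- full image copy
   AND SHRLEVEL = 'C'        -- taken SHRLEVEL CHANGE
 ORDER BY TIMESTAMP DESC
 FETCH FIRST 1 ROW ONLY;
```

Feeding the DSNAME from SYSCOPY into the generated JCL avoids guessing at generation numbers entirely.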

I understand the pain of unloading data from very large tables. I would suggest using a high-performance unload utility. We use IBM HPU; I believe other ISVs provide similar products.
With HPU, I was able to unload 1.5 billion rows within 45 minutes. With the regular UNLOAD, it took more than 6 hours.


Bharath Nunepalli,

Senior DB2 DBA.

Michael Hannan

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Bharath Nunepalli)



In Reply to Bharath Nunepalli:

We have similar situation.
I copy data from one database to other frequently for an ERP application. We have 1000+ tables per databases.

On a tangent as usual:

Do you really have 1000+ tables in one database, or do I misunderstand? How many tablespaces? You may well mean 1000+ per application set of databases, in which case, no problem.

I would be interested in hearing what people think is a good limit per database, since I worry about the size of the DBD. If an extremely large DBD gets updated by DDL, there are issues with concurrency, the EDM pool, and also very large log records get written. I found out the hard way long ago, running a generated DDL script with many commits (each writing DBD before- and after-records) and burning lots of log. I then realised I had to make each database's DBD smaller.

Richard Yevich once recommended 1 Tablespace per Database (around the time of DB2 2.3). However I doubt anyone/many went that far to try for least locking conflicts.

Michael Hannan,
DB2 Application Performance Specialist
CPT Global Ltd

Edited By:
Michael Hannan[Organization Members] @ Feb 13, 2020 - 09:35 AM (Europe/Berlin)

Roland Schock

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Michael Hannan)
Hi Michael,

a Siebel database layout on distributed easily has 4000+ tables. SAP is similar.
On Db2 distributed we have a huge bunch of tables in a single tablespace. It was only on z/OS that the convention of one tablespace per table came up, since backup granularity is by tablespace. Hence it is possible to have a tablespace per table. Nice idea, though.

BTW, on Db2 LUW the "Maximum number of table spaces in a database" is 32,768.
See https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0001029.html

Mit freundlichen Grüßen/Kind regards
Roland Schock


Bharath Nunepalli

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Michael Hannan)

We actually have 2500+ tables per database:
1000 tables in their own tablespaces, and the remaining tables in a single segmented tablespace.

 

Bharath Nunepalli,

Senior DB2 DBA.

Manoj Kaveri

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to David Waugh)

Try removing the TEMPLATE statement and the dataset name, and just code FROMCOPYDDN CPYTEMPL.

A question came to mind: unloading from an image copy also puts the tablespace in UT mode during the unload. Does that meet your objective?

Michael Hannan

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Bharath Nunepalli)

In Reply to Bharath Nunepalli:

We actually have 2500+ tables per database.
1000 tables in their own tablespaces and remaining in a segmented tablespace

My comment about the number of objects (tablespaces and indexes) in a DBD relates to chewing up massive DB2 log space for the DBD before- and after-images written to the log for each change. I experienced this problem many years ago, when a lot of DBD updates caused excessive log switching and archiving. Has this gone away in the modern day? Are huge DBDs split up in some way today? Or do we still need to keep a DBD smaller?

DBD lock contention is of course another aspect. While I know CREATE INDEX has to update the DBD (and lock it, of course), which other operations have to share-lock the DBD, and is that lock brief or long-held?

I'm looking for expert technical comments on this tangential topic.

Michael Hannan,
DB2 Application Performance Specialist

Edited By:
Michael Hannan[Organization Members] @ May 01, 2020 - 10:30 AM (Australia/Melbourne)

James Campbell

DB2 z/OS UNLOAD from an Image Copy GDG
(in response to Michael Hannan)
Many versions ago, logging on DBD01 was changed to have one before-image and one after-image per UOW. The way to reduce log space is to have less frequent commits, which, of course, increases the time locks are held, which has its own issues.

James Campbell



Michael Hannan

RE: DB2 z/OS UNLOAD from an Image Copy GDG
(in response to James Campbell)



In Reply to James Campbell:

Many versions ago logging on DBD01 was changed to have one before and one after image
per UOW. The way to reduce log space is to have less frequent commits - which, of course,
increases the time locks are held - which has its own issues.

James Campbell

Thanks, James! For the locking reason, I think a small DBD is going to be way better. I still want to understand which processes lock the DBD and cause or contribute to the contention, outside of DDL changes. I'll probably have to make some time for experiments soon.

Michael Hannan,
DB2 Application Performance Specialist