Unload data from backup

alain pary

Unload data from backup

Hello , 

I have to unload data from a backup to check its content. It is very simple, so why am I making a thread about it?

The table is SYSIBM.SYSROUTINES, ok.

As two columns are LOBs, it can't be done with UNLOAD from an image copy ;-(

No problem: create a clone table, run DSN1COPY with OBIDXLAT on the 3 backups (the base table space and the 2 LOB table spaces), rebuild the indexes for the LOBs, and I will have my information in 20 minutes.

I create a table SYSROUTINES like SYSIBM.SYSROUTINES, prepare my OBIDXLAT jobs, run them, and the pages are copied into the 3 table spaces.
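For readers who have not run this before, a DSN1COPY step with OBIDXLAT looks roughly like this. This is only a sketch: the data set names, DBIDs, PSIDs and OBIDs below are invented for illustration; the real values come from the catalog of the source and target systems.

```jcl
//DSNCOPY  EXEC PGM=DSN1COPY,
//             PARM='OBIDXLAT,RESET,FULLCOPY'
//*                  translate object IDs, reset log RBAs,
//*                  input is a full image copy
//STEPLIB  DD DISP=SHR,DSN=DSNC10.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//* input: the image copy of the source table space (name is an example)
//SYSUT1   DD DISP=SHR,DSN=BACKUP.SYSDBASE.IMAGCOPY
//* output: the VSAM cluster of the target table space (name is an example)
//SYSUT2   DD DISP=SHR,DSN=DB2.DSNDBC.MYDB.MYTS.I0001.A001
//* SYSXLAT: one 'source,target' pair per record:
//* first the DBIDs, then the PSIDs, then each OBID
//SYSXLAT  DD *
0006,0260
0010,0002
0012,0005
/*
```

One such step is needed per table space, so three in this scenario (base plus two LOB table spaces).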

Rebuild index for the PARSE_TREE LOB: ok.

Rebuild index for the TEXT LOB --> abend ;-(

I verify my OBIDs, the OBIDXLAT output, ...

I check in the catalog and finally find the cause: the LOB table space for TEXT uses 8K pages and mine uses 4K pages. (CREATE TABLE LIKE doesn't handle this, even in V12.)
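The mismatch shows up when comparing the buffer pool and page size of the source and target LOB table spaces in the catalog. A sketch of the kind of query that reveals it ('MYDB' is a placeholder for the database that CREATE TABLE LIKE allocated implicitly):

```sql
-- compare page sizes of the catalog's LOB table spaces with my own;
-- TYPE = 'O' marks a LOB table space
SELECT DBNAME, NAME, BPOOL, PGSIZE, TYPE
  FROM SYSIBM.SYSTABLESPACE
 WHERE DBNAME IN ('DSNDB06', 'MYDB')   -- catalog DB vs. my target DB
 ORDER BY DBNAME, NAME;
```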

I redefine the objects with the right page size, redo my OBIDXLAT jobs, redo the index rebuilds,
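To avoid the implicit allocation in the first place, the auxiliary objects for the TEXT column can be created explicitly with a matching page size before running DSN1COPY. Roughly like this, with all names hypothetical:

```sql
-- explicit LOB table space in an 8K buffer pool,
-- matching the catalog's LOB table space for TEXT
CREATE LOB TABLESPACE MYLOBTS IN MYDB
       BUFFERPOOL BP8K0;

-- auxiliary table that stores the TEXT column of my copy
CREATE AUXILIARY TABLE MYSCHEMA.SYSROUTINES_TEXT
       IN MYDB.MYLOBTS
       STORES MYSCHEMA.SYSROUTINES COLUMN TEXT;

-- the required auxiliary index (no column list for an aux index)
CREATE UNIQUE INDEX MYSCHEMA.SYSROUTINES_TEXT_IX
       ON MYSCHEMA.SYSROUTINES_TEXT;
```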

and finally get my information.

It took more than 2 hours for this simple task;

fortunately, it was not for an incident or for problem solving.

I have a little question for you:

Do you have an automatic procedure for this task (without a third-party tool)?

Can you check it with SYSIBM.SYSROUTINES?

How much time do you need to get the information?

 

regards 

Alain 

 

 

Avram Friedman

RE: Unload data from backup
(in response to alain pary)

I would like to suggest that DB2 is good at storing procedures / routines in the catalog.
I suspect your requirement is to review a specific routine at time of failure.

The method you propose will not do this, as there will be many more potentially correct routines than incorrect ones, and the long column lengths, including some BLOBs, will make reading difficult.

A simple SELECT through SPUFI or QMF of the particular row of interest should do the trick.
Very long columns that are of no or limited interest can be eliminated from the display or truncated with SUBSTR.
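For example, a SPUFI query along these lines keeps the LOB columns readable (the schema, routine name and SUBSTR length are placeholder values):

```sql
-- show one routine of interest; truncate the CLOB so SPUFI can display it
SELECT SCHEMA, NAME, ROUTINETYPE, CREATEDTS,
       SUBSTR(TEXT, 1, 200) AS TEXT_START   -- first 200 bytes only
  FROM SYSIBM.SYSROUTINES
 WHERE SCHEMA = 'MYSCHEMA'
   AND NAME   = 'MYPROC';
```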

You mentioned that you have no add-on tools.
Assuming you mean no tools for unload, you may have tools for "Catalog Management", available from many vendors such as IBM, BMC, and CA.
These tools, if present, will allow you to reverse engineer your routines back to DDL for readability.

It is also a good idea to treat routines, functions and procedures with the same administrative practices as any other type of user program.  This includes being able to save and retrieve the source.




Avram Friedman
DB2-L hall of fame contributor
DB2-L 'past' administrator


alain pary

RE: Unload data from backup
(in response to Avram Friedman)

Thank you Avram , 

My purpose is only to point out that a simple task can sometimes be more difficult to carry out than expected.

There are some restrictions (LOB unload in this case) and some side effects (a LOB table space in 4K vs. 8K pages).

regards 

Alain

Kai Stroh

RE: Unload data from backup
(in response to alain pary)

Yes, the Unload utility cannot process LOB data when unloading from image copy. I think the reason for this restriction is that it would be very expensive for Db2 to locate a LOB value for a given row in the base table. It needs an index to quickly find the position of the ROWID in the LOB, after that it needs to jump through the LOB high level and low level space map pages, and finally it needs to be able to randomly access LOB data pages because a single LOB value can span many pages. This is probably not feasible when operating on image copies, which may not even be stored on DASD.

Your DSN1COPY approach was good, except that you relied on Db2 to allocate your target LOB table spaces implicitly (by using CREATE TABLE LIKE), and it ended up using different attributes. Therefore, DSN1COPY could not create a working copy.

Also, I would be careful when using clone tables as target (assuming you are talking about a real clone relationship, i.e. the .I0002 data sets). In clone tables, Db2 sets the hi-order bit inside the table space for things like the PSID and OBID in order to indicate that this is instance 2 of an object. I am not sure if DSN1COPY is able to handle this correctly in all cases.

In general, you will probably need a tool for DDL generation in order to create the target objects. I don't think there is a good way to automate that without using a vendor tool unless you are willing to write a lot of REXX code.

I happen to work for a company that offers a tool that will do exactly what you are describing - it will generate DDL, copy table spaces (including LOB and XML) and indexes directly from image copies into a different target, and also take care of things like copying RUNSTATS information, updating sequences, and running all required utilities such as REBUILD INDEX or REPAIR CATALOG (the latter is extremely important if tables have ever been altered), and much more. Please contact me directly if you would like more information.

 

--
Kai Stroh
UBS Hainer
Fast, efficient Db2 z/OS data migrations and renewals. That’s BCV5.
Learn how the Test Data Management Field Guide can help you to improve your own process.