- Article Type: General
- Product: Aleph
- Product Version: 20
- Relevant for Installation Type: Dedicated-Direct; Direct; Local;
util a/17/11/2 in the xxx50 library shows that the z00p table has 2,103,451 records, occupying 455 meg.
But when Upgrade Express "Export Customer Data" (create_customer_data) processes the z00p using the Data Pump, this is seen:
. . exported "ABC50"."Z00P" 24.48 GB 2103451 rows
(And it takes *11 hours* to process this z00p. The Data Pump export of all of the Oracle data, *without the z00p*, takes only 6 hours.)
Why does the exported table occupy 24 gig when the size in Oracle is only 455 meg? And why does it take so long?
[Note: At smaller sites -- say, under 800,000 bib records -- it is less trouble to just do the regular export/import and not exclude the z00p.]
In addition to the data and index components that normal tables have, the z00p has a very large LOBSEGMENT component because the Z00P_PTR column is stored in LOB format. This can be seen in util a/17/11/1; see the example in Additional Information.
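The LOB segment's footprint can also be checked directly in Oracle. The following is a minimal sketch, assuming you can query the DBA views; the owner `ABC50` is taken from the example above and should be replaced with your own library's schema:

```sql
-- Sketch: report the table segment and the LOB segment backing Z00P.
-- (Index segments are listed separately in util a/17/11/1.)
SELECT s.segment_name, s.segment_type, ROUND(s.bytes/1024/1024) AS mb
  FROM dba_segments s
 WHERE s.owner = 'ABC50'
   AND (s.segment_name = 'Z00P'
        OR s.segment_name IN (SELECT l.segment_name
                                FROM dba_lobs l
                               WHERE l.owner = 'ABC50'
                                 AND l.table_name = 'Z00P'))
 ORDER BY s.bytes DESC;
```

The LOBSEGMENT row is the one that accounts for the large Data Pump export, since the exported LOB data is not compact the way it is inside the Oracle datafiles.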
Two solutions to this problem are given below. We originally suggested Solution B (a separate Data Pump export of the z00p prior to the regular Upgrade Express export), but testing showed that Solution A (rebuilding the z00p with p_publish_04 in the new version) is much faster: p_publish_04 ran on the 2.3-million-bib library (generating 2.0 million z00p records) in two hours. The Data Pump export, though it does not require downtime for the z00p export, still takes 8-11 hours (plus three hours for the import).
Solution A. p_publish_04
1. Stop the ue_21 daemons on v20 Prod.
2. Drop the z00p table or omit it from the v20 export as described in Article 000046841 ("How to exclude Z00P_SEGNAME LOBSEGMENT from export") (see link to this article below).
3. Run Upgrade Express.
4. Do util a/17/1 to drop the v21 z00p.
5. Run p_publish_04 on each source library in v21. These p_publish_04 runs will generate z00p records stamped with the current date/time, which will cause them all to be republished by p_publish_06. If there have been problems you want to correct, this may be desirable; but if the v20 z00p is OK, you may want to update the timestamp for each publishing set in the v21 $aleph_proc/tab_publish_timestamp to a date/time greater than the z00p_timestamp values of the new z00p records. This prevents p_publish_06 from republishing all the z00p records.
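The timestamp update in step 5 might look like the following. This is a hypothetical illustration only: the actual column layout is defined by the header of your own copy of tab_publish_timestamp, and the set name TAR01 is made up; use the publishing-set names and timestamp format configured at your site.

```text
! Hypothetical tab_publish_timestamp entry: publishing-set name, then a
! timestamp later than the z00p_timestamp values written by p_publish_04,
! so that p_publish_06 does not republish the rebuilt z00p records.
TAR01 20141026120000
```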
Solution B. Separate Data Pump export of the z00p prior to the regular, complete Upgrade Express export.
The night *before* the regular planned Upgrade Express export:
1. Stop the ue_21 daemons on v20 Prod.
2. Export the z00p using $aleph_proc/oracle_expdp_table on v20 Prod.
3. Copy the exported v20 z00p to v21.
4. Drop the z00p (util a/17/1) on v20 Prod.
The night *of* the regular planned Upgrade Express export:
5. Take Prod v20 down.
6. Upgrade Express "Create Customer Data" to export the v20 Oracle data from v20 Prod.
7. Copy the v20 Oracle data to v21.
8. Import the Oracle data ("Install Customer Data") from step 7 on v21.
9. Import the z00p on v21 using $aleph_proc/oracle_impdp_table. (Note: this needs to wait until *after* step 8 which includes create_ora_user for the $usr_library.)
10. Start the ue_21 daemons on v21 (which will process the z07p's accumulated while the ue_21 daemons were stopped on v20).
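The table-level export and import in steps 2 and 9 are wrappers around Oracle Data Pump. If you need to run Data Pump by hand, single-table calls look roughly like the sketch below. The directory object DATA_PUMP_DIR, the ABC50 schema, and the password are placeholders for your site's values; the Aleph wrappers ($aleph_proc/oracle_expdp_table and $aleph_proc/oracle_impdp_table) should be preferred since they set up Aleph's Oracle environment for you.

```shell
# Sketch only -- prefer the Aleph wrapper scripts.

# On v20 Prod: export just the z00p table (step 2).
expdp abc50/password tables=Z00P directory=DATA_PUMP_DIR \
      dumpfile=z00p.dmp logfile=z00p_exp.log

# On v21, only after "Install Customer Data" has run create_ora_user
# for the library (step 9): import the table into the new instance.
impdp abc50/password tables=Z00P directory=DATA_PUMP_DIR \
      dumpfile=z00p.dmp logfile=z00p_imp.log table_exists_action=replace
```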
Link to relevant article: How to exclude Z00P_SEGNAME LOBSEGMENT from export.
util a/17/11/1 shows this:

Z00P          TABLE       TS4D    5554176    694272     85
Z00P_ID       INDEX       TS3X     327680     40960      5
Z00P_ID1      INDEX       TS3X     524288     65536      8
Z00P_ID2      INDEX       TS3X     262144     32768      4
Z00P_ID3      INDEX       TS3X     983040    122880     15
Z00P_ID4      INDEX       TS3X     262144     32768      4
Z00P_ID5      INDEX       TS3X    1310720    163840     20
Z00P_SEGNAME  LOBSEGMENT  TS4D  101293056  12661632   1734

The last line (the Z00P_SEGNAME LOBSEGMENT) is what is taking most of the space (101 gig).
Category: System management
- Article last edited: 10/25/2014