- Article Type: General
- Product: Aleph
- Product Version: 18.01
In our production region we have a problem where the z07 table is filling up with records to process. I followed KB 4093, and none of the causes mentioned there apply here. BUT I did rebuild the direct index (z11) today via parallel indexing in the ABC02 library. I have run manage_05 in ABC02 after setting LS's for z00 and z103 in ABC02. I haven't done anything to ABC01 yet.
Could that be the cause of all the z07 entries? That doesn't seem logical, since it would defeat the purpose of parallel indexing. Is there a table setting I messed up that could be causing this?
I see that the current ue_08 log (run_e_08.18695) in the abc01 $data_scratch directory is large.
Each z01 heading processed by ue_08 can generate up to 10 z07 records. The following grep shows that the current ue_08 run generated 455,480 z07 records:
aleph1-18(1) ABC01-ALEPH>>grep -c 'Update doc' run_e_08.18695
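The `grep -c` count above simply tallies the "Update doc" lines ue_08 writes each time it queues a z07 record. A minimal sketch of the same check against a small sample file (the sample lines are hypothetical; the real run_e_08.18695 message format may differ):

```shell
# Build a hypothetical sample of ue_08 log output; the actual
# line format in run_e_08.18695 may vary by version.
cat > /tmp/run_e_08.sample <<'EOF'
Update doc 000000001 ABC01
Update doc 000000002 ABC01
Starting new cycle
Update doc 000000003 ABC01
EOF

# Count the lines that indicate a z07 record was generated,
# exactly as done against the real log in the article.
grep -c 'Update doc' /tmp/run_e_08.sample
```

On the real system, point the same `grep -c 'Update doc'` at the current run_e_08 log in $data_scratch.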
I see in the run_e_08.18695 that ue_08 was started with "CHECK-OPTION: C". That's good; that's what it should be.
ue_08 processes z01 records which have a z01_rec_key_4 value of "-NEW-000000000".
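To gauge how many headings are still flagged for ue_08, the z01 records with that key value can be counted in SQL. This is a sketch only: the schema name and the exact padding of the "-NEW-" value should be verified against your own installation before running it.

```sql
-- Hypothetical check: count z01 headings still queued for ue_08.
-- Verify the schema (abc01) and the exact '-NEW-...' padding locally.
SELECT COUNT(*)
  FROM abc01.z01
 WHERE z01_rec_key_4 = '-NEW-000000000';
```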
This is similar to KB 8192-10426, but I don't see any run of p_manage_02 which could have triggered all of the z01_rec_key_4 "-NEW-" values. p_manage_05 does not update the z01 and, I believe, has nothing whatsoever to do with this problem.
At the moment, each z07 record that is processed is also being written to the z07h table for re-processing in the later parallel indexing step. This means that the 184,421 z07 records will be written there, if given the chance.
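The size of the backlog in each queue can be compared directly with row counts. Again a sketch: z07 and z07h are the standard Aleph Oracle tables, but confirm which library's schema holds them on your installation.

```sql
-- Hypothetical backlog comparison (schema name is an assumption).
SELECT COUNT(*) FROM abc01.z07;   -- records still waiting for ue_08
SELECT COUNT(*) FROM abc01.z07h;  -- records parked for the parallel-indexing rerun
```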
I suggest preventing this by either:
(1) completing the parallel indexing steps in the near future; or
(2) halting the parallel indexing at this point (including dropping the z07h table with the SQL "drop table z07h") and then starting it over again from the beginning.
- Article last edited: 10/8/2013