    Unusually large number of Oracle Archive log files


     

    • Product: Aleph
    • Product Version: 20, 21, 22, 23
    • Relevant for Installation Type: Dedicated-Direct, Direct, Local, Total Care

     

    Description:
    Our Oracle database server for Aleph (production) is configured in ARCHIVELOG mode. Since the switch to production (Jan 5th), the disk space used by the archive logs had been well within specs, and any peaks we had were directly related to an indexing operation.

    Starting on June 23rd, we experienced an unusually large number of archive logs (10 to 20 times more than usual). Prior to June 23rd, the file system where the archive log files are placed (a separate file system from the Oracle data files) averaged about 20% disk usage (130G of 642G). Since June 23rd, usage has increased to 86% (521G of 642G).

    The size of the data in the database did not increase significantly (at least not at the file-system level), so I have to assume that a very large number of transactions is taking place. Is there a way for us to determine which job is causing such a large number of transactions? We are not aware of having started any jobs that would cause this.
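
    Before digging into the Aleph logs, the timing and size of the surge can be confirmed on the Oracle side. The query below is a generic sketch against Oracle's standard v$archived_log view (it is not part of the original report) and assumes a DBA-privileged sqlplus session:

    sqlplus / as sysdba
    SQL> -- archived redo volume per hour, in GB
    SQL> SELECT TO_CHAR(completion_time,'YYYY-MM-DD HH24') hr, ROUND(SUM(blocks*block_size)/1024/1024/1024,2) gb FROM v$archived_log GROUP BY TO_CHAR(completion_time,'YYYY-MM-DD HH24') ORDER BY 1;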

     

    Resolution:
    The starting point was the extremely large abc01 run_e_08 log, with 27.8 million updates:

    aleph@aleph-bib(a20_1) ABC01> grep -c 'Update doc' run_e_08.7368
    27823270
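
    How might such a log be located in the first place? One quick approach (a sketch, assuming the usual dlib alias and the $data_scratch variable that points to a library's scratch directory) is to list the largest scratch files in each suspect library:

    aleph@aleph-bib(a20_1) ABC01> ls -lhS $data_scratch | head
    aleph@aleph-bib(a20_1) ABC01> dlib abc50
    aleph@aleph-bib(a20_1) ABC50> ls -lhS $data_scratch | head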

    Suspecting that these updates involved authorities, I found the ./abc50/scratch/run_e_11.7368 log with 6.5 million type 4 (abc10 -> abc01) z105 updates: 

    aleph@aleph-bib(a20_1) ABC50> grep -c ' 4 00 ABC01 ABC10' run_e_11.7368
    6554570
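
    To gauge how far ue_11 still has to go, the z105 rows remaining in the $usr_library (abc50 here) can be counted. This is a sketch that assumes the standard s+ shortcut for opening sqlplus as the library's Oracle schema:

    aleph@aleph-bib(a20_1) ABC50> s+ abc50
    SQL> -- cross-library update messages still present for ue_11
    SQL> select count(*) from z105;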

    There was also an abc10 ue_01 log with 3.2 million updates from June 22 through June 30:

    aleph@aleph-bib(a20_1) ABC10> grep -c HANDLING run_e_01.7368
    3277298

    We see in the June 23 $alephe_scratch/abc10_p_manage_18.00056 log that 747,271 authority records were added/updated: 

    747271 END READING AT 15:23:00
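
    To see whether other authority loads of comparable size ran around the same time, the same marker can be compared across all of the abc10 p_manage_18 logs (a sketch based on the log line shown above):

    aleph@aleph-bib(a20_1) ABC10> grep 'END READING' $alephe_scratch/abc10_p_manage_18.*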

    In summary: This very large abc10 load generated abc10-to-abc01 z105 records, 
    which were processed by the $usr_library ue_11, 
    which updated the abc01 z01 (headings) records to "NEW", 
    which caused the abc01 ue_08 to process them, 
    which generated abc01 z07 records, 
    which were processed by the abc01 ue_01. 

    The latter processing (the bib ue_01 index updates) generated the majority of the archive logs.
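
    While the bib ue_01 works through this backlog, its remaining queue can be followed by counting the abc01 z07 records (again a sketch, assuming the s+ sqlplus shortcut):

    aleph@aleph-bib(a20_1) ABC01> s+ abc01
    SQL> -- index requests still waiting for ue_01
    SQL> select count(*) from z07;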

    What could have been done differently? All the steps up to and including the abc01 ue_08 processing are quite necessary (in order to have the authority updates reflected in the bib headings). 

    But, as described in the article "Bib z07 records when running p_manage_18 on Authorities" (KB 8192-8311), the bib z07 records with Z07_SEQUENCE beginning with the current year could have been deleted if you did not intend for the bib records themselves to be updated by this authority load.
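
    For orientation only: the cleanup that article describes amounts to deleting the abc01 z07 rows whose Z07_SEQUENCE begins with the current year. The statement below is a sketch (2015 is only an illustrative year, and the s+ shortcut is assumed); follow KB 8192-8311 for the exact procedure before deleting anything:

    aleph@aleph-bib(a20_1) ABC01> s+ abc01
    SQL> -- bib index requests created by the authority load (current-year sequence prefix)
    SQL> delete from z07 where z07_sequence like '2015%';
    SQL> commit;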

     

     


    • Article last edited: 12-Mar-2016