
    Oracle /arch directory filling up rapidly; ue_11

    • Article Type: General
    • Product: Aleph
    • Product Version: 18.01

    Description:
    Yesterday Aleph stopped when our ora01 volume ran out of space.

    The /ora01/oradata/aleph1/arch directory contained 182 GB worth of archivelog files, so I deleted a batch of them, from the earliest through 1/23. However, the earliest weren't older than 1/19.

    Looking at the most recent logs for the hot, cold, and archivelog backups, I didn't see anything to indicate that Oracle was keeping more archivelog files on the /ora01 volume than it normally would. So unless the Oracle data files themselves have suddenly grown much larger, I think there must simply have been many more archivelog files than usual (each is 48 MB, but there are lots of them).
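
    One quick way to confirm how much space the archive destination is using, and when the newest files were written, is the following (paths as in this case; adjust for your installation):

    du -sh /ora01/oradata/aleph1/arch
    ls -lrt /ora01/oradata/aleph1/arch | tail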

    Resolution:
    An examination of the /ora01/oradata/aleph1/arch directory showed that arch_aleph1_... files were being created at a rate of about 4 GB per hour.
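
    That rate can be estimated by counting the archivelogs written in the last hour and multiplying by the 48 MB log size (GNU find syntax; file-name pattern as in this case):

    find /ora01/oradata/aleph1/arch -name 'arch_aleph1_*' -mmin -60 | wc -l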

    The "top" command showed this:

    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    25437 oracle 16 0 1120m 637m 636m R 53.1 15.7 144:16.11 oracle
    25435 aleph 16 0 68200 16m 7752 S 13.6 0.4 91:41.57 rts32
    12613 oracle 16 0 1141m 17m 13m D 8.3 0.4 178:45.62 oracle
    25438 oracle 15 0 1118m 16m 16m S 1.3 0.4 2:35.78 oracle
    ...
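
    The busy Oracle process (PID 25437) is a local shadow process; its parent is the Aleph process that is issuing the SQL. The parent PID can be read directly, for example:

    ps -o ppid= -p 25437
    25435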

    "ps -ef" grep for this rts32 process# 25435 showed this:

    libprddb1.lib.abc.edu-18(1) ABC01-ALEPH>>ps -ef | grep 25435
    aleph 25435 1 18 08:36 ? 01:31:27 /exlibris/aleph/a18_1/aleph/exe/rts32 ue_11_a ABC00.a18_1
    oracle 25437 25435 29 08:36 ? 02:23:16 oraclealeph1 (DESCRIPTION=(LOCAL=YES)(ADDRESS= (PROTOCOL=beq)))
    oracle 25438 25435 0 08:36 ? 00:02:34 oraclealeph1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
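
    If further confirmation is wanted, the shadow process can be mapped to its database session from sqlplus (a sketch using the standard v$process / v$session views):

    SQL> SELECT s.sid, s.serial#, s.username, s.program
           FROM v$session s, v$process p
          WHERE p.spid = '25437'
            AND s.paddr = p.addr;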

    ue_11 runs in the $usr_library. To see the $usr_library, do this:

    echo $usr_library
    ABC00
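
    To get to that library's scratch directory, switch libraries and change into $data_scratch (standard Aleph aliases and variables, assuming a typical installation):

    dlib abc00
    cd $data_scratch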

    Looking at the ABC00 $data_scratch directory, we see this:

    ls -lrt run_e_11*
    -rw-rw-r-- 1 aleph exlibris 4675507111 Dec 21 00:31 run_e_11.1797
    -rw-rw-r-- 1 aleph exlibris 5111072869 Jan 25 00:31 run_e_11.11920
    -rw-rw-r-- 1 aleph exlibris 11014410780 Jan 29 16:55 run_e_11.2916
    -rw-rw-r-- 1 aleph exlibris 1251406189 Jan 30 16:53 run_e_11.25361


    These logs have entries like this:

    2009-01-26 08:45:12 Update : c 00 CEN02 ABC01:
    Load: /tmp/utf_files/exlibris/aleph/u18_1/abc01/tab/tab_z105_filter
    Load: /tmp/utf_files/exlibris/aleph/u18_1/alephe/tab/tab_io_remote
    2009-01-26 08:45:12 Error: io_z105 - table doesn't exist in database or tab_io_remote for library: 'CEN02'.
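
    How often this error repeats in a given log can be gauged with a simple count (file name taken from the listing above; the actual count will vary):

    grep -c "Error: io_z105" run_e_11.2916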

    The ABC01 tab_z105 has this line referencing CEN02:
    NEW-DOC c CEN02
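
    The line can be located from the ABC01 environment (assuming the standard $data_tab variable, which points at the library's tab directory):

    dlib abc01
    grep CEN02 $data_tab/tab_z105
    NEW-DOC c CEN02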

    SKB 8192-5046 addresses this.

    SKB 8192-4344 explains that, when the z105 write fails, ue_11 continues to retry it (on the assumption that the failure is due to a duplicate timestamp); that is why such large logs are being written.
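
    Once the configuration has been corrected per the SKB articles above, it is worth confirming that the current run_e_11 log has stopped growing, for example by comparing its size a few minutes apart:

    cd $data_scratch
    ls -l run_e_11.* ; sleep 300 ; ls -l run_e_11.*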

    Additional Information

    archive log logs


    • Article last edited: 10/8/2013