- Article Type: General
- Product: Aleph
- Product Version: 20, 21, 22, 23
- Relevant for Installation Type: Dedicated-Direct; Direct; Local; Total Care
Our /exlibris filesystem keeps filling up (as seen in "df -h /exlibris"). How can we tell what is causing this?
Note: If the ./oradata/alephnn/arch directory is part of a filesystem other than /exlibris, it may be that other filesystem which is filling up.
The most common cause is large files being rapidly written to the Oracle ./arch (archive logging) directory. To find the location of the Oracle ./arch directory do "util o 7/3" (Show Archiving Status) or:
> sqlplus sys/oradba as sysdba
SQL> archive log list
Archive destination /exlibris/oradata/aleph23/arch
Go to that directory and see how fast the files are being created. The most common causes of large archive log files being rapidly written are:
1. batch jobs loading large numbers of bib records
2. manage-21 or manage-37 global change jobs generating large numbers of updates
3. manage-40 (Update Indexes for Selected Records)
Each of these generates large numbers of z07 (indexing request) records, which are processed by ue_01, which in turn generates very large numbers of index updates and archive log entries. Note: the number of index updates resulting from bib record updates is usually about 30 times the number of bib record updates themselves.
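One way to gauge how quickly the archive logs are accumulating is to list the newest files and sum what was written in the last hour. This is only a sketch; the ARCH_DIR path below is the example destination shown above, and your directory may differ (use the path reported by "archive log list"):

```shell
# Sketch: check how fast archive logs are being written.
# ARCH_DIR is an assumption -- substitute the path from "archive log list".
ARCH_DIR=${ARCH_DIR:-/exlibris/oradata/aleph23/arch}
if [ -d "$ARCH_DIR" ]; then
    echo "Ten newest archive logs:"
    ls -lt "$ARCH_DIR" | head -n 10
    echo "KB written in the last 60 minutes:"
    find "$ARCH_DIR" -type f -mmin -60 -exec du -ck {} + | tail -n 1
else
    echo "Directory $ARCH_DIR not found -- check the 'archive log list' output"
fi
```

If the "last 60 minutes" total is large and growing each time you run this, the archive logs are the likely culprit.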
The article "Oracle /arch directory filling up rapidly; ue_11" is an analysis of a case where the Oracle archive logs were being rapidly written as a result of a large authority record load.
Use the "top" command (or its equivalent on your server) to see which ue_nn process(es) might be involved:
> top -u aleph (this will limit the "top" display to processes with "aleph" as the user)
Then look for processes with "rts" in the COMMAND column (on the far right) and a high value in the %CPU column, and do the following for the nnnn "PID" (process ID) (the first column):
> ps -ef | grep nnnn
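As an alternative to the interactive "top" display, a single "ps" listing sorted by CPU usage shows the busiest aleph-owned processes at once. A sketch, assuming a GNU/Linux "ps" (the --sort option is Linux syntax; other Unixes differ), with ALEPH_USER as an assumed placeholder:

```shell
# Sketch: list the aleph user's processes, busiest first.
# --sort=-pcpu is GNU/Linux ps syntax; adjust on other platforms.
ALEPH_USER=${ALEPH_USER:-aleph}
ps -u "$ALEPH_USER" -o pid,pcpu,pmem,comm --sort=-pcpu 2>/dev/null | head -n 10
```

Look for "rts" entries near the top of the listing, then feed their PID to "ps -ef | grep nnnn" as above.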
Once it has been determined that a particular ue_nn process is involved, the first thing to do is use "util e" to *stop* the process. (The ue_nn's are background processes; stopping them for a few minutes, or even hours, is not really a problem. They will simply pick up where they left off when restarted.) The most common problem ue_nn is the bib library ue_01.
If the preceding doesn't help determine what files/processes are filling up the disk, then do the following:
> cd /exlibris
> du > /tmp/yourname
(wait several minutes to let any growth show)
> du > /tmp/yourname2
> cd /tmp
> diff yourname yourname2
The yourname and yourname2 files will list each directory and subdirectory with its disk usage. The diff will show you which entries have changed -- and by how much -- in the interval between the two "du" commands.
Note: a "du" on the entire /exlibris directory can take a while (an hour or more). If you suspect that a certain subdirectory is the problem, you can cd to that directory and do the "du" there. That will be quicker.
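The two-snapshot-and-diff procedure above can be scripted. A sketch, with TARGET and INTERVAL as placeholders (in practice you would wait several minutes between snapshots, not a few seconds):

```shell
# Sketch: snapshot disk usage twice and diff to see what is growing.
TARGET=${TARGET:-/exlibris}
INTERVAL=${INTERVAL:-5}      # seconds; use several minutes in practice
SNAP1=$(mktemp) ; SNAP2=$(mktemp)
du -k "$TARGET" 2>/dev/null | sort -k2 > "$SNAP1"
sleep "$INTERVAL"
du -k "$TARGET" 2>/dev/null | sort -k2 > "$SNAP2"
# Any lines that differ are directories whose size changed between snapshots.
diff "$SNAP1" "$SNAP2" || true
rm -f "$SNAP1" "$SNAP2"
```

Each diff line shows the directory's size in KB at the two snapshot times, so the fastest-growing directories stand out.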
In the case of manage-21, it may be that the fields being updated are not indexed, so their z07's do not need to be processed by ue_01 -- in which case the z07 records could be deleted and this whole problem prevented. See the article "manage-21 and manage-37 generate large numbers of z07 records" in this regard.
See the article "/exlibris filesystem 100% full; locating files to delete **MASTER RECORD**" to locate excessively large files/directories.
Aside from the Oracle ./arch files, other rapidly-growing files can be the server log files. The $KEEP_LOGDIR value in aleph_startup is set to remove www_server and pc_server logs older than 7 days. If you see large pc_ser_nnnn files in your $LOGDIR directory, note that these are not normally needed and can be suppressed most of the time. See the article "Suppression or generation of the $LOGDIR/pc_ser file".
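To spot oversized or stale server logs, the following sketch lists the largest files in $LOGDIR and any pc_ser files older than 7 days. The LOGDIR default here is only an assumption for illustration; on an Aleph server $LOGDIR is already set in the environment:

```shell
# Sketch: find the biggest log files and old pc_ser logs in $LOGDIR.
LOGDIR=${LOGDIR:-$HOME/alephe_logdir}   # placeholder; normally set by Aleph
if [ -d "$LOGDIR" ]; then
    echo "Largest files in $LOGDIR (KB):"
    du -k "$LOGDIR"/* 2>/dev/null | sort -rn | head -n 10
    echo "pc_ser files older than 7 days:"
    find "$LOGDIR" -name 'pc_ser*' -mtime +7 -exec ls -lh {} + 2>/dev/null
fi
```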
- Article last edited: 9-Mar-2017