  • Ex Libris Knowledge Center

    /exlibris filesystem 100% full; locating files to delete **MASTER RECORD**

    • Article Type: General
    • Product: Aleph
    • Product Version: 20, 21, 22, 23

    Description:
    Our /exlibris filesystem has filled up. ("df -h /exlibris" shows 100% used.)
    We get the following error message logging on to the server:
    cp: writing `/exlibris/aleph/a20_2/tmp/f_symbol.22502': No space left on device
    How can we locate files to delete?

    Resolution:
    Sometimes the problem is an excessively large file; other times it's a directory with an excessive number of smaller files.

    The following can be used to locate files over 1,000,000 blocks in a particular filesystem. (On most systems "find -size" counts 512-byte blocks, so this threshold is roughly 500 MB.) For instance, if the filesystem that has filled up is /exlibris, the following can be done:
      > cd /exlibris
      > find . -size +1000000 -print
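
    The same technique can be sketched in a self-contained way. The scratch directory, file names, and the smaller +2000-block threshold below are illustrative stand-ins (not from a real Aleph server) so the commands can be tried anywhere:

    ```shell
    # Demo of "find . -size +N": N counts 512-byte blocks on most systems.
    scratch=$(mktemp -d)
    dd if=/dev/zero of="$scratch/big.dat" bs=1024 count=2048 2>/dev/null   # ~2 MB
    dd if=/dev/zero of="$scratch/small.dat" bs=1024 count=4 2>/dev/null    # ~4 KB
    cd "$scratch"
    # -size +2000 = more than 2000 512-byte blocks (~1 MB); -type f skips
    # directories, and "ls -lh" shows human-readable sizes for each match.
    find . -type f -size +2000 -exec ls -lh {} +
    ```

    Only big.dat is listed; small.dat falls under the threshold.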

    If you suspect a particular directory, "cd" to it and run "du -sh"; this shows the total amount of space the directory is using. Some common examples are $LOGDIR, /exlibris/ftp_from_exlibris, and ./oradata/alephnn/arch.

    Note: if you find that "du -sh" doesn't work on your server, use "du -sm" or "du -sk" instead.
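
    That fallback can be written as a one-liner; a minimal sketch, where the scratch directory merely stands in for $LOGDIR or the other directories named above:

    ```shell
    # Report a directory's total usage; fall back to -k (kilobytes) on
    # older du builds that lack the -h (human-readable) option.
    dir=$(mktemp -d)          # stand-in for $LOGDIR etc.
    usage=$(du -sh "$dir" 2>/dev/null || du -sk "$dir")
    echo "$usage"
    ```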

    "du -k" shows the size of all subdirectories in a directory in an easy-to-compare form.

    See Note 3 below for a specific case in which p_file_04 was looping and writing a tremendously large z52.comp.seq file.

    The following can be used to locate the largest directories in a filesystem, sorted in ascending order (largest last):
    > cd /exlibris
    > du -ks ./* | sort -n

    du: cannot read directory `./lost+found': Permission denied
    8 ./tmp
    24 ./startup
    3992 ./aik-bak
    899744 ./product
    4653904 ./app
    5221384 ./ftp_from_exlibris
    211830756 ./aleph


    Then you can do this same command for the largest directory:
    > cd aleph
    > du -ks ./* | sort -n

    4 ./def_aleph.dat
    ...
    8 ./backup_temp
    20 ./ora_aleph
    1344 ./upgrade_express_1901_2001.tar.1.09
    4652 ./upgrade_express_1801_1901.tar.1.26
    5272 ./upgrade_express_1701_1801.tar.1.26
    7008 ./oradiag_aleph
    39456 ./u20_1
    89856 ./upgrade_express_1801_1901
    652000 ./upgrade_express_1701_1801
    934984 ./a20_1
    3927480 ./upgrade_express_1901_2001
    103810740 ./a20_2
    108579024 ./u20_2

    <and so on with each subdirectory>
    To eliminate the "Permission denied" messages, run the command as root.
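
    The manual drill-down above (cd into the largest directory, re-run "du -ks ./* | sort -n", repeat) can be sketched as a small loop. The descend_largest function is a hypothetical helper, not an Ex Libris utility, and its simple awk parse assumes directory paths without spaces:

    ```shell
    # Hypothetical helper: from a starting directory, keep descending into
    # the largest subdirectory, printing the trail of directories visited.
    descend_largest() {
        dir=$1
        while :; do
            # du -ks per subdirectory, numerically sorted; tail -1 = largest
            biggest=$(du -ks "$dir"/*/ 2>/dev/null | sort -n | tail -n 1 | awk '{print $NF}')
            [ -n "$biggest" ] || break       # no subdirectories left
            echo "$biggest"
            dir=$biggest
        done
    }

    # Example on a scratch tree (a stand-in for /exlibris; run as root
    # there to avoid "Permission denied" on lost+found):
    scratch=$(mktemp -d)
    mkdir -p "$scratch/app/data" "$scratch/startup"
    dd if=/dev/zero of="$scratch/app/data/big.dat" bs=1024 count=1024 2>/dev/null
    descend_largest "$scratch"
    ```

    The trail printed leads straight to the deepest space-consuming directory, saving the repeated cd/du cycle.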

    Notes:

    1. When you find excessively large files, you should consider what process created these files and what can be done to prevent it.

    2. The most common problem directory is the ./oradata/xxxx/arch directory. It contains Oracle archive logs. When a lot of bib or authority records are being loaded at once, this ./arch directory grows rapidly. Files in ./oradata/alephx/arch/ should *not* be deleted manually; they will be removed by Oracle/RMAN after a few days, or by an Oracle DBA. See the article "Analyzing space problems due to excessive Oracle archive logs" in this regard.

    3. In one case the "find . -size +1000000 -print" command in /exlibris showed a 283 GB /exlibris/aleph/a23_1/vir01/files/z52.comp.seq file. This file is created by the p_file_04 procedure, which is called by the clear_vir01 job; the procedure seemed to be looping. We killed the clear_vir01 processes and the z52.comp.seq file disappeared. (We then ran clear_vir01 from the command line to confirm that subsequent runs would be OK. It ran successfully.)

    4. If you find that the disk continues to fill up after your deletion of files, see the article "Our /exlibris file system keeps filling up. How can we tell what's causing this?" (KB 16384-13810).

    5. In regard to ongoing maintenance, see the System Administrator's Guide-Preventative Maintenance document, including the util x functions.
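
    The archive-directory check from Note 2 can also be scripted. A minimal sketch: the oradata/*/arch layout comes from the article, while report_arch_usage and the scratch tree below are hypothetical stand-ins for illustration:

    ```shell
    # Report space used by each Oracle archive-log directory under an
    # oradata root, smallest first (mirrors "du -ks | sort -n" above).
    report_arch_usage() {
        du -ks "$1"/*/arch 2>/dev/null | sort -n
    }

    # Scratch stand-in for /exlibris/oradata:
    oradata=$(mktemp -d)
    mkdir -p "$oradata/aleph1/arch"
    dd if=/dev/zero of="$oradata/aleph1/arch/log1.arc" bs=1024 count=512 2>/dev/null
    report_arch_usage "$oradata"
    ```

    Remember that, per Note 2, any large files this reveals should be left for Oracle/RMAN or a DBA to remove.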

    • Article last edited: 24-Mar-2018