
    GUI is non-responsive across all modules

    • Article Type: General
    • Product: Aleph
    • Product Version: 20

    Description:
    Aleph is non-responsive across all modules. This seems similar to the outage caused by an Oracle space error on August 16 (SI 16384-205104).

    Here is what I have found thus far using util O/3/1:

    Thread 1 advanced to log sequence 12237 (LGWR switch)
    Current log# 2 seq# 12237 mem# 0: /exlibris/oradata/aleph20/aleph20_redo02.log
    Tue Sep 07 11:11:11 2010
    Process m000 died, see its trace file
    Tue Sep 07 11:11:11 2010
    Trace dumping is performing id=[cdmp_20100907111111]
    Tue Sep 07 11:11:11 2010
    OS Audit file could not be created; failing after 5 retries
    opidcl aborting process unknown ospid (13788_47387984891072) due to error ORA-9925
    Tue Sep 07 11:11:11 2010
    OS Audit file could not be created; failing after 5 retries
    opidcl aborting process unknown ospid (14234_47846063152320) due to error ORA-9925
    Tue Sep 07 11:15:47 2010
    OS Audit file could not be created; failing after 5 retries
    opidcl aborting process unknown ospid (21031_47196114218176) due to error ORA-9925
    Tue Sep 07 11:15:47 2010
    Trace dumping is performing id=[cdmp_20100907111547]
    Tue Sep 07 11:15:47 2010
    OS Audit file could not be created; failing after 5 retries
    opidcl aborting process unknown ospid (20993_47899087064256) due to error ORA-9925
    Tue Sep 07 11:36:17 2010
    Errors in file /exlibris/app/oracle/diag/rdbms/aleph20/aleph20/trace/aleph20_psp0_2957.trc:
    ORA-27300: OS system dependent operation:pipe failed with status: 23
    ORA-27301: OS failure message: Too many open files in system
    ORA-27302: failure occurred at: skgpspawn2
    Tue Sep 07 11:36:18 2010
    Process m000 died, see its trace file

    Resolution:
    I see you were getting the "Too many open files in system" error in Oracle.

    Following KB 16384-28193, I see the following on your (v20) server (the commands used to check these values are sketched after the list):

    1. /proc/sys/fs/file-max = 32767

    2. fs.file-max in /etc/sysctl.conf = 6553600

    3. "nofile" (number of files) line in /etc/security/limits.conf: no such line.

    Based on this, I suggest the following (a sketch of the commands appears after the list):

    1. Increase /proc/sys/fs/file-max to 65536.

    2. Reduce the fs.file-max in /etc/sysctl.conf to 65536.

    3. Add this line to /etc/security/limits.conf:

    oracle hard nofile 65536
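
    A sketch of how the three changes could be applied, assuming a root shell on the v20 server (your editor and exact file contents may differ):

    > sysctl -w fs.file-max=65536     (updates /proc/sys/fs/file-max immediately)
    > vi /etc/sysctl.conf             (change the fs.file-max line to: fs.file-max = 65536)
    > sysctl -p                       (re-applies /etc/sysctl.conf; this file is also read at boot)
    > vi /etc/security/limits.conf    (add the line: oracle hard nofile 65536)

    Note that /etc/security/limits.conf is only read at login, so the new "nofile" limit will not apply to the running Oracle processes until Oracle is restarted from a fresh oracle session.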


    KBs 16384-28193 and 16384-8401 describe a relationship between the "Too many open files" error and "limit descriptors". Running "limit descriptors" on your server gives a result of 1024:

    > limit descriptors
    descriptors 1024

    This may be too low; I suggest setting it to 32767.
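
    As a sketch of how this could be checked and raised, assuming a csh/tcsh login shell (where "limit" is a built-in) and a hard "nofile" limit high enough to allow the new value:

    > limit descriptors 32767
    > limit descriptors
    descriptors 32767

    Without a sufficiently high hard limit in /etc/security/limits.conf, the shell will refuse to raise the soft limit this far, so the limits.conf change (followed by a fresh login) is the persistent fix.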


    • Article last edited: 10/8/2013