"Filesize limit exceeded" -- **MASTER RECORD**
- Product: Aleph
- Product Version: 20, 21, 22, 23
- Relevant for Installation Type: Dedicated-Direct, Direct, Local, Total Care
Description
When running certain batch jobs, we get the message "Filesize limit exceeded" and messages such as:
Adding line length 3850 exceeds z00_data_len 2909
Adding line length 2200 exceeds z00_data_len 1154
Resolution
The message "Filesize limit exceeded" is a Unix message. Normally it indicates that the size of an output file which an Aleph program is writing has exceeded the system limit. That limit can be seen by entering the command "ulimit -a" on the unix command line, as described at https://www.cyberciti.biz/faq/file-s...-and-solution/ . (This must be done as root.)
In the past there was a problem with output files exceeding 2 GB. (See, for example, the article "p_file_03: "Filesize limit exceeded" for z97 table".) But it seems that changes in version 20 and later have made this much rarer.
These messages *can* occur even though the "ulimit -a" command shows "file size (blocks, -f) unlimited", when there are many jobs with many processes running at the same time, resulting in heavy use of I/O, memory, and CPU.
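If the limit shows as unlimited and the message still appears, it is worth confirming how loaded the server was when the job failed. A minimal check using standard GNU/Linux tools (assumed to be available on the Aleph server):

```
# Load averages over the last 1, 5, and 15 minutes.
uptime

# Memory, swap, and I/O activity, sampled every 5 seconds, 3 times.
vmstat 5 3

# The 10 processes currently using the most CPU.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -11
```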
In such cases, the file /var/log/messages.2 may contain error messages such as "kernel add_flength_seq[12967]: segfault at 00000000ff9c9000 rip 0000000000546fac rsp 00000000ff9ab0ac error 4", which show that the problem is at the system level.
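To look for such kernel messages around the time a job failed, something like the following can be run as root (the log file names and rotation suffixes vary by system):

```
# Search the current and rotated system logs for segfaults
# reported against Aleph batch processes such as add_flength_seq.
grep -i segfault /var/log/messages /var/log/messages.* 2>/dev/null
```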
In general, jobs -- especially multi-process jobs such as the manage-01 or manage-02 indexing jobs -- should *not* be started at the same time, even if they are for different libraries. They should be staggered and spread out over a longer period of time, as sketched below.
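As a simple illustration of staggering (the script names below are placeholders, not actual Aleph job scripts):

```
#!/bin/sh
# Hypothetical wrapper: run two heavy indexing jobs one after the other,
# with a pause in between, instead of launching both simultaneously.
./submit_manage_01_lib1.sh
sleep 3600        # wait an hour before starting the next heavy job
./submit_manage_02_lib2.sh
```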
- Article last edited: 21-Feb-2018