  • Ex Libris Knowledge Center

    Loading updated bibliographic and authority records after vendor authority control work is complete

    • Product: Voyager
    • Product Version: 8.0.0 and higher
    • Relevant for Installation Type: Multi-Tenant Direct, Dedicated-Direct, Local, TotalCare


    We sent records to a vendor for authority control work, and they are ready to be reloaded into Voyager now. What steps are necessary to do this?


    If you are planning an authority control work project, please open a Case with Support so that we are aware of your plan.


    If you are starting from different assumptions than those outlined below, you may need to adjust the process described accordingly.

    • Incoming records (records in the files from the vendor) have the same record ID as matching records in the database
    • Record ID is desired match point
    • Existing authority records will all be deleted prior to loading new authority records. See How to delete a set of authority records
    • There are no duplicate authority records1 within the new set of authorities to load. (Check with your vendor about the possible presence of duplicate authority records in your file; if they are present, modify your duplicate detection/import profile accordingly.)
    • All authority records will be loaded into the database, and all bibliographic records will be replaced.
    • Any bibliographic records in the file that do not match records in the database should be discarded
    • No change in character encoding of records (MARC21 UTF-8)
    • Import will be run on the server
    • Operator is familiar with Bulk Import parameters (see Technical User's Guide for more information)
    • Record files have been placed on server. (Workflow references example named NewAuths.aut in directory /m1/voyager/xxxdb/local)

    The above are not requirements. They are a set of assumptions guiding the example workflow below. Adjust according to local needs and practices.

    Codes and names used as examples will be referenced in later steps. It is not necessary to use the same names and codes; adjust them to local conventions and practices.
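Before configuring the import, it can help to sanity-check the vendor file already placed on the server. The sketch below is an assumption-laden convenience, not part of the Voyager workflow: the helper name is ours, and the path follows the NewAuths.aut example above. It relies on the fact that MARC21 records end with the record terminator byte 0x1D, so counting those bytes yields a record count to compare against the vendor's stated total.

```shell
# count_marc_records FILE — print the number of MARC records in FILE.
# MARC21 records each end with the record terminator byte 0x1D (octal 035),
# so counting those bytes gives the record count.
count_marc_records() {
  tr -cd '\035' < "$1" | wc -c | tr -d ' '
}

# Example, using the file name from this workflow:
# count_marc_records /m1/voyager/xxxdb/local/NewAuths.aut
```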

    Configure System Administration
    1. Configure duplicate detection and a bulk import rule. 
      1. Bibliographic record matching
        1. System Administration > Cataloging > Bibliographic Duplicate Detection > New
        2. Profile tab
          1. Fill in Profile Name and Profile Code. (Example: Match Bib ID / MBBID)
          2. Duplicate Handling: Replace
          3. Check box for "Discard incoming records that do not match existing records"
          4. Duplicate Replace: 100
          5. Duplicate Warn: 100
        3. Field Definitions tab
          1. BBID, Bibliographic Record ID
          2. Field Weight 100
        4. Click Save
      2. Authority record matching
        1. System Administration > Cataloging > Authority Duplicate Detection > New
        2. Profile tab
          1. Fill in Profile Name and Profile Code. (Example: Match Auth ID / MATID)
          2. Duplicate Handling: Add-Unconditional  (if there are no duplicate authority records)
          3. Duplicate Replace: 100
          4. Duplicate Warn: 100
        3. Field Definitions tab
          1. ATID, Authority Record ID
          2. Field Weight 100
        4. Click Save
    2. Configure Bulk Import Rule
      1. System Administration > Cataloging > Bulk Import Rules > New
      2. Rule Name tab
        1. Fill in Profile Name (Example: Bib-Auth Control Update)
        2. Fill in Profile Code (Example: BACU)
      3. Rules tab
        1. Bib Dup Profile: MBBID
        2. Auth Dup Profile: MATID
        3. Owning Library: {select appropriate}
        4. Expected Character Set Mapping of Imported Records: MARC21 UTF-8
        5. Check box for "Leave OPAC Suppress Unchanged for Replaced and Merged Records"
      4. Profiles > Single MFHD > Load Bib, Auth Only
      5. Click Save.
    Test and run import process on server
    1. Test the import process by importing 10 records from one of the files and checking in Cataloging.
      1. SSH to server
      2. cd /m1/voyager/xxxdb/sbin
      3. Pbulkimport -f/m1/voyager/xxxdb/local/NewAuths.aut -iBACU -b1 -e10
      4. Check log: /m1/voyager/xxxdb/rpt/log.imp.{datestamp}.{timestamp}
      5. Note the record IDs and check in Cataloging to be sure the records were handled as expected.
      6. If there are problems, adjust the rules or duplicate handling and repeat these steps until the outcome is as desired.
      7. Once test is satisfactory, move on to next steps.
    2. Import remainder of file.
      1. SSH to server
      2. cd /m1/voyager/xxxdb/sbin
      3. Pbulkimport -f/m1/voyager/xxxdb/local/NewAuths.aut -iBACU -b11
    3. Repeat for remaining files.
    4. Check results in log files and in Cataloging module.
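The per-file imports in steps 2–3 can be sketched as a loop. This is a hypothetical sketch, not part of the documented workflow: the *.aut glob and the -b1 starting offset are assumptions, and the commands are only echoed so they can be reviewed before running. Remove the echo to execute, and remember that the file used for the 10-record test should resume at -b11 rather than -b1.

```shell
# Dry run: print a Pbulkimport command for each vendor file so the
# invocations can be reviewed before running them. Remove "echo" to execute.
# Assumes each untested file starts at record 1 (-b1); the file already used
# for the 10-record test should instead resume at -b11.
for f in /m1/voyager/xxxdb/local/*.aut; do
  echo /m1/voyager/xxxdb/sbin/Pbulkimport -f"$f" -iBACU -b1
done
```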


    After loading new bibliographic and authority records, open a case with Support and schedule a Full Regen.  See "Additional Information" below for details.

    Additional Information

    1Note that with some vendors, authority records can be duplicated across the import files: the same record appears in both the subject authority file and the name authority file. In this situation you need to use duplicate detection to avoid importing duplicate records. This is a common "gotcha" that you want to avoid.

    The order of record processing (bib records first versus authority records first) does not matter, but avoid gaps between processing steps to minimize unwanted behavior with your headings.

    Consider freezing the creation of new catalog records and the editing of existing records during this process.

    You may need to take into consideration any local "custom" authority records to avoid losing them.

    Note that after loading new bibliographic and authority records, See and See Also cross-references may not work correctly. Running a FULL (or HEADINGS) regen should solve that problem. See: Cross references display in OPAC after adding new auth record with no linked bibs

    Depending on the number of records you are loading, this workflow can put a strain on the server's physical resources (such as disk space).  That is why Support asks you to inform us by opening a ticket if you are planning to do an authority control work project.  We can assist you to make sure this project goes as smoothly as possible.
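As a quick pre-flight check on those physical resources, you can confirm free disk space before starting a large load. This is a sketch under one assumption: /m1 is the conventional Voyager mount point and may differ on your server.

```shell
# Check free disk space before a large load; bulk import output and later
# regens can consume significant space. /m1 is the conventional Voyager
# location; fall back to the full listing if that mount is not present.
df -h /m1 2>/dev/null || df -h
```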

    For large bulk import projects, do not run the bibliographic records through keyword indexing during the bulk import loads. Instead, run regens later.  Authority records are not keyword indexed.


    • Article last edited: 15-Sep-2019