
    Hebrew characters stripped from Backstage records, with p_manage_18

    • Article Type: General
    • Product: Aleph
    • Product Version: 16.02

    Description:
Backstage is in the process of creating MARC records for many of our old Hebrew items. I just loaded 5 test records from Backstage, and the Hebrew characters were stripped from the MARC records when I viewed them in the Cataloging module and on our OPAC. The Hebrew is sent to us in parallel fields. We also receive many Hebrew records from RLIN and import them via the RLIN loader. The Hebrew in the RLIN records is also in parallel fields, but when they load via the RLIN loader the Hebrew is transferred to 880 tags. That is fine; I'm just not sure how the loader does this.
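    For context, the 880 handling described above follows the standard MARC 21 linkage convention: the romanized field and its alternate-script parallel each carry a $6 subfield pointing at the other, with a two-digit occurrence number and, on the 880 side, a script identification code ("(2" for Hebrew) plus an "/r" right-to-left orientation flag. A minimal sketch of that pairing (plain Python with fields modeled as simple tuples; the tag, values, and occurrence number are illustrative, not taken from the loader):

```python
def link_parallel_field(tag, romanized_subfields, hebrew_subfields, occurrence):
    """Pair a romanized field with its Hebrew parallel as an 880 field.

    Per the MARC 21 880 convention, each side carries a $6 subfield:
    the regular field points at 880-NN, and the 880 points back at
    TAG-NN plus a script identification code ("(2" = Hebrew) and an
    "/r" right-to-left flag. Fields are (tag, [(code, value), ...]).
    """
    occ = f"{occurrence:02d}"
    regular = (tag, [("6", f"880-{occ}")] + romanized_subfields)
    parallel = ("880", [("6", f"{tag}-{occ}/(2/r")] + hebrew_subfields)
    return regular, parallel

# Hypothetical example: a romanized 245 and its Hebrew parallel.
reg, par = link_parallel_field(
    "245",
    [("a", "Toldot ha-Yehudim")],
    [("a", "תולדות היהודים")],
    occurrence=1,
)
print(reg)  # ('245', [('6', '880-01'), ('a', 'Toldot ha-Yehudim')])
print(par)  # ('880', [('6', '245-01/(2/r'), ('a', 'תולדות היהודים')])
```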

    Resolution:
    I FTP'd the sample (Unicode) files from Backstage and loaded them into one of our test databases using the native ALEPH procedures, specifying 'None' for Character Conversion. I am getting the same results as you: my Cataloging client crashes.

    I believe I mentioned that we have had good success loading MARC data with Unicode encoding directly into ALEPH with no character conversion being done. (Specifically, from OCLC.) I assume this indicates one of two things: either the Backstage data contains alternate encodings that ALEPH isn't anticipating, or it contains invalid encodings.
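    One way to distinguish those two cases locally is to check whether the raw records are even valid UTF-8 before any conversion is attempted. A minimal sketch (the file name is hypothetical; ISO 2709 records end with the 0x1D record terminator):

```python
# Scan a raw MARC (ISO 2709) file and report records that are not valid UTF-8.
with open("backstage_sample.mrc", "rb") as fh:  # hypothetical file name
    data = fh.read()

# Records in ISO 2709 are terminated by the 0x1D byte.
for i, raw in enumerate(data.split(b"\x1d"), start=1):
    if not raw.strip():
        continue  # trailing empty chunk after the last terminator
    try:
        raw.decode("utf-8")
    except UnicodeDecodeError as err:
        print(f"record {i}: invalid UTF-8 at byte {err.start}: "
              f"{raw[err.start:err.start + 8]!r}")
```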

    In any case, I think the best course of action is to have Backstage encode the data in MARC8 and to use the MARC8_to_UTF character conversion routine.
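    As an optional local pre-check (this relies on the third-party pymarc library rather than anything in ALEPH, assumes a recent pymarc where unparseable records come back as None with the error kept in reader.current_exception, and the file name is hypothetical), one could confirm that the MARC8 file converts to Unicode cleanly before running the server-side MARC8_to_UTF routine:

```python
from pymarc import MARCReader

# Let pymarc perform the MARC-8 -> Unicode mapping (to_unicode=True) and
# report any record it cannot parse or convert; pymarc yields None for
# such records and stores the error in reader.current_exception.
with open("backstage_marc8.mrc", "rb") as fh:  # hypothetical file name
    reader = MARCReader(fh, to_unicode=True)
    for n, record in enumerate(reader, start=1):
        if record is None:
            print(f"record {n}: failed -> {reader.current_exception}")
        else:
            print(f"record {n}: converted OK")
```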

    [From site:] That's fine. Backstage can send the files in MARC8 format, and the Hebrew comes through with no problem.


    • Article last edited: 10/8/2013