
Refreshing Test Server Data Using Oracle Data Pump

    • Article Type: General
    • Product: Aleph
    • Product Version: 20, 21, 22, 23

    Desired Outcome / Goal:
    Use Oracle Data Pump to transfer data from a Production server to a Test server.

    Procedure:
    The Oracle Data Pump utility enables faster data export/import than the “old” export/import scripts. It was introduced in Aleph 18 (rep_change 1946) and Aleph 19 (rep_change 532).
    Implementation note for the rep_change: since the scripts use Java routines, a new permission must be granted:
    > s+ ALEPH_ADMIN
    SQL> grant JAVA_ADMIN to aleph_admin WITH ADMIN OPTION;

    SQL> quit;
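
    * To confirm the grant took effect, you can list the roles granted to the current user via Oracle's standard USER_ROLE_PRIVS view (a quick sanity check, not part of the rep_change itself):

    > s+ ALEPH_ADMIN
    SQL> -- JAVA_ADMIN should appear in the output
    SQL> select granted_role from user_role_privs;

    SQL> quit;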

    Note: the following presumes that the Prod and Test instances are on different servers with the same $ALEPH_VERSION($ALEPH_COPY), e.g. "22(1)". If they are on the same server, they will have different $ALEPH_COPY values and the paths below will need to be adjusted accordingly.
    On the (Production) source server:
    > cd $aleph_proc
    > csh -f oracle_expdp_aleph_libs >& $alephe_scratch/oracle_expdp_aleph_libs.date.log &
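
    * Since the export runs in the background, you can follow its progress by tailing the log named on the command line above (a minimal monitoring sketch):

    > tail -f $alephe_scratch/oracle_expdp_aleph_libs.date.log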

    * A log file for the export will be written in $alephe_scratch.
    * The export files will be found in each library under the $data_files/dpdir directory.  Example:
           /exlibris/aleph/u22_1/abc01/files/dpdir:
           -rwxrwxrwx 1 aleph aleph 72299253  Jan 22 12:56 ABC0101.dmp.gz*
           -rwxrwxrwx 1 aleph aleph 781684736 Jan 22 12:56 ABC0102.dmp*
           -rwxrwxrwx 1 oracle  dba 4737      Jan 22 12:56 expABC01.log*
           -rw------- 1 aleph aleph 46710784  Jan 22 12:57 ABC0102.dmp.gz
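
    * Before packing the dump files for transfer, it can be worth confirming that each library produced its files and that the Data Pump logs contain no Oracle errors. A hedged sketch, assuming the dpdir layout shown above ("ORA-" is the standard Oracle error prefix):

    > cd /exlibris/aleph/u22_1
    > ls -l */files/dpdir/*.dmp*
    > grep -il "ORA-" */files/dpdir/exp*.log

    * The grep should return no file names; any hit warrants a look at that log before proceeding.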

    > cd /exlibris/aleph/u22_1

    > tar -cvf  dpdir.tar  */files/dpdir/*
    > sftp <target server>

    sftp> cd /exlibris/aleph/u22_1

    sftp> put  dpdir.tar
    * Then, in a separate session on the (Test) target server:

    > cd /exlibris/aleph/u22_1

    > tar -xvf  dpdir.tar
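
    * To confirm the transfer and extraction were clean, you can compare checksums of the tar file on the two servers and list the unpacked directories (a simple sketch using the standard cksum utility; not part of the original procedure):

    > cksum dpdir.tar
    > ls -ld */files/dpdir

    * The cksum output should match on source and target.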

    * The import files will then be found in each library under the $data_files/dpdir directory.

    * Then run aleph_shutdown.

    * Then grant the JAVA_ADMIN permission on the target server as well:

    > s+ ALEPH_ADMIN
    SQL> grant JAVA_ADMIN to aleph_admin WITH ADMIN OPTION;

    SQL> quit;

    > cd $aleph_proc
    > csh -f oracle_impdp_aleph_libs yes >& $alephe_scratch/oracle_impdp_aleph_libs.date.log &
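
    * As with the export, the import runs in the background. You can follow it and, once it finishes, scan for Oracle errors (a hedged sketch; some "ORA-" lines, such as "object already exists" messages, may be harmless):

    > tail -f $alephe_scratch/oracle_impdp_aleph_libs.date.log
    > grep "ORA-" $alephe_scratch/oracle_impdp_aleph_libs.date.log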

    * A log file for the import will be written in $alephe_scratch.

    * Then run aleph_startup.
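
    * As a quick post-startup sanity check, you can compare a table's row count on Test against Production. The sketch below uses the Z00 document table of a hypothetical ABC01 library; substitute your own library code:

    > s+ ABC01
    SQL> select count(*) from z00;

    SQL> quit;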

    Note: The oracle_expdp_aleph_libs proc includes the USMnn demo library tables in addition to the local ABCnn tables. These usually export and import without problems; if they do not, your only concern should be the success of the ABCnn tables.

    Additional Information

    1) See also the document How_to_transfer_configuration_tables_and_Oracle_tables_from_server_A_to_server_B_20101221.doc (in Additional_How_To_Presentations_from_Support), which also discusses the transfer of the u-tree non-Oracle data.
    2) In our experience, a database of 1 million bibliographic records takes about half an hour to export and another half hour to import. The Aleph system does not have to be down while the export runs, but data updates made during the export may not be transferred to the target, resulting in inconsistencies. Such inconsistencies are acceptable for a Test refresh, but not for a Production transfer.

    Additional words for searching: clone, cloning

    Category: System Management (500)


    • Article last edited: 24-Jun-2016