Load Balancer Requirements
This information is not applicable to Primo VE environments. For more details on Primo VE configuration, see Primo VE.
To support fault tolerance, high availability, and better performance, Primo MFE (Multiple Front End) configurations allow load balancing. For more information on Primo topologies, contact Ex Libris Support.
The load balancer (LB) distributes the workload between the active Front Ends (FEs) and redistributes the entire workload to the remaining FEs if an FE fails.
LBs can be hardware-based (such as F5, Cisco, and so forth) or software-based (such as Apache). Primo does not require a specific type of LB, but it must support sticky sessions.
For Load Balancer (LB) environments with multiple front ends (MFE), it is important to configure the LB to send the "X-FORWARDED-FOR" HTTP header. Otherwise, security will assume that all search requests are coming from the LB rather than from the end users, which could trigger the security alarm.
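As an illustration, when Apache is used as the software LB, its proxy modules add the X-FORWARDED-FOR header automatically, and the FE side can be configured with mod_remoteip to treat the header's value as the client address. This is a sketch only; the LB IP address below is a placeholder, not an Ex Libris-supplied value:

```apache
# On the FE's Apache layer: load the module that restores client IPs
LoadModule remoteip_module modules/mod_remoteip.so

# Header that carries the original client IP (added by the LB)
RemoteIPHeader X-Forwarded-For

# Trust the header only when it arrives from the LB itself;
# replace this placeholder with your LB's internal IP address
RemoteIPInternalProxy 10.0.0.1
```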
How Does It Work?
In all topology setups, all FEs are active and respond to incoming requests.
A typical Primo MFE configuration includes two or more Primo Front End (FE) servers, each running on a host-assigned internal IP address and receiving HTTP requests via port 1701.
End users identified by source IPs must be balanced between existing Primo servers based on the Primo servers' computational power. For example, if your environment uses two Primo servers and both have an equal amount of computational power, the load balancer should assign half of the sessions to each server.
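With Apache's mod_proxy_balancer, for example, this ratio can be expressed with the loadfactor parameter on each balancer member. A minimal sketch for two equally powerful FEs (host names are placeholders):

```apache
<Proxy "balancer://primo-fes">
    # Equal loadfactor values: roughly half the sessions go to each FE
    BalancerMember "http://fe1.example.org:1701" loadfactor=1
    BalancerMember "http://fe2.example.org:1701" loadfactor=1
</Proxy>
```

If one server has, say, twice the computational power, giving it loadfactor=2 against the other's loadfactor=1 shifts two thirds of the load to it.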
Each FE must be running a PDS server, but only one of them is active at a time.
The following figures illustrate the failover process when an LB is used in an MFE configuration. Note that PDS2 will remain active after the restoration of FE1/PDS1.

Primo Load Balancing Diagram

Front End Failure - Load Balancer

Front End Restoration - Load Balancer
Load Balancer Setup
LB setup varies between the different types of LBs. Make sure that all of the FEs and the PDS are defined correctly in the LB. To support failover of the PDS server, configure the pds_url and pds_internal_url parameters in the Back Office to point to the LB address.
Sticky Sessions
The sticky session feature enables the load balancer to bind a user's session to a specific application instance so that all requests coming from the user during the session are sent to the same application instance.
Primo requires this feature to be activated on the LB so that all of the HTTP requests belonging to a user’s session are routed to the same server.
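For a software LB such as Apache, sticky sessions can be sketched with mod_proxy_balancer's route/stickysession mechanism; the host names and path below are placeholders, not Primo-mandated values:

```apache
# Tag each response with a cookie naming the FE that served it
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://primo-fes">
    BalancerMember "http://fe1.example.org:1701" route=fe1
    BalancerMember "http://fe2.example.org:1701" route=fe2
    # Route follow-up requests to the FE named in the cookie
    ProxySet stickysession=ROUTEID
</Proxy>

ProxyPass        "/primo_library" "balancer://primo-fes/primo_library"
ProxyPassReverse "/primo_library" "balancer://primo-fes/primo_library"
```

Hardware LBs (F5, Cisco, and so forth) offer equivalent cookie- or source-IP-based persistence profiles; consult the vendor documentation for the specific setting.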
Removing Servers for Maintenance Purposes
Any of the Primo FE servers can be removed from the load balancing group for maintenance purposes.
The load balancing group must be reconfigured so that the load balancer does not continue to send requests to the removed server.
Removing one or more FE servers decreases Primo performance.
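On an Apache-based LB, for instance, a member can be taken out of rotation by marking it disabled (status=D) in the configuration, or at runtime through the balancer-manager interface. This is a sketch with placeholder host names:

```apache
<Proxy "balancer://primo-fes">
    BalancerMember "http://fe1.example.org:1701" route=fe1
    # Disabled for maintenance: the LB stops sending requests to fe2
    BalancerMember "http://fe2.example.org:1701" route=fe2 status=D
</Proxy>
```

Removing the status=D flag (or re-enabling the member in balancer-manager) returns the server to the load balancing group.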
Load Balancer Monitor Guidelines
Ex Libris recommends that your load balancer provide a monitor utility that gathers the following statistics for each server in each group:
- Status
- Number of requests performed
- Cumulative size of the data sent
- Cumulative size of the data received
- Current connections (to illustrate the trend of successful connections over time)
Load Balancer Test Case
The following figure shows an MFE load balancing test case performed by Ex Libris:

Load Balancer Test Case
- The load balancer's brand, model, type, configuration, and infrastructure setup are entirely the customer's responsibility.
- The LB must support sticky sessions in order for Primo to work properly.
- The TimeOut parameter is necessary for remote search.
- Ex Libris recommends using a hardware load balancer for the Production environment.