Configuration for workload sharing with high availability

This configuration consists of multiple messaging engines running in a cluster, where each messaging engine can fail over to one or more alternative servers.

To create this configuration, you add a cluster to a service integration bus. One messaging engine is created automatically; you then add the further messaging engines that you require to the cluster. A typical configuration has one messaging engine for each server in the cluster. Create a new "One of N" policy for each messaging engine in the cluster. Configure the policies so that one messaging engine runs on each server and so that there is high availability behavior; for example, each messaging engine can fail over to one designated server. After you create the new policies, use the match criteria to associate each policy with the required messaging engine.
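For illustration, the following wsadmin (Jython) sketch outlines these first steps. The bus name MyBus and the cluster name MyCluster are placeholder assumptions, and the commands assume that the bus and the cluster already exist; adapt the parameters, including any data store settings, to your own cell.

# Add the cluster to the bus as a bus member; the first messaging engine
# is created automatically.
AdminTask.addSIBusMember('[-bus MyBus -cluster MyCluster]')

# Add a further messaging engine to the cluster bus member. Repeat this step
# until there is one messaging engine for each server in the cluster.
AdminTask.createSIBEngine('[-bus MyBus -cluster MyCluster]')

# Save the configuration changes.
AdminConfig.save()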

The default service integration policy, "Default SIBus Policy", does not provide this behavior, so you must create new policies. Also, it is not advisable to change the default service integration policy because those changes will affect all messaging engines that the policy manages.

This type of configuration provides high availability, because each messaging engine can fail over if a server becomes unavailable. The configuration provides workload sharing because there are multiple messaging engines to share the traffic through the destination.

The following diagram shows an example configuration of this type. There are two messaging engines, ME-A and ME-B, with data stores DS-A and DS-B, running in a cluster of three servers and sharing the traffic through a destination. When Server-2 fails, ME-B fails over and continues to run on Server-3. Both messaging engines continue to process the traffic through the destination.

Figure 1. Highly available messaging engines with workload sharing configuration before loss of Server-2. ME-A is running in Server-1 and ME-B is running in Server-2.
Figure 2. Highly available messaging engines with workload sharing configuration after loss of Server-2. Server-2 has failed. ME-A is still running in Server-1, but ME-B has been moved to Server-3.

Each server in the cluster contains the definition of each messaging engine that can run on it, and creates an instance of that messaging engine so that the instance is ready to be activated if another server fails.

Each messaging engine can be active in only one server at a time, with the other servers acting as standby servers for that messaging engine. At any one time, zero, one, or two messaging engine instances might be active on each server. For example, in the previous diagram, Server-1 has one of its two messaging engine instances active. If ME-B then fails over to Server-1 rather than Server-3, both messaging engine instances on Server-1 are active at the same time.

One example configuration is where each messaging engine runs on a specific server and can fail over to one other specified server in the cluster. Each server can host up to two messaging engines, such that there is an ordered circular relationship between the servers. You use the preferred servers list and the Preferred servers only option for each policy to set this behavior.

Table 1. Example high availability and workload sharing configuration

Messaging engine    Preferred servers
ME-A                Server-1, Server-2
ME-B                Server-2, Server-3
ME-C                Server-3, Server-4
ME-D                Server-4, Server-1
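As an illustration, a "One of N" policy for ME-A in Table 1 could be created with a wsadmin (Jython) script along the lines of the following sketch. The cell name MyCell, the core group DefaultCoreGroup, the bus name MyBus, the policy name, the messaging engine name MyCluster.000-MyBus, and the syntax for setting the preferredServers reference list are all assumptions; verify the actual messaging engine name, the match criteria values, and the policy factory class against an existing policy in your configuration before using anything like this.

# Locate the core group that contains the cluster members.
coreGroup = AdminConfig.getid('/Cell:MyCell/CoreGroup:DefaultCoreGroup/')

# Create the "One of N" policy. In this sketch, Preferred servers only is
# enabled and Fail back is left disabled.
policy = AdminConfig.create('OneOfNPolicy', coreGroup,
    [['name', 'ME-A policy'],
     ['policyFactory',
      'com.ibm.ws.hamanager.coordinator.policy.impl.OneOfNPolicyFactory'],
     ['preferredOnly', 'true'],
     ['failback', 'false']])

# Match criteria associate the policy with the ME-A high availability group.
AdminConfig.create('MatchCriteria', policy,
    [['name', 'type'], ['value', 'WSAF_SIB']])
AdminConfig.create('MatchCriteria', policy,
    [['name', 'WSAF_SIB_MESSAGING_ENGINE'], ['value', 'MyCluster.000-MyBus']])
AdminConfig.create('MatchCriteria', policy,
    [['name', 'WSAF_SIB_BUS'], ['value', 'MyBus']])

# The preferred servers list holds references to CoreGroupServer objects,
# in order of preference: Server-1 first, then Server-2.
cgServers = AdminConfig.list('CoreGroupServer', coreGroup).splitlines()
preferredIds = []
for serverName in ['Server-1', 'Server-2']:
    for cgServer in cgServers:
        if AdminConfig.showAttribute(cgServer, 'serverName') == serverName:
            preferredIds.append(cgServer)
AdminConfig.modify(policy, [['preferredServers', preferredIds]])

AdminConfig.save()

When you create a policy of this type in the administrative console, the policy factory value is filled in for you; if you script the creation and are unsure of the value, copy it from an existing policy.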

The following example configuration provides high availability and workload sharing when message transmission is a priority. There are two messaging engines, ME-A and ME-B, with data stores DS-A and DS-B, respectively, running in a cluster of three servers and sharing the traffic through a destination. In normal operation, ME-A runs on Server-1 and ME-B runs on Server-2. Server-3 provides a failover location for both messaging engines. This is known as an "N+1" configuration, because there is one spare server.

The advantage of this configuration is that if one server fails, each remaining server hosts only one messaging engine. The disadvantage of this configuration is the expense of the spare server.

Table 2. Example "N+1" high availability and workload sharing configuration
Messaging engine Preferred servers
ME-A Server-1

Server-3

ME-B Server-2

Server-3
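The preference lists in Table 2 translate directly into the preferredServers attribute of each policy. The following wsadmin (Jython) sketch assumes that the two policies already exist and are named "ME-A policy" and "ME-B policy", as in the earlier sketch; the policy names, the cell and core group names, and the reference-list syntax are assumptions to verify.

# Preferred servers for the "N+1" layout in Table 2, in order of preference.
preferred = {'ME-A policy': ['Server-1', 'Server-3'],
             'ME-B policy': ['Server-2', 'Server-3']}

coreGroup = AdminConfig.getid('/Cell:MyCell/CoreGroup:DefaultCoreGroup/')
cgServers = AdminConfig.list('CoreGroupServer', coreGroup).splitlines()

for policy in AdminConfig.list('OneOfNPolicy', coreGroup).splitlines():
    policyName = AdminConfig.showAttribute(policy, 'name')
    if policyName in preferred.keys():
        ids = []
        for serverName in preferred[policyName]:
            for cgServer in cgServers:
                if AdminConfig.showAttribute(cgServer, 'serverName') == serverName:
                    ids.append(cgServer)
        AdminConfig.modify(policy, [['preferredServers', ids]])

AdminConfig.save()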

In a configuration that provides high availability and workload sharing, the data store for each messaging engine must be accessible to all the servers in the cluster. How you achieve this depends on the data store topology that you use. If you use a networked database server, ensure that it is accessible from every server in the cluster that might run the messaging engine. Alternatively, you can use an external high availability framework to manage the database by using a shared disk. To make configuration easier, use the same data store topology for each messaging engine.
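One way to review this part of the configuration is to list the data source that each messaging engine's data store refers to, and confirm that it points at a database that every cluster member can reach. The following wsadmin (Jython) sketch does this; the SIBMessagingEngine and SIBDatastore type names and the dataSourceName attribute are assumptions to verify against your installation.

# Print each messaging engine and the JNDI name of its data store's data source.
for me in AdminConfig.list('SIBMessagingEngine').splitlines():
    meName = AdminConfig.showAttribute(me, 'name')
    for ds in AdminConfig.list('SIBDatastore', me).splitlines():
        print meName, '->', AdminConfig.showAttribute(ds, 'dataSourceName')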

You can specify one or more preferred servers for each messaging engine, as mentioned earlier. Whenever a preferred server is available, the high availability manager (HAManager) runs the messaging engine on that server. When no preferred server is available, the messaging engine runs on an alternative server, provided that the Preferred servers only option is not set on the policy. When a preferred server becomes available again, the HAManager moves the messaging engine back to it if, and only if, the Fail back option is set on the relevant policy.
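For example, the following wsadmin (Jython) sketch enables the Fail back option on the ME-A policy from the earlier sketches and clears Preferred servers only, so that the messaging engine can run on another server when no preferred server is available; the policy name and the attribute names are assumptions to verify.

# Enable fail back and allow non-preferred servers on the ME-A policy.
for policy in AdminConfig.list('OneOfNPolicy').splitlines():
    if AdminConfig.showAttribute(policy, 'name') == 'ME-A policy':
        AdminConfig.modify(policy, [['failback', 'true'],
                                    ['preferredOnly', 'false']])
AdminConfig.save()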




Related concepts
Policies for service integration
Match criteria for service integration
Workload sharing
Service integration high availability and workload sharing configurations
Related tasks
Configuring high availability and workload sharing of service integration
Adding a cluster as a member of a bus
Configuring a policy for messaging engines