WebSphere Application Server Version 6.1.x Feature Pack for Web Services
Operating Systems: AIX, HP-UX, i5/OS, Linux, Solaris, Windows, z/OS

Highly available messaging engines with workload sharing configuration

This configuration consists of multiple messaging engines running in a cluster, with each messaging engine able to fail over to one or more alternative servers.

This configuration can be achieved by adding a cluster to a service integration bus. This automatically creates one messaging engine; you then add to the cluster any further messaging engines that you require. The default policy will allow the messaging engines to fail over between servers in the cluster, making the messaging engines highly available. You can optionally give each messaging engine a preference for one or more servers, by creating a specific policy to which you add preferred servers. You can further alter the policies to control the availability characteristics of each messaging engine, as described in Configuring a policy for messaging engines.
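For example, the following wsadmin (Jython) sketch adds a cluster as a bus member, which creates the first messaging engine, and then adds a second messaging engine. The bus and cluster names are illustrative assumptions, not values defined by this topic.

# Hypothetical names; substitute your own bus and cluster
bus = 'MyBus'
cluster = 'MyCluster'

# Adding the cluster to the bus automatically creates the first messaging engine
AdminTask.addSIBusMember('[-bus %s -cluster %s]' % (bus, cluster))

# Each additional createSIBEngine call adds one more messaging engine to the cluster
AdminTask.createSIBEngine('[-bus %s -cluster %s]' % (bus, cluster))

AdminConfig.save()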

This type of configuration provides high availability, because each messaging engine can fail over to another server if its host server fails, and workload sharing, because there are multiple messaging engines to share the traffic through the destination.

The following diagram shows an example configuration of this type. There are two messaging engines, ME-A and ME-B, with data stores DS-A and DS-B, running in a cluster of three servers and sharing the traffic through a destination. When Server-2 fails, ME-B fails over and continues to run on Server-3. Both messaging engines continue to process the traffic through the destination.

Figure 1. Highly available messaging engines with workload sharing configuration before loss of Server-2. ME-A is running in Server-1 and ME-B is running in Server-2.
Figure 2. Highly available messaging engines with workload sharing configuration after loss of Server-2. Server-2 has failed. ME-A is still running in Server-1, but ME-B has moved to Server-3.

Each server in the cluster contains the definition of each messaging engine that can run on it, and creates an instance of that messaging engine so that the instance is ready to be activated if another server fails.

Each messaging engine can be active in only one server at a time, with the other servers acting as standby servers for that messaging engine. At any one time on each server, zero, one or two messaging engine instances might be active. For example, in the previous diagram, Server-1 has one of its two messaging engine instances active. If ME-B is then failed over to Server-1 rather than Server-3, both the messaging engine instances on Server-1 will be active at the same time.
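One way to see where each messaging engine instance is currently active is to query the SIBMessagingEngine MBeans from wsadmin; an engine registers its MBean only on the server where it is active. This is an illustrative sketch rather than part of the configuration steps in this topic.

# Print the object name of every active messaging engine instance in the cell,
# including the process (server) that is hosting it
for me in AdminControl.queryNames('type=SIBMessagingEngine,*').splitlines():
    print me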

The following configuration is an example that provides high availability and workload sharing, where continued message transmission is a priority. There are two messaging engines, ME-A and ME-B, with data stores DS-A and DS-B respectively, running in a cluster of three servers and sharing the traffic through a destination. In normal operation, ME-A runs on Server-1 and ME-B runs on Server-2. Server-3 provides a failover location for both messaging engines. This is known as an "N+1" configuration, because there is one spare server.

The preferred server list for ME-A is Server-1 followed by Server-3, and the preferred server list for ME-B is Server-2 followed by Server-3. The advantage of this configuration is that if one server fails, each remaining server hosts only one messaging engine. The disadvantage of this configuration is the expense of the spare server.
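As a sketch only, a policy of this kind for ME-A can be created from wsadmin. The core group, policy name, and match criteria values below are assumptions based on the example names in this topic, and the messaging engine name follows the usual <cluster>.<number>-<bus> pattern. A similar policy would be created for ME-B with Server-2 listed first.

# Locate the core group that contains the cluster members
cg = AdminConfig.getid('/Cell:MyCell/CoreGroup:DefaultCoreGroup/')

# Create a "One of N" policy for ME-A with fail back enabled
policy = AdminConfig.create('OneOfNPolicy', cg,
  [['name', 'ME-A policy'],
   ['policyFactory', 'com.ibm.ws.hamanager.coordinator.policy.impl.OneOfNPolicyFactory'],
   ['failback', 'true'],
   ['preferredOnly', 'false']])

# Match criteria that associate the policy with ME-A's HAManager group
AdminConfig.create('MatchCriteria', policy, [['name', 'type'], ['value', 'WSAF_SIB']])
AdminConfig.create('MatchCriteria', policy, [['name', 'WSAF_SIB_MESSAGING_ENGINE'], ['value', 'MyCluster.000-MyBus']])

# Preferred servers are references to CoreGroupServer objects in the same core group;
# list them, then set them on the policy with Server-1 first and Server-3 second, for example:
# AdminConfig.modify(policy, [['preferredServers', [server1Id, server3Id]]])
print AdminConfig.list('CoreGroupServer', cg)

AdminConfig.save()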

In a configuration that provides high availability and workload sharing, the data store for the messaging engine must be accessible by all the servers in the cluster. The means of achieving this depends on the data store topology used. If you are using a networked database server, you need to ensure that it is accessible from all servers in the cluster that may run the messaging engine. Alternatively, you could use an external high availability framework to manage the database using a shared disk. To make configuration easier, use the same data store topology for each messaging engine.
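As an illustration (the JNDI name and schema name here are assumptions), you can inspect and adjust the data store settings of each messaging engine from wsadmin so that they reference a data source that every cluster member can reach:

# List the messaging engine data store configurations and the data source each one uses
for ds in AdminConfig.list('SIBDatastore').splitlines():
    print ds, AdminConfig.showAttribute(ds, 'dataSourceName')

# Point a particular data store at a data source backed by a networked database, for example:
# AdminConfig.modify(dsId, [['dataSourceName', 'jdbc/ME-A-DataSource'], ['schemaName', 'MEASCHEMA']])
# AdminConfig.save()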

You can specify one or more preferred servers for each messaging engine, as mentioned earlier. Whenever a preferred server is available, the HAManager runs the messaging engine on it. When no preferred server is available, the messaging engine runs on an alternative server in the cluster. When a preferred server becomes available again, the HAManager moves the messaging engine back to it if, and only if, the Fail back option is set on the relevant policy.
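For instance, to enable the Fail back option on an existing policy from wsadmin (the core group and policy name are the same assumed values as in the earlier sketch):

# Find the policy by name within the core group and enable fail back
cg = AdminConfig.getid('/Cell:MyCell/CoreGroup:DefaultCoreGroup/')
for p in AdminConfig.list('OneOfNPolicy', cg).splitlines():
    if AdminConfig.showAttribute(p, 'name') == 'ME-A policy':
        AdminConfig.modify(p, [['failback', 'true']])
AdminConfig.save()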


Concept topic



Last updated: 27 November 2008
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.pmc.wsfep.multiplatform.doc/concepts/cjt0011_.html

Copyright IBM Corporation 2004, 2008. All Rights Reserved.