Configuration for workload sharing or scalability

This configuration consists of multiple messaging engines running in a cluster, with each messaging engine restricted to running on one particular server. A workload sharing configuration achieves greater throughput of messages by spreading the messaging load across multiple servers.

To create this configuration, you add a cluster to a service integration bus; one messaging engine is created automatically. You then add the further messaging engines that you require to the cluster, for example, one messaging engine for each server in the cluster.
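
For example, the following wsadmin (Jython) sketch adds a cluster as a bus member and then creates one additional messaging engine. The bus name MyBus and cluster name MyCluster are illustrative assumptions, not values from this topic.

  # Assumption: a bus named MyBus and a cluster named MyCluster already exist.
  # Adding the cluster to the bus creates the first messaging engine automatically.
  AdminTask.addSIBusMember('[-bus MyBus -cluster MyCluster]')

  # Add a further messaging engine to the same cluster bus member,
  # for example one engine for each server in the cluster.
  AdminTask.createSIBEngine('[-bus MyBus -cluster MyCluster]')

  AdminConfig.save()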

You create a core group policy for each messaging engine. Because no failover is required, you configure each policy so that its messaging engine is restricted to a particular server. To apply this restriction, you configure a Static policy for each messaging engine.
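
A minimal wsadmin (Jython) sketch of creating such a Static policy follows. It assumes the default core group and example names (MyCell, Server-1, "ME-A static policy"), and it assumes the standard static policy factory class and the StaticPolicy configuration attributes; verify these against your own release before using it.

  # Assumptions: default core group, example cell and server names.
  cg = AdminConfig.getid('/Cell:MyCell/CoreGroup:DefaultCoreGroup/')

  # Find the core group server entry for the server that is to host ME-A.
  server1 = None
  for entry in AdminConfig.list('CoreGroupServer', cg).splitlines():
      if entry.find('Server-1') != -1:
          server1 = entry

  # Create a Static policy for the messaging engine.
  policy = AdminConfig.create('StaticPolicy', cg,
      [['name', 'ME-A static policy'],
       ['policyFactory',
        'com.ibm.ws.hamanager.coordinator.policy.impl.StaticPolicyFactory']])

  # Assumption: the StaticPolicy 'servers' attribute accepts a list of
  # CoreGroupServer config IDs; this restricts the policy to Server-1.
  AdminConfig.modify(policy, [['servers', [server1]]])

  AdminConfig.save()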

After you create the new policies, use match criteria to associate each policy with its messaging engine.
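
For example, the following wsadmin (Jython) fragment adds match criteria to the policy created above so that it matches only one messaging engine. The WSAF_SIB names are the standard service integration match criteria; the engine name MyCluster.000-MyBus and bus name MyBus are illustrative assumptions, as is the use of the three-argument AdminConfig.create form for the MatchCriteria type.

  # Assumption: 'policy' is the Static policy config ID from the previous step,
  # and the engine name follows the cluster bus member pattern <cluster>.000-<bus>.
  AdminConfig.create('MatchCriteria', policy,
      [['name', 'type'], ['value', 'WSAF_SIB']])
  AdminConfig.create('MatchCriteria', policy,
      [['name', 'WSAF_SIB_MESSAGING_ENGINE'], ['value', 'MyCluster.000-MyBus']])
  AdminConfig.create('MatchCriteria', policy,
      [['name', 'WSAF_SIB_BUS'], ['value', 'MyBus']])

  AdminConfig.save()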

This type of deployment provides workload sharing through the partitioning of destinations across multiple messaging engines. There is no failover capability because each messaging engine can run on only one server. The impact of a failure is lower than in a simple deployment because if one of the servers or messaging engines in the cluster fails, the remaining messaging engines still have operational destination partitions. However, messages that are handled by a messaging engine in a failed server are unavailable until the server is restarted.
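
As a sketch of how destinations become partitioned, a queue that is assigned to the cluster bus member is divided into one partition per messaging engine in that cluster. The names below are example assumptions, consistent with the earlier fragments.

  # Assumption: bus MyBus has the cluster MyCluster as a member (as above).
  # Assigning the queue to the cluster partitions it across the cluster's
  # messaging engines, for example ME-A and ME-B.
  AdminTask.createSIBDestination('[-bus MyBus -name WorkQueue -type Queue -cluster MyCluster]')
  AdminConfig.save()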

The following diagrams show a workload sharing configuration with two messaging engines, ME-A and ME-B, with data stores DS-A and DS-B, respectively. The messaging engines run in a cluster of two servers and share the traffic passing through the destination. If Server-2 fails, ME-B cannot run, because it is restricted to that server. However, ME-A continues to run and handles all new traffic through the destination.

Figure 1. Workload sharing configuration before loss of Server-2: ME-A is running in Server-1 and ME-B is running in Server-2.
Figure 2. Workload sharing configuration after loss of Server-2: ME-A is running in Server-1, but Server-2 has failed, resulting in the loss of ME-B.

For more information about sharing workload between messaging engines, see Workload sharing.




Related concepts
Policies for service integration
Match criteria for service integration
Workload sharing
Workload sharing with queue destinations
Service integration high availability and workload sharing configurations
Related tasks
Configuring high availability and workload sharing of service integration
Adding a cluster as a member of a bus
Configuring a policy for messaging engines