This configuration consists of multiple messaging engines running in a cluster, with each messaging engine restricted to running on one particular server. A workload sharing configuration achieves greater message throughput by spreading the messaging load across multiple servers.
To create this configuration, you add a cluster to a service integration bus; one messaging engine is created automatically. You then add as many further messaging engines as you require to the cluster, for example, one messaging engine for each server in the cluster.
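In wsadmin (Jython), these steps can be sketched as follows. The bus and cluster names are placeholder assumptions, and the exact command parameters should be checked against your product version; this fragment runs only inside a wsadmin session.

```python
# Assumed names: bus 'myBus', cluster 'myCluster'.
# Add the cluster as a bus member; the first messaging engine
# is created automatically at this point.
AdminTask.addSIBusMember(['-bus', 'myBus', '-cluster', 'myCluster'])

# Add a further messaging engine to the same cluster bus member,
# for example one per server in the cluster.
AdminTask.createSIBEngine(['-bus', 'myBus', '-cluster', 'myCluster'])

AdminConfig.save()
```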
You create a core group policy for each messaging engine. Because no failover is required, you configure each policy so that its messaging engine is restricted to one particular server; configuring a Static policy for each messaging engine provides this restriction.
After you create the new policies, use match criteria to associate each policy with the required messaging engine.
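A Static policy with match criteria can also be scripted. The following wsadmin (Jython) sketch assumes the default core group, a cell named myCell, an engine named myCluster.000-myBus, and a core group server entry Node-1\Server-1; all of these names are assumptions that you should replace with values from your own configuration, and the fragment runs only inside a wsadmin session.

```python
# Locate the core group (assumed: DefaultCoreGroup in cell 'myCell').
cg = AdminConfig.getid('/Cell:myCell/CoreGroup:DefaultCoreGroup/')

# Create a Static policy for one messaging engine.
policy = AdminConfig.create('StaticPolicy', cg,
    [['name', 'ME-A policy'],
     ['policyFactory',
      'com.ibm.ws.hamanager.coordinator.policy.impl.StaticPolicyFactory']])

# Match criteria tie the policy to that messaging engine's HA group.
AdminConfig.create('MatchCriteria', policy,
    [['name', 'type'], ['value', 'WSAF_SIB']])
AdminConfig.create('MatchCriteria', policy,
    [['name', 'WSAF_SIB_BUS'], ['value', 'myBus']])
AdminConfig.create('MatchCriteria', policy,
    [['name', 'WSAF_SIB_MESSAGING_ENGINE'],
     ['value', 'myCluster.000-myBus']])

# Restrict the policy to a single static server (assumed core group
# server name: Node-1\Server-1).
server = AdminConfig.getid(
    '/Cell:myCell/CoreGroup:DefaultCoreGroup/CoreGroupServer:Node-1\\Server-1/')
AdminConfig.modify(policy, [['servers', [server]]])

AdminConfig.save()
```

You repeat the policy and match criteria for each messaging engine, naming a different static server each time.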
This type of deployment provides workload sharing through the partitioning of destinations across multiple messaging engines. There is no failover capability because each messaging engine can run on only one server. The impact of a failure is lower than in a simple deployment because if one of the servers or messaging engines in the cluster fails, the remaining messaging engines still have operational destination partitions. However, messages that were being handled by a messaging engine on a failed server are unavailable until that server is restarted.
The following diagram shows a workload sharing configuration with two messaging engines, ME-A and ME-B, with data stores, DS-A and DS-B, respectively. The messaging engines run in a cluster of two servers and share the traffic passing through the destination. If Server-2 fails, ME-B cannot run because it is restricted to that server. However, ME-A continues to run and handles all new traffic through the destination.
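The behavior in the diagram can be illustrated with a small, self-contained Python sketch (a model for illustration only, not WebSphere code): each engine holds one partition of the destination and is pinned to one server, so a server failure strands that engine's partition while the surviving engine accepts all new traffic.

```python
# Illustrative model: a destination partitioned across two messaging
# engines, each pinned to one server with no failover.

class MessagingEngine:
    def __init__(self, name, server):
        self.name = name
        self.server = server
        self.queue = []          # this engine's partition of the destination

    def available(self, failed_servers):
        # With a Static policy, the engine runs only on its own server.
        return self.server not in failed_servers

engines = [MessagingEngine('ME-A', 'Server-1'),
           MessagingEngine('ME-B', 'Server-2')]

def send(message, failed_servers):
    # New traffic is shared among the engines that are still running.
    live = [e for e in engines if e.available(failed_servers)]
    if not live:
        raise RuntimeError('destination unavailable')
    target = live[hash(message) % len(live)]
    target.queue.append(message)
    return target.name

# Normal operation: both partitions can accept traffic.
for m in ['m1', 'm2', 'm3', 'm4']:
    send(m, failed_servers=set())

# Server-2 fails: ME-B's queued messages are stranded until the server
# restarts, but ME-A handles all new traffic.
stranded = list(engines[1].queue)
print(send('m5', failed_servers={'Server-2'}))
```

Because only ME-A is live after the failure, every new message routes to it; the messages already in ME-B's partition remain inaccessible until Server-2 restarts, matching the behavior described above.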
For more information about sharing workload between messaging engines, see Workload sharing.