Use this task to compute and configure speed factors for your multiple
tier configuration.
Before you begin
You must have WebSphere® Virtual Enterprise installed, and your applications
must be installed and operational under a workload.
About this task
A speed factor exists for every combination of transaction class,
target web module, and processing tier. The speed factor describes how heavily
a request of the given transaction class, over its lifetime in the given
target module, loads the processing tier. You can define speed factors at
varying levels of granularity, including broader scopes. The autonomic
request flow manager (ARFM) uses speed factors at the level of service class,
deployment target, and processing tier. You can define speed factors at a
variety of levels for any processing tier that is not the target tier or that
is not the one and only processing tier in the target module.
In a configuration that has multiple tiers, the work profiler automatically
computes speed factors for the target tier, which is the tier that communicates
directly with the on demand router (ODR). For any tiers that are deeper than
the target tier, you must define speed factors. If your deployment target
contains both a target tier and a non-target tier, you must configure the speed
factors for both tiers because the work profiler cannot automatically compute
speed factors in that situation. You can compute the speed factor by dividing
the average CPU utilization by the average number of executing requests. This
task describes how to find these values and configure the speed factor for
your multiple tier configuration.
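As a quick illustration, the following Python sketch applies the equation from the procedure to assumed sample numbers; the function name and the values are illustrative only, not measurements from a real configuration.
# Minimal sketch of the speed factor calculation (illustrative values only).
def speed_factor(normalized_cpu_speed, avg_cpu_utilization, avg_concurrent_requests):
    # speed factor = (normalized CPU speed) * (CPU utilization) / (concurrent requests)
    return normalized_cpu_speed * avg_cpu_utilization / avg_concurrent_requests

# Assumed measurements: a tier at 80% CPU utilization that serves an average of
# 12 concurrent requests on hardware with a normalized CPU speed of 1.5.
print(speed_factor(1.5, 0.80, 12))   # about 0.1, the per-request load on this tier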
Procedure
- Generate traffic for a transaction class and module pair.
You can generate traffic by using an application client or a stress
tool.
- Monitor CPU utilization in your configuration and determine an average
CPU utilization. The CPU utilization of all the machines that are
involved in serving your traffic, and of all the machines that have performance
interactions with them, must be at the configured limit that is defined with
the Maximum CPU utilization property on the Operational policies
> Autonomic managers > Autonomic request flow manager panel. Disable all of
the autonomic managers to ensure that the system does not make changes
while you take the CPU utilization measurement:
- The application placement controller: Disable the application placement
controller by putting it in manual mode. Click Operational policies > Autonomic
managers > Application placement controller, and clear the Enable check
box to disable the application placement controller.
- The autonomic request flow manager: You can use magic N mode if you are
using only one flow, for example, an ODR, deployment target, and service class
combination; otherwise, you might need to put the autonomic request flow manager
in manual mode.
- Dynamic workload management: Disable dynamic workload management for each
dynamic cluster. Click Servers > Dynamic clusters > dynamic_cluster_name >
Dynamic WLM, and clear the Dynamic WLM check box to disable dynamic
workload management.
Even if you disable the autonomic managers, background tasks can add CPU
load. Use an external monitoring tool for your hardware.
- Using the runtime charting in the administrative console, monitor
the number of running requests. Click Runtime Operations > Runtime
Topology in the administrative console. You can view the number of concurrent
requests.
- Compute the speed factor for the deployment target. Use
the following equation to calculate the speed factor:
speed factor = (normalized CPU speed) * (CPU utilization) /
(number of concurrent requests, measured at entry and exit of the target tier)
- Configure the speed factor in the administrative console.
You set the custom property on the deployment target, for example, a
cluster of servers or a standalone application server. For more information
about the overrides that you can create with the speedFactorOverrideSpec custom
property, see Autonomic request flow manager advanced custom properties.
- Define a case for each tier in the deployment target.
Separate each case with a comma. Each case contains a pattern that
is set to a value equal to the speed factor that you calculated. The
pattern defines the set of service classes, transaction classes, applications,
or modules that you can override for the particular tier. The pattern is:
service-class:transaction-class:application:module:[tier, optional]=value
You can specify a wildcard for the service class, transaction class,
application, or module by entering a * symbol. Each pattern can include
at most one application, at most one module, at most one service class, and
at most one transaction class. The tier is optional, and represents the
deployment target name and relative tier name. Set the value to a speed factor
override number, or to none to define no override. The following is an example
of a speed factor override value for a two-tier configuration (see the sketch
after this procedure for one way to compose this string):
*:*:*:*=none,*:*:*:*:../DbCel/CICS=0.7
For the first tier, there are no overrides. There is an override of 0.7 for the
tier named CICS+1 that is in the cell named DbCel.
- Create the custom property in the administrative console.
In the deployment target, click Custom properties > New. The
name of the custom property is speedFactorOverrideSpec, and the value of the
custom property is the string that you composed in the previous step.
- Save your configuration.
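The procedure composes the speedFactorOverrideSpec value by hand. As a convenience, the following Python sketch builds the same two-tier example string from the pattern syntax in this task. The helper function name is an assumption for this illustration; the printed string is what you paste into the custom property in the administrative console. Because the sketch uses only plain string operations, it should also run under the wsadmin Jython interpreter.
# Illustrative helper that composes a speedFactorOverrideSpec value from the
# pattern service-class:transaction-class:application:module:[tier]=value.
# The function name and the sample tier are assumptions for this sketch.
def override_case(service_class="*", transaction_class="*",
                  application="*", module="*", tier=None, value="none"):
    pattern = ":".join([service_class, transaction_class, application, module])
    if tier is not None:
        pattern += ":" + tier
    return "%s=%s" % (pattern, value)

# Two-tier example from this task: no override for the first tier, and an
# override of 0.7 for the deeper tier that is addressed as ../DbCel/CICS.
spec = ",".join([
    override_case(value="none"),
    override_case(tier="../DbCel/CICS", value="0.7"),
])
print(spec)   # *:*:*:*=none,*:*:*:*:../DbCel/CICS=0.7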
Results
The speed factors that you configured override the speed factor values that
the work profiler creates, and they support performance management of more
than one tier.
What to do next
Repeat these steps for each combination of transaction class, module, and
non-target tier node. You also must configure the node speed for each external
node. For more information, see Configuring node computing power.