WebSphere Virtual Enterprise, Version 6.1.1
             Operating Systems: AIX, HP-UX, Linux, Solaris, Windows, z/OS


Application placement frequently asked questions

Occasionally, you might encounter application placement behavior that is not expected. This topic describes some commonly asked questions and things to look for when application placement is not working as you expect.

Where is the application placement controller running?

To find where the application placement controller is running, you can use the administrative console or scripting. To check the location in the administrative console, click Runtime Operations > Extended Deployment > Core components. You can also run the checkPlacementLocation.jacl script to display the server on which the application placement controller is running.
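
As an alternative, you can locate the controller from the wsadmin tool by querying the placement controller MBean; the returned object name includes the node and process on which the controller is active. This sketch reuses the MBean type that appears in the failed-start examples later in this topic, and requires a connection to a running deployment manager; the exact object name keys can vary by environment:

```
wsadmin>print AdminControl.queryNames('WebSphere:type=PlacementControllerMBean,*')
```

The node= and process= keys in the returned object name identify where the application placement controller is running.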

When does the application placement controller start a server?

The application placement controller starts servers for several reasons. For a view of what is running from the perspective of the application placement controller, see the SystemOut.log messages.

When does the application placement controller stop a server?

The application placement controller stops a server for several reasons, for example, when the capacity that the server provides is no longer required to meet demand.

Why didn't the application placement controller start a server?

The application placement controller might fail to start a server for one of several reasons. To diagnose the problem, view the failed start information.

Viewing failed start information

Remember: The failed start list is not persisted when the application placement controller restarts or moves between nodes.
You can view failed start information with one of the following options:
  • Use the PlacementControllerProcs.jacl script to query failed server operations.
    Run the following command:
    ./wsadmin.sh -profile PlacementControllerProcs.jacl -c "anyFailedServerOperations"
  • Use commands in the wsadmin tool to display failed starts.
    For example, you might run the following commands:
    wsadmin>apc = AdminControl.queryNames('WebSphere:type=PlacementControllerMBean,process=dmgr,*')
    wsadmin>print AdminControl.invoke(apc,'anyFailedServerOperations')
    When the server becomes available, the failed-to-start flag is removed. You can use the following wsadmin tool command to list the servers that have the failed-to-start flag enabled:
    wsadmin>print AdminControl.invoke(apc,'anyFailedServerOperations')
    OpsManTestCell/xdblade09b09/DC1_xdblade09b09
  • View the failed starts in the SystemOut.log file.

Why did the application placement controller start more servers than I expected?

More servers can start than expected when network or communication issues prevent the application placement controller from receiving confirmation that a server has started. When the application placement controller does not receive confirmation, it might start an additional server.

How do I know when the application placement controller has completed an action or is going to complete an action?

You can check the actions of the application placement controller with runtime tasks. To view the runtime tasks, click System administration > Task management > Runtime tasks. The list of runtime tasks includes tasks that the application placement controller is completing, and confirmation that changes were made. Each runtime task has a status of succeeded, failed, or unknown. An unknown status means that no confirmation was received either way about whether the task was successful.

How does the application placement controller work with VMware, and which hardware virtualization environments are supported?

For more information about how the application placement controller works with VMware and other hardware virtualization environments, see Virtualization and WebSphere Virtual Enterprise and Supported server virtualization environments.

How can I start or stop a server without interfering with the application placement controller?

If you start or stop a server while the dynamic cluster is in automatic mode, the application placement controller might decide to make changes to your actions. To avoid interfering with the application placement controller when you start or stop a server, put the dynamic cluster into manual mode before you start or stop a server.

In a heterogeneous system (mixed hardware or operating systems), how does the application placement controller pick where it is going to start a server?

The membership policy for a dynamic cluster defines the eligible nodes on which servers can start. From this set of nodes, the application placement controller selects a node on which to start a server by considering system constraints such as available processor and memory capacity. The application placement controller does not make any decisions about server placement based on operating systems.
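
The selection logic can be illustrated with a simplified sketch. This is not the product's actual algorithm; the capacity figures and the pick-the-most-headroom rule are assumptions for illustration only, showing that placement is driven by resource constraints rather than by operating system:

```python
# Simplified illustration: pick the eligible node with the most free
# capacity, considering both CPU and memory headroom. The real
# controller uses a more sophisticated optimization; this only shows
# the idea that placement decisions follow resource availability.

def pick_node(nodes):
    """nodes: list of dicts with free/total CPU (MHz) and memory (MB)."""
    def headroom(n):
        # Use the scarcer of the two resources as the node's headroom;
        # this weighting is an arbitrary choice for the sketch.
        return min(n["free_cpu"] / n["total_cpu"],
                   n["free_mem"] / n["total_mem"])
    return max(nodes, key=headroom)

eligible = [
    {"name": "nodeA", "free_cpu": 1200, "total_cpu": 4000,
     "free_mem": 2048, "total_mem": 8192},
    {"name": "nodeB", "free_cpu": 3000, "total_cpu": 4000,
     "free_mem": 1024, "total_mem": 8192},
]
print(pick_node(eligible)["name"])  # nodeA: more balanced headroom
```

Here nodeB has more free CPU, but its memory headroom is the tightest constraint, so the sketch prefers nodeA.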

When my dynamic cluster is under load, when does the application placement controller start another server?

The application placement controller works with the autonomic request flow manager (ARFM) and the defined service policies to determine when to start servers. Service policies set the performance goals and priorities for applications and guide the autonomic controllers in traffic shaping and capacity provisioning decisions. Service policy goals therefore indirectly influence the actions that the application placement controller takes. The application placement controller provisions more servers based on information from ARFM about how much capacity is required for the number of concurrent requests that are being serviced by the ARFM queues. This number depends on how much capacity each request uses when it is serviced and on how many concurrent requests ARFM determines to be appropriate, which in turn is based on factors such as application priority and goal.
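
The capacity arithmetic can be sketched as follows. The figures and the simple ceiling formula are illustrative assumptions, not the product's internal model:

```python
import math

# Illustrative only: ARFM supplies an estimate of the concurrent
# requests it will admit and the capacity each request consumes; the
# controller compares total demand against per-server capacity to
# decide how many cluster members are needed.
concurrent_requests = 120      # concurrency ARFM allows for this cluster
capacity_per_request = 25.0    # capacity units one in-flight request uses
capacity_per_server = 1000.0   # usable capacity of one cluster member

demand = concurrent_requests * capacity_per_request
servers_needed = math.ceil(demand / capacity_per_server)
print(servers_needed)  # 3
```

If demand drops, the same arithmetic yields a smaller number, which is why servers can also be stopped when capacity is no longer required.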

The performance goals that are defined by service policies are not guarantees. WebSphere® Virtual Enterprise cannot make your application respond faster than it is inherently capable of responding. In addition, more capacity is not provisioned if enough capacity is already provisioned to meet the demand, even if the service policy goal is being breached. WebSphere Virtual Enterprise can prevent unrealistic service policy goals from introducing instability into the environment.

How does the application placement controller determine the maximum heap size of my server?

You can change the heap size of the server in the dynamic cluster template. See Modifying the JVM heap size for more information.

How does the application placement controller work with Compute Grid in particular, dispatching jobs and endpoint selection?

When the application placement controller and Compute Grid are configured to work together, Compute Grid delegates the task of endpoint selection to the placement controller. When jobs are submitted, Compute Grid asks the placement controller to select an endpoint on which to run the job. This request includes the class of the job, the completion time goal, and the nodes, clusters, or servers on which the job is permitted to run. After the application placement controller selects an endpoint, that information is passed back to Compute Grid, which is responsible for starting the job.

When the application placement controller selects the endpoint, it considers the completion time goal for the job, the available resources in the system, the execution profile of previous jobs in the same class, and other work, both transactional and batch, that the system must handle. The application placement controller attempts to select an endpoint so that the job has enough resources to complete on or before its completion time goal. Because of this selection process, the application placement controller does not always immediately select an endpoint for the job, and in that case the job does not start immediately. This delay is most likely when the system is small, has other transactional or batch work, and the completion time goal for the job is long.
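
A toy sketch of goal-aware endpoint selection follows. The projected-completion heuristic and all figures are assumptions for illustration; the real selection considers many more factors:

```python
# Pick the endpoint whose projected completion time is earliest, and
# report whether the job's completion time goal can be met there.
def select_endpoint(endpoints, job_runtime, goal):
    """endpoints: dicts with a 'backlog' of seconds of queued work."""
    best = min(endpoints, key=lambda e: e["backlog"] + job_runtime)
    projected = best["backlog"] + job_runtime
    return best["name"], projected <= goal

endpoints = [
    {"name": "serverA", "backlog": 300},
    {"name": "serverB", "backlog": 60},
]
# A 120-second job with a 600-second completion time goal:
print(select_endpoint(endpoints, 120, 600))  # ('serverB', True)
```

When no endpoint can meet the goal, a dispatcher built on this idea would hold the job rather than start it immediately, which mirrors the delay described above.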

How does the application placement controller work with WebSphere eXtreme Scale?

The application placement controller integrates with WebSphere eXtreme Scale. In particular, before stopping a dynamic cluster member that contains a container server, the application placement controller contacts the container server to quiesce any work that is in progress and to move any vital data to backup locations as needed.

The catalog service automatically starts in the configured deployment manager. If you want to configure failover for the catalog service, you can create a catalog service grid. See Starting the catalog service process in a WebSphere Application Server environment for more information.

Why are the dynamic cluster members not inheriting properties from the template?

You must save dynamic clusters to the master repository before you make changes to the server template. If dynamic cluster members do not inherit the properties from the template, the server template was probably changed in an unsaved workspace. To fix this issue, delete the dynamic cluster and then re-create it.

Ensure that your changes are saved to the master repository. After you click Finish, click Save in the message window in the top frame, click Save again in the Save to Master Configuration window, and then click Synchronize changes with nodes.

Why does my dynamic cluster have too few active servers?

If you encounter problems where not enough servers are running in the dynamic cluster, try the following actions:
  • When the nodes in the node group are not highly utilized, verify that the service policy is met. At times the policy goals might not be defined precisely, so the system can meet them even though the results do not match your expectations. To check or change a service policy in the administrative console, click Operational policies > Service policies > Select an existing policy. Check the goal type, goal value, and importance of the policy, and make any necessary changes.
  • When the nodes in the node group are highly utilized, compare the service policy goals of this cluster to service policy goals of other active clusters. If the traffic that belongs to this cluster has lower importance or looser target service goals relative to the other clusters, it is more likely that the system instantiates fewer servers for this cluster. To check or change a service policy in the administrative console, click Operational policies > Service policies > Select an existing policy.
  • When the node group seems to have some extra capacity, but your service policies are not met, check the configuration settings on the dynamic cluster. The maxInstances policy setting might restrict the dynamic cluster to fewer instances than the workload requires.



Related tasks
Configuring dynamic application placement
Related reference
Administrative roles and privileges
Related information
Application placement custom properties
Reference topic    


Last updated: Oct 30, 2009 1:33:44 PM EDT
http://publib.boulder.ibm.com/infocenter/wxdinfo/v6r1m1/index.jsp?topic=/com.ibm.websphere.ops.doc/info/odoe_task/rodappfail.html