There are several planning decisions that you need to make when setting up a WebSphere® Application Server for z/OS® configuration file system.
Every WebSphere Application Server for z/OS node--whether a standalone application server, deployment manager, managed application server node, or location service daemon--requires a read/write home directory, sometimes referred to as its WAS_HOME.
Here is a sample configuration file system containing an application server node and a location service daemon:

/WebSphere/V7R0
   /AppServer
      /bin
      /classes
      /java
      /lib
      /logs
      /profiles
         /default        (the profile_root directory)
      /temp
      ...
   /Daemon
      /config
         /SYSA
   SYSA.SYSA.BBODMNB      -> /WebSphere/V7R0/Daemon/config/SYSA/SYSA/BBODMNB
   SYSA.SYSA.BBOS001      -> /WebSphere/V7R0/AppServer/profiles/default/config/cells/SYSA/nodes/SYSA/servers/server1
   SYSA.SYSA.BBOS001.HOME -> /WebSphere/V7R0/AppServer

The WebSphere Application Server home directory for BBOS001 is named AppServer. It contains directories with complete configuration information for the SYSA node and the BBOS001 server.
In addition to the WebSphere Application Server home directories themselves, the configuration file system contains a multipart symbolic link for each server that points to the startup parameters for the server. The symbolic link is named cell_short_name.node_short_name.server_short_name.
The sample configuration file system above contains a symbolic link SYSA.SYSA.BBODMNB for starting the location service daemon and a symbolic link SYSA.SYSA.BBOS001 for starting the BBOS001 application server. The symbolic link name is specified in the ENV parameter on the START command when the server or location service daemon is started from the MVS console. For example, to start BBOS001:
START procname,JOBNAME=BBOS001,ENV=SYSA.SYSA.BBOS001
Each symbolic link points to the subdirectory where the server's was.env file resides. This file contains the information required to start the server.
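For example, you can confirm where a startup symbolic link points from a z/OS UNIX shell. This is a hypothetical listing that assumes the sample layout above; the owner and permission fields will differ on your system:

ls -l /WebSphere/V7R0/SYSA.SYSA.BBOS001
lrwxrwxrwx   1 WSADMIN WSCFG1 ... SYSA.SYSA.BBOS001 -> /WebSphere/V7R0/AppServer/profiles/default/config/cells/SYSA/nodes/SYSA/servers/server1

The was.env file that starts BBOS001 therefore resides in the servers/server1 directory shown above.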
Two or more z/OS systems can share a configuration file system, provided the z/OS systems have a shared file system and the configuration file system is mounted R/W. All updates are made by the z/OS system that "owns" the mount point. For a Network Deployment cell, this is generally the z/OS system on which the cell deployment manager is configured.
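For example, a shared configuration file system might be mounted read/write with a BPXPRMxx MOUNT statement like the following sketch, which makes the deployment manager's system the owner of the mount point. The dataset name, mount point, and system name are placeholders:

MOUNT FILESYSTEM('OMVS.WAS.CONFIG.ZFS')  /* placeholder dataset name   */
      MOUNTPOINT('/WebSphere/V7R0')      /* configuration mount point  */
      TYPE(ZFS) MODE(RDWR)               /* must be mounted read/write */
      SYSNAME(SYSA)                      /* SYSA owns the mount point  */
      AUTOMOVE                           /* ownership can move on failure */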
The choice of WebSphere Application Server for z/OS configuration file system mount points depends on your z/OS system layout, the nature of the application serving environments involved, and the relative importance of several factors: ease of setup, ease of maintenance, performance, recoverability, and the need for continuous availability.
In a single z/OS system:
If you run WebSphere Application Server for z/OS on a single z/OS system, you have a wide range of choices for a z/OS configuration file system mount point. You might want to put several standalone application servers in a single configuration file system, with a separate configuration file system for a production server or for a Network Deployment cell. Using separate configuration file system datasets improves performance and reliability, while using a shared configuration file system reduces the number of application server cataloged procedures you need.
For example, you might use one configuration file system for your development, test, and quality assurance servers:

/WebSphere/V7_test
   /DevServer   - home to standalone server cell DVCELL, with server DVSR01A
   /TestServer1 - home to standalone server cell T1CELL, with server T1SR01A
   /TestServer2 - home to standalone server cell T2CELL, with server T2SR01A
   /QAServer    - home to Network Deployment cell QACELL, with deployment manager QADMGR and server QVSR01A

and a separate configuration file system for your production cell:
/WebSphere/V7_prod
   /CorpServer1 - home to Network Deployment cell CSCELL, with deployment manager CSDMGR and server CSSR01A
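If you take this approach, each environment gets its own zFS dataset. Here is a sketch of creating and mounting the test configuration file system from a z/OS UNIX shell; the aggregate name, volume, and size are placeholders, and your site's naming and allocation conventions apply:

# Allocate and format a zFS aggregate for the test configuration
zfsadm define -aggregate OMVS.WAS.V7TEST.ZFS -volumes VOL001 -cylinders 500 100
zfsadm format -aggregate OMVS.WAS.V7TEST.ZFS

# Create the mount point and mount the file system read/write
mkdir -p /WebSphere/V7_test
/usr/sbin/mount -t ZFS -f OMVS.WAS.V7TEST.ZFS /WebSphere/V7_test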
In a multisystem z/OS sysplex with no shared file system:
In a multisystem sysplex with no shared file system, each z/OS system must have its own configuration file system datasets. For standalone application servers and for Network Deployment cells that do not span systems, the options are the same as for a single z/OS system.
For Network Deployment cells that span systems, each z/OS system still requires its own configuration file system datasets to hold the nodes that run on that system.
In a multisystem z/OS sysplex with a shared file system:
If your sysplex has a shared hierarchical file system, you can simply mount a large configuration file system for the entire cell. When using the Profile Management Tool or the zpmt command, specify the common configuration file system mount point on each system. As noted above, you should update the configuration file system from the z/OS system that hosts the deployment manager. Performance depends on the frequency of configuration changes, so be prepared to devote extra effort to tuning if you choose this option.
Alternatively, you can give each system its own configuration file system, mounted at a system-specific mount point:

/LPAR1/WebSphere/V7F1
   /DeploymentManager - home to deployment manager F1DMGR in cell F1CELL
   /AppServer1        - home to node F1NODEA and servers F1SR01A and F1SR02A

/LPAR2/WebSphere/V7F1
   /AppServer2        - home to node F1NODEB and servers F1SR02B (clustered) and F1SR03B

Each system (LPAR1 and LPAR2) mounts its own configuration file system on its system-specific mount point. When using the Profile Management Tool or the zpmt command, specify /LPAR1/WebSphere/V7F1 as the configuration file system mount point for the deployment manager and node F1NODEA, and /LPAR2/WebSphere/V7F1 for node F1NODEB.
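To illustrate (a sketch; the dataset names are placeholders), the BPXPRMxx MOUNT statements for this arrangement make each LPAR the owner of its own configuration file system:

MOUNT FILESYSTEM('OMVS.WAS.V7F1.LPAR1.ZFS')  /* LPAR1's configuration */
      MOUNTPOINT('/LPAR1/WebSphere/V7F1')
      TYPE(ZFS) MODE(RDWR) SYSNAME(LPAR1)
MOUNT FILESYSTEM('OMVS.WAS.V7F1.LPAR2.ZFS')  /* LPAR2's configuration */
      MOUNTPOINT('/LPAR2/WebSphere/V7F1')
      TYPE(ZFS) MODE(RDWR) SYSNAME(LPAR2)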
The WebSphere Application Server home directory is always relative to the configuration file system in which it resides. In the Profile Management Tool or the zpmt command, therefore, you choose the configuration file system mount point on one panel and fill in just the single directory name for the home directory on another. But when instructions direct you to go to the WAS_HOME directory for a server, they are referring to the entire path name: the configuration file system mount point and the home directory name combined (/WebSphere/V7R0/AppServer, for example).
You can choose any name you want for a home directory, provided it is unique within the configuration file system. If you are creating a standalone application server or a new managed server node that will be federated into a Network Deployment cell, be sure to choose a name that is not already in use in the Network Deployment cell's configuration file system.
If you have one node per system, you might want to use some form of the node name or system name. Alternatively, you can use "DeploymentManager" for the deployment manager and "AppServern" for each application server node.
The configuration file system contains a large number of symbolic links to files in the product file system (/usr/lpp/zWebSphere/V7R0 by default). This allows the server processes, administrator, and clients to access a consistent WebSphere Application Server for z/OS code base.
Note that these symbolic links are set up when the WebSphere Application Server home directory is created and are very difficult to change. Therefore, systems that require high availability should keep a separate copy of the WebSphere Application Server for z/OS product file system and product datasets for each maintenance or service level in use (test, assurance, production, and so forth) to allow system maintenance, and use intermediate symbolic links to connect each configuration file system with its product file system.
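For example (a sketch with hypothetical path names), an intermediate symbolic link lets you repoint an entire configuration at a different product file system by changing one link rather than the many links inside the configuration file system:

# Hypothetical intermediate link that the configuration file system references
ln -s /usr/lpp/zWebSphere/V7R0_sp05 /usr/lpp/zWebSphere/V7R0_cell1

# Later, after mounting the new product file system, repoint the intermediate link
rm /usr/lpp/zWebSphere/V7R0_cell1
ln -s /usr/lpp/zWebSphere/V7R0_sp06 /usr/lpp/zWebSphere/V7R0_cell1

This works only if /usr/lpp/zWebSphere/V7R0_cell1, rather than the real product directory, was specified as the product file system path when the WebSphere Application Server home directory was created.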
When a WebSphere Application Server for z/OS node is started, the service level of the configuration is compared against the service level of the product file system. If the configuration file system service level is higher than that of the product file system (probably meaning that an old product file system is mounted), the node's servers will terminate with an error message. If the configuration file system service level is lower than that of the product file system (meaning that service has been applied to the product code base since the node was last started), a task called the post-installer checks for any actions that need to be performed on the configuration file system to keep it up to date. For more information about the post-installer, see Applying product maintenance on z/OS.
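Using the sample paths from earlier in this section, you can display the service level of the product code that a server is running with the versionInfo.sh script; the exact path and output depend on your configuration:

/WebSphere/V7R0/AppServer/bin/versionInfo.sh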