For each node in the cluster, verify that the key components listed in Table 11-1 are at the specified levels (DYNIX/ptx V4.4.x and associated layered products). Use the following methods to make these checks:
For software and firmware version levels, use the ptx/ADMIN menu system to view the list of software packages currently installed: System Administration -> Software Management -> List Software Packages. Alternatively, view the /etc/versionlog file for the most recent installation dates and versions.
For CLARiiON LIC firmware version levels, use the /usr/lib/raid/raiddisp command.
Table 11-1. Required Software and Firmware Levels

System Component                  | V4.4.2 | V4.4.4 | V4.4.5/V4.4.6/V4.4.7
----------------------------------|--------|--------|---------------------
DYNIX/ptx                         | V4.4.2 | V4.4.4 | V4.4.5/V4.4.6/V4.4.7
ptx/BaseComms                     | V1.0.0 | V1.1.1 | V1.1.1
ptx/CLUSTERS                      | V2.0.2 | V2.1.1 | V2.1.2
ptx/SVM                           | V2.0.2 | V2.1.1 | V2.1.2
ptx/TCP/IP                        | V4.4.3 | V4.5.1 | V4.5.2
ptx/LAN                           | V4.5.1 | V4.5.4 | V4.6.0/V4.6.1
ptx/SPDRIVERS                     | V2.1.0 | V2.2.0 | V2.3.0
ptx/CTC                           | V1.1.1 | V1.1.2 | V1.1.2
ptx/CFS                           | V1.0.1 | V1.0.2 | V1.0.3
ptx/RAID (for CLARiiON subsystem) | V2.0.3 | V2.0.4 | V2.0.4
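As a quick command-line spot check of the levels in Table 11-1, you can run the following on each node (a sketch only; the format of /etc/versionlog and the raiddisp output vary by release):

# tail /etc/versionlog
# /usr/lib/raid/raiddisp

The last entries in /etc/versionlog show the most recent installation dates and versions; the raiddisp output includes the CLARiiON LIC firmware level.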
Complete the pre-installation tasks described in Part 1 of the "Upgrade Checklist for a Single Node" in Chapter 1. Be sure to back up the ptx/SVM database as described in Chapter 2.
Verify that the naming databases on both systems are synchronized by comparing dumpconf output from the two nodes, as sketched after the following notes. If you need to change a device name, use the devctl -n oldname newname command. See the devctl(1M) man page for more information.
ATTENTION If you change the name of a device, be sure that all references to that device (such as entries in the vfstab file, ORACLE data files, and ptx/SVM devices) now point to the new name of the device.
ATTENTION Do not start the upgrade process until both databases are in sync.
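One way to make the comparison, assuming the two nodes can exchange files (the file names below are illustrative only):

# dumpconf > /tmp/devnames.node1     (run on Node 1)
# dumpconf > /tmp/devnames.node2     (run on Node 2, then copy the file to Node 1)
# diff /tmp/devnames.node1 /tmp/devnames.node2

If diff reports no differences, the naming databases are in sync.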
Ensure that a quorum disk exists for the cluster. A quorum disk is necessary to maintain cluster activity when one of the nodes in a cluster is down. Check for a quorum disk with the following command:
# clustadm -vc
Cluster ID = 5
Cluster formation time = Sun Sep 1 11:24:14 1996
Cluster generation number = 2
Last transition time = Mon Sep 2 09:00:35 1996
Expected votes = 3
Current aggregate votes = 3
Minimum votes for quorum = 2
Quorum state = Quorum
Quorum disk:
Name = sd6s14
Votes contributed = 1
Clusterwide State = UP
Local State = OWNED
If a quorum disk is configured, the cluster failover process defined with ptx/CTC will take effect when the first node is taken down during the upgrade.
If there is a quorum disk, skip Step 2 and go to the later section "Upgrade Node 1."
If there is not a quorum disk, you will need to create and configure the disk according to the following guidelines:
Configure the quorum disk on one node only. The quorum disk information that you enter on one node is automatically propagated to all active nodes that are currently in the cluster.
The quorum disk must reside on a single type-1 disk partition with a minimum size of 1 MB.
The disk partition used for the quorum disk must be shareable by all the nodes in the cluster and cannot be used as the quorum disk for any other cluster.
The disk partition used for the quorum disk must not be under ptx/SVM control (although the remaining partitions on the disk can be under ptx/SVM control).
To configure a quorum disk, follow these steps:
If you do not already have a free type-1 partition that is at least 1 MB in size for use as the quorum disk, you will need to create a custom VTOC on one of the nodes to designate the appropriate partition. For information about creating a custom VTOC, see the chapter entitled "Disk Drive Management" in the DYNIX/ptx System Administration Guide.
From one of the active nodes, configure the path of the quorum disk using either the following command line or the ptx/ADMIN menu system. In the command, qdisk_name is the disk partition to be used for the quorum disk (for example, sd7s3); qdisk_name cannot be a fully qualified pathname.
# clustadm -C qdisk_name
Once the quorum disk is configured on one cluster node, the other nodes will become aware of the quorum disk and the cluster will start using it automatically. You do not need to reboot any of the nodes for the quorum disk configuration to take effect.
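As a quick confirmation that the configuration has propagated, rerun the cluster status command on one of the other nodes. Abridged, hypothetical output for the example partition sd7s3:

# clustadm -vc
...
Quorum disk:
Name = sd7s3
Votes contributed = 1
Clusterwide State = UP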
Upgrade the operating system and necessary layered products on Node 1. Refer to Chapter 5, "Upgrade Symmetry 5000 Systems Running ptx/SVM V2.x."
ATTENTION Compile a new kernel, but do not reboot.
Verify that the /installmnt/etc/vfstab file was updated with the location of the root disk, as shown in the example below.
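For example, display the file and confirm that the entry for the root filesystem (/) names the disk on which you performed the installation:

# cat /installmnt/etc/vfstab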
Resolve any remaining file conflicts.
Return the device where the installation was performed to the rootdg. See "Prepare to Reboot the System" in Chapter 5.
Shut down the operating system on Node 1 with the shutdown command (see the example below).
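A typical SysV-style invocation appears below; the options shown are an assumption, so confirm them against the shutdown(1M) man page before using them (-y answers the confirmation prompts, -g0 specifies a zero-second grace period):

# shutdown -y -g0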
Set the boot path to boot the newly installed operating system to single-user mode. Be sure to specify the disk used for the installation. (The single-user boot string below begins with 2; the multiuser boot string used later in this procedure begins with 0.)
---> bh osPath='2 slic(2)scsi(1)disk(0)'
If you installed new versions of the QCIC or CSM software, download them as described in the QCIC or CSM release notes.
Boot the operating system to single-user mode.
Verify that the failover is complete. Use the following command:
# /usr/ctc/bin/ctcadm getstatus
Object <node_1_name> STOPPED
Object <node_2_name> STARTED
Perform a ROOT installation of any remaining layered products. See Chapter 8.
Use the ptx/ADMIN menu system to compile a new kernel to include the new products. See Chapter 12, "Build a Custom Kernel." Reboot the operating system to single-user mode.
Perform the post-installation tasks described in Chapter 13, "After the Installation."
Shut down the operating system.
Edit the boot path to allow Node 1 to boot to multiuser mode.
---> bh osPath='0 slic(2)scsi(1)disk(0)'
Boot Node 1 to multiuser mode.
On the Node 1 console, verify application failback and cluster rejoin.
# /usr/ctc/bin/ctcadm getstatus
Object <node_1_name> STARTED
Object <node_2_name> STARTED
# clustadm -vc
Cluster ID = 5
Cluster formation time = Sun Sep 1 11:24:14 1996
Cluster generation number = 2
Last transition time = Mon Sep 2 09:00:35 1996
Expected votes = 3
Current aggregate votes = 3
Minimum votes for quorum = 2
Quorum state = Quorum
Quorum disk:
Name = sd6s14
Votes contributed = 1
Clusterwide State = UP
Local State = OWNED
Go to the Node 2 console.
Upgrade the operating system and necessary layered products on Node 2. Refer to Chapter 5, "Upgrade Symmetry 5000 Systems Running ptx/SVM V2.x."
ATTENTION Compile a new kernel, but do not reboot.
Verify that the /installmnt/etc/vfstab file was updated with the location of the root disk.
Resolve any remaining file conflicts.
Return the device where the installation was performed to the rootdg. See "Prepare to Reboot the System" in Chapter 5.
Shut down the operating system on Node 2 with the shutdown command.
Set the boot path to boot the newly installed operating system to single-user mode. Be sure to specify the disk used for the installation.
---> bh osPath='2 slic(2)scsi(1)disk(0)'
If you installed new versions of the QCIC or CSM software, download them as described in the QCIC or CSM release notes.
Boot the operating system to single-user mode.
Verify that the failover is complete. Use the following command:
# /usr/ctc/bin/ctcadm getstatus
Object <node_1_name> STARTED
Object <node_2_name> STOPPED
Perform a ROOT installation of any remaining layered products. See Chapter 8.
Compile a new kernel to include the new products. See Chapter 12, "Build a Custom Kernel." Boot the operating system to single-user mode.
Perform the post-installation tasks described in Chapter 13, "After the Installation."
Shut down the operating system.
Edit the boot path to allow Node 2 to boot to multiuser mode.
---> bh osPath='0 slic(2)scsi(1)disk(0)'
Boot Node 2 to multiuser mode.
On the Node 2 console, verify application failback and cluster rejoin.
# /usr/ctc/bin/ctcadm getstatus
Object <node_1_name> STARTED
Object <node_2_name> STARTED
# clustadm -vc
Cluster ID = 5
Cluster formation time = Sun Sep 1 11:24:14 1996
Cluster generation number = 2
Last transition time = Mon Sep 2 09:00:35 1996
Expected votes = 3
Current aggregate votes = 3
Minimum votes for quorum = 2
Quorum state = Quorum
Quorum disk:
Name = sd6s14
Votes contributed = 1
Clusterwide State = UP
Local State = OWNED
Do not perform this procedure until you are sure that the cluster is up and your applications are running correctly on the new operating system.
ATTENTION After performing this procedure, you can no longer return the cluster to the V4.4.2 environment.
To reestablish the root and swap mirrors, complete these steps:
On Node 1, reestablish the mirrors as described under "Mirror the Original Root Partition to the Upgraded Root Volume" in Chapter 5. You do not need to reboot the system.
On Node 2, reestablish the mirrors as described under "Mirror the Original Root Partition to the Upgraded Root Volume" in Chapter 5. You do not need to reboot the system.
On the next reboot, the original root disk will be restored as the root disk.