These release notes support ptx®/SVM V2.1.4, a tool for managing disk-drive usage and disk space. Read this document before you install or run this release of ptx/SVM.
This version of ptx/SVM can be used with the following products:
ATTENTION ptx/SVM V2.1.4 cannot be used to manage shared storage on clusters containing more than 2 nodes. ptx/SVM can be used for mirroring root and primary swap on local disks on the nodes of 3- and 4-node clusters. It can also be used to manage private storage on shareable disks on nodes in 3- and 4-node clusters. Shareable disks are disks that are accessible from more than one node but are reserved by ptx/SVM for private use on one node only. See the section later in these release notes entitled "Disabling ptx/SVM Sharing in 3- and 4-Node Clusters" for more information on ptx/SVM limitations in 3- and 4-node clusters.
This release of ptx/SVM contains a considerable amount of new functionality and significant configuration differences from ptx/SVM V1.x. If you are currently running ptx/SVM V1.x, do not install DYNIX/ptx V4.4.x or any layered products until you have contacted Customer Support.
If you have already upgraded to ptx/SVM V2.0.2, V2.0.3, V2.1.1, V2.1.2, or V2.1.3, you can install this release of ptx/SVM along with DYNIX/ptx and any other layered software products through a single installation process. The DYNIX/ptx and Layered Products Software Installation Release Notes tell how to use this process to install all DYNIX/ptx software.
It is not necessary to deinstall ptx/SVM before installing a new version of the software. If you do wish to deinstall ptx/SVM, however, note that the automatic deinstallation process through the ptx/ADMIN menu system will fail because the root and swap volumes are in use. Follow these steps before deinstalling ptx/SVM through ptx/ADMIN to ensure that the deinstallation succeeds:
ATTENTION Issuing the following command will disable all volumes. Make sure your system does not depend on any volumes for filesystems, swap space, or other uses that your system requires to boot.
Prevent vxconfigd from starting during the boot process:
# touch /etc/vx/reconfig.d/state.d/install-db
Disable the use of early-access volumes on the next reboot:
# bp /unix vol_disable_early_access_volumes 1
Reboot the system.
The following sections discuss important usage considerations with this release of ptx/SVM.
ptx/SVM's sharing must be disabled in a cluster with more than 2 nodes. On these systems, ptx/SVM can still be used for managing storage on local disks and on shareable disks using private disk groups. Any private disk groups that will be failed over to another node must be imported with the failover attribute, described in the previous section.
To disable sharing in ptx/SVM, follow these steps:
Change directory to /usr/conf/uts/io/vol_sqnt.
Edit the vol_sqnt_space.c file, changing the line int vol_disable_clust = 0 to int vol_disable_clust = 1. Save the file.
Recompile the kernel and reboot the node.
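The edit in the second step can also be scripted with sed. The following sketch runs against a demonstration copy of the file in a scratch directory; on a real system you would edit /usr/conf/uts/io/vol_sqnt/vol_sqnt_space.c in place (after backing it up) and then recompile the kernel as described above.

```shell
# Work in a scratch directory on a demonstration copy of the file.
cd "$(mktemp -d)"
cat > vol_sqnt_space.c <<'EOF'
int vol_disable_clust = 0;
EOF

# Flip vol_disable_clust from 0 to 1, as the step above describes.
sed 's/int vol_disable_clust = 0/int vol_disable_clust = 1/' vol_sqnt_space.c \
    > vol_sqnt_space.c.new && mv vol_sqnt_space.c.new vol_sqnt_space.c

grep 'vol_disable_clust' vol_sqnt_space.c
```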
Because hot-spare disk takeover currently fails (Problem Report #242311), it is recommended that you do not use hot sparing.
Because the ptx/SVM menu system, vxdiskadm, currently fails to work with shared disks (Problem Report #236943) and does not properly perform OLR of sliced and simple disks (Problem Report #242863), it is recommended that you do not use vxdiskadm.
ptx/SVM V2.x requires that every shareable spindle have at least one private region. A shareable disk is one that is on the QCIC or Fibre Channel and that could be shared. Shareable spindles of type nopriv therefore cannot be added to private disk groups, because nopriv disks do not contain private areas.
In order to place a shareable nopriv spindle under ptx/SVM control, you will have to re-partition the disk to create a type-8 slice on it for the private configuration database area. To determine if a disk is shareable, examine the output of the dumpconf -d command. Look for an "S" in the fifth column of the output. "S" signifies that a device is shareable.
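As an illustration of that check, the filter below selects device names with an "S" in the fifth column. The here-document stands in for dumpconf -d output; the sample device names and column layout are assumptions, so verify the format on your system before relying on it.

```shell
# Select shareable devices: "S" in the fifth column, per the text above.
# The here-document is hypothetical sample output; on a live system use:
#   dumpconf -d | awk '$5 == "S" { print $1 }'
shareable=$(awk '$5 == "S" { print $1 }' <<'EOF'
sd0   0  0  ok  P  local
sd5   1  2  ok  S  qcic
sd6   1  3  ok  S  fibre
EOF
)
echo "$shareable"
```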
The fix for problem report 255469 may cause disk group deportation and disk removal operations to fail in some cases.
ptx/SVM provides a diagnostic device interface for each simple disk under ptx/SVM control. For example, when /dev/dsk/sdN is under ptx/SVM control, /dev/diag/rdsk/sdN is the diagnostic device file that ptx/SVM provides. When a disk group is deported, the deport operation will fail if any of the diagnostic device files corresponding to simple disks in the disk group are open. Likewise, when a simple disk is removed from a disk group while its diagnostic device file is open, the disk removal operation will fail. The processes that have kept the diagnostic device files open must be terminated for the disk group deportation and disk removal operations to succeed. For shared disk groups, these processes must be terminated on all nodes: in a cluster, if the diagnostic device file is open on any node, deporting the disk group that contains the disk, or removing that disk from the disk group, will fail.
If a disk group becomes disabled, the diagnostic device interface may not be available for the disks that belong to the disabled disk group. The diagnostic device interface will become available once the disk group is enabled. You may have to deport and re-import the disk group to enable the disk group and to get back the diagnostic device interface for the disks that belong to the disk group.
ATTENTION If automated scripts deport disk groups or remove disks from disk groups, and your system runs processes that keep open the diagnostic device files of disks in those disk groups, the scripts will fail in the situations described above. You will need to manually terminate the processes that have kept the diagnostic device files open.
If you wish to place a disk that uses the VTOC driver under ptx/SVM control and use the disk in a cluster, you must make an entry for the disk in each node's /etc/devtab file and issue the devbuild command on each node in the cluster.
If you build the VTOC driver for a disk on one node (where the disk will be recognized as a "sliced" or "nopriv" disk), but not on the other node(s) (where the disk will be recognized as a "simple" disk), then the ptx/SVM shared disk groups will not match across the cluster and you will not be able to use them.
Object manipulation performance in private disk groups in a cluster is significantly better than in shared disk groups. Use shared disk groups only when absolutely necessary (for data that must be shared among cluster nodes) to avoid incurring a performance penalty.
ptx/SVM lets you use the vxdg -k rmdisk medianame command to remove a disk media record even if the volume containing the disk is active. This operation is not recommended, especially if the data to be removed is not mirrored elsewhere. If you do use the vxdg -k rmdisk medianame command, take extra care when the disk belongs to ROOTVOL, SWAPVOL, or a secondary swap volume. If you issue the command on a disk media record that is part of non-mirrored swap or a secondary swap volume, the machine will crash or fail over.
The following ptx/SVM documentation is available on the online HTML documentation CD-ROM:
The ptx/SVM Quick Reference Card is shipped with each order of ptx/SVM.
This section lists the following problem report summaries:
This release of ptx/SVM contains fixes for the following software defects:
(239445) The vxdg reminor command failed on a shared disk group.
(240316) When vxdg free was run from a slave node, it did not return private disk group data.
(242846) ptx/SVM allowed the root and swap partitions to be specified on the same partition.
(244172) A panic occurred when vxstat was run with the -fb option.
(245565) When 50 or more plex attaches were done simultaneously, all subsequent SVM commands hung.
(246898) The vxplex mv command was liable to cause data loss.
(247365) If a disk group had a disk media record in the removed state and the master node went away, the slave node disabled the disk group while it was doing I/O to it.
(248817) When the ptx/CLUSTERS CCI was down, the ptx/SVM slave vxconfigd spun.
(249607) Plex read policy was mistakenly set to "prefer" after the system was rebooted.
(250270) vxconfigd deadlocked itself by holding a read copy of the "volop_rwsleep" lock while requesting the write copy of the lock during a transaction operation.
(250412) vxconfigd hung in a msghead call.
(250436) A deadlock condition existed between vxstat and vxconfigd operations.
(250183) The rootdg disaster recovery file containing the names of the ptx/SVM disks was incomplete.
(251187) The master vxconfigd hung during resilver and resynchronization operations and when the slave node was rebooted.
(251471) Unneeded calls to volsio_stabilize resulted in I/Os being unnecessarily stabilized.
(251986) The round robin policy on mirrored volumes ignored partial plexes that could not satisfy the read request, which led to unrecoverable I/O failure and caused the partial plex to become detached.
(252131) The vxvol init active command enabled a volume with no complete plexes.
(252408) vxconfigd did not see CCI "up" events, which broke shared transactions.
(252693) A vxplex attach operation hung when the master node died.
(253644) Memory for the svm_info structure was not released in some cases, leading to a memory leak.
(253992) The slave to master transition hung after the master node panicked.
(254256) If a mirrored volume had its recover flag set, there was no way of resetting the flag without stopping the volume and moving the volume to the NEEDSYNC state, and then restarting the volume.
(254475) Master takeover failed when vxconfigd hung.
(254611) A cluster was hung on vxconfigd wait in volcvm_iodrain_dg().
(254743) vxconfigd became stuck in send_slaves() after a transition, resulting in a system panic.
(254936) During a master takeover, vxconfigd dropped core in priv_update_header_start().
(255040) A klog flush failure that occurred during the master transition caused a system panic.
(255166) ptx/SVM did not properly distribute configuration database copies across fibrechannel direct-connect configurations with multiple raid cabinets.
(255469) A kernel MMU fault occurred when ptx/SVM and ptxagt conflicted.
This section lists open problems in this release of ptx/SVM.
If you create a volume in rootdg with the same name as a disk group, that disk group cannot be imported automatically on subsequent reboots.
Workaround. Import the disk group manually with a different name, or rename the volume.
The command-line parser for vxdisk init ignores invalid values for loglen.
Workaround. None. Use only valid values for the loglen parameter.
The vxassist arguments maxgrow and maxsize were inadvertently omitted from the man page for vxassist.
The command vxassist [ -g diskgroup] maxgrow volume_name will tell you how large the volume could grow given the current available disks in the specified disk group (or in the root disk group, if no disk group is specified).
The command vxassist [-g diskgroup] maxsize will tell you how large the largest volume could be made in the specified disk group (or in the root disk group, if no disk group is specified).
Workaround. None.
By default, vxconfigd sends errors to the console. It should also send errors to a log file or through syslog.
Workaround. None.
The vxmake command limits the number of objects that can be created in a single invocation. The formula for determining the limits depends on the names of the objects involved as well as the kinds of objects and the values of some other fields in the objects. If the request fails with the message Configuration too large, you should try splitting the description file into two files and running vxmake twice.
When determining the size of the configuration database (private area) of a disk, a good rule of thumb is to allow about 400 bytes for each object. A 20,000-object database, for example, would require about 7,800 1024-byte blocks.
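The rule of thumb works out as follows; a quick sketch of the arithmetic, using the 400-bytes-per-object estimate given above:

```shell
# Estimate private-region size: ~400 bytes per object, in 1024-byte blocks.
objects=20000
bytes=$((objects * 400))               # 8,000,000 bytes
blocks=$(( (bytes + 1023) / 1024 ))    # round up to whole 1024-byte blocks
echo "$blocks blocks"                  # about 7,800 blocks for 20,000 objects
```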
Workaround. If the configuration database is undersized and you receive the message Not enough space in database, the problem affects only the disk group containing the database. To enlarge the database, make sure that all copies of the database are made larger by evacuating all volumes from them (see vxevac(1M)). Next, reinitialize them (in the case of simple disks) or repartition them (in the case of sliced disks).
ptx/SVM performance is slow when a large number of volumes (on the order of 100) are started at the same time.
Workaround. None.
If, while vxrecover is running, you attempt to deport disk groups on which vxrecover still intends to perform recovery, vxrecover will return messages saying that it can't find volumes on the disk groups you are attempting to deport.
Workaround. Although the messages are harmless, you can avoid receiving them by not deporting disk groups until recovery is complete (when no volumes are in the SYNC or NEEDSYNC state).
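One way to confirm that recovery is complete before deporting is to scan for volumes still in the SYNC or NEEDSYNC state. The sketch below filters vxprint-style volume records; the sample lines, volume names, and field positions are assumptions, so adjust the field number to match your vxprint output.

```shell
# List volumes still recovering (STATE field is SYNC or NEEDSYNC).
# Hypothetical sample records; on a live system use:
#   vxprint -g diskgroup -v | awk '$1 == "v" && ($7 == "SYNC" || $7 == "NEEDSYNC")'
pending=$(awk '$1 == "v" && ($7 == "SYNC" || $7 == "NEEDSYNC") { print $2 }' <<'EOF'
v  vol01  fsgen  ENABLED  204800  -  ACTIVE    -  -
v  vol02  fsgen  ENABLED  204800  -  SYNC      -  -
v  vol03  fsgen  ENABLED  204800  -  NEEDSYNC  -  -
EOF
)
echo "$pending"   # deport only once this list is empty
```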
The vxstat command is restricted for use by the root user only.
Workaround. None.
Ordinarily, ptx/SVM prevents other applications and users from opening any device which has been added to a disk group. However, if a disk has been involved in hot sparing, it may be opened by users and applications, even though the device is under ptx/SVM control and in use by ptx/SVM.
Workaround. The problem corrects itself after the system is rebooted.
Mirrored volumes cannot detect tampering done to them outside of ptx/SVM control, such as when the underlying devices are modified before ptx/SVM is started (for example, from the stand-alone kernel on NUMA 2000 systems). In the case of the root volume, remounting the root filesystem so that it is writable, from either the stand-alone kernel or single-user mode, will undetectably render the volume out of sync, such that successive reads of the same area return different results.
Workaround. Before tampering with any component of a ptx/SVM volume, modify the volume so that it is not mirrored.
Automatic deinstallation of ptx/SVM (through the ptx/ADMIN® menu system) fails because root and swap volumes are in use.
ATTENTION Issuing these commands will disable all volumes. Make sure your system does not depend on any volumes for filesystems, swap space, or other uses that your system requires to boot.
Workaround. Before attempting to deinstall ptx/SVM, execute the commands touch /etc/vx/reconfig.d/state.d/install-db and bp /unix vol_disable_early_access_volumes 1 and reboot the system.
ptx/SVM allows you to attach a plex containing more than one subdisk to ROOTVOL.
Workaround. Attach only one subdisk to a plex in ROOTVOL.
vxresize increases the size of the volume but fails to increase the size of the filesystem.
Workaround. Resize the volume and then follow the procedure appropriate to the filesystem type to increase the size of the filesystem.
The vxdiskadm option to offline a disk does not work.
Workaround. Use the devctl -d -D command to offline a disk.
When ptx/SVM is installed but not enabled, vxiod processes are started anyway.
Workaround. This is expected behavior and there is no workaround.
When creating a volume with vxassist, if the length of the volume is not divisible by the number of subdisks to be associated with the volume, vxassist will create a volume whose length is shorter than the length of the plex (the sum of the subdisk lengths).
Workaround. None.
In a cluster, ROOTVOL sometimes mounts under /dev/vx/dsk/rootdg and sometimes under /dev/vx/dsk.
Workaround. Because this is not really a problem, there is no workaround.
While starting volumes, the vxrecover command ignores the startopt=noattach option.
Workaround. None.
ptx/SVM support for dumpcfg (/etc/dumpcfg.d/vx) does not correctly handle disk groups other than rootdg.
Workaround. None.
The vxconfigd -k command may fail to reimport all disk groups.
Workaround. Import the disk groups manually.
If the disk underlying a subdisk fails and you try to create a plex with that subdisk using vxmake, vxconfigd will terminate and vxmake will hang.
Workaround. None.
The vxdiskadm OLR option fails on shared disks.
Workaround. None.
When a volume is in an unstartable state because a disk is missing, vxvol start will fail and return no error message, leading the user to believe the volume started successfully.
Workaround. None.
The following message may appear after a node goes down, while a vxplex att operation is in progress, or if a vxplex/vxrecover operation is killed locally:
vxplex -g diskgroup det plex
vxvm:vxplex: ERROR: Volume volume is locked by another utility
Workaround. Clear the "tutil0" field of the volume and plex with the following commands:
# vxmend clear tutil0 volume
# vxmend clear tutil0 plex
After a vxdctl enable operation (or a vxdisk rm operation followed by a vxdctl enable operation), output from the vxdisk command still shows disk names that were renamed by devctl and no longer exist on the system.
Workaround. Delete the device from ptx/SVM altogether with the OLR process.
When a mirrored volume is stopped with vxvol -o force stop, a resynchronization is started on the volume. If the volume is forced to stop again and then restarted, the offset counter remains partway through the volume instead of being reset to zero.
Workaround. Do not use the -o force option to stop mirrored volumes. If you use the -o force option and the offset is not returned to zero, reboot the system to return the offset to zero.
The vxassist man page incorrectly includes information about an unsupported option (mirror=ctrl).
Workaround. Ignore all references to the ctrl option.
Problems can arise when the vxconfigd -k command is issued on the slave node and, while the command is processing, a shared disk group is deported from the master. When the new vxconfigd on the slave finishes starting, it reports the shared disk group as disabled but imported. The master does not show the shared disk group as imported. You cannot deport the disk group because the master does not see it, and you cannot import it again because the slave cannot see one or more of the disks in the disk group.
Workaround. Restart vxconfigd on the slave again.
If you force a disk into a disk group with vxdisk -f, but the disk previously belonged to a different disk group, the disk can end up belonging to both disk groups.
Workaround. Do not use the -f (force) option when adding a disk to a disk group.
Asynchronous reads at 32 KB I/O size cause performance problems because the system is set up for reads at 128 KB I/O size.
Workaround. None.
When both cluster nodes have a private disk group with the same name that contains shared disks, a warning message appears when the nodes are booted because, although the group names match, the disk IDs point to the other node's disk group.
Workaround. If using shared disks in private disk groups, you should not have the same disk group name on both nodes. Otherwise, you can just ignore the error message.
When you replace a mirrored disk that is part of a two-disk disk group, ptx/SVM does not recognize that this has occurred and data may become corrupted.
Workaround. None.
If quorum is lost and then regained, resilvering seems to hang.
Workaround. Stop the resilvering and restart it manually.
The S03reconfig script outputs the following incorrect error message:
vxvm:vxdisk: TO FIX: Usage:
vxdisk [-f] [-g diskgroup] [-qs] list [disk ...]
Workaround. Ignore the message.
If you change the name of a shared disk group and the master node panics, the new master may only partially pick up the name change. This may make the output of vxdisk list and vxdg list confusing.
Workaround. Deport the disk group from the new master and reimport it before bringing the node that is down back into the cluster.
Hot-spare disk takeover fails.
Workaround. None. Do not use hot sparing.
vxdiskadm fails to execute an OLR procedure on a sliced disk, or on a simple disk that is in the NODEVICE state, and gives no indication of the failure.
Workaround. Do not use vxdiskadm to perform OLR. Use the manual procedure described in the ptx/SVM Administration Guide.
Systems containing a large number of disks (on the order of 1600 disks) may run out of KVA between the time when vxconfigd is in boot mode and the time it is enabled.
Workaround. None.
If you create a dirty-region log (DRL) subdisk on a shared volume and the master crashes, the system will hang in S03svm-awaitjoin when it comes back up.
Workaround. Do not use dirty-region logging on shared volumes.
In certain circumstances, vxencap can overwrite data by rewriting a VTOC to a "sliced" disk. This situation can occur when the VTOC contains a type-8 partition at the beginning and a large type-1 partition for the rest of the disk, mimicking the private/public areas for "simple" disks.
Workaround. Do not use vxencap.
In a cluster, each time vxdisk list or vxdisk list disk is run from one of the nodes, multiple lines will be generated in the ktlog for each disk not in a shared disk group and owned by another node. The ktlog entries will look similar to the following:
3692ad76 16:25:26 note q0/e0/p51 vxvm:vxio: Cannot open disk: kernel error 16
Workaround. None.
ptx/SVM lets you create private disk groups that use shared disks and lets you give the disk groups the same name. If you create such disk groups on multiple nodes and then attempt to import one of the disk groups by name, ptx/SVM will arbitrarily choose which of the disk groups to import. (ptx/SVM will report, however, that there are other disks in the configuration in similarly named disk groups.)
Workaround. Import the disk group using its ID instead of its name. To determine a disk group's ID, issue the vxdg list command. Then use vxdg import ID instead of vxdg import name to import the disk group.
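That workaround can be scripted by picking the ID out of the vxdg list output. The sample output, group names, and column positions below are hypothetical; verify the format on your system before relying on it.

```shell
# Extract a disk group's ID from vxdg list-style output, then import by ID.
# 'vxdg_list' stands in for the output of: vxdg list
vxdg_list='NAME         STATE     ID
rootdg       enabled   783105689.1025.nodeA
privdg       enabled   791234567.1080.nodeA'

dgid=$(printf '%s\n' "$vxdg_list" | awk '$1 == "privdg" { print $3 }')
echo "$dgid"
# On a live system you would then run: vxdg import "$dgid"
```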
When private disk groups on shared disks are deported, messages similar to the following appear in the ktlog:
36c3854c 17:35:08 warn q0/e2/p55 release_dev_locks: Invalid device close flags.
36c38585 17:36:05 warn q0/e2/p55 release_dev_locks: Invalid device close flags.
...
Workaround. None. You can safely ignore the ktlog messages about "Invalid device close flags."
The vxtrace command only tracks local I/O; it does not work clusterwide.
Workaround. Use vxtrace on each individual node, or just for local I/O traffic on one node.
Chapter 7 of the ptx/SVM Administration Guide, "'Fixing' Objects and Moving Between Plex States," states that if an I/O error occurs on a last complete plex, the incomplete plex(es) mapping to the region of I/O failure should be disabled. In reality, however, the partial plexes mapping to the region of I/O failure (on the complete plex) are NOT disabled; they remain in the enabled state.
Workaround. Use of partial plexes is not recommended. Once write errors occur on the only complete plex (in regions that map to the corresponding partial plexes), read I/Os issued by the user might return inconsistent data, because the complete plex remains enabled even after the write error.
After the plex for ROOTVOL was renamed, all I/O to root was lost and then the system panicked.
Workaround. Avoid renaming the plex for ROOTVOL.
ptx/SVM V2.x does not provide an easy way to locate the ptx/SVM objects used by a particular disk media record or disk.
Workaround. None.
The vxdiskadm OLR procedure does not complete everything that it should. Disks are left in a removed state and plexes are left in a DISABLED REMOVED state.
Workaround. Use the manual OLR procedure described in the ptx/SVM Administration Guide. Do not use vxdiskadm to perform OLR.
Manual resilvering of shared volumes from the slave is possible but not recommended, for two reasons. First, there is a slight performance cost, since the master must still be involved in some aspects of the resilvering. Second, if the master dies while resilvering, the resilvering will hang until the slave becomes the master. If, during this time, the slave also dies, the volumes may be left in read/writeback mode, which incurs no data loss but a moderate performance penalty.
Workaround. Rebooting both nodes can clear the condition in some cases, but can be disruptive. In other cases, it may still be necessary to manually remove and recreate the volumes to clear this state. Removing and recreating these volumes without disrupting normal operation may be difficult or in some cases impossible; customers may need to call Customer Support for assistance.
vxassist lets you shrink active volumes.
Workaround. Do not attempt to shrink active volumes.
It is possible to boot the system from a disk that was once in the root disk group, but is not there now.
Workaround. Only boot the system from disks that are in the root disk group.
Do not use vxassist to create a mirror on the same disk as the first plex. vxassist will return the following error:
# vxassist mirror ROOTVOL disk
vxvm:vxassist: ERROR: Cannot allocate space to mirror nnnnnn block volume
Workaround. To use vxassist, specify a different spindle. Otherwise, use vxmake.
When a node is disabled, the other node may receive an uncorrectable write error on the ptx/CTC database when it tries to perform failover under the following conditions: During the time of the attempted write, all klog copies for the disk group the ptx/CTC database is in are disabled, and a join process is underway for the disk group. An open operation on the ptx/CTC volume succeeds, but a write to the volume fails.
Workaround. None.
A panic occurs on the system in volsiodone when parent->sio_ops->sop_childdone is NULL.
Workaround. Reboot the system.
A vxvol start command may fail to start a single-plex volume without warning.
Workaround. Start the volume by issuing the vxvol init active volume command.
The vxassist man page should state that the snapstart and snapshot options do not work for alternate root partitions.
Workaround. None. Mirror ROOTVOL instead.
The following confusing error message appears when an fsgen volume is reset to type root:
# vxedit set use_type=root volume
vxvm:vxedit: ERROR: Volume volume usage type is root: not gen or fsgen
This error message is confusing because the volume's usage type is fsgen.
Workaround. None.
A plex cannot contain subdisks that have punctuation (such as plus signs or apostrophes) in their names. (Underscores are acceptable.)
Workaround. None.
ptx/SVM does not allow you to have the same spindle in multiple disk groups, but it is possible to bypass this check by having one of the disk groups deported while adding a "nopriv" disk to another disk group.
Workaround. None.
vxconfigd drops core in memset() under the following conditions: The core file is generated on the master node of a two-node cluster. The nodes previously were distinct, unclustered nodes. When the nodes were brought together, the disk groups were imported as shared.
Workaround. Reboot the system.
A disk with no VTOC can be specifically defined as type "nopriv" (for example, vxdisk define sd5 type=nopriv). This disk could then be added to a shared disk group (where sd5 is on a shared spindle) with vxdg -g shareddg adddisk sd5, where shareddg is a previously defined shared disk group. However, a "nopriv" ptx/SVM disk has no private area to store the configuration database, and this could lead to data corruption since the disk can be added to different disk groups at different points in time.
ATTENTION The default for a vxdisk define operation is to create a disk of type "simple." So a vxdisk define sd5 command will create sd5 to be a "simple" ptx/SVM disk.
Another manifestation of this problem is that a "nopriv" ptx/SVM disk on a non-shared spindle (on a disk that has a VTOC) can be added to different private disk groups at different points in time when no other partition from the same disk is added to the disk group. This can also lead to data corruption.
Workaround. None. Avoid defining disks on shared spindles (with no VTOCs) as "nopriv" ptx/SVM disks and then adding these disks to different disk groups at different points in time. Also avoid adding "nopriv" ptx/SVM disks (with VTOCs on non-shared spindles) to different disk groups at different points in time when no other partition from the same disk is added to the disk group. Either of these actions could lead to data corruption.
The vxmend fix active command fails when many plexes are provided as an argument to the command.
Workaround. Do not use vxmend fix active.
RAID devices may report a public length of zero.
Workaround. None.
In a disk group composed of 554 "simple" disks from EMC arrays, 252 of the disks have active configuration and log copies. This results in poor performance of operations against the disk group.
Workaround. None.
The following error may occur on a ptx/SVM V2.1.x system:
vxvm:command: ERROR: Failed to obtain locks:
volume: Cannot remove spindle's last disk label
Workaround. There are three possible ways to work around this problem:
Reboot the system.
Restart vxconfigd with the -k option.
Issue vxdg rmdisk on the "nopriv" disks first, followed by the "sliced" disks.
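For the third workaround, the removal order can be derived from the disk types. The sketch below orders a vxdisk list-style table so "nopriv" disks come first; the sample table, disk names, and column layout are assumptions to verify against your own vxdisk list output.

```shell
# Order disks for removal: "nopriv" first, then "sliced", per workaround 3.
# 'vxdisk_list' stands in for the output of: vxdisk list
vxdisk_list='DEVICE  TYPE    DISK    GROUP  STATUS
sd10    nopriv  disk10  dg1    online
sd11    sliced  disk11  dg1    online
sd12    nopriv  disk12  dg1    online'

ordered=$(for t in nopriv sliced; do
    printf '%s\n' "$vxdisk_list" | awk -v t="$t" '$2 == t { print $3 }'
done)
echo "$ordered"
# Each disk would then be removed with: vxdg -g dg1 rmdisk <disk>
```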
A disk group can become disabled when new disks are added to a disk group that already contains several (over 500) disks.
Workaround. None.
A volume stranded in the SYNC state, when mounted as a UFS filesystem on a slave node, will have extremely poor performance.
Workaround. None.
vxconfigd hung on the master node under the following conditions: two nodes were booted to single-user mode and then the rc2.d scripts for ptx/SVM were manually started. The slave node was started first and then the master. The disk groups had to be imported manually. Finally, when both nodes were taken to multiuser mode, vxconfigd hung on the master node.
Workaround. Reboot the nodes.
When a subdisk is split with the vxsd command, the following error may appear:
vxvm:vxsd: ERROR: creating subdisk subdisk
Invalid attribute specification
Workaround. Restart vxconfigd with -k or rename the subdisk.
Once a shared disk group is renamed (with vxdg -n), and the master node is rebooted, information for the disk group is inconsistent and confusing; vxdg and vxdisk information do not agree on the new master.
Workaround. None.