ptx/RAID V2.1.0 provides the device driver and administration utilities for a Disk Array Subsystem (DASS) on a SCSI channel. This release supports up to 10.1 TB of data storage on a single Symmetry® 5000 or NUMA-Q system running DYNIX/ptx V4.5.0. High data availability, capacity, and protection are achieved by configuring disk arrays in the DASS unit with RAID-5 striping and parity-based data reconstruction. RAID-0, RAID-1, and RAID-1/0 functionality is also supported. Hardware fault protection is increased with the Application Transparent Alternate Path (ATAP) feature.
The following configurations are supported:
The ptx/RAID Version 2.1.0 with ATAP functionality requires the following software:
DYNIX/ptx V4.5.0 or higher operating system.
CLARiiON Licensed Internal Code (FLARE code), Version 9.56.03 (packaged with ptx/RAID V2.1.0)
In Symmetry® 5000 systems, QCIC V3.4.
To use ATAP with Oracle® database software, you must have Oracle V7.3.2 or later. Previous versions of Oracle cannot tolerate the I/O delays during trespass operations.
The ptx/RAID V2.1.0 product requires approximately 1.5 MB of free disk space in the root filesystem and 3.5 MB of space in the /usr filesystem for installation.
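To confirm that enough space is available before installation, you can check the root and /usr filesystems; the following invocation is a generic sketch, and the output format varies by release.
# df / /usr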
The ptx/RAID Version 2.1.0 with ATAP requires the following hardware:
Symmetry 5000 or NUMA-Q systems
CLARiiON Series 2000 or 3000 disk arrays with Phoenix Storage Processor (SP) boards
Any Fibre Channel Bridge supported for NUMA-Q systems. In Symmetry 5000 systems, a QCIC-W SCSI Wide Quad Channel I/O Controller or QCIC-E Extended Quad Channel I/O Controller is required.
ptx/RAID V2.1.0 is a minor release containing the following changes:
New LIC (FLARE) code, version 9.56.03
The SCSI bus parent can now be deconfigured if the rd paths it owns are not active (assuming all other devices on the SCSI bus have redundant paths and can also be deconfigured).
To perform OLR on a bridge or SCSI bus, execute a manual trespass for each rd device that has an active path on the SCSI bus, as illustrated in the sketch below. The SCSI bus(es) can then be deconfigured, followed by the bridge.
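The following sketch is illustrative only: the rd and scsibus instance names are hypothetical, the bridge device name is a placeholder, and atapctl -a is assumed to serve as the manual trespass by activating the standby path (consult the atapctl man page for the exact procedure on your system).
# atapctl -a rd0
# devctl -d scsibus4
# devctl -d <bridge-device>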
Note that target-initiator negotiation (TIN) must be turned off for all FC Bridge-connected (NUMA-Q) systems. If TIN is left on, deconfiguring either the bridge or all rd devices on it can lead to hung bridges or SCSI buses. The procedure for disabling TIN is as follows.
In the CLARiiON Grid Manager software, select Engineering mode. For details on accessing and navigating CLARiiON Grid Manager, see the NUMA-Q Administration Guide for CLARiiON Disk Arrays.
Follow this menu path: Change Parameters->Change SP Parameters->Change Host Interface Options
A list of parameters and their settings will appear. The current setting of each parameter appears in square brackets [ ] and can be accepted by pressing the Return key. For the parameters whose settings are to be changed, type the new setting shown to the right of the brackets in the listing below and then press Return.
Target Initiator Negotiation [Enabled] Disabled
Substitute Busy for QFULL: [Disabled]
Mode Page 8: [Disabled]
Recovered Error Log Report:
Allow Non-Mirrored Cache: [Enabled]
Auto Trespass: [Disabled]
PQ3: [Disabled]
No Spaces in PID:
Spindle Sync:
Trespass Logging: [Enabled]
Vendor Unique Inquiry support: No
Statistics Logging: [Disabled]
Period Error Reporting: [Disabled]
Apply Changes to both SP's (Y/N)? [N] Y
Make Changes as selected (Y/N)? [N] Y
Reboot the DASS unit as follows:
Type {}{} to enter Diagnostics mode.
At the PHX-Diag prompt, type Reset. The following set of prompts is displayed. Respond as shown here (press the Return key to accept the current setting for each parameter).
Cold/Warm reset [C, W] = W? C
Execute Local SCSI Bus Reset [Y, N] = Y
Execute Local (CPU) Reset [Y, N] = Y
Wait for approximately 5 seconds while the DASS unit reboots.
The following list contains reports of problems that were fixed in ptx/RAID V2.1.0.
This documentation and the raiddown man page have been updated to note that all devices on all nodes in a cluster must have their VTOC removed before downloading FLARE code. The closure of PR 249145 also closes PR 242767 and PR 240471.
If a LUN is not available, both paths to it are marked as "inactive". ptx/RAID was modified to output the following message when this state occurs, rather than returning an assertion failure or unexpected state:
rdN: Two non-active paths were found to this device.
If the above message is output, proceed as follows:
Correct the rd device (LUN).
Deconfigure the device with devctl -d rdn where n is the number of the rd device.
Reconfigure the device with devctl -c scsibusx and devctl -c scsibusy, where scsibusx and scsibusy are the parent devices of rdn.
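For example, assuming the affected device is rd0 and its parent buses are scsibus4 and scsibus5 (the device names are illustrative), the recovery sequence would be:
# devctl -d rd0
# devctl -c scsibus4
# devctl -c scsibus5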
If a non-active path was activated using atapctl -a, the path became active, but no I/O could be sent to the device. This command now functions correctly.
To install and configure your system for ptx/RAID, perform the following steps:
Install the ptx/RAID software product following the instructions in the DYNIX/ptx V4.5.1 and Layered Products Software Installation Release Notes.
ATTENTION When DYNIX/ptx is upgraded from V4.2.x to V4.5.x and ptx/RAID is upgraded from V1.4.0 to V2.1.0, the /etc/dumpconf command is removed. After the upgrade, rename /etc/dumpconf.prog to /etc/dumpconf.
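For example:
# mv /etc/dumpconf.prog /etc/dumpconf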
Download LIC code version 9.56.03. See Section 1.3 for details.
Open the file /usr/conf/uts/io/scsidisk/scsidisk_space.c and check that the following parameter is set as shown. The value is expressed in seconds.
int scsidisk_standard_timeout = 150;
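One way to verify the setting without opening an editor is to search for it with grep; the output shown assumes the file contains the default value.
# grep scsidisk_standard_timeout /usr/conf/uts/io/scsidisk/scsidisk_space.c
int scsidisk_standard_timeout = 150;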
On non-clustered systems, build (configure and compile) the kernel as described in the DYNIX System Configuration and Performance Guide; then reboot the system using the new kernel.
The ptx/RAID software installation is now complete.
The following caveats and limitations apply to RAID devices when connected to NUMA-Q systems:
RAID devices must be used for data storage only. They are not supported as boot, root, main swap, or dump devices.
The limitations on the size of partitions in DYNIX/ptx apply to RAID devices.
If you use RAID devices with ptx/SVM, the ptx/SVM database must conform to the size limitations specified by IBM.
The ptx/RAID V2.1.0 software does not support the following functions:
The format utility.
Writing or reading diagnostic tracks, online diagnostics, or stand-alone diagnostics. ptx/RAID V2.1.0 currently runs only on DYNIX/ptx V4.5.x. A problem (PR 249698) in the VDC software within DYNIX/ptx V4.5.0 prevents the diagnostic programs from recognizing rd devices. In DYNIX/ptx V4.4.x and associated layered products such as ptx/RAID V2.0.4, online and stand-alone diagnostics detect rd devices without difficulty.
This section describes problems that have been reported against ptx/RAID V2.1.0.
The number that appears in the title of each problem report is the problem-tracking-system number assigned to the report. These problem reports will be fixed in a future release unless otherwise noted.
In rare cases, during a ptx/RAID failover operation, I/O to the affected DASS may be suspended for several minutes as a result of cascading command timeouts caused by driver-initiated retries. Applications intolerant of these delays may be negatively affected. The resultant behavior is unpredictable and application-specific.
Workaround. Contact Customer Support for more information.
The CLARiiON RAID VTOC description file rd5_5x9gb contains an error which wastes 101000 sectors of disk space.
Workaround. Partition #9 should start at sector 3686808, not sector 3696908. Shift all of the following partitions accordingly, or add the extra space to the last partition on the disk, or create a new partition at the end of the disk to use the space.
If the atapctl -c rdn command is run on one node in the cluster, an I/O error occurs when the other nodes in the cluster try to send data down the path that has been placed in the available (standby) state. Eventually all other nodes in the cluster mark the previously active path as dead.
Workaround. Manually recover the dead paths on all nodes in the cluster using the atapctl -r rdn command.
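For example, if the affected device is rd0 (the device number is illustrative), run the following command on each node in the cluster:
# atapctl -r rd0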
Workaround. If both SPs are on the same bus, manually deconfigure the RAID devices before running ffutil. The following example shows how to deconfigure RAID devices using devctl.
# devctl -d rd0
devctl: deconfiguring rd0 from scsibus4, scsibus5
devctl: deleted rd0
MAKEDEV.rd creates device files with incorrect minor numbers, and should not be used. Because devctl automatically creates and modifies device files, there is no need to use MAKEDEV.rd.
Workaround. Use devctl.
In both the default RAID-5 configuration and mixed RAID configurations within a DASS unit, the raiddisp command fails to indicate the numbers assigned to the LUNs; instead, it displays asterisks where the LUN numbers should be.
Workaround. Use the Grid Manager utility over a serial connection to display the LUN numbers properly. See the NUMA-Q® Administration Guide for CLARiiON® Disk Arrays for details.
This problem is seen when all the following conditions exist:
The host is a NUMA-Q system;
A device (such as the DASS) is deconfigured on a SCSI bus that is attached to a Fibre Channel bridge;
Subsequent configuration operations are performed over the bridge;
The rd device designated as LUN 0 on that SCSI bus is in "available" (standby) mode rather than active mode;
Target-initiator negotiation (TIN) is enabled on the DASS unit to which LUN 0 belongs.
Workaround: Disable TIN on the DASS unit. The procedure for disabling TIN assumes that your system has the following hardware, software, and firmware versions:
A NUMA-Q host
LIC version 9.56.03 and PROM code version 1.73 or above on the DASS
DYNIX/ptx V4.5.1
Bridge firmware version 1.5.1
ATTENTION If you are operating your DASS unit with a Symmetry host running DYNIX/ptx V4.5.0, disabling TIN may cause performance degradation after an online replacement of rd devices. Leave TIN enabled on Symmetry systems with this version of the operating system. The problem described by this report occurs only on NUMA-Q systems connected over the FC Bridge.
Workaround: Use the devdestroy command to remove all VTOCs for all LUNs in a given DASS. In the case of a clustered system, perform this action for all nodes in the cluster.
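A minimal sketch, assuming devdestroy accepts the rd device name as its argument (consult the devdestroy man page for the exact syntax) and that rd0 is one of the LUNs in the DASS; repeat for each LUN.
# devdestroy rd0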
Workaround: None.