RAID 5 Volumes (Tasks)
This chapter provides information about performing Solaris Volume Manager tasks that are associated with RAID 5 volumes. For information about the concepts involved in these tasks, see Chapter 13, RAID 5 Volumes (Overview).
RAID 5 Volumes (Task Map)
The following task map identifies the procedures needed to manage Solaris Volume Manager RAID 5 volumes.
Task | Description | Instructions |
---|---|---|
Create RAID 5 volumes | Use the Solaris Volume Manager GUI or the metainit command to create RAID 5 volumes. | "How to Create a RAID 5 Volume" |
Check the status of RAID 5 volumes | Use the Solaris Volume Manager GUI or the metastat command to check the status of RAID 5 volumes. | "How to Check the Status of RAID 5 Volumes" |
Expand a RAID 5 volume | Use the Solaris Volume Manager GUI or the metattach command to expand RAID 5 volumes. | "How to Expand a RAID 5 Volume" |
Enable a slice in a RAID 5 volume | Use the Solaris Volume Manager GUI or the metareplace command to enable slices in RAID 5 volumes. | "How to Enable a Component in a RAID 5 Volume" |
Replace a slice in a RAID 5 volume | Use the Solaris Volume Manager GUI or the metareplace command to replace slices in RAID 5 volumes. | "How to Replace a Component in a RAID 5 Volume" |
Creating RAID 5 Volumes
How to Create a RAID 5 Volume
Check "Prerequisites for Creating Solaris Volume Manager Elements" and "Background Information for Creating RAID 5 Volumes".
To create the RAID 5 volume, use one of the following methods:
From the Enhanced Storage tool within the Solaris Management Console, open the Volumes node, then choose Action->Create Volume and follow the steps in the wizard. For more information, see the online help.
Use the following form of the metainit command:
metainit name -r component component component
name is the name for the volume to create.
-r specifies that the volume is a RAID 5 volume.
component specifies a slice or soft partition to include in the RAID 5 volume.
To specify an interlace value, add the -i interlace-value option. For more information, see the metainit(1M) man page.
Example--Creating a RAID 5 Volume of Three Slices
```
# metainit d45 -r c2t3d0s2 c3t0d0s2 c4t0d0s2
d45: RAID is setup
```
In this example, the RAID 5 volume d45 is created with the -r option from three slices. Because no interlace value is specified, d45 uses the default of 16 Kbytes. The system verifies that the RAID 5 volume has been set up, and begins initializing the volume.
Note - You must wait for the initialization to finish before you can use the RAID 5 volume.
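One way to honor this note in a script is to poll the volume's state until it is no longer initializing. The sketch below is an illustration, not part of the Solaris Volume Manager documentation: on a Solaris system the body of `check()` would be `metastat d45 | grep -q Initializing`, but here a simple counter stands in for `metastat` so the polling pattern is runnable anywhere.

```shell
#!/bin/sh
# Sketch: block until a RAID 5 volume finishes initializing.
# On Solaris, check() would be:
#   check() { metastat d45 | grep -q Initializing; }
# A counter stands in for metastat here so the sketch is runnable.
i=0
check() {
    i=$((i + 1))
    [ "$i" -lt 3 ]   # pretend the volume finishes after three polls
}

while check; do
    sleep 1          # poll interval; use something longer in practice
done
echo "volume ready"
```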
Where to Go From Here
To prepare the newly created RAID 5 volume for a file system, see "Creating File Systems (Tasks)" in System Administration Guide: Basic Administration. An application, such as a database, that uses the raw volume must have its own way of recognizing the volume.
To associate a hot spare pool with a RAID 5 volume, see "How to Associate a Hot Spare Pool With a Volume".
Maintaining RAID 5 Volumes
How to Check the Status of RAID 5 Volumes
To check status on a RAID 5 volume, use one of the following methods:
From the Enhanced Storage tool within the Solaris Management Console, open the Volumes node and view the status of the volumes. Choose a volume, then choose Action->Properties to see more detailed information. For more information, see the online help.
Use the metastat command.
For each slice in the RAID 5 volume, the metastat command shows the following:
"Device" (device name of the slice in the stripe)
"Start Block" on which the slice begins
"Dbase" to show if the slice contains a state database replica
"State" of the slice
"Hot Spare" to show the slice, if any, being used as a hot spare for a failed slice
Example--Viewing RAID 5 Volume Status
Here is sample RAID 5 volume output from the metastat command.
```
# metastat
d10: RAID
    State: Okay
    Interlace: 32 blocks
    Size: 10080 blocks
Original device:
    Size: 10496 blocks
        Device      Start Block  Dbase  State  Hot Spare
        c0t0d0s1    330          No     Okay
        c1t2d0s1    330          No     Okay
        c2t3d0s1    330          No     Okay
```
The metastat command output identifies the volume as a RAID 5 volume. For each slice, it shows the device name, the starting block, that none of these slices contains a state database replica, that all of the slices are in the "Okay" state, and that none of the slices is acting as a hot spare replacement for a failed slice.
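When scripting health checks, the per-slice state can be pulled out of metastat output with standard text tools. The following sketch is illustrative: it reuses the sample output above via a here-document, where on a live system you would pipe `metastat d10` instead, and `d10` and the slice names are just the example's values.

```shell
#!/bin/sh
# Sketch: flag any RAID 5 slice whose state is not "Okay".
# The here-document reuses the sample metastat output above;
# on a live system, pipe `metastat d10` instead.
cat <<'EOF' > /tmp/metastat-d10.out
d10: RAID
    State: Okay
    Interlace: 32 blocks
    Size: 10080 blocks
Original device:
    Size: 10496 blocks
        Device      Start Block  Dbase  State  Hot Spare
        c0t0d0s1    330          No     Okay
        c1t2d0s1    330          No     Okay
        c2t3d0s1    330          No     Okay
EOF

# Slice rows begin with a cXtYdZsN device name; field 4 is the state.
awk '$1 ~ /^c[0-9]+t[0-9]+d[0-9]+s[0-9]+$/ && $4 != "Okay" { print $1 " is in state " $4; bad = 1 }
     END { if (!bad) print "all slices Okay" }' /tmp/metastat-d10.out
```

Run against the sample data, this prints `all slices Okay`; a degraded volume would list each problem slice and its state instead.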
RAID 5 Volume Status Information
The following table explains RAID 5 volume states.
Table 14-1 RAID 5 States
State | Meaning |
---|---|
Initializing | Slices are in the process of having all disk blocks zeroed. This process is necessary due to the nature of RAID 5 volumes with respect to data and parity interlace striping. Once the state changes to "Okay," the initialization process is complete and you are able to open the device. Up to this point, applications receive error messages. |
Okay | The device is ready for use and is currently free from errors. |
Maintenance | A slice has been marked as failed due to I/O or open errors that were encountered during a read or write operation. |
The slice state is perhaps the most important information when you are troubleshooting RAID 5 volume errors. The RAID 5 state only provides general status information, such as "Okay" or "Needs Maintenance." If the RAID 5 volume reports a "Needs Maintenance" state, refer to the slice state. The recovery action differs depending on whether the slice is in the "Maintenance" state or the "Last Erred" state. If you only have a slice in the "Maintenance" state, it can be repaired without loss of data. If you have a slice in the "Maintenance" state and a slice in the "Last Erred" state, data has probably been corrupted. You must fix the slice in the "Maintenance" state first, then the "Last Erred" slice. See "Overview of Replacing and Enabling Components in RAID 1 and RAID 5 Volumes".
The following table explains the slice states for a RAID 5 volume and possible actions to take.
Table 14-2 RAID 5 Slice States
State | Meaning | Action |
---|---|---|
Initializing | Slices are in the process of having all disk blocks zeroed. This process is necessary due to the nature of RAID 5 volumes with respect to data and parity interlace striping. | Normally none. If an I/O error occurs during this process, the device goes into the "Maintenance" state. If the initialization fails, the volume is in the "Initialization Failed" state, and the slice is in the "Maintenance" state. If this happens, clear the volume and re-create it. |
Okay | The device is ready for use and is currently free from errors. | None. Slices can be added or replaced, if necessary. |
Resyncing | The slice is actively being resynchronized. An error has occurred and been corrected, a slice has been enabled, or a slice has been added. | If desired, monitor the RAID 5 volume status until the resynchronization is done. |
Maintenance | A single slice has been marked as failed due to I/O or open errors that were encountered during a read or write operation. | Enable or replace the failed slice. See "How to Enable a Component in a RAID 5 Volume", or "How to Replace a Component in a RAID 5 Volume". The metastat command will show an invoke recovery message with the appropriate action to take with the metareplace command. |
Maintenance/ Last Erred | Multiple slices have encountered errors. The state of the failed slices is either "Maintenance" or "Last Erred." In this state, no I/O is attempted on the slice that is in the "Maintenance" state, but I/O is attempted to the slice marked "Last Erred" with the outcome being the overall status of the I/O request. | Enable or replace the failed slices. See "How to Enable a Component in a RAID 5 Volume", or "How to Replace a Component in a RAID 5 Volume". The metastat command will show an invoke recovery message with the appropriate action to take with the metareplace command, which must be run with the -f flag. This state indicates that data might be fabricated due to multiple failed slices. |
Note - RAID 5 volume initialization or resynchronization cannot be interrupted.
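The recovery guidance in the two "Maintenance" rows of Table 14-2 can be scripted: metastat prints an "Invoke:" line naming the metareplace command to run, and the -f flag is needed when a slice is in the "Last Erred" state. The sketch below is illustrative: the here-document stands in for `metastat d10` output on a degraded volume, and the exact wording of the "Invoke:" line is an assumption drawn from the sample format, so verify it against your system's output.

```shell
#!/bin/sh
# Sketch: extract the suggested recovery command from metastat output,
# and note when metareplace must be forced with -f. The sample output
# is illustrative; pipe `metastat d10` on a live system instead.
cat <<'EOF' > /tmp/metastat-err.out
d10: RAID
    State: Needs Maintenance
    Invoke: metareplace d10 c0t0d0s1 <new device>
        Device      Start Block  Dbase  State        Hot Spare
        c0t0d0s1    330          No     Maintenance
        c1t2d0s1    330          No     Last Erred
        c2t3d0s1    330          No     Okay
EOF

# Print the recommended metareplace invocation, if metastat gave one.
sed -n 's/^[[:space:]]*Invoke: //p' /tmp/metastat-err.out

# Per Table 14-2, a "Last Erred" slice means metareplace needs -f.
if grep -q 'Last Erred' /tmp/metastat-err.out; then
    echo "note: a slice is Last Erred; run metareplace with -f"
fi
```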