FIN #: I0854-1
SYNOPSIS: Invalid Device structure found after disk removal when under Volume
Manager 3.2 Control
DATE: Nov/05/02
KEYWORDS: Invalid Device structure found after disk removal when under Volume
Manager 3.2 Control
---------------------------------------------------------------------
- Sun Proprietary/Confidential: Internal Use Only -
---------------------------------------------------------------------
FIELD INFORMATION NOTICE
(For Authorized Distribution by SunService)
SYNOPSIS: Invalid Device structure found after disk removal when under
Volume Manager 3.2 Control.
SunAlert: No
TOP FIN/FCO REPORT: Yes
PRODUCT_REFERENCE: Sun Fire V880
PRODUCT CATEGORY: Desktop / SW Admin
PRODUCTS AFFECTED:
Systems Affected:
-----------------
Mkt_ID Platform Model Description Serial Number
------ -------- ----- ----------- -------------
- A30 ALL Sun Fire V880 -
X-Options Affected:
-------------------
Mkt_ID Platform Model Description Serial Number
------ -------- ----- ----------- -------------
- A5200 ALL A5200 Storage Array -
PART NUMBERS AFFECTED:
Part Number Description Model
----------- ----------- -----
- - -
REFERENCES:
BugId: 4630477 - bogus device in "vxdisk list" output after replacing
a disk drive.
ESC: 534731 - bogus device in "vxdisk list" output after replacing
a disk drive.
URL: http://sdn.sfbay/cgi-bin/escweb?-I024271?-M534731?-P1
PROBLEM DESCRIPTION:
After an FC-AL disk that is under the control of Veritas Volume Manager 3.2
is removed from either the internal disk sub-system of a V880 or from an
A5x00, an 'Invalid device structure' error message can be seen. This causes
the replacement of the disk to fail, and a reboot is then needed to correct
the affected devices. Having to reboot the system nullifies disk hotswap,
a valuable RAS feature.
The following is an example of removing a disk that is under VM 3.2 control;
note the invalid device entry that is left behind in the final 'vxdisk list'
output:
# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: 4
Remove a disk for replacement
Menu: VolumeManager/Disk/RemoveForReplace
Use this menu operation to remove a physical disk from a disk group,
while retaining the disk name. This changes the state for the disk
name to a "removed" disk. If there are any initialized disks that are
not part of a disk group, you will be given the option of using one of
these disks as a replacement.
Enter disk name [<disk>,list,q,?] list
Disk group: rootdg
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
dm disk01 c1t11d0s2 sliced 4711 35358848 -
dm disk02 c1t13d0s2 sliced 4711 35358848 -
Enter disk name [<disk>,list,q,?] disk01
The following volumes will lose mirrors as a result of this operation:
vol01
No data on these volumes will be lost.
The following devices are available as replacements:
c1t2d0
Choose one of these disks now, to replace disk01.
Select "none" if you do not wish to select a replacement disk.
Choose a device, or select "none"
[<device>,none,q,?] (default: c1t2d0) none
The requested operation is to remove disk disk01 from disk group
rootdg. The disk name will be kept, along with any volumes using the
disk, allowing replacement of the disk.
Select "Replace a failed or removed disk" from the main menu
when you wish to replace the disk.
Continue with operation? [y,n,q,?] (default: y)
Removal of disk disk01 completed successfully.
Remove another disk? [y,n,q,?] (default: n)
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: q
Goodbye.
# luxadm remove_device /dev/rdsk/c1t11d0s2
WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.
The list of devices being used (either busy or reserved) by the host:
1: Box Name: "dak" slot 9
Please enter 's' or <CR> to Skip the "busy/reserved" device(s)
or
'q' to Quit and run the subcommand with
-F (force) option. [Default: s]:
# luxadm remove_device -F /dev/rdsk/c1t11d0s2   --> Had to force removal of device.
WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.
The list of devices which will be removed is:
1: Box Name: "dak" slot 9
Node WWN: 2000002037d9ff50
Device Type:Disk device
Device Paths:
/dev/rdsk/c1t11d0s2
Please verify the above list of devices and then enter 'c' or <CR> to
Continue or 'q' to Quit. [Default: c]:
stopping: Drive in "dak" slot 9....Done
offlining: Drive in "dak" slot 9....Done
Hit <Return> after removing the device(s).
Jun 6 08:51:52 eis-dak-f picld[233]:
Device DISK9 removed
Jun 6 08:51:52 eis-dak-f picld[233]: Device DISK9 removed
Drive in Box Name "dak" slot 9
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
c1t11d0s0
c1t11d0s1
c1t11d0s2
c1t11d0s3
c1t11d0s4
c1t11d0s5
c1t11d0s6
c1t11d0s7
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037f3d422,0
1. c1t1d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037bd2c91,0
2. c1t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff6a,0
3. c1t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff44,0
4. c1t4d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff4c,0
5. c1t5d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037e6057a,0
6. c1t8d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9fd70,0
7. c1t9d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff5d,0
8. c1t10d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff56,0
9. c1t11d0 <drive not available: formatting>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0
10. c1t12d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037bde2dc,0
11. c1t13d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff46,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
# devfsadm -C
# cd /dev/rdsk
# ls -al c1t11*
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s0 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:a,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s1 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:b,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s2 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:c,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s3 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:d,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s4 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:e,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s5 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:f,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s6 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:g,raw
lrwxrwxrwx 1 root root 74 May 22 08:10 c1t11d0s7 ->
../../devices/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100002037d9ff50,0:h,raw
None of the above device nodes should still exist; they are dangling entries
left behind by the forced removal.
# luxadm insert_device
Please hit <RETURN> when finished adding Fibre Channel
Enclosure(s)/Device(s):
Jun 6 08:54:49 eis-dak-f picld[233]: Device DISK9 inserted
Jun 6 08:54:49 eis-dak-f picld[233]: Device DISK9 inserted
Waiting for Loop Initialization to complete...
New Logical Nodes under /dev/dsk and /dev/rdsk :
c1t11d0s0
c1t11d0s1
c1t11d0s2
c1t11d0s3
c1t11d0s4
c1t11d0s5
c1t11d0s6
c1t11d0s7
No new enclosure(s) were added!!
# vxdctl enable
Jun 6 08:56:37 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: enabled path 118/0x48
belonging to the dmpnode 239/0x10
Jun 6 08:56:37 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: enabled path 118/0x48
belonging to the dmpnode 239/0x10
Jun 6 08:56:37 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: enabled dmpnode
239/0x10
Jun 6 08:56:37 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: enabled dmpnode
239/0x10
Jun 6 08:56:41 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: disabled path
118/0x208
belonging to the dmpnode 239/0x8
Jun 6 08:56:41 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: disabled path
118/0x208
belonging to the dmpnode 239/0x8
Jun 6 08:56:41 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: disabled dmpnode
239/0x8
Jun 6 08:56:41 eis-dak-f vxdmp: NOTICE: vxvm:vxdmp: disabled dmpnode
239/0x8
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced - - error
c1t1d0s2 sliced - - error
c1t2d0s2 sliced - - online
c1t3d0s2 sliced - - error
c1t4d0s2 sliced - - online
c1t5d0s2 sliced - - error
c1t8d0s2 sliced - - online
c1t9d0s2 sliced - - online
c1t10d0s2 sliced - - online
c1t11d0s2 sliced - - error
c1t11d0s2 sliced - - error --> Invalid Device
c1t12d0s2 sliced - - error
c1t13d0s2 sliced disk02 rootdg online
- - disk01 rootdg removed was:c1t11d0s2
The root cause of the problem is that if the device is not first removed
from VxVM's view, the drive's device node is left dangling in the device
tree instead of being removed. After the new drive is inserted, a new node
is created in the device tree, but the dangling node is never cleaned up,
which produces the duplicate "Invalid Device" entry shown above.
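For illustration only (this check is not part of the FIN procedure), the
dangling nodes can be spotted because their /dev/rdsk symbolic links point
at /devices paths that no longer resolve. A minimal sketch, assuming the
Bourne shell and the c1t11d0 device used in the example above; the
"dangling node" message text is purely illustrative:
# cd /dev/rdsk
# for link in c1t11d0s*
> do
>     ls -lL "$link" > /dev/null 2>&1 || echo "dangling node: $link"
> done
'ls -lL' follows the symbolic link and fails when the underlying /devices
entry is gone, so only the stale slices are reported.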
IMPLEMENTATION:
---
| | MANDATORY (Fully Proactive)
---
---
| X | CONTROLLED PROACTIVE (per Sun Geo Plan)
---
---
| | REACTIVE (As Required)
---
CORRECTIVE ACTION:
The following recommendation is provided as a guideline for authorized
Enterprise Services Field Representatives who may encounter the
above-mentioned problem.
This issue can be worked around by adding two steps to the normal
replacement procedure. After selecting option 4 from 'vxdiskadm', and
before running 'devfsadm -C', remove the disk from VxVM's view with the
'vxdisk rm' command and detach the corresponding ssd instance with
'luxadm -e offline'.
The steps needed are as follows (a consolidated command sketch is given
after the list):
1) vxdiskadm option 4.
2) vxdisk rm <disk access name, e.g. c3t25d0>  - added step
3) luxadm -e offline <device_path>             - detach the ssd instance
4) devfsadm -C                                 - clean up stale symlinks
5) luxadm insert_device.
6) vxdctl enable.
7) vxdiskadm option 5.
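For reference, a minimal consolidated sketch of steps 2) through 6), using
the c1t10d0 device from the example output below (the device and enclosure
names are taken from that example and will differ on other systems):
# vxdisk rm c1t10d0s2                    --> step 2, remove dangling VxVM entry
# luxadm -e offline /dev/rdsk/c1t10d0s2  --> step 3, detach the ssd instance
# devfsadm -C                            --> step 4, remove stale /dev links
# luxadm insert_device                   --> step 5, swap the drive when prompted
# vxdctl enable                          --> step 6, have VxVM rescan its devices
Step 7) is then run from 'vxdiskadm' to attach the replacement disk to the
disk group, exactly as shown in the example output that follows.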
Example output:
# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: 4
Remove a disk for replacement
Menu: VolumeManager/Disk/RemoveForReplace
Use this menu operation to remove a physical disk from a disk group,
while retaining the disk name. This changes the state for the disk
name to a "removed" disk. If there are any initialized disks that are
not part of a disk group, you will be given the option of using one of
these disks as a replacement.
Enter disk name [<disk>,list,q,?] list
Disk group: rootdg
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
dm disk01 c1t11d0s2 sliced 4711 35358848 -
dm disk02 c1t13d0s2 sliced 4711 35358848 -
Enter disk name [<disk>,list,q,?] disk01
The following volumes will lose mirrors as a result of this operation:
vol01
No data on these volumes will be lost.
The following devices are available as replacements:
c1t2d0
Choose one of these disks now, to replace disk01.
Select "none" if you do not wish to select a replacement disk.
Choose a device, or select "none"
[<device>,none,q,?] (default: c1t2d0) none
The requested operation is to remove disk disk01 from disk group
rootdg. The disk name will be kept, along with any volumes using the
disk, allowing replacement of the disk.
Select "Replace a failed or removed disk" from the main menu
when you wish to replace the disk.
Continue with operation? [y,n,q,?] (default: y)
Removal of disk disk01 completed successfully.
Remove another disk? [y,n,q,?] (default: n)
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: q
Goodbye.
# vxdisk rm c1t10d0s2 --------> Added Step
# luxadm -e offline <device_path like /dev/rdsk/c1t10d0s2>
# devfsadm -C
# cd /dev/rdsk
# ls c1t10*
c1t10*: No such file or directory
# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel
Enclosure(s)/Device(s):
Waiting for Loop Initialization to complete...
New Logical Nodes under /dev/dsk and /dev/rdsk :
c1t10d0s0
c1t10d0s1
c1t10d0s2
c1t10d0s3
c1t10d0s4
c1t10d0s5
c1t10d0s6
c1t10d0s7
No new enclosure(s) were added!!
# vxdctl enable
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced - - error
c1t1d0s2 sliced - - error
c1t2d0s2 sliced - - online
c1t3d0s2 sliced - - online invalid
c1t4d0s2 sliced - - online
c1t5d0s2 sliced - - error
c1t8d0s2 sliced - - online
c1t9d0s2 sliced - - online
c1t10d0s2 sliced - - online
c1t11d0s2 sliced disk01 rootdg online
c1t12d0s2 sliced - - error
c1t13d0s2 sliced - - online invalid
- - disk02 rootdg removed was:c1t10d0s2
# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: 5
Replace a failed or removed disk
Menu: VolumeManager/Disk/ReplaceDisk
Use this menu operation to specify a replacement disk for a disk that
was removed with the "Remove a disk for replacement" menu operation, or
that failed during use. You will be prompted for a disk name to
replace and a disk device to use as a replacement. You can choose an
uninitialized disk, in which case the disk will be initialized, or
choose a disk that you have already initialized using the Add or initialize
a disk menu operation.
Select a removed or failed disk [<disk>,list,q,?] list
Disk group: rootdg
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
dm disk02 - - - - REMOVED
Select a removed or failed disk [<disk>,list,q,?] disk01
The following devices are available as replacements:
c1t2d0s2
Choose one of these disks to replace disk01.
Choose "none" to initialize another disk to replace disk01.
Choose a device, or select "none"
[<device>,none,q,?] (default: c1t2d0s2) none
Select disk device to initialize [<address>,list,q,?] c1t13d0
The requested operation is to initialize disk device c1t13d0 and
to then use that device to replace the removed or failed disk
disk02 in disk group rootdg.
Continue with operation? [y,n,q,?] (default: y)
Use a default private region length for the disk?[y,n,q,?] (default: y)
vxbootsetup: NOTE: Root file system is not defined on a volume.
Replacement of disk disk02 in group rootdg with disk device
c1t13d0 completed successfully.
Replace another disk? [y,n,q,?] (default: n)
Volume Manager Support Operations
Menu: VolumeManager/Disk
# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: q
Goodbye.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced - - error
c1t1d0s2 sliced - - error
c1t2d0s2 sliced - - online
c1t3d0s2 sliced - - online invalid
c1t4d0s2 sliced - - online
c1t5d0s2 sliced - - error
c1t8d0s2 sliced - - online
c1t9d0s2 sliced - - online
c1t10d0s2 sliced - - online
c1t11d0s2 sliced disk01 rootdg online
c1t12d0s2 sliced - - error
c1t13d0s2 sliced disk02 rootdg online
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm disk01 c1t13d0s2 - 35358848 - - - -
dm disk02 c1t11d0s2 - 35358848 - - - -
v vol01 fsgen ENABLED 2097152 - ACTIVE ATT1 -
pl vol01-01 vol01 ENABLED 2101552 - ACTIVE - -
sd disk01-01 vol01-01 ENABLED 2101552 0 - - -
pl vol01-02 vol01 ENABLED 2101552 - ACTIVE ATT -
sd disk02-01 vol01-02 ENABLED 2101552 0 - - -
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm disk01 c1t13d0s2 - 35358848 - - - -
dm disk02 c1t11d0s2 - 35358848 - - - -
v vol01 fsgen ENABLED 2097152 - ACTIVE - -
pl vol01-01 vol01 ENABLED 2101552 - ACTIVE - -
sd disk01-01 vol01-01 ENABLED 2101552 0 - - -
pl vol01-02 vol01 ENABLED 2101552 - ACTIVE - -
sd disk02-01 vol01-02 ENABLED 2101552 0 - - -
COMMENTS:
Using the 'vxdisk rm' command in conjunction with 'luxadm -e offline' allows
the disk replacement to succeed without having to reboot the system,
maintaining uptime and availability and keeping disk hotswap a viable
option. The advantage of this procedure is that disk replacement succeeds
without a reboot whether the disk was pulled before any VM action was taken
or the steps outlined above are followed in order. With reference to
SRDB 17003, using vxdiskadm option 11 to offline a disk through VM will work
in some cases; in other cases, where the disk has been physically pulled
first, option 11 does not clean up any opens that are left and the duplicate
entries result. A quick way to check for such duplicate entries is sketched
below.
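As an illustration only (the pipeline below is a generic shell sketch, not
a VxVM-supplied command), duplicate device names in the 'vxdisk list'
output can be detected by printing the first column, skipping the header
line and the "-" placeholder rows for removed disks, and looking for
repeats:
# vxdisk list | awk 'NR > 1 && $1 != "-" {print $1}' | sort | uniq -d
c1t11d0s2                      --> any name printed here appears twice
The c1t11d0s2 line corresponds to the duplicate entry shown in the failure
example earlier in this FIN; on a healthy configuration the pipeline prints
nothing.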
============================================================================
Implementation Footnote:
i) In case of MANDATORY FINs, Enterprise Services will attempt to
contact all affected customers to recommend implementation of
the FIN.
ii) For CONTROLLED PROACTIVE FINs, Enterprise Services mission critical
support teams will recommend implementation of the FIN (to their
respective accounts), at the convenience of the customer.
iii) For REACTIVE FINs, Enterprise Services will implement the FIN as the
need arises.
----------------------------------------------------------------------------
All released FINs and FCOs can be accessed using your favorite network
browser as follows:
SunWeb Access:
--------------
* Access the top level URL of http://sdpsweb.ebay/FIN_FCO/
* From there, select the appropriate link to query or browse the FIN and
FCO Homepage collections.
SunSolve Online Access:
-----------------------
* Access the SunSolve Online URL at http://sunsolve.Corp/
* From there, select the appropriate link to browse the FIN or FCO index.
Internet Access:
----------------
* Access the top level URL of https://infoserver.Sun.COM
--------------------------------------------------------------------------
General:
--------
* Send questions or comments to finfco-manager@sdpsweb.EBay
--------------------------------------------------------------------------
Copyright (c) 1997-2003 Sun Microsystems, Inc.