InfoDoc ID: 21751
Synopsis: Sun StorEdge[TM] T3 Array Quick and Easy Installation Procedure
Date: 15 Apr 2002
INTRODUCTION
============
This document is a high-level overview to be used as a "quick" guide for
installing the Sun StorEdge[TM] T3 disk array subsystem in single-unit or
partner-group configurations. Only overviews of the install process are
presented here. For detailed step-by-step installation instructions, refer to
the T3 Installation, Operation and Service Guide, p/n 806-1062-10, REV B, and
the Administrator's Guide, p/n 806-1063, REV B.
It is recommended that you review this entire guide before proceeding with the
installation.
NOTE: Installation procedures in this guide are performed via Ethernet from the
management host. Component Manager and STORtools installation instructions are
not covered.
The host functions required for the T3 are as follows:
Application or "Data" host: the host connected to the T3 via the FC-AL
interface, through which customer data moves.
Management or "Admin" host: the host from which the T3 is administered via the
network connection.
Boot or "TFTP" host: the host where the firmware image is located, which can
act as a network boot device for the T3.
The "Data", "Admin" and "TFTP" hosts can be the same host or different hosts.
The names simply identify the functions these hosts perform in relation to the
T3.
SUPPORTED CONFIGURATIONS
========================
Currently, two administrative domain configurations are supported:
Single controller unit. This standalone disk tray is a high-performance, high-RAS
configuration with a single hardware RAID cached controller. The unit is fully
populated with redundant hot-swap components and nine disk drives.
Partner group. This is a configuration of two controller units paired using
interconnect cables for back-end data and administrative connections. The partner
group provides all the RAS of single controller units, plus redundant hardware RAID
controllers with mirrored caches, and redundant host channels for continuous data
availability for host applications.
PROCEDURE
=========
The following steps are the high-level installation instructions. Where
necessary, these steps refer the installer to pages in the T3 Installation,
Operation and Service manual and the T3 Administrator's Guide.
For readability, these manuals are referred to throughout the installation as
the IOS Guide and the Admin Guide.
Install Process
===============
1. Determine the optimal T3 configuration for the customer environment. Confirm
that the default RAID and volume configuration meet customer requirements.
2. Complete the site preparation. Obtain the T3 disk tray names, IP addresses,
gateway, netmask and tftpboot host IP addresses from the customer contact. The
Data, Admin and TFTP hosts may be the same or different hosts.
3. Unpack and inspect the disk tray. You should have the following items:
One media interface adapter (MIA)
One 5m fiber-optic cable
Two power cords
Two loop interconnect cables
4. Set the T3 unit(s) on a tabletop in their permanent location.
5. Record the MAC address. It is located on a pull-out tab inside the front
cover, at the left front of the unit, next to drive 1.
6. Modify the following files on the Admin host for all attached T3 arrays. Make
backup copies of each file before modifying. The Data, Admin and TFTP hosts can be
the same or different hosts.
/etc/ethers Add MAC address and T3 disk tray name
/etc/hosts Add disk tray IP address and disk tray name from /etc/ethers file
/etc/nsswitch.conf Move [NOTFOUND=return] to end of "hosts" and "ethers" lines
/etc/syslog.conf Add "local7.warn /var/adm/messages"
NOTE: TAB between local7.warn and /var/adm/messages
Restart the syslog daemon after making any changes to the syslog.conf file.
OPTION: Add "local7.warn /var/adm/messages.t3" This file must be created
manually and will contain only error messages from the configured
T3(s). Manual maintenance of this file is required.
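As a sketch, the step-6 entries can be generated with a short shell fragment.
The MAC address, tray name and IP below are hypothetical placeholders; the
intended destination file for each line is noted in a trailing comment, and the
actual appends should be done as root only after backing up each file:

```shell
#!/bin/sh
# Hypothetical values -- substitute the MAC address recorded in step 5
# and the name/IP supplied by the customer contact.
T3_MAC="0:20:f2:0:0:1"
T3_NAME="t3-array-1"
T3_IP="192.168.1.50"

# Print the lines to append (destination file noted per line):
printf '%s %s\n'  "$T3_MAC" "$T3_NAME"        # >> /etc/ethers
printf '%s\t%s\n' "$T3_IP"  "$T3_NAME"        # >> /etc/hosts  (TAB-separated)
printf 'local7.warn\t/var/adm/messages\n'     # >> /etc/syslog.conf (TAB, not spaces)
```

Printing the lines first lets you review them before redirecting each one into
the corresponding file.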
7. RARP is the suggested method to download and set the T3 IP address. Review
this method with the customer and determine whether it is acceptable in their
environment. If RARP cannot be used, a Sun Service Field Engineer must
manually set the IP address on the T3.
NOTE: Switches or routers in the Ethernet path between the host and the T3 can
cause RARP/ARP connection problems. Verify that the switches and
routers can pass RARP broadcasts during the T3 installation.
To verify that the RARP daemon is running on the Admin host, execute:
# ps -eaf | grep rarpd
If it is not running, start it on the Admin host
by executing this command as root:
# /usr/sbin/in.rarpd -a &
Verify that the file /etc/init.d/inetsvc on the Admin host contains the lines
required for automated RARP startup. If they are not present, add the
following lines to the file and save it:
#echo "network interface configuration:"
#/usr/sbin/ifconfig -a
#Add rarpd daemon for T3
/usr/sbin/in.rarpd -a &
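The step-7 daemon check can be wrapped in a small conditional; this is a sketch
for the Admin host. The bracketed [i] in the grep pattern keeps grep from
matching its own entry in the ps output, and the actual daemon start is left as
a comment because it must be run as root:

```shell
#!/bin/sh
# Determine whether in.rarpd is already running on this host.
if ps -eaf | grep '[i]n.rarpd' >/dev/null 2>&1; then
    rarpd_status="running"
else
    rarpd_status="stopped"
fi
echo "in.rarpd is $rarpd_status"
# If stopped, start it as root on the Admin host:
#   /usr/sbin/in.rarpd -a &
```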
8. Connect a 10BASE-T cable from the network that the Admin host is on to the
T3. Also connect the MIA(s) to the FC-AL port(s) on the controller(s). Apply AC
power to the unit and allow it to boot. The T3 has completed a successful
boot cycle when all LEDs turn solid GREEN.
9. Open a telnet session to the T3 from the Admin host. This can be done only
after the unit has completed the boot sequence, indicated by the controller
'on line' LED turning solid GREEN.
Log in as "root".
Press Enter at the password prompt.
After logging in, change the password with the "passwd" command.
10. Execute the " set " command and check the current settings, then modify the
attributes to match the customer's parameters:
Netmask: Site specific setting.
Hostname: T3 host name should be same as entered in STEP 6 in
/etc/ethers and /etc/hosts files.
Gateway IP address: Site specific setting.
tftphost IP address: Site specific setting. TFTP host with /tftpboot directory.
tftpfile name: Boot code image file name, for example nb100.bin. The file
must be in the /tftpboot directory.
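As an illustration, the step-10 settings might be entered as follows from the
T3 telnet session. The values are hypothetical placeholders, the trailing #
notes are annotations rather than part of the commands, and the exact attribute
syntax should be confirmed against the IOS and Admin Guides:

```
set                          # display and review the current settings first
set hostname t3-array-1      # must match /etc/ethers and /etc/hosts (step 6)
set netmask 255.255.255.0
set gateway 192.168.1.1
set tftphost 192.168.1.10    # TFTP host with the /tftpboot directory
set tftpfile nb100.bin       # boot code image in /tftpboot
```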
11. Execute the " date " command with the new date and time. Format =
yyyymmddhhmm:ss, for example 199904011630:30 (year, month, day and time).
12. Verify the T3 firmware revision levels. These are: the boot code release, the
controller EPROM release, the drive code release and the interconnect loop card
FLASH release revision levels.
Execute: " ver " to get the controller boot code revision level.
Execute: " fru list " and check " revision " for the controller EPROM level,
drive code level and loop card FLASH code revision levels.
Upgrade firmware if necessary. Reference the IOS and Admin guide for procedures.
13. Verify T3 FRU status by executing " fru stat ". All installed FRUs should
have a ready, enabled or normal status.
14. Verify volume configuration and mount status by executing " vol list " and
" vol stat". Verify the volume is displayed under the vol list output and
shows mounted under the vol stat output. The vol-name in the following
example is v0.
If volume is not displayed under the " vol list " output, execute the following
command to create the volume:
"vol add v0 data u1d1-9 raid 5" (Master or single unit)
NOTE: Global parameters such as blocksize must be correctly set at this time.
Changing blocksize requires removing, creating and initializing all volumes.
After the prompt returns, execute the " vol stat " command. Drive status must be
"0", if not, run the " vol init vol-name sysarea " to correct the drive labels.
When the prompt returns, verify the drive status is indeed "0".
The newly created volume must have its data areas initialized before use.
Execute the following command to initialize them:
" vol init vol-name data "
This process can take in excess of 20 minutes per volume.
Mount the volume by executing:
" vol mount vol-name "
Verify the volume status is good by executing the " vol list " and " vol stat "
commands.
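Collected in order, the step-14 command sequence for a single unit looks like
the following. Here v0 is the example vol-name from the text, the trailing #
notes are annotations rather than part of the commands, and each command should
be allowed to complete before the next is issued:

```
vol list                     # is the volume already present?
vol add v0 data u1d1-9 raid 5
vol stat                     # every drive status must be "0"
vol init v0 sysarea          # only if a drive status was not "0"
vol init v0 data             # initialize data areas; can exceed 20 minutes
vol mount v0
vol list                     # verify the volume is listed ...
vol stat                     # ... and shows mounted
```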
15. Configure the T3 syslog error message monitoring. On the Admin host, create
"hosts" and "syslog.conf" files, which will be ftp'ed to the T3. Add the
following data to the T3 syslog.conf.
NOTE: An example file can be found in the T3 /etc directory.
#syslog.conf for T3.messages
#
*.notice /syslog
*.warn @199.182.135.239 (IP address of host in step 6)
*.warn snmp_trap 199.182.135.230 (IP address of host monitoring snmp)
On T3 execute the following:
" set logto * " to start the remote logging feature.
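A sketch of the step-15 file transfer from the Admin host follows. The tray
name is a hypothetical placeholder, the session assumes the root password set
in step 9, and the remote paths should be confirmed against the Admin Guide:

```
# From the directory containing the prepared files on the Admin host:
ftp t3-array-1               # log in as root
ftp> put hosts /etc/hosts
ftp> put syslog.conf /etc/syslog.conf
ftp> quit
```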
16. If you are establishing a partner group, proceed with this step. If this is a
single unit installation, continue with STEP 17.
To make a T3 partner group, complete the following:
a. Power down the T3s by pressing the power switches on all PCUs and
allow the T3s to cycle down. The left-side LEDs on the PCUs will be
AMBER at completion of the power-down cycle.
b. Connect the loop interconnect cables between MASTER and ALTERNATE
MASTER.
c. Power up the MASTER and ALT MASTER by pressing the power switches on the
PCUs. Allow the T3 MASTER and ALT MASTER to complete the boot sequence.
This is indicated by GREEN LEDs on all PCUs, loop cards, controllers
and drives.
d. Open a telnet session from the Admin host to the MASTER unit and execute
the " fru list " and " fru stat " commands. Verify all FRUs in the
ALT MASTER (U2) are displayed with status READY / ENABLED. Also verify
the controllers indicate u1ctr is the MASTER and u2ctr is the ALT MASTER.
e. Execute the " vol list " and " vol stat " commands. Verify volume in
MASTER is still listed and mounted.
f. If a U2 volume is not displayed under " vol list " output, execute the
following commands to create the volume:
The vol-name in the following example is v1.
" vol add v1 data u2d1-9 raid 5 " (Alternate in partner group)
NOTE: Global parameters such as blocksize are set by the Master, changing blocksize
requires removing, creating and initializing all volumes.
After the prompt returns, execute the " vol stat " command. Drive status
must be "0", if not, run the " vol init vol-name sysarea " to correct drive
labels. When the prompt returns, verify the drive status is indeed "0".
The newly created volume must have its data areas initialized before use.
Execute the following command to initialize them: " vol init vol-name data ".
This process can take in excess of 20 minutes per volume.
Mount the volume by executing: " vol mount vol-name "
Verify volume status is good by executing the " vol list " and " vol stat "
commands.
g. If alternate pathing (AP) or dynamic multi-path (DMP) software is to be
used on the Data host, execute the " sys list " command to verify that
multi-path support is enabled on the T3. If not, execute the following
command to turn it on:
" sys mp_support rw "
17. Complete the single unit or partner group installation by performing the
following:
a. Issue a " reset " on the T3 to reboot the unit(s). Upon a successful boot,
open a telnet session to the unit(s) from the Admin host and execute:
" set, sys list, fru list, fru stat, vol list, vol mode, port list, and port
listmap " commands. Review the output and copy these into a file for
later reference.
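One way to keep the step-17a baseline is to capture the telnet session with
script(1) on the Admin host; the tray name and output file below are
hypothetical placeholders, and the trailing # notes are annotations:

```
script /var/tmp/t3-baseline.txt   # start capturing terminal output
telnet t3-array-1                 # run: set, sys list, fru list, fru stat,
                                  #      vol list, vol mode, port list, port listmap
exit                              # end the script(1) capture; review the file later
```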
b. On the Data host, ensure all necessary operating system and driver patches
required for T3 support are installed. Refer to SunSolve for the latest
revisions.
c. Bring down the Data host and power it off if new HBA(s) are to be installed.
d. Connect the fibre channel(s) cable(s) from the Data host to the MIA(s) on
the T3.
e. If HBAs were installed, power on the Data host and perform a boot -r .
Otherwise, perform a boot -r from the Data host to establish data paths
and device links to the new T3 volumes.
f. On the host, use the format command to verify all connections, both
single path and multi-path, to the LUNs in the T3.
g. Use the autoconfig option to place a system label on the LUN.
h. Verify data transfer from host to T3.
18. Install and configure Component Manager on the Admin host.
Install and configure StorTools on the Data host.
Install and configure Veritas on the Data host.
Refer to SUNSOLVE for the latest supported versions.
Once Veritas has been installed or upgraded, ensure data can be transferred
to and from the T3.
For Product Watch (StorTools and Component Manager) information specific
to the T3 go to url: http://nscc.central/ccare/registration
19. Engineering, Manufacturing and Service want your feedback so they can
continue to improve their processes and product quality. You can link directly
to an installation report for this customer by entering the URL provided in the
APPROVAL message issued by SCOPEtool for this customer.
APPLIES TO: AFO Vertical Team Docs/Storage, Hardware/Disk Storage Subsystem/StorEdge Disk Array/StorEdge T3
ATTACHMENTS:
Copyright (c) 1997-2003 Sun Microsystems, Inc.