System Installation Workbook
Spokane
Version 2.7
Date: Apr 2013
ABOUT NETAPP
NetApp creates innovative storage and data management solutions that deliver outstanding cost efficiency and accelerate performance breakthroughs. Discover our passion for helping companies around the world go further, faster at www.netapp.com.
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Copyright and trademark information
© Copyright 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, and Data ONTAP are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.
Table of contents
1 SITE REQUIREMENTS
  1.1 Physical characteristics - Storage controllers and disk drives
  1.2 System power requirements - Storage controllers and disk drives
    1.2.1 FAS20xx series systems
    1.2.2 FAS22xx series systems
    1.2.3 FAS30xx series systems
    1.2.4 FAS31xx series systems
    1.2.5 FAS32xx series systems
    1.2.6 FAS60xx series systems
    1.2.7 FAS62xx series systems
    1.2.8 DS14 series disk shelves
    1.2.9 DS2246 disk shelves
    1.2.10 DS4243 disk shelves
    1.2.11 DS4246 disk shelves
    1.2.12 DS4486 disk shelves
  1.3 System Cabinet
  1.4 System cabinet configurations
  1.5 Network cabling requirements
    1.5.1 Ethernet Configuration Recommendations
2 DATA ONTAP® 7-MODE CONFIGURATION DETAILS
  2.1 Basic configuration
    2.1.1 IFGRPs
    2.1.2 Network interface configuration
    2.1.3 Default gateway
    2.1.4 Administration host (Optional)
    2.1.5 Time zone
    2.1.6 Language encoding for multiprotocol files
    2.1.7 Domain Name Services (DNS) resolution
    2.1.8 Network Information Services (NIS) resolution
    2.1.9 Remote Management Settings (RLM/SP/BMC)
    2.1.10 Alternate Control Path (ACP) management for SAS shelves
    2.1.11 CIFS configuration
    2.1.12 Configure Virtual LANs (Optional)
    2.1.13 AutoSupport settings
    2.1.14 Customer/RMA details
    2.1.15 Time synchronization
    2.1.16 SNMP management settings (Optional)
3 DATA ONTAP 7-MODE INSTALLATION AND VERIFICATION CHECKLISTS
4 DATA ONTAP CLUSTER-MODE CONFIGURATION DETAILS
  4.1 Cluster information
    4.1.1 Cluster
    4.1.2 Licensing
    4.1.3 Admin Vserver
    4.1.4 Time synchronization
    4.1.5 Time zone
  4.2 Node information
    4.2.1 Physical port identification
    4.2.2 Node management LIF
  4.3 Cluster network information
    4.3.1 Interface groups (IFGRP)
    4.3.2 Configure Virtual LANs (VLANs)
    4.3.3 Logical Interfaces (LIFs)
  4.4 Intercluster network information
  4.5 Vserver information
    4.5.1 Creating Vserver
    4.5.2 Creating Volumes on the Vserver
    4.5.3 IP Network Interface on the Vserver
    4.5.4 FCP Network Interface on the Vserver
    4.5.5 LDAP services
    4.5.6 CIFS protocol
    4.5.7 iSCSI protocol
    4.5.8 FCP protocol
  4.6 Support information
    4.6.1 Remote Management Settings (RLM/BMC/SP)
    4.6.2 AutoSupport settings
    4.6.3 Customer/RMA details
A. DATA ONTAP CLUSTER-MODE INSTALLATION AND VERIFICATION CHECKLISTS
  A.1 Definitions
List of Tables
Table 1: Electrical requirements - FAS20xx series
Table 2: Electrical requirements - FAS2220
Table 3: Electrical requirements - FAS2240 series (one controller module, no mezzanine card, and either 450-GB or 600-GB disk drives for FAS2240-2; 1-TB or 2-TB disk drives for FAS2240-4)
Table 4: Electrical requirements - FAS30xx series
Table 5: Electrical requirements - FAS31xx series
Table 6: Electrical requirements - FAS3210 with one 256-GB Flash Cache module (one controller module)
Table 7: Electrical requirements - FAS3240 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)
Table 8: Electrical requirements - FAS3270 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)
Table 9: Electrical requirements - FAS6030/FAS6040 and FAS6070/FAS6080
Table 10: Electrical requirements - FAS6210 single-controller module; FAS6240 and FAS6280 with I/O expansion
Table 11: Electrical requirements - DS14mk2 AT, 7.2K speed
Table 12: Electrical requirements - DS14mk2 FC, 15K speed
Table 13: Electrical requirements - DS2246, SAS drives
Table 14: Electrical requirements - DS4243, SAS drives
Table 15: Electrical requirements - DS4243, SATA drives
Table 16: Electrical requirements - DS4246, SATA drives; six 100-GB SSD drives with eighteen 1-TB or eighteen 3-TB disk drives
Table 17: Electrical requirements - DS4486
WELCOME
Dear Customer,
Thank you for choosing a NetApp storage system and Professional Services installation.
To ensure a seamless deployment and integration into your environment, please complete the information requested in this document before our engineer arrives on site. This will ensure that as many questions as possible are answered before the day of the installation, so you can start using your system.
The first part of the document includes environmental information about our products, which may help you with your computer room planning.
The second part of the workbook covers the information that the professional services engineer will need on the day of installation. Please obtain the required information and return a completed copy of this document to the engineer before they arrive.
We look forward to working with you.
Yours faithfully,
(NetApp Services Engineering)
Preface
This document describes how to install a NetApp system.
AUDIENCE
The primary audience for this document is professional services (PS) consultants and IT administration engineers.
NON-DISCLOSURE REQUIREMENTS
© Copyright 2012 NetApp. All rights reserved. This document contains the confidential and proprietary information of NetApp, Inc. Do not reproduce or distribute without the prior written consent of NetApp.
INFORMATION ABOUT THIS DOCUMENT
All information about this document including version history, review and approval, typographical conventions, references, and a glossary of terms can be found in the final chapter of this document.
1 Site requirements
Please download and read the latest version of the Site Requirements Guide, available at http://support.netapp.com/.
1.1 Physical characteristics - Storage controllers and disk drives
Hardware | Height | Width | Depth | Weight | Rack units
FAS62xx series | 10.2 in (25.86 cm) | 17.6 in (44.68 cm) | 29 in (73.66 cm), including cable management tray | Single controller module: 99.2 lbs (45 kg); controller and I/O expansion module: 125.7 lbs (57 kg); two controller modules: 130.1 lbs (59 kg) | 6
FAS60xx series | 10.32 in (26.21 cm) | 17.53 in (44.52 cm) | 29 in (73.66 cm), including cable management tray | 122 lbs (55.34 kg) | 6
FAS32xx series | 5.12 in (13.0 cm) | 17.61 in (44.7 cm) | 24 in (60.7 cm) | Single controller module: 67.3 lbs (30.5 kg); controller and I/O expansion module: 74.5 lbs (33.8 kg); two controller modules: 79.5 lbs (36.1 kg) | 3
FAS31xx series | 10.75 in (27.3 cm) | 17.73 in (45.0 cm) | 24 in (60.7 cm) | Single controller module: 102 lbs (46.27 kg); two controller modules: 121 lbs (54.89 kg) | 6
FAS30xx series | 5.13 in (13 cm) | 17.73 in (45.0 cm) | 24 in (60.7 cm) | 68 lbs (30.84 kg) | 3
FAS2240-4 | 7 in (17.9 cm) | 17.73 in (45.0 cm) | 28 in (71.1 cm), including the cable management arm | Single controller module: 102.3 lbs (46.4 kg); two controller modules: 107.8 lbs (48.9 kg) | 4
FAS2240-2 | 3.3 in (8.4 cm) | 17.6 in (44.7 cm) | 23.1 in (58.7 cm), including the cable management arm | Single controller module: 50.7 lbs (23 kg); two controller modules: 56 lbs (25.4 kg) | 2
FAS2220 | 3.4 in (8.4 cm) | 17.6 in (44.7 cm) | 24.1 in (61.2 cm), including the cable management arm | Single controller module: 57.8 lbs (26.2 kg); two controller modules: 62.4 lbs (28.3 kg) | 2
FAS2050 | 6.9 in (17.5 cm) | 17.6 in (44.7 cm) | 22.5 in (57.2 cm) | Full (chassis with all disk drives): 110 lbs (49.9 kg); empty (no internal disks): 91 lbs (41.3 kg) | 4
FAS2040 | 3.5 in (8.9 cm) | 17.6 in (44.7 cm) | 22.5 in (57.2 cm) | Full (chassis with all disk drives): 66 lbs (29.9 kg); empty (no internal disks): 57 lbs (25.9 kg) | 2
FAS2020 | 3.5 in (8.9 cm) | 17.6 in (44.7 cm) | 22.5 in (57.2 cm) | Full (chassis with all disk drives): 66 lbs (29.9 kg); empty (no internal disks): 57 lbs (25.9 kg) | 2

Hardware | Height | Width | Depth | Weight | Rack units
DS14 series | 5.25 in (13.3 cm) | 17.6 in (44.7 cm) | DS14mk2 FC/DS14mk4 FC: 20 in (50.8 cm); DS14mk2 AT: 22 in (55.2 cm) | DS14mk2 FC/DS14mk4 FC with disk drives: 77 lbs (35 kg); DS14mk2 AT with disk drives: 68 lbs (30.8 kg); empty: 50.06 lbs (23 kg) | 3
DS2246 | 3.4 in (8.5 cm) | 19 in (48.0 cm) | 19.1 in (48.4 cm) | With disk drives: 49 lbs (22.2 kg); without disk drives: 34.6 lbs (15.7 kg); empty: 17.4 lbs (7.9 kg) | 2
DS4243 | 7 in (17.8 cm) | 19 in (48.0 cm) | 24 in (61 cm) | With disk drives: 110 lbs (49.9 kg); without disk drives: 53.7 lbs (24.4 kg); empty: 21.1 lbs (9.6 kg) | 4
DS4246 | 7 in (17.8 cm) | 17.7 in (45 cm) | 24 in (61 cm) | With disk drives: 110 lbs (49.9 kg); without disk drives: 53.7 lbs (24.4 kg); empty: 21.1 lbs (9.6 kg) | 4
DS4486 | 6.87 in (17.44 cm) | 17.6 in (44.7 cm) | 27 in (68.6 cm), from mounting flange to rear chassis bulkhead | With disk drives: 150 lbs (68 kg); with four carriers, IOMs, and PSUs: 82 lbs (37 kg) | 4

Note: The DS14 series includes the DS14, DS14mk2 FC, and DS14mk4 FC with an ESH (ESH refers to ESH2 and ESH4), and the DS14mk2 AT.

Hardware | Height | Width | Depth | Weight | Rack units
Cisco 5010 | 1.72 in (4.4 cm) | 17.3 in (43.9 cm) | 30 in (76.2 cm) | 35 lbs (15.88 kg) | 1
Cisco 5020 | 3.47 in (8.8 cm) | 17.3 in (43.9 cm) | 30 in (76.2 cm) | 50 lbs (22.68 kg) | 2
Cisco 2960 | 1.73 in (4.4 cm) | 17.5 in (44.45 cm) | 9.3 in (23.62 cm) | 8 lbs (3.63 kg) | 1

* 1U = 1.75 inches
Note: Please plan for at least 36 inches (91.4 centimeters) of clearance on both front and back of the system. This amount of space allows you to reach the back panel for cabling the system. It also allows you to slide the motherboard tray out from the back of the system when removing or installing hardware.
1.2 System power requirements - Storage controllers and disk drives
Note: The following section contains the power requirements for the available FAS series systems and disk shelves. The tables cover values for single-controller modules; if you need additional information, such as values for two controller modules, a mezzanine card, an I/O expansion module, or a Flash Cache module, refer to the latest Site Requirements Guide before you proceed with the installation.
In the tables below, each voltage column reads: worst-case, single PSU (per PSU) / typical (per PSU) / typical (system, two PSUs), unless noted otherwise.
1.2.1 FAS20xx series systems
Table 1: Electrical requirements - FAS20xx series

Model | Parameter | Drives | 100 to 120V | 200 to 240V
FAS2020 | Input current measured, A | 1-TB SATA | 3.37 / 1.61 / 3.22 | 1.69 / 0.83 / 1.66
FAS2020 | Input current measured, A | 2-TB SATA | 3.36 / 1.65 / 3.29 | 1.69 / 0.84 / 1.68
FAS2020 | Input power measured, W | 1-TB SATA | 332 / 158 / 316 | 327 / 152.5 / 305
FAS2020 | Input power measured, W | 2-TB SATA | 334 / 162 / 324 | 326 / 160 / 320
FAS2040 | Input current measured, A | 1-TB SATA | 3.62 / 1.77 / 3.53 | 1.81 / 0.90 / 1.8
FAS2040 | Input current measured, A | 2-TB SATA | 3.34 / 1.61 / 3.22 | 1.66 / 0.84 / 1.67
FAS2040 | Input power measured, W | 1-TB SATA | 357 / 173 / 345 | 347 / 169 / 337
FAS2040 | Input power measured, W | 2-TB SATA | 329 / 158 / 315 | 319 / 156 / 312
FAS2050 | Input current measured, A | 1-TB SATA | 5.07 / 2.26 / 4.51 | 2.46 / 1.20 / 2.40
FAS2050 | Input power measured, W | 1-TB SATA | 504 / 220 / 439 | 474 / 224 / 447

1.2.2 FAS22xx series systems
Table 2: Electrical requirements - FAS2220

Drives | Parameter | 100V | 200V
1-TB | Input current measured, A | 4.18 / 1.3 / 2.6 | 2 / 0.67 / 1.33
2-TB | Input current measured, A | 4.26 / 1.34 / 2.63 | 2.14 / 0.68 / 1.36
3-TB | Input current measured, A | 4.32 / 1.37 / 2.74 | 2.14 / 0.69 / 1.38
1-TB | Input power measured, W | 417 / 129 / 258 | 396 / 123 / 246
2-TB | Input power measured, W | 425 / 131 / 261 | 423 / 126 / 252
3-TB | Input power measured, W | 431 / 136 / 271 | 423 / 129 / 257

Table 3: Electrical requirements - FAS2240 series (one controller module, no mezzanine card, and either 450-GB or 600-GB disk drives for FAS2240-2; 1-TB or 2-TB disk drives for FAS2240-4)
Note: For the FAS2240-4, the worst-case value at 200V and 215V is for a 2+2 PSU configuration, and the system totals are for four PSUs.

Model | Parameter | 100V | 200V | 215V
FAS2240-2 | Input current measured, A | 4.76 / 1.8 / 3.60 | 2.31 / 0.88 / 1.76 | 2.15 / 0.82 / 1.64
FAS2240-2 | Input power measured, W | 474 / 178 / 356 | 456 / 170 / 339 | 456 / 168 / 336
FAS2240-4 | Input current measured, A | 5.34 / 1.21 / 4.85 | 2.68 / 0.63 / 2.5 | 2.53 / 0.59 / 2.37
FAS2240-4 | Input power measured, W | 533 / 121 / 482 (four PSUs) | 517 (2+2 PSUs) / 117 / 468 (four PSUs) | 515 (2+2 PSUs) / 117 / 466 (four PSUs)

1.2.3 FAS30xx series systems
Table 4: Electrical requirements - FAS30xx series

Model | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
FAS3020 | Input current measured, A | 3.39 / 1.2 / 2.4 | 1.77 / 0.71 / 1.40 | 8.2 / 2.85 / 5.7
FAS3020 | Input power measured, W | 336 / 118 / 236 | 329 / 115 / 229 | 328 / 113 / 226
FAS3040 | Input current measured, A | 3.66 / 1.7 / 3.4 | 1.9 / 0.95 / 1.9 | 7.94 / 3.7 / 7.4
FAS3040 | Input power measured, W | 363 / 169 / 338 | 358 / 165 / 330 | 318 / 148 / 296
FAS3050 | Input current measured, A | 3.88 / 1.7 / 3.4 | 2.04 / 0.95 / 1.9 | 9.49 / 4.0 / 8.0
FAS3050 | Input power measured, W | 386 / 164 / 328 | 384 / 164 / 327 | 380 / 160 / 319
FAS3070 | Input current measured, A | 4.03 / 1.85 / 3.7 | 2.06 / 1.05 / 2.1 | 10.57 / 4.7 / 9.4
FAS3070 | Input power measured, W | 400 / 181 / 362 | 387 / 178 / 355 | 423 / 188 / 376
1.2.4 FAS31xx series systems
Table 5: Electrical requirements - FAS31xx series

Model | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
FAS3140 | Input current measured, A | 3.98 / 1.89 / 3.77 | 1.97 / 0.97 / 1.93 | 8.38 / 4.88 / 9.75
FAS3140 | Input power measured, W | 396 / 187 / 373 | 385 / 183 / 366 | 336 / 195 / 389
FAS3160 | Input current measured, A | 4.80 / 2.25 / 4.50 | 2.38 / 1.16 / 2.32 | 10.07 / 5.90 / 11.79
FAS3160 | Input power measured, W | 476 / 220 / 440 | 460 / 225 / 450 | 404 / 235 / 470
FAS3170 | Input current measured, A | 5.07 / 2.37 / 4.74 | 2.52 / 1.19 / 2.38 | 10.75 / 6.09 / 12.18
FAS3170 | Input power measured, W | 505 / 235 / 470 | 493 / 230 / 459 | 430 / 243 / 486
1.2.5 FAS32xx series systems
Table 6: Electrical requirements - FAS3210 with one 256-GB Flash Cache module (one controller module)

Model | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
FAS3210 | Input current measured, A | 4.22 / 1.52 / 3.03 | 2.11 / 0.83 / 1.66 | 10.45 / 3.65 / 7.30
FAS3210 | Input power measured, W | 421 / 150 / 299 | 411 / 147 / 293 | 418 / 146 / 292

Table 7: Electrical requirements - FAS3240 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)

Model | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
FAS3240 | Input current measured, A | 6.37 / 2.35 / 4.70 | 3.15 / 1.21 / 2.41 | 15.9 / 5.90 / 11.8
FAS3240 | Input power measured, W | 635 / 233 / 466 | 620 / 228 / 456 | 636 / 236 / 472

Table 8: Electrical requirements - FAS3270 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)

Model | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
FAS3270 | Input current measured, A | 7.28 / 2.78 / 5.56 | 3.58 / 1.42 / 2.83 | 18.2 / 6.95 / 13.9
FAS3270 | Input power measured, W | 728 / 278 / 552 | 707 / 271 / 541 | 728 / 278 / 556
1.2.6 FAS60xx series systems
Table 9: Electrical requirements - FAS6030/FAS6040 and FAS6070/FAS6080

Model | Parameter | 100 to 120V | 200 to 240V
FAS6030/FAS6040 | Input current measured, A | 9.75 / 2.87 / 5.74 | 4.87 / 1.57 / 3.14
FAS6030/FAS6040 | Input power measured, W | 968 / 279 / 557 | 934 / 217 / 541
FAS6070/FAS6080 | Input current measured, A | 11.68 / 3.63 / 7.25 | 5.76 / 1.96 / 3.91
FAS6070/FAS6080 | Input power measured, W | 1,162 / 352 / 704 | 1,115 / 231 / 693
1.2.7 FAS62xx series systems
Table 10: Electrical requirements - FAS6210 single-controller module; FAS6240 and FAS6280 with I/O expansion

Model | Parameter | 100 to 120V | 200 to 240V
FAS6210 | Input current measured, A | 5 / 2.25 / 4.5 | 2.5 / 1.15 / 2.3
FAS6210 | Input power measured, W | 490 / 215 / 430 | 480 / 208 / 415
FAS6240 | Input current measured, A | 9.3 / 3.3 / 6.6 | 4.5 / 1.65 / 3.3
FAS6240 | Input power measured, W | 920 / 312.5 / 625 | 875 / 308 / 615
FAS6280 | Input current measured, A | 9.6 / 3.5 / 6.9 | 4.7 / 1.75 / 3.5
FAS6280 | Input power measured, W | 950 / 332.5 / 665 | 910 / 323 / 645
1.2.8 DS14 series disk shelves
Table 11: Electrical requirements - DS14mk2 AT, 7.2K speed

Size (GB) | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
250 | Input current measured, A | 2.79 / 1.36 / 2.72 | 1.38 / 0.70 / 1.39 | 7.38 / 2.84 / 5.67
320 | Input current measured, A | 2.85 / 1.56 / 3.12 | 1.43 / 0.78 / 1.56 | 7.4 / 2.82 / 5.64
500 | Input current measured, A | 2.94 / 1.45 / 2.9 | 1.43 / 0.74 / 1.47 | 8.04 / 3.11 / 6.22
750 | Input current measured, A | 3.42 / 1.61 / 3.22 | 1.63 / 0.53 / 1.60 | 8.42 / 6.63 / 7.25
1-TB | Input current measured, A | 3.15 / 1.55 / 3.10 | 1.55 / 0.78 / 1.56 | 8.33 / 3.24 / 6.48
250 | Input power measured, W | 279 / 136 / 271 | 271 / 132 / 264 | 295 / 114 / 227
320 | Input power measured, W | 284 / 155 / 310 | 283 / 152 / 304 | 296 / 113 / 226
500 | Input power measured, W | 293 / 144 / 288 | 286 / 142 / 283 | 322 / 125 / 249
750 | Input power measured, W | 341 / 161 / 321 | 323 / 155 / 309 | 337 / 145 / 290
1-TB | Input power measured, W | 315 / 154 / 308 | 309 / 150 / 300 | 333 / 130 / 259

Table 12: Electrical requirements - DS14mk2 FC, 15K speed

Size (GB) | Parameter | 100 to 120V | 200 to 240V | -40 to -60V
72 | Input current measured, A | 3.41 / 1.82 / 3.63 | 1.67 / 0.89 / 1.78 | 10.04 / 3.98 / 7.95
144 | Input current measured, A | 3.96 / 1.88 / 3.75 | 1.93 / 0.94 / 1.88 | 10.40 / 4.13 / 8.25
288 | Input current measured, A | 4.43 / 2.16 / 4.32 | 2.23 / 1.07 / 2.13 | 11.98 / 4.36 / 8.72
450 | Input current measured, A | 4.43 / 2.16 / 4.32 | 2.23 / 1.07 / 2.13 | N/A
72 | Input power measured, W | 340 / 181 / 362 | 331 / 173 / 345 | 402 / 159 / 318
144 | Input power measured, W | 395 / 187 / 373 | 383 / 183 / 365 | 416 / 165 / 330
288 | Input power measured, W | 443 / 216 / 431 | 443 / 208 / 415 | 479 / 175 / 349
450 | Input power measured, W | 443 / 216 / 431 | 443 / 208 / 415 | N/A
450 | Input power measured, W | 1,512 / 735 / 1,470 | 1,512 / 707 / 1,414 | N/A
1.2.9 DS2246 disk shelves
Table 13: Electrical requirements - DS2246, SAS drives

Size (GB) | Parameter | 100 to 120V | 200 to 240V (200V actual)
450 | Input current measured, A | 4.28 / 1.38 / 2.76 | 2.29 / 0.79 / 1.58
600 | Input current measured, A | 4.22 / 1.39 / 2.77 | 2.29 / 0.82 / 1.64
900 | Input current measured, A | 4.22 / 1.39 / 2.77 | 2.29 / 0.82 / 1.64
450 | Input power measured, W | 428 / 137 / 274 | 420 / 135 / 270
600 | Input power measured, W | 422 / 134 / 267 | 418 / 133 / 266
900 | Input power measured, W | 422 / 134 / 267 | 418 / 133 / 266
1.2.10 DS4243 disk shelves
Table 14: Electrical requirements - DS4243, SAS drives

Size (GB) | Parameter | 100 to 120V | 200 to 240V (200V actual) | 200 to 240V (215V actual)
300 | Total input current measured, A | 5.5 / 3.0 / 6.0 | 2.8 / 1.5 / 3.0 | 2.6 / 1.4 / 2.8
450 | Total input current measured, A | 6.00 / 3.15 / 6.30 | 3.00 / 1.60 / 3.20 | 2.80 / 1.50 / 3.00
600 | Total input current measured, A | 5.98 / 2.86 / 5.71 | 2.99 / 1.44 / 2.87 | N/A
300 | Total input power measured, W | 550 / 300 / 600 | 560 / 300 / 600 | 559 / 301 / 602
450 | Total input power measured, W | 600 / 315 / 630 | 600 / 320 / 640 | 602 / 323 / 645
600 | Total input power measured, W | 595 / 284 / 567 | 584 / 274 / 547 | N/A

Table 15: Electrical requirements - DS4243, SATA drives

Size (GB) | Parameter | 100 to 120V | 200 to 240V (200V actual) | 200 to 240V (215V actual)
500 | Input current measured, A | 4.30 / 2.20 / 4.40 | 2.10 / 1.10 / 2.20 | 1.90 / 1.05 / 2.10
1-TB | Input current measured, A | 4.41 / 2.21 / 4.42 | 2.21 / 1.14 / 2.27 | 1.90 / 1.05 / 2.10
2-TB | Input current measured, A | 4.72 / 2.31 / 4.62 | 2.42 / 1.21 / 2.42 | N/A
3-TB | Input current measured, A | 4.95 / 2.30 / 4.60 | 2.43 / 1.19 / 2.38 | -
100 (SSD) | Input current measured, A | 1.96 / 0.82 / 1.63 | 1.0 / 0.45 / 0.9 | 0.95 / 0.42 / 0.84
500 | Input power measured, W | 430 / 220 / 440 | 420 / 220 / 440 | 409 / 226 / 452
1-TB | Input power measured, W | 439 / 219 / 438 | 429 / 212 / 424 | 409 / 226 / 452
2-TB | Input power measured, W | 469 / 229 / 458 | 470 / 228 / 456 | N/A
3-TB | Input power measured, W | 495 / 228 / 456 | 476 / 224 / 448 | -
100 (SSD) | Input power measured, W | 196 / 82 / 163 | 200 / 90 / 180 | 205 / 90 / 180
1.2.11 DS4246 disk shelves
Table 16: Electrical requirements - DS4246, SATA drives; six 100-GB SSD drives with eighteen 1-TB or eighteen 3-TB disk drives

Size (GB) | Parameter | 100 to 120V | 200 to 240V (200V actual)
1-TB | Input current measured, A | 3.91 / 1.7 / 3.41 | 2.11 / 0.9 / 1.84
3-TB | Input current measured, A | 4.11 / 1.9 / 3.72 | 2.25 / 1.1 / 2.14
1-TB | Input power measured, W | 386 / 168 / 335 | 388 / 166 / 331
3-TB | Input power measured, W | 406 / 123 / 368 | 418 / 199 / 397
1.2.12 DS4486 disk shelves
Table 17: Electrical requirements - DS4486
(Here each voltage column reads: worst-case, single PSU, per PSU pair / typical, per PSU pair / typical, system with two PSUs.)

Size (GB) | Parameter | 100 to 120V | 200 to 240V (200V actual)
3-TB | Input current measured, A | 8.71 / 3.29 / 6.57 | 4.59 / 1.73 / 3.46
3-TB | Input power measured, W | 870 / 329 / 657 | 919 / 346 / 692
1.3 System Cabinet
Dimension | 42U (X870B-R6) | 42U Deep (X870C-R6)
Height | 78.7 in (200 cm) | 78.7 in (200 cm)
Depth | 37.4 in (95 cm) | 44.3 in (112.50 cm)
Width | 23.6 in (60 cm) | 23.6 in (60 cm)
Empty weight | 287 lb (130.2 kg) | 307 lb (138 kg)
Loaded weight | 1500 lb (680 kg) | 2307 lb (1046 kg)
Front clearance | 30 in (76.3 cm) | 30 in (76.3 cm)
Rear clearance | 30 in (76.3 cm) | 30 in (76.3 cm)
Top clearance | 12 in (30 cm) | 12 in (30 cm)
Note: Consult your co-location facility manager or vendor documentation if installing into third party cabinets.
1.4 System cabinet configurations
Config | PDUs | PDU part # | Plug type | Service | Outlet cords | Amps | Outlets | Approx. power
NEMA 30A single phase | 4 | X8712C-R6 | NEMA L6-30P | 30A | 2 | 48 | 24 | 10 kW @ 208V
NEMA 30A 3-phase delta | 2 | X8719A-R6 | NEMA L15-30P | 30A | 1 | 41.5 | 24 | 8.6 kW @ 208V
NEMA 30A 3-phase delta | 2 | X8720A-R6 | NEMA L21-30P | 30A | 1 | 41.5 | 24 | 8.6 kW @ 208V
IEC 32A single phase | 4 | X8713C-R6 | IEC 60309-32A P+N+E | 32A | 2 | 64 | 24 | 14.7 kW @ 230V
IEC 32A 3-phase wye | 2 | X8718A-R6 | IEC 60309-32A 3P+N+E | 32A | 1 | 96 | 24 | 22.1 kW @ 230V

Note: The PDU count is per cabinet; outlet cords, amps, and outlets are per side.
1.5 Network cabling requirements
Network device | Cabling requirements
100Base-TX | Cat 5/5e/6 UTP cable with RJ-45 connector
Gigabit Ethernet (optical) | Multimode OM-1, OM-2, OM-3, or OM-4 fiber optic cable with LC connector
Gigabit Ethernet (copper) | Cat 5e/6 UTP cable with RJ-45 connector
10 Gigabit Ethernet (optical) | 10GBase-SR SFP+ transceiver with LC connector* and a multimode OM-3 or OM-4 fiber optic cable
10 Gigabit Ethernet (copper) | 10GBase copper SFP+ twinax cable*
Fibre Channel | Multimode OM-1, OM-2, OM-3, or OM-4 fiber optic cable with LC connector

* Must be provided by NetApp or be on the NetApp compatibility list.

Note: Refer to TR-3552, "Optical Network Installation Guide," for more information on optical networking requirements and distance limitations for a particular cable type and data rate.
1.5.1 Ethernet Configuration Recommendations
Switch ports connected to 100Base-TX storage controller ports should be configured manually for speed and duplex settings (100 Mbit Full Duplex) when possible.
Flow control should be enabled on Gigabit and 10 Gigabit network ports. Configure the storage controller as send on/receive off, and configure the switch as send off/receive on.
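For reference, a minimal sketch of this recommendation as a console command (the interface name e0a is a placeholder; add the same line to /etc/rc so the setting persists across reboots):
fas1> ifconfig e0a flowcontrol send
The matching switch-side setting is applied per port on the switch; on Cisco switches this is typically a per-interface "flowcontrol receive on".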
PortFast can be enabled on all switch ports connected to the storage controller to allow the port to enter forwarding state faster.
2 Data ONTAP® 7-Mode configuration details
Please work with your Professional Services representative to complete this worksheet prior to the installation date. The requested information enables us to configure your equipment quickly and efficiently. Depending on the desired configuration, some fields may not be applicable.
Note: This worksheet does NOT replace the requirement for reading and understanding the appropriate Data ONTAP manuals that describe the operations of Data ONTAP in 7-Mode. Data ONTAP manuals can be found at the NetApp Support Site under documentation.
Customer checklist of site preparation requirements (check all that apply):
Adequate rack space for the NetApp system and disk shelves has been provided.
The power requirements for the NetApp system and disk shelves have been satisfied.
The network patch cabling and switch port configuration is complete.
Company Name: PHS NetApp Sales Order #: 600122473
Storage Controller Model: FAS2240-4 Data ONTAP® Version: 8.1.2
2.1 Basic configuration
System information | Controller 1 | Controller 2
Serial number | |
Host name (nas + the last 4 of the S/N) | nasxxxx | nasxxxx
Aggregate type (32-bit or 64-bit) | 64-bit | 64-bit
2.1.1 IFGRPs
Interface Groups (IFGRPs) bond multiple network ports together for increased bandwidth and/or fault tolerance.
Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves.
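As a sketch of how the values recorded in the table below map onto 7-Mode commands (the port and IFGRP names are the examples used in this worksheet; persist the commands in /etc/rc):
fas1> ifgrp create lacp ifgrp1 -b ip e0a e0c
fas1> ifgrp create lacp ifgrp2 -b ip e0b e0d
fas1> ifgrp status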
Interface details | Controller 1 | Controller 2
Number of interface groups to configure | vif1 | vif1
Names of the interface groups (for example, ifgrp1, iscsi_ifgrp2) | ifgrp1 | ifgrp1
IFGRP type (multi: all ports are active; single: one port active, other ports on standby for failover; LACP: network switch manages traffic) | ifgrp1: LACP; ifgrp2: LACP; ifgrp3: | ifgrp1: LACP; ifgrp2: LACP; ifgrp3:
Multi-mode IFGRP load-balancing style (IP, MAC, round-robin, or port based) | ifgrp1: IP; ifgrp2: IP; ifgrp3: | ifgrp1: IP; ifgrp2: IP; ifgrp3:
Number of links (network ports) in each IFGRP | ifgrp1: 2; ifgrp2: 2; ifgrp3: | ifgrp1: 2; ifgrp2: 2; ifgrp3:
Names of network ports in each IFGRP (for example, ifgrp1 = e0a, e1d; ifgrp3 = ifgrp1, ifgrp2) | ifgrp1: e0a, e0c; ifgrp2: e0b, e0d; ifgrp3: |
2.1.2 Network interface configuration
If you created IFGRPs, then use their names, otherwise use port names (for example, e0a).
Some controllers have an e0M interface for environments with a subnet dedicated to managing servers. Include the e0M settings if you have a management subnet.
Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves.
Controller name | Interface name | IP address | Network mask | Partner interface name or IP address | Media type | Enable jumbo frames?
nasxxxx (A) | e0M | 10.108.193.10 | 255.255.255.0 | | Ethernet | No
nasxxxx (A) | e0P | 10.108.193.12 | 255.255.255.0 | | Ethernet | No
nasxxxx (B) | e0M | 10.108.193.11 | 255.255.255.0 | | Ethernet | No
nasxxxx (B) | e0P | 10.108.193.13 | 255.255.255.0 | | Ethernet | No
nasxxxx (A) | VIF | 170.173.144.10 | 255.255.255.0 | 170.173.144.11 | VIF | Yes
nasxxxx (B) | VIF | 170.173.144.11 | 255.255.255.0 | 170.173.144.10 | VIF | Yes
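As an illustration, the e0M row above corresponds roughly to the following console command (normally generated by setup and persisted in /etc/rc):
fas1> ifconfig e0M 10.108.193.10 netmask 255.255.255.0
For the jumbo-frame VIF rows, mtusize 9000 and the partner option would be appended to the equivalent ifconfig line.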
2.1.3 Default gateway
Gateway details | Controller 1 | Controller 2
Default gateway IP address | 170.173.144.1 | 170.173.144.1
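For example, the default gateway above maps onto the following route command (also persisted in /etc/rc):
fas1> route add default 170.173.144.1 1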
2.1.4 Administration host (Optional)
You can limit the systems or subnets authorized to mount the root volume.
Host details | Controller 1 | Controller 2
Admin host/subnet IP | |

2.1.5 Time zone
What time zone should the systems set their clocks to (for example, US/Pacific)?

Time zone details | Controller 1 | Controller 2
Time zone | Pacific | Pacific
Physical location (for example, Bldg 4, Dallas) | 101 W. 8th Avenue, Spokane, WA 99204 | 101 W. 8th Avenue, Spokane, WA 99204
2.1.6 Language encoding for multiprotocol files
The default is POSIX and only needs to be changed for systems storing files using international alphabets.
Encoding details | Controller 1 | Controller 2
Language for multiprotocol files | English | English
2.1.7 Domain Name Services (DNS) resolution
DNS resolution | Values
DNS domain name | wa.providence.org
DNS server IP addresses (up to 3) | 170.173.161.38; 170.173.113.228; 170.173.132.39
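A sketch of how these values are applied on each controller (the server addresses go into /etc/resolv.conf as nameserver entries):
fas1> options dns.domainname wa.providence.org
fas1> options dns.enable on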
2.1.8 Network Information Services (NIS) resolution
NIS resolution | Values
NIS domain name |
NIS server IP addresses |
2.1.9 Remote Management Settings (RLM/SP/BMC)
All systems include a Remote LAN Module (RLM), a Baseboard Management Controller (BMC), or a Service Processor (SP) to provide out-of-band control of the storage system. NetApp recommends configuring these interfaces for easier, more secure management and troubleshooting.

RLM/SP/BMC | Controller 1 | Controller 2
IP address | 10.108.193.12 | 10.108.193.13
Network mask | 255.255.255.0 | 255.255.255.0
Gateway | 10.108.193.1 | 10.108.193.1
Mail server hostname | smtplegacy.providence.org | smtplegacy.providence.org
Mail server IP | 170.173.161.56 | 170.173.161.55
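These values are normally entered through the interactive setup wizard for whichever remote-management device the platform has, for example:
fas1> rlm setup
or, on SP-based systems:
fas1> sp setup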
2.1.10 Alternate Control Path (ACP) management for SAS shelves
For system models prior to the FAS3200 series, use an onboard NIC port for ACP. Newer systems with dedicated e0P ports assign IP addresses automatically.

ACP details | Controller 1 | Controller 2
Interface name (if not using e0P) | |
Private subnet (default: 192.168.0.0/22) | |
Network mask | |

2.1.11 CIFS configuration
Systems with a CIFS license run the CIFS setup wizard immediately after the Setup wizard completes. NT4 domains require a server account to be created before running CIFS setup. You can abort the wizard using Ctrl+C and run it later if necessary.
Note: The installation engineer will require someone with Domain Administrator privileges to help perform this section. When CIFS is configured, a domain administrator should move the controllers out of OU=Computers into an OU for servers. This will ensure Group Policy Objects can be applied to the controllers.
CIFS configuration | Controller 1 | Controller 2
Authentication mode (choose one of: Active Directory domain, NT 4 domain, Workgroup, /etc/passwd or NIS/LDAP) | |
Domain name | wa.providence.org | wa.providence.org
NetBIOS name | |
Do you want the system visible via WINS (Y/N)? | |
WINS IP addresses (up to 3) | |
Multiprotocol or NTFS only? | Multiprotocol | Multiprotocol
2.1.12 Configure Virtual LANs (Optional)
VLANs are used to segment network domains using 802.1Q protocol standards.
Controller name | Interface name | VLAN IDs to activate | Enable GVRP?
nasxxxx (A) | e0M | 264 |
nasxxxx (B) | e0M | 264 |
nasxxxx (A) | VIF | 867 |
nasxxxx (B) | VIF | 867 |
Note: To trunk VLANs across an interface or IFGRP, you need to set "switchport mode trunk" on that interface or logical interface. This will allow 802.1q trunking, so that traffic across it is VLAN tagged. You must then create the relevant VLAN interfaces on the storage controller.
If you want a port or EtherChannel interface to be the only access port for a particular VLAN you must set "switchport mode access" on that interface. Then give the storage controller interface an IP address on that VLAN. No other information is required to VLAN tag the frames.
Reboot the controllers at this point for the settings to go into effect.
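As a sketch, activating VLAN 867 on an interface group and addressing the resulting tagged interface would look like the following (names and the address reuse the worksheet examples; the tagged interface is named <ifgrp>-<vlan id>):
fas1> vlan create ifgrp1 867
fas1> ifconfig ifgrp1-867 170.173.144.10 netmask 255.255.255.0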
2.1.13 AutoSupport settings
AutoSupport is an automated diagnostic reporting function designed to notify you and NetApp of event-triggered messages. In addition, it provides weekly logs, NetApp health triggers, and performance statistics. This ensures prompt support responsiveness and proactive system-wide health checks.
Note: The system must remain on a support contract; the level of responsiveness depends on the level of service purchased.

AutoSupport settings | Controller 1 | Controller 2
Configure AutoSupport on | Yes | Yes
SMTP server name or IP | smtplegacy.providence.org | smtplegacy.providence.org
AutoSupport transport (one of: HTTPS (default), HTTP, SMTP) | |
AutoSupport From e-mail address | [email protected] | [email protected]
AutoSupport To e-mail address(es) | |
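A sketch of the equivalent option settings if the SMTP transport is chosen (the addresses are the worksheet values):
fas1> options autosupport.enable on
fas1> options autosupport.support.transport smtp
fas1> options autosupport.mailhost smtplegacy.providence.org
fas1> options autosupport.from [email protected]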
2.1.14 Customer/RMA details
Verify this information by logging into the http://www.now.netapp.com website. This information is required to ensure that the Technical Support personnel can reach you and the replacement parts are sent to the correct address.
Customer/RMA details | Primary contact | Secondary contact
Contact name | James Abella | Henry Pan
Contact address | 1801 Lind Ave SW, Renton, WA | 1801 Lind Ave SW, Renton, WA
Contact phone | (805) 218-3791 | (425) 525-3328
Contact e-mail address | [email protected] | [email protected]
RMA address | |
RMA attention to name | |
2.1.15 Time synchronization
Time synchronization details | Values
Time services protocol | NTP
Time servers (up to 3 internal or external hostnames or IP addresses) | time.providence.org
Max time skew (<5 minutes for CIFS) | <5 minutes
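For example, the values above map onto the timed options roughly as follows (a sketch; confirm exact option names against your Data ONTAP release):
fas1> options timed.proto ntp
fas1> options timed.servers time.providence.org
fas1> options timed.enable on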
2.1.16 SNMP management settings (Optional)
Fill out if you have SNMP monitoring applications (for example, Operations Manager). Set by using the ‘snmp options’ command.
SNMP settings | Controller 1 | Controller 2
SNMP trap host | |
SNMP community | |
Data Fabric Manager server name or IP | |
Data Fabric Manager protocol (choose one of: HTTP, HTTPS) | |
Data Fabric Manager port | 8080 | 8080
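If SNMP is required, a minimal sketch (the community string and trap host below are placeholders, not customer values):
fas1> snmp community add ro public
fas1> snmp traphost add dfm.example.com
fas1> options snmp.enable on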
3 Data ONTAP 7-Mode installation and verification checklists
The installer will perform the following checks to ensure that your new systems are configured correctly and are ready to turn over to you.
Physical installation Status
Check and verify all ordered components were delivered to the customer site.
Confirm the NetApp controllers are properly installed in the cabinets.
Confirm there is sufficient airflow and cooling in and around the NetApp system.
Confirm all power connections are secured adequately.
Confirm the racks are grounded (if not in NetApp cabinets).
Confirm there is sufficient power distribution to NetApp controllers & disk shelves.
Confirm power cables are properly arranged in the cabinet.
Confirm that LEDs and LCDs are displaying the correct information.
Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not crimped or stretched (fiber cable service loops should be larger than your fist).
Confirm that fiber cables laid between cabinets are properly connected and are not prone to physical damage.
Confirm disk shelves IDs are set correctly.
Confirm that Fibre Channel 2-Gb/4-Gb loop speeds are set correctly on DS14 shelves and that proper LC-LC cables are used.
Confirm that Ethernet cables are arranged and labeled properly.
Confirm all Fiber cables are arranged and labeled properly.
Confirm the Cluster Interconnect Cables are connected (for HA pairs).
Confirm there is sufficient space behind the cabinets to perform hardware maintenance.
Power On and Diagnostics Status
Power up the disk shelves to ensure that the disks spin up and are initialized properly.
Connect the console cable to the serial port and establish a console connection using a terminal emulator such as Tera Term, PuTTY, or HyperTerminal.
Note: Log all console output to a text file.
Power on the controllers.
Boot the controller and press Ctrl+C at the second prompt for ‘Special Boot Menu options’.
Go to Maintenance Mode by selecting option 5.
Check the onboard fibre ports status:
*> fcadmin config
Change the port mode if necessary from targets to initiators (for SAN requirements).
Verify the cable connections to all shelves:
*> fcadmin device_map
*> sasadmin shelf (for SAS shelves)
Verify disk ownership assignments:
*> disk show -a
Assign disks to each node using the disk assign command if necessary.
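For example (the disk name and owner below are placeholders; 'disk assign all' assigns every unowned disk to the local node):
*> disk assign 0a.00.0 -o nasxxxx
*> disk show -a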
Verify the Multipath High Availability (MPHA) cabling. Each disk must have an A and B path:
*> storage show disk -p
Verify the system has one root aggregate assigned:
*> aggr status
Follow these steps for both cluster nodes, halt and then reboot each system into Data ONTAP:
*> halt
LOADER> boot_ontap
Verify power and cooling are at acceptable levels:
fas1> environment status
Verify expansion cards are installed in the correct slots:
fas1> sysconfig -c
Verify all local and partner shelves are visible to the system:
fas1> fcadmin device_map
Verify that all disks are owned:
fas1> disk show -n
Use the WireGauge tool to verify that all the shelves are cabled correctly.
Installation and configuration Status
Confirm the correct versions of the Data ONTAP software, the Disk Qualification Package, and the disk, shelf, motherboard, and RLM/BMC firmware are installed on each controller:
fas1> version -b
fas1> sysconfig -a
Confirm ALL controllers are named as per the customer naming standards
Confirm the root volume is sufficiently sized (250 GB minimum):
fas1> vol size <root volume name>
Confirm all the licenses are installed
fas1> license
Check the /etc/rc and /etc/hosts files:
fas1> rdfile /etc/rc
fas1> rdfile /etc/hosts
Verify all configured Ethernet network interfaces (individual and ifgrp) are configured correctly as per the customer requirements: IP address, media type, flow control and speed.
Confirm any interfaces not required to perform host name resolution are configured with the "-wins" option
For clustered systems, verify they have partner interfaces for failover
Where necessary, confirm the network switches are configured to support dynamic or static multi-mode ifgrps (LACP or Etherchannel) as per customer requirement.
Has the customer accessed the system console using the RLM / SP / BMC?
Verify network connectivity and DNS resolution is configured properly:
fas1> ping <hostname of mail server>
Verify configured IFGRPs function properly by disconnecting one or more cables
fas1> ifgrp status
Pull cables
fas1> ping <hostname of mail server>
fas1> ifgrp status
Reinsert cables
Confirm each controller is configured to synchronise time with a centralised source
fas1> options timed
fas1> timezone
fas1> date
Confirm that AutoSupport is configured and functioning correctly.
fas1> options autosupport.doit "Test"
Confirm the default ‘home’ share is stopped from each controller (and vFiler)
If necessary, confirm that telnet and RSH is disabled and SSH is enabled
If required, confirm SNMP is configured on all controllers to the appropriate traphost
Download documentation pack and upload to controller(s)
CIFS configuration Status
If necessary, run through CIFS setup and join the controllers to the customer's Active Directory (requires an AD account with suitable permissions).
Confirm the NetApp controller’s local administrator account was created while configuring the CIFS service (and the password is set appropriately).
Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured appropriately (that is, NOT Everyone Full Control).
Confirm that appropriate Windows Domain Administrators group(s) is/are member of the NetApp controller’s local administrator group.
Create a share (see the sketch after this checklist).
Have the customer map the share to a host, write data to it.
Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular CIFS clients)
Confirm that qtrees storing CIFS data have the appropriate security style specified:
fas1> qtree status
Confirm that qtrees storing CIFS data have the appropriate ‘oplocks’ setting.
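A sketch of the share-creation step referenced above (the share name and path are placeholders):
fas1> cifs shares -add eng /vol/vol1/eng -comment "Engineering share"
fas1> cifs shares eng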
NFS configuration Status
Create a qtree and confirm the appropriate security style is specified
fas1> qtree create <path>
fas1> qtree status
Export the qtree.
Check the /etc/exports file and update it with new mount entries with appropriate permissions.
Have the customer mount the qtree from a host and write data to it.
Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular clients)
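A sketch of the qtree and export steps above (paths and host names are placeholders; -p persists the rule to /etc/exports):
fas1> qtree create /vol/vol1/q1
fas1> exportfs -p rw=host1,root=host1 /vol/vol1/q1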
iSCSI configuration Status
Make sure the iSCSI service is started.
Verify that an iSCSI host attach or support kit has been installed on the host.
If appropriate, verify SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an iSCSI session from the host.
Create a file system on the LUN, write some data to it and confirm the data is on the LUN.
Reboot the host and confirm that the LUN is still attached.
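A sketch of the iSCSI steps above when performed without SnapDrive (the names, size, LUN type, and initiator IQN are placeholders):
fas1> iscsi start
fas1> qtree create /vol/vol1/q1
fas1> igroup create -i -t windows ig_host1 iqn.1991-05.com.microsoft:host1
fas1> lun create -s 100g -t windows_2008 /vol/vol1/q1/lun0
fas1> lun map /vol/vol1/q1/lun0 ig_host1 0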
FCP configuration Status
Make sure the FCP service is started
fas1> fcp status
Verify an FCP host attach or support kit has been installed on the host.
If appropriate, verify that SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an FCP session from the host.
Have the customer create a file system on the LUN and, write some data to it.
Have the customer reboot the host and confirm the LUN is still attached.
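The FCP equivalent differs mainly in the igroup type and the initiator identifier (the WWPN below is a placeholder):
fas1> fcp start
fas1> igroup create -f -t windows ig_fcp1 10:00:00:00:c9:30:15:68
fas1> fcp show initiators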
Verification checklist Status
Make sure the CLUSTER license is enabled where necessary.
Verify the storage failover options on both systems in the HA pair are identical.
Temporarily disable AutoSupport:
fas1> options autosupport.enable off
Test manual cluster failover (in both directions) and ensure success; rectify any errors and prove that network connectivity continues to function correctly during failover.
fas1> cf enable
fas1> cf takeover
fas1> partner
fas2/fas1*> ifconfig -a
fas2/fas1*> ifgrp status
fas2/fas1*> partner
fas1> cf giveback
Test uncontrolled storage failover (in both directions) by disconnecting one controller from power. Rectify any errors.
Test component failure of a PSU (Check status of LEDs and console).
Test component failure of a LAN cable (interface group test), including ifgrp favor.
Test component failure of a fibre cable to a disk shelf (path test). For Multipath HA cabling, ensure all disks have an A and B channel:
fas1> storage show disk -p
Run the WireGauge tool to ensure the shelf cabling is correct.
When installing a new system into a new NetApp cabinet, switch off one cabinet PDU, and make sure all controllers and shelves remain powered on. Check the status of LEDs and console.
Insert an entry into the system log indicating installation is complete:
fas1> logger * * * System Install complete <installer name> <date> * * *
Back up the system configuration:
fas1> config dump <date>.cfg
Re-enable AutoSupport:
fas1> options autosupport.enable on
Post installation checklist Status
Give new customers a brief tour of FilerView or System Manager to explain the basic functions of managing their new system.
Log onto the NOW website and give the customer a brief tour of the site. Show them how to access documentation, download software and firmware, search the Knowledge Base, and verify their RMA information.
Discuss training available through NetApp University with new customers.
Since they are the basis for most Data ONTAP functionality, have the customer explain how Snapshots work. Correct any misconceptions.
Create and send a Trip Report within 24 hours to the customer, partner sales team and NetApp sales team.
When all tasks are completed, have customer sign a Certificate of Completion.
4 Data ONTAP Cluster-Mode configuration details
Please work with your professional services representative to complete this worksheet prior to the installation date. The requested information enables us to configure your equipment quickly and efficiently. Depending on the desired configuration, some fields may not be applicable.
Note: This worksheet does not replace the requirement for reading and understanding the appropriate Data ONTAP manuals that describe the operations of Data ONTAP in Cluster-Mode. Data ONTAP manuals can be found at the NetApp Support site under documentation.
Customer checklist of site preparation requirements (check all that apply):
Adequate rack space for the NetApp system and disk shelves has been provided.
The power requirements for the NetApp system and disk shelves have been satisfied.
The network patch cabling and switch port configuration is complete.
Company Name: NetApp Sales Order #:
Data ONTAP® Version:
4.1 Cluster information
It is assumed that the cluster will contain four nodes. If there are more than four nodes, replicate the appropriate section to add additional node information.
Starting from Data ONTAP 8.1, the 'cluster create' and 'cluster join' commands have built-in wizards.
The wizard generates hostnames, IP addresses for the cluster LIFs, and subnet masks for the cluster LIFs. It is recommended to use the cluster setup wizard when creating a new cluster or joining an existing cluster.
The wizard follows these rules:
The names of the nodes in the cluster are derived from the name of the cluster. If the cluster is named clust1, the nodes will be named clust1-01, clust1-02, and so on. A node name can be changed later with the cluster::system>node>modify command.
Each cluster LIF will be assigned an IP address in the 169.254.0.0 range with a Class B subnet mask (255.255.0.0) if the default is taken.
The initial cluster creation and configuration is performed on the first node that is booted. The initial setup script asks whether the operator wants to create a cluster or join a cluster; the first node will "create" and subsequent nodes will "join".
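As a sketch, the wizard is entered from the cluster shell on each node (the prompt below is paraphrased, not verbatim wizard output; accept or override the generated cluster LIF addresses as appropriate):
node1::> cluster setup
Do you want to create a new cluster or join an existing cluster? {create, join}: create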
4.1.1 Cluster
The cluster base aggregate will contain the root volume for the cluster Vserver.
Cluster name |
Cluster base aggregate |

4.1.2 Licensing
A base license is required, but additional features also need licensing.
License | Values

4.1.3 Admin Vserver
The cluster administration Vserver is used to manage cluster activities. It is distinct from the node Vservers and is used by System Manager to access the cluster.
Type of information Value
Cluster administrator password
The password for the ‘admin’ account that the cluster requires before granting cluster administrator access at the console or through a secure protocol.
The default rules for passwords are as follows:
A password must be at least eight characters long.
A password must contain at least one letter and one number.
Cluster management LIF IP address
A unique IP address for the cluster management LIF. The cluster administrator uses this address to access the cluster admin Vserver and manage the cluster. Typically, this address should be on the data network.
Cluster management LIF netmask
The subnet mask that defines the range of valid IP addresses on the cluster management network.
Cluster management LIF default gateway
The IP address for the router on the cluster management network.
DNS domain name
The name of your network's DNS domain. The domain name cannot contain an underscore (_) and must consist of alphanumeric characters. To enter multiple DNS domain names, separate each name with either a comma or a space.
Name server IP addresses
The IP addresses of the DNS name servers. Separate each address with either a comma or a space.
4.1.4 Time synchronization
Time synchronization details Values
Time services protocol (NTP)
Time Servers (up to 3 internal or external hostnames or IP addresses)
Max time skew (<5 minutes for CIFS)
4.1.5 Time zone
What time zone should the systems set their clocks to (for example, US/Pacific)?
Time Zone Location
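A minimal sketch of the related commands once the cluster is up, assuming one external time server (ntp1.example.com and US/Pacific are placeholders; verify the exact syntax for your Data ONTAP release against the 'system services ntp' commands shown in the verification checklist):

cluster::>system services ntp server create -server ntp1.example.com
cluster::>system services ntp config modify -enabled true
cluster::>timezone -timezone US/Pacific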
4.2 Node information
Individual controllers are called nodes. Each node has a unique name. Unlike the cluster name, the node name can be changed after it is initially defined.
System information Node 1 Node 2 Node 3 Node 4
Serial number
Node name
4.2.1 Physical port identification
Each port serves a specific function or role. These roles are:
Node Management
Data
Intercluster
Cluster
Node Management ports are required to maintain connectivity between the node and site services such as NTP and AutoSupport. Data ports are used to transfer data between the cluster and applications. Intercluster LIFs are used to set up peer relationships between clusters for replicating data. Cluster ports are used exclusively to transfer data between nodes within a cluster.
Note: Due to BURT 322675, NetApp recommends setting up an interface group for the node management LIF on each node of the cluster. The instructions below cover both scenarios, with and without a fix for this BURT; follow the section that is relevant to your case. Some of these instructions might diverge from the guidelines on the NetApp Support site; check for updated versions of this document for the latest information.
For versions of Data ONTAP that do not have a fix for BURT 322675, create a single-mode interface group from the ports listed in the table below and use this interface group as the port for the node management LIF. The interface group should be created before using the ‘cluster setup’ wizard on the node, as shown in the sketch that follows.
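A minimal sketch of the workaround, assuming node node1, interface group a0a, and the e0a/e0b port pair from the table below (all names are placeholders; verify parameters for your release):

cluster::>network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode singlemode
cluster::>network port ifgrp add-port -node node1 -ifgrp a0a -port e0a
cluster::>network port ifgrp add-port -node node1 -ifgrp a0a -port e0b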
For versions of Data ONTAP that have a fix for BURT 322675:
System model Port grouping
FAS3040 & FAS3070 e0a and e0c
V3040 & V3070 e0a and e0c
FAS3140, FAS3160 & FAS3170 e0a and e0b
V3140, V3160 & V3170 e0a and e0b
FAS3210, FAS3240 & FAS3270 e0a and e0b
V3210, V3240 & V3270 e0a and e0b
FAS6030, FAS6040, FAS6070 & FAS6080 e0a and e0c
V6030, V6040, V6070 & V6080 e0a and e0c
FAS6210, FAS6240 & FAS6280 e0a and e0b
V6210, V6240 & V6280 e0a and e0b
Some controllers have an e0M interface for environments with a dedicated management subnet. Include the e0M settings if you have a management subnet.
Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves.
Note: The following table is used to define port roles. If the fix for BURT 322675 is not installed, use the IFGRP column and note the associated ports. If the fix is installed, omit the IFGRP column.
Node Name IFGRP Ports MTU Port Role

4.2.2 Node management LIF
Each node has a management port that is used to communicate with it.
Node Name Port or IFGRP LIF Name IP Address Netmask Gateway
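Node management LIFs are normally created by the cluster setup wizard. If one must be created manually, a minimal sketch along these lines applies (node1, port e0M, and the addresses are placeholders; verify syntax for your release):

cluster::>network interface create -vserver node1 -lif mgmt1 -role node-mgmt -home-node node1 -home-port e0M -address 192.168.1.10 -netmask 255.255.255.0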
4.3 Cluster network information
Starting from Data ONTAP 8.1, the 'cluster create' and 'cluster join' commands have built-in wizards to generate hostnames, IP addresses for the cluster LIFs, and subnet masks for the cluster LIFs. NetApp recommends using the cluster setup wizard whenever you create a new cluster or attempt to join an existing cluster.
The wizard has the following rules:
The names for the nodes in the cluster are derived from the name of the cluster. If the cluster is named cmode, the nodes will be named cmode-01, cmode-02, and so on.
The cluster LIFs are assigned IP addresses in the 169.254.0.0 range with a Class B subnet mask (255.255.0.0).
Once the cluster has been defined and the nodes are joined to the cluster, other elements can be created. These elements can be created using System Manager, Element Manager, or CLI.
4.3.1 Interface groups (IFGRP)
Interface groups bond multiple network ports together for increased bandwidth and/or fault tolerance; see the sketch after the table below.
IFGRP name Node Distribution function Mode Ports
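A minimal sketch of creating a dynamic multimode (LACP) interface group, assuming node node1 and ports e1a/e1b (placeholders; the corresponding switch ports must be configured for LACP, as noted in the verification checklist):

cluster::>network port ifgrp create -node node1 -ifgrp a0b -distr-func ip -mode multimode_lacp
cluster::>network port ifgrp add-port -node node1 -ifgrp a0b -port e1a
cluster::>network port ifgrp add-port -node node1 -ifgrp a0b -port e1b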
4.3.2 Configure Virtual LANs (VLANs)
(Optional) VLANs are used to segment network domains. Each VLAN has a specific name that is a combination of the associated network port and the switch VLAN ID; see the sketch after the table below.
VLAN name Node Associated Network Port Switch VLAN ID
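A minimal sketch, assuming node node1, network port e0c, and switch VLAN ID 100 (placeholders); the resulting VLAN name follows the port-ID naming convention described above:

cluster::>network port vlan create -node node1 -vlan-name e0c-100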
4.3.3 Logical Interfaces (LIFs)
Logical interfaces (LIFs) are the points at which clients and administrators connect to the cluster.
LIF name Home node Home port Netmask Routing group Failover group

4.4 Intercluster network information
The intercluster ports are used for cross-cluster communication. An intercluster port should be routable to the following:
Another intercluster port
A data port of another cluster
Node name Port LIF name IP address Netmask Gateway
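A minimal sketch of creating an intercluster LIF, assuming node node1, port e0b, and placeholder addresses (verify syntax for your release):

cluster::>network interface create -vserver node1 -lif ic1 -role intercluster -home-node node1 -home-port e0b -address 10.0.0.10 -netmask 255.255.255.0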
4.5 Vserver information

Application access to data residing in the cluster must go through a Vserver. Vservers can be used to support single or multiple protocols, user groups, or whatever delineation the customer chooses. Additionally, Vservers can restrict the allocation of data to specific aggregates.
To create a Vserver, you can use any of the available administrative interfaces: System Manager, Element Manager, or CLI. The Vserver Setup wizard has the following sub-wizards, which you can run after you create a Vserver:
Network setup
Storage setup
Services setup
Data access protocol setup
Use the following section as a guide to create Vservers. Replicate this section as many times as required.
4.5.1 Creating a Vserver
Type of information Value
Vserver name
The name of a Vserver can contain alphanumeric characters and the following special characters: ".", "-", and "_". However, the name of a Vserver must not start with a number or a special character.
Protocols
Protocols that you want to configure or allow on that Vserver.
Name Services
Name services that you want to configure on the Vserver.
Aggregate name
Aggregate name on which you want to create the Vserver's root volume. The default aggregate name is used if you do not specify one.
Language Setting
Language you want the volumes to use.
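A minimal sketch of Vserver creation from the CLI, assuming Vserver vs1, root volume root_vs1, and aggregate aggr1 (all placeholders; available parameters vary by release):

cluster::>vserver create -vserver vs1 -rootvolume root_vs1 -aggregate aggr1 -ns-switch file -language C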
4.5.2 Creating Volumes on the Vserver
Volume name Aggregate name Volume size Junction path (NAS only)
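A minimal sketch, assuming Vserver vs1, volume vol1 on aggregate aggr1, a 100 GB size, and NAS junction path /vol1 (all placeholders):

cluster::>volume create -vserver vs1 -volume vol1 -aggregate aggr1 -size 100g -junction-path /vol1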
4.5.3 IP Network Interface on the Vserver

End-user applications connect to the data in the cluster only through interfaces defined on Vservers. The following table models the first four LIFs. Replicate the ‘Interface’ columns, or the entire table, if more interfaces are required.
Type of Information Interface 1 Interface 2 Interface 3 Interface 4
LIF name
The default LIF name is used if you do not specify one.
IP address
Subnet mask
Home node
Home node is the node on which you want to create a logical interface. The default home node is used if you do not specify one.
Home port
Home port is the port on which you want to create a logical interface. The default home port is used if you do not specify one.
Routing Group
Protocols
Protocols that can use the LIF.
Failover Group
DNS Zone
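A minimal sketch of creating a NAS data LIF, assuming Vserver vs1, node node1, port e0c, and placeholder addresses (verify option names for your release):

cluster::>network interface create -vserver vs1 -lif vs1_data1 -role data -data-protocol nfs,cifs -home-node node1 -home-port e0c -address 192.168.1.20 -netmask 255.255.255.0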
4.5.4 FCP Network Interface on the Vserver
Type of information Value
LIF name
The default LIF name is used if you do not specify one.
Home node
Home node is the node on which you want to create a logical interface. The default home node is used if you do not specify one.
Home port
Home port is the port on which you want to create a logical interface. The default home port is used if you do not specify one.
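A minimal sketch of creating an FCP LIF, assuming Vserver vs1, node node1, and FC target port 0c (placeholders; FCP LIFs carry no IP address):

cluster::>network interface create -vserver vs1 -lif vs1_fc1 -role data -data-protocol fcp -home-node node1 -home-port 0c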
4.5.5 LDAP services
Type of information Value
LDAP server IP address
LDAP server port number
The default LDAP server port number is used if you do not specify one.
LDAP server minimum bind authentication level
Bind DN and password
Base DN
4.5.6 CIFS protocol
Type of information Value
Domain name
CIFS share name
The default CIFS share name is used if you do not specify one.
Note: You must not use Unicode characters in CIFS share names. You can use alphanumeric characters and the following special characters: ".", "!", "@", "#", "$", "%", "&", "(", ")", ",", "_", "'", '"', "{", "}", "~", and "-".
CIFS share path
The default CIFS share path is used if you do not specify one.
CIFS access control list
The default CIFS access control list is used if you do not specify one.
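A minimal sketch of enabling CIFS and creating a share, assuming Vserver vs1, CIFS server name VS1, domain example.com, and a volume junctioned at /vol1 (all placeholders; the domain join prompts for Active Directory credentials):

cluster::>vserver cifs create -vserver vs1 -cifs-server VS1 -domain example.com
cluster::>vserver cifs share create -vserver vs1 -share-name share1 -path /vol1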
4.5.7 iSCSI protocol
Type of information Value
igroup name
The default igroup name is used if you do not specify one.
Names of the initiators
Operating system of the initiators
LUN names
The default LUN name is used if you do not specify one.
Volume name
The volume that the LUN will reside on.
LUN sizes
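A minimal sketch of the iSCSI workflow, assuming Vserver vs1, volume vol1, a Linux initiator, and placeholder names (verify -ostype values for your release):

cluster::>vserver iscsi create -vserver vs1
cluster::>lun create -vserver vs1 -volume vol1 -lun lun1 -size 10g -ostype linux
cluster::>lun igroup create -vserver vs1 -igroup ig1 -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:client1
cluster::>lun map -vserver vs1 -path /vol/vol1/lun1 -igroup ig1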
4.5.8 FCP protocol

Type of Information Value
igroup name
The default igroup name is used if you do not specify one.
WWPN
Worldwide port name (WWPN) of the initiators.
Operating system of the initiators
LUN names
The default LUN name is used if you do not specify one.
Volume name
The volume that the LUN will reside on.
LUN sizes
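The FCP workflow parallels the iSCSI sketch above, with WWPN initiators instead of IQNs. A minimal sketch, assuming Vserver vs1 and placeholder names:

cluster::>vserver fcp create -vserver vs1
cluster::>lun igroup create -vserver vs1 -igroup ig2 -protocol fcp -ostype linux -initiator 10:00:00:00:c9:30:15:6a
cluster::>lun map -vserver vs1 -path /vol/vol1/lun2 -igroup ig2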
4.6 Support information

The following section describes the support features.
4.6.1 Remote Management Settings (RLM/BMC/SP)
You can access the cluster's system console remotely by using the system console redirection feature provided by the remote management device of a node. Depending on your storage system model, the remote management device can be the Service Processor (SP), the Remote LAN Module (RLM), or the Baseboard Management Controller (BMC). NetApp recommends configuring these interfaces for easier, more secure management and troubleshooting.
Node name IP address Netmask Default gateway Mail server hostname Mail server IP address
4.6.2 AutoSupport settings
AutoSupport is a ‘phone home’ function that notifies you and NetApp of hardware problems so that replacement hardware can be dispatched automatically. (The system must remain on a support contract; the level of responsiveness depends on the level of the service contract: 2 hours to Next Business Day.)
AutoSupport settings Values
Enable AutoSupport? If not, provide justification.
SMTP Server Name or IP
AutoSupport transport (one of: HTTPS (default), HTTP, SMTP)
AutoSupport from e-mail address
AutoSupport to e-mail address(es)

Note: AutoSupport is configured per node; record these values for each node, replicating the rows as needed.
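A minimal sketch of the corresponding command, assuming HTTPS transport and placeholder e-mail addresses (the same command family appears in the verification checklist later in this document):

cluster::>system node autosupport modify -node * -state enable -transport https -from admin@example.com -to storage-admins@example.com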
4.6.3 Customer/RMA details
Verify this information by logging in to the NetApp Support site: http://now.netapp.com. This information is required to ensure that Technical Support personnel can reach you and that replacement parts are sent to the correct address.
Customer/RMA details Primary contact Secondary contact
Contact name
Contact address
Contact phone
Contact e-mail address
RMA address
RMA attention to name
A. Data ONTAP Cluster-Mode installation and verification checklists
The installer will perform the following checks to ensure that your new systems are configured correctly and are ready to turn over to you.
Physical installation Status
Check and verify all ordered components were delivered to the customer site.
Confirm the NetApp controllers are properly installed in the cabinets.
Confirm there is sufficient airflow and cooling in and around the NetApp system.
Confirm all power connections are secured adequately.
Confirm the racks are grounded (if not in NetApp cabinets).
Confirm there is sufficient power distribution to NetApp controllers & disk shelves.
Confirm power cables are properly arranged in the cabinet.
Confirm that LEDs and LCDs are displaying the correct information.
Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not crimped or stretched (fiber cable service loops should be larger than your fist).
Confirm that fiber cables laid between cabinets are properly connected and are not prone to physical damage.
Confirm disk shelves IDs are set correctly.
Confirm that Fibre Channel 2Gb/4Gb loop speeds are set correctly on DS14 shelves and that the proper LC-LC cables are used.
Confirm that Ethernet cables are arranged and labeled properly.
Confirm all Fiber cables are arranged and labeled properly.
Confirm the Cluster Interconnect Cables are connected (for HA pairs).
Confirm there is sufficient space behind the cabinets to perform hardware maintenance.
Confirm that the Cisco Nexus Cluster Interconnect switches are properly placed in the cabinet.
Confirm that the Cisco IP switches are properly placed in the cabinet.
Confirm that the Cisco FCP switches are properly placed in the cabinet.
Confirm that the latest “Reference Configuration File” for the Cisco Nexus switches has been installed.
Confirm that any VLANs required have been defined to the appropriate switches.
Confirm that the Ethernet cables are properly connected to the Cisco IP switches.
Confirm that the FCP cables are properly connected to the Cisco Fabric switches.
Power On and Perform Cluster Creation, Node and Vserver configuration Status
Power up the disk shelves to ensure that the disks spin up and are initialized properly.
Connect the console to the serial port and establish a console connection using a terminal emulator such as Tera Term, PuTTY, or HyperTerminal.
Note: Log all console output to a text file.
Power on the controllers.
On the first controller console, reply to the initial Cluster Setup response request with “create” to initialize the cluster and the first node.
On the next controller console, reply to the initial Cluster Setup response request with “join” to initialize the second node and join the cluster.
On each subsequent controller, perform the same task as the second controller to join them as nodes in the cluster.
Install System Manager 2.0 on a Windows or Linux system.
Use System Manager 2.0 to install remaining licenses on the cluster.
Note: If any of the nodes are V-Series, the V-Series license needs to be added at the node level for each node that is a V-Series controller. You have 72 hours from the Cluster Setup script completion to install the license on the local nodes.
cluster::>run -node node1
node1>license add <V-Series license>
node1>exit
Use System Manager 2.0 to create the first Vservers.
Use the WireGauge tool to verify that all the shelves are cabled correctly and switches are properly connected.
Miscellaneous configuration Status
Where necessary, confirm the network switches are configured to support dynamic or static multimode IFGRPs (LACP or EtherChannel), per customer requirements.
Has the customer accessed the system console using the RLM / BMC / SP?
Verify that network connectivity and DNS resolution are configured properly:
cluster::>network ping -node <node name> -destination <hostname of DNS server>
Verify that configured IFGRPs with more than one port function properly by disconnecting one or more cables.
Confirm each node's date and time zone are set correctly.
cluster::>system node date show
cluster::>timezone
Display whether NTP is used in the cluster
cluster::>system services ntp config show
cluster::>system services ntp server show
Confirm that AutoSupport is configured and functioning correctly.
cluster::>system node autosupport show
Confirm that telnet and RSH are disabled and SSH is enabled.
If required, confirm SNMP is configured on all controllers to the appropriate traphost
Download the documentation pack and provide it to the customer.
CIFS configuration (per Vserver servicing CIFS) Status
Check the export policy rules to ensure that the CIFS access protocol will allow access
cluster::>vserver export-policy rule show
If necessary, run through CIFS setup and join the controllers to the customer's Active Directory (requires an AD account with suitable permissions).
Confirm the NetApp controller’s local administrator account was created while configuring the CIFS service (and the password is set appropriately).
Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured appropriately (that is, NOT Everyone Full Control).
Confirm that the appropriate Windows Domain Administrators group(s) are members of the cluster’s local administrators group.
Create a share.
Have the customer map the share to a host and write data to it.
Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular CIFS clients)
Confirm that qtrees storing CIFS data have the appropriate security style specified:
cluster::>volume qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>
Confirm that qtrees storing CIFS data have the appropriate ‘oplocks’ setting.
NFS configuration (per Vserver servicing NFS) Status
Create a qtree and confirm the appropriate security style is specified
cluster::>volume qtree create -vserver <vserver> -volume <volume name> -qtree <qtree name> -security-style {unix|ntfs|mixed}
cluster::>volume qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>
Check the export policy rules to ensure that the NFS access protocol will allow access
cluster::>vserver export-policy rule show
Have the customer mount the qtree from a host and write data to it.
Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular clients)
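If the export policy blocks access during these checks, a rule can be added. A minimal sketch, assuming Vserver vs1 and the default policy, with open access suitable for initial testing only (tighten before production):

cluster::>vserver export-policy rule create -vserver vs1 -policyname default -ruleindex 1 -protocol any -clientmatch 0.0.0.0/0 -rorule any -rwrule any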
iSCSI configuration (per Vserver servicing iSCSI) Status
Make sure the iSCSI service is started.
Verify that an iSCSI host attach or support kit has been installed on the host.
If appropriate, verify SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an iSCSI session from the host.
Create a file system on the LUN, write some data to it and confirm the data is on the LUN.
Reboot the host and confirm that the LUN is still attached.
FCP configuration (per Vserver servicing FCP) Status
Make sure the FCP service is started
Verify an FCP host attach or support kit has been installed on the host.
If appropriate, verify that SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an FCP session from the host.
Have the customer create a file system on the LUN and write some data to it.
Have the customer reboot the host and confirm the LUN is still attached.
Verification checklist Status
Make sure the CLUSTER license is enabled where necessary.
Verify the cluster options on all nodes in the cluster are identical.
Temporarily disable AutoSupport on nodes of the cluster.
cluster::>system node autosupport modify -node <node name> -state disable
Test manual node takeover (in both directions) and ensure success; rectify any errors and confirm that network connectivity continues to function correctly during failover.
cluster::>system storage failover takeover -ofnode <node> -bynode <node>
cluster::>system storage failover show-giveback
cluster::>system storage failover giveback -ofnode <node> -fromnode <node>
Test Uncontrolled Cluster Failover (in both directions) by disconnecting one controller from power. Rectify any errors.
Repeat the above test for all HA pairs in the cluster.
Test component failure of a PSU (check the status of LEDs and the console).
Test component failure of a LAN cable.
Run the WireGauge tool to ensure the shelf cabling is correct.
When installing a new system into a new NetApp cabinet, switch off one cabinet PDU, and make sure all controllers and shelves remain powered on. Check the status of LEDs and console.
Re-enable AutoSupport:
cluster::>system node autosupport modify -node <node name> -state enable
Post installation checklist Status
Give new customers a brief tour of System Manager and Element Manager to explain the basic functions of managing their new cluster.
Log onto the NOW website and give the customer a brief tour of the site. Show them how to access documentation, download software and firmware, search the Knowledge Base, and verify their RMA information.
Discuss training available through NetApp University with new customers.
Since they are the basis for most Data ONTAP functionality, have the customer explain how Snapshots work. Correct any misconceptions.
Create and send a Trip Report within 24 hours to the customer, partner sales team and NetApp sales team.
When all tasks are completed, have the customer sign a Certificate of Completion.
A.1 Definitions
This section contains the glossary of terms used throughout this document.
Term Definition
CIFS Common Internet File System
DNS Domain Name System
DR Disaster Recovery
DRC Disaster Recovery Center (data center)
FAS Fabric Attached Storage
FC Fibre Channel
FlexVol Flexible volume
IOPS Input/Output Operations per Second
iSCSI Internet Small Computer System Interface
MAN Metropolitan Area Network
LUN Logical Unit Number
NAS Network Attached Storage
NFS Network File System
NIS Network Information Service
NTP Network Time Protocol
PDU Power Distribution Unit
PDC Primary Data Center
RPM Revolutions Per Minute
RAID Redundant Array of Independent Disks
SAN Storage Area Network
SNMP Simple Network Management Protocol
SATA Serial Advanced Technology Attachment
UPS Uninterruptible Power Supply
VIF Virtual Interface
VLAN Virtual Local Area Network
WINS Windows Internet Naming Service