havcs-410-101 a-2-10-srt-pg_2

VERITAS Cluster Server for UNIX, Fundamentals (Appendixes) HA-VCS-410-101A-2-10-SRT (100-002149-B)

Transcript of havcs-410-101 a-2-10-srt-pg_2

Page 1: havcs-410-101 a-2-10-srt-pg_2

VERITAS Cluster Server for UNIX, Fundamentals(Appendixes) HA-VCS-410-101A-2-10-SRT (100-002149-B)

Page 2: havcs-410-101 a-2-10-srt-pg_2

COURSE DEVELOPERS
Bilge Gerrits
Siobhan Seeger
Dawn Walker

LEAD SUBJECT MATTER EXPERTS
Geoff Bergren
Paul Johnston
Dave Rogers
Jim Senicka
Pete Toemmes

TECHNICAL CONTRIBUTORS AND REVIEWERS
Billie Bachra
Barbara Ceran
Bob Lucas
Gene Henriksen
Margy Cassidy

Disclaimer

The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this guide, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Copyright

Copyright © 2005 VERITAS Software Corporation. All rights reserved. No part of the contents of this training material may be reproduced in any form or by any means or be used for the purposes of training or education without the written permission of VERITAS Software Corporation.

Trademark Notice

VERITAS, the VERITAS logo, VERITAS FirstWatch, VERITAS Cluster Server, VERITAS File System, VERITAS Volume Manager, VERITAS NetBackup, and VERITAS HSM are registered trademarks of VERITAS Software Corporation. Other product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

VERITAS Cluster Server for UNIX, Fundamentals Participant Guide

April 2005 Release

VERITAS Software Corporation
350 Ellis Street
Mountain View, CA 94043
Phone 650-527-8000
www.veritas.com

Page 3: havcs-410-101 a-2-10-srt-pg_2

Table of Contents

Appendix A: Lab Synopses
Lab 2 Synopsis: Validating Site Preparation .......... A-2
Lab 3 Synopsis: Installing VCS .......... A-6
Lab 4 Synopsis: Using the VCS Simulator .......... A-18
Lab 5 Synopsis: Preparing Application Services .......... A-24
Lab 6 Synopsis: Starting and Stopping VCS .......... A-29
Lab 7 Synopsis: Online Configuration of a Service Group .......... A-31
Lab 8 Synopsis: Offline Configuration of a Service Group .......... A-38
Lab 9 Synopsis: Creating a Parallel Service Group .......... A-47
Lab 10 Synopsis: Configuring Notification .......... A-52
Lab 11 Synopsis: Configuring Resource Fault Behavior .......... A-55
Lab 13 Synopsis: Testing Communication Failures .......... A-60
Lab 14 Synopsis: Configuring I/O Fencing .......... A-66

Appendix B: Lab Details
Lab 2: Validating Site Preparation .......... B-3
Lab 3: Installing VCS .......... B-11
Lab 4: Using the VCS Simulator .......... B-21
Lab 5: Preparing Application Services .......... B-29
Lab 6: Starting and Stopping VCS .......... B-37
Lab 7: Online Configuration of a Service Group .......... B-41
Lab 8: Offline Configuration of a Service Group .......... B-57
Lab 9: Creating a Parallel Service Group .......... B-73
Lab 10: Configuring Notification .......... B-85
Lab 11: Configuring Resource Fault Behavior .......... B-93
Lab 13 Details: Testing Communication Failures .......... B-101
Lab 14: Configuring I/O Fencing .......... B-111

Appendix C: Lab Solutions
Lab 2 Solutions: Validating Site Preparation .......... C-3
Lab 3 Solutions: Installing VCS .......... C-13
Lab 4 Solutions: Using the VCS Simulator .......... C-35
Lab 5 Solutions: Preparing Application Services .......... C-51
Lab 6 Solutions: Starting and Stopping VCS .......... C-63
Lab 7 Solutions: Online Configuration of a Service Group .......... C-67
Lab 8 Solutions: Offline Configuration of a Service Group .......... C-89
Lab 9 Solutions: Creating a Parallel Service Group .......... C-109
Lab 10 Solutions: Configuring Notification .......... C-125


Page 4: havcs-410-101 a-2-10-srt-pg_2


Lab 11 Solutions: Configuring Resource Fault Behavior .......... C-133
Lab 13 Solutions: Testing Communication Failures .......... C-149
Lab 14 Solutions: Configuring I/O Fencing .......... C-163

Appendix D: Job Aids
Cluster System States .......... D-2
Resource States and Transitions .......... D-4
Service Group Configuration Procedure .......... D-5
Resource Configuration Procedure .......... D-6
List of Notifier Events and Traps .......... D-7
Example Bundled Agent Reference Guide Entries .......... D-10
SCSI-3 Persistent Reservations .......... D-17
Best Practices .......... D-18
New Features in VCS 4.1 .......... D-22
New Features in VCS 4.0 .......... D-24

Appendix E: Design Worksheet: Template

Page 5: havcs-410-101 a-2-10-srt-pg_2

Appendix A
Lab Synopses

Page 6: havcs-410-101 a-2-10-srt-pg_2


Lab 2 Synopsis: Validating Site Preparation
In this lab, work with your partner to prepare the systems for installing VCS.

Step-by-step instructions for this lab are located on the following page:
• “Lab 2: Validating Site Preparation,” page B-3

Solutions for this exercise are located on the following page:
• “Lab 2 Solutions: Validating Site Preparation,” page C-3

Lab 2: Validating Site Preparation
• Visually inspect the classroom lab site.
• Complete and validate the design worksheet.
• Use the lab appendix best suited to your experience level:
– Appendix A: Lab Synopses
– Appendix B: Lab Details
– Appendix C: Lab Solutions
See the next slide for lab assignments.

Page 7: havcs-410-101 a-2-10-srt-pg_2


Lab Assignments
Use the table to record your cluster values as you work through the lab.

Object                                  Sample Value                                                                Your Value
Your system host name (your_sys)        train1
Partner system host name (their_sys)    train2
Name prefix for your objects            bob
Interconnect link 1                     Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth2; VA: bge2
Interconnect link 2                     Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth3; VA: bge3
Public network interface                Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth1; VA: bge0
Admin IP address for your_sys           192.168.xx.xxx
Admin IP address for their_sys          192.168.xx.xxx

Page 8: havcs-410-101 a-2-10-srt-pg_2


Verifying the Network Configuration

1 Verify that the Ethernet network interfaces for the two cluster interconnect links are cabled together using crossover cables.
Note: In actual implementations, each link should use a completely separate infrastructure (separate NIC and separate hub or switch). For simplicity of configuration in the classroom environment, the two interfaces used for the cluster interconnect are on the same NIC.

2 Verify that the public interface is cabled and accessible on the classroom public network.
Virtual Academy: Skip this step.

[Classroom network diagram: “Four Node—UNIX.” The classroom LAN is 192.168.XX, where XX = 27, 28, or 29. Lab systems train1 (192.168.XX.101) through train12 (192.168.XX.112) connect in pairs through hubs/switches to the LAN, which also hosts a SAN disk array, a SAN tape library, and a software share at 192.168.XX.100.]

Page 9: havcs-410-101 a-2-10-srt-pg_2


Other Checks

1 Check the PATH environment variable. If necessary, add the /sbin, /usr/sbin, /opt/VRTS/bin, and /opt/VRTSvcs/bin directories to your PATH environment variable.

2 Check the VERITAS licenses to determine whether a VERITAS Cluster Server license is installed.

Checking Packages—Linux Only

When installing any Storage Foundation product or VERITAS Volume Replicator, the VRTSalloc package (the VERITAS Volume Manager Intelligent Storage Provisioning feature) requires that the following Red Hat packages are installed:
• compat-gcc-c++-7.3-2.96.128
• compat-libstdc++-7.3-2.96.128

Version 7.3-2.96.128 is provided with Red Hat Enterprise Linux 3 Update 2 (i686).

Configuring Secure Shell—Linux Only

Verify that ssh configuration files are set up in order to install VCS on Linux or to run remote commands without prompts for passwords.

If you do not configure ssh, you are required to type in the root passwords for all systems for every remote command issued during the following services preparation lab and the installation procedure.

If you do not want to use ssh with automatic login using saved passphrases on a regular basis, run the following commands at the command line. This is in effect only for this session:
exec /usr/bin/ssh-agent $SHELL
ssh-add

Save your passphrase during your GNOME session.

Setting Up a Console Window—Linux Only

1 Open a console window so you can observe messages during later labs.

2 Open a System Log Display tool.
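The PATH check under Other Checks can be sketched as a portable shell snippet. The directory list comes from the lab text; the append-only-if-missing loop is an illustrative assumption, not from the guide:

```shell
# Sketch: add the VCS administration directories to PATH only if absent
for d in /sbin /usr/sbin /opt/VRTS/bin /opt/VRTSvcs/bin; do
    case ":$PATH:" in
        *":$d:"*) : ;;                  # already on the PATH
        *) PATH="$PATH:$d" ;;           # append the missing directory
    esac
done
export PATH
echo "$PATH"
```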

Page 10: havcs-410-101 a-2-10-srt-pg_2


Lab 3 Synopsis: Installing VCS
In this lab, work with your lab partner to install VCS on both systems.

Step-by-step instructions for this lab are located on the following page:
• “Lab 3: Installing VCS,” page B-11

Solutions for this exercise are located on the following page:
• “Lab 3 Solutions: Installing VCS,” page C-13

Obtaining Classroom Information
Use the following table to collect information you need to install VCS. Your instructor may also ask you to install VERITAS Volume Manager and VERITAS File System.

Lab 3: Installing VCS
[Slide: two systems, train1 and train2, form cluster vcs1. Record the software location, the subnet, and each system's interconnect links (Link 1, Link 2) and public interface. For VCS 4.x, run # ./installer; for pre-4.0 versions, run # ./installvcs.]

Page 11: havcs-410-101 a-2-10-srt-pg_2


Cluster Definition
These values define cluster properties and are required to install VCS.

Attributes/Properties                        Sample Value                                                                Your Values
Node names, cluster name, and cluster ID     train1 train2: vcs1, ID 1
                                             train3 train4: vcs2, ID 2
                                             train5 train6: vcs3, ID 3
                                             train7 train8: vcs4, ID 4
                                             train9 train10: vcs5, ID 5
                                             train11 train12: vcs6, ID 6
Ethernet interface for interconnect link #1  Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth2; VA: bge2
Ethernet interface for interconnect link #2  Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth3; VA: bge3
Public network interface                     Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth1; VA: bge0

Page 12: havcs-410-101 a-2-10-srt-pg_2


Web GUI IP Address
  train1 train2     192.168.xxx.91
  train3 train4     192.168.xxx.92
  train5 train6     192.168.xxx.93
  train7 train8     192.168.xxx.94
  train9 train10    192.168.xxx.95
  train11 train12   192.168.xxx.96
Subnet Mask                       255.255.255.0
Network interface                 Solaris: eri0; Sol Mob: dmfe0; AIX: en0; HP-UX: lan0; Linux: eth0; VA: bge0
NetworkHosts (HP-UX only)         see instructor
Installation software location    install_dir
License
Administrator account             Name: admin; Password: password


Page 13: havcs-410-101 a-2-10-srt-pg_2


1 Obtain the location of the installation software from your instructor.

Installation software location:

_____________________________________________________________

2 This first step is to be performed from only one system in the cluster. The install script installs and configures all systems in the cluster.

a Change to the install directory.

b Run the installer script (VERITAS Product Installer) located in the directory specified above. For versions of VCS before 4.0, use installvcs. Use the information in the previous table or design worksheet to respond to the installation prompts.
Note: For VCS 4.x, install Storage Foundation (Volume Manager and File System).

c If a license key is needed, obtain one from your instructor and record it here.
License Key: ____________________________________________

d Install all optional packages (including Web console and Simulator).

e Accept default of Y to configure VCS.

f Do not configure a third heartbeat link at this time.

g Do not configure a low-priority heartbeat link at this time.

h Do not configure VERITAS Security Services.

i Do not set any user names or passwords.

j Retain the default admin user account and password.

k Configure the Cluster Server Cluster Manager.

l Do not configure SMTP Notification.

Installing VERITAS Cluster Server Software

Page 14: havcs-410-101 a-2-10-srt-pg_2


m Do not configure SNMP Notification.

n Select the option to install all packages simultaneously on all systems.

o Do not set up enclosure-based naming for Volume Manager.

p Start Storage Foundation Enterprise HA processes.

q Do not set up a default disk group.

3 If you did not install the Java GUI package as part of the installer (VPI) process (or installvcs for earlier versions of VCS), install the VRTScscm Java GUI package on each system in the cluster. The location of this package is in the pkgs directory under the install location directory given to you by your instructor.

Installing Other Software

1 If your instructor indicates that additional software, such as VCS patches or updates, is required, obtain the location of the installation software from your instructor.

Installation software directory:

_____________________________________________________________

2 Install any VCS patches or updates, as directed by your instructor. Use the operating system-specific command.

3 Install any other software indicated by your instructor. For example, if your classroom uses VCS 3.5, you may be directed to install VERITAS Volume Manager and VERITAS File System.

Page 15: havcs-410-101 a-2-10-srt-pg_2


Viewing VERITAS Cluster Server Installation Results

You can use the worksheet at the end of this lab synopsis to verify and record your cluster configuration.

1 Verify that VCS is now running using hastatus.
If hastatus -sum shows the cluster systems in a running state and a ClusterService service group is online on one of your cluster systems, VCS has been properly installed and configured.

2 Perform additional verification (generally only necessary if there is a problem displayed by hastatus -sum).

a Verify that all packages are loaded.

b Verify that LLT is running.

c Verify that GAB is running.

Exploring the Default VCS Configuration

View the configuration files set up by the VCS installation procedure.

1 Explore the LLT configuration.

2 Explore the GAB configuration.

3 Explore the VCS configuration files.
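The verification steps can be sketched as follows. The underlying checks (hastatus -sum, lltconfig, gabconfig -a) exist only on a cluster node, so this sketch probes for the commands instead of assuming they are installed:

```shell
# Sketch: probe for the VCS verification commands; on a non-cluster host this
# reports them as absent instead of failing
report=""
for c in hastatus lltconfig gabconfig; do
    if command -v "$c" >/dev/null 2>&1; then
        report="$report $c=present"
    else
        report="$report $c=absent"
    fi
done
echo "checks:$report"
# On a cluster node, the checks themselves are:
#   hastatus -sum    # systems should be in the RUNNING state
#   lltconfig        # should report that LLT is running
#   gabconfig -a     # shows GAB port memberships
```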

Page 16: havcs-410-101 a-2-10-srt-pg_2


Verifying Connectivity with the GUIs

Verify GUI connectivity with the Java GUI and the Web GUI. Both GUIs can connect to the cluster with the default user of admin and password as the default password.

1 Use a Web browser to connect to the Web GUI.

2 Start the Java GUI and connect to the cluster using these values:

3 Browse the cluster configuration.

Page 17: havcs-410-101 a-2-10-srt-pg_2


Design Worksheet: Cluster Interconnect Configuration

First system:

/etc/VRTSvcs/comms/llttab                          Sample Value                                                                Your Value
set-node (host name)                               train1
set-cluster (number in host name of odd system)    1
link                                               Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth2; VA: bge2
link                                               Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth3; VA: bge3

/etc/VRTSvcs/comms/llthosts                        Sample Value                                                                Your Value
                                                   train1 train2

/etc/VRTSvcs/comms/sysname                         Sample Value                                                                Your Value
                                                   train1
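As a concrete illustration, the first-system worksheet values assemble into an llttab along these lines. This is a sketch using the Solaris link-line format; the /dev/qfe device paths are assumptions, not values from the guide:

```shell
# Sketch: an example llttab built from the worksheet values for train1
# (the /dev/qfe device paths in the link lines are assumptions)
cat > /tmp/llttab.example <<'EOF'
set-node train1
set-cluster 1
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
EOF
cat /tmp/llttab.example
```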

Page 18: havcs-410-101 a-2-10-srt-pg_2


Second system:

/etc/VRTSvcs/comms/llttab      Sample Value                                                                Your Value
set-node                       train2
set-cluster                    1
link                           Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth2; VA: bge2
link                           Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth3; VA: bge3

/etc/VRTSvcs/comms/llthosts    Sample Value                                                                Your Value
                               train1 train2

/etc/VRTSvcs/comms/sysname     Sample Value                                                                Your Value
                               train2

Cluster Configuration (main.cf)

Types Definition               Sample Value                                                                Your Value
Include                        types.cf

Page 19: havcs-410-101 a-2-10-srt-pg_2


Cluster Definition Sample Value Your Value

Cluster vcs1

Required Attributes

UserNames admin=password

ClusterAddress 192.168.xx.91

Administrators admin

Optional Attributes

CounterInterval 5

System Definition Sample Value Your Value

System train1 (odd)

System train2 (even)

Service Group Definition Sample Value Your Value

Group ClusterService

Required Attributes

FailOverPolicy Priority

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1

OnlineRetryLimit 3

Tag CSG

Page 20: havcs-410-101 a-2-10-srt-pg_2


Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name webip

Resource Type IP

Required Attributes

Device eri0

Address 192.168.xx.91

Optional Attributes

Netmask 255.255.255.0

Critical? Yes (1)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name csgnic

Resource Type NIC

Required Attributes

Device <platform specific>

Critical? Yes (1)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name VCSWeb

Resource Type VRTSWebApp

Required Attributes

AppName vcs

InstallDir /opt/VRTSweb/VERITAS

TimeForOnline 5

Critical? Yes (1)

Enabled? Yes (1)

Page 21: havcs-410-101 a-2-10-srt-pg_2


Resource Dependency Definition

Service Group ClusterService

Parent Resource Requires Child Resource

VCSWeb webip

webip csgnic
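Taken together, the worksheets above correspond to a main.cf roughly like the following sketch, written with the sample values. Two hedges: a real installation stores an encrypted password in UserNames rather than plain text, and the IP agent's netmask attribute is spelled NetMask in main.cf even though the worksheet writes Netmask:

```shell
# Sketch: the main.cf implied by the worksheets above (sample values only)
cat > /tmp/main.cf.example <<'EOF'
include "types.cf"

cluster vcs1 (
        UserNames = { admin = password }
        ClusterAddress = "192.168.xx.91"
        Administrators = { admin }
        CounterInterval = 5
        )

system train1
system train2

group ClusterService (
        SystemList = { train1 = 0, train2 = 1 }
        AutoStartList = { train1 }
        FailOverPolicy = Priority
        OnlineRetryLimit = 3
        )

        IP webip (
                Device = eri0
                Address = "192.168.xx.91"
                NetMask = "255.255.255.0"
                )

        NIC csgnic (
                Device = eri0
                )

        VRTSWebApp VCSWeb (
                AppName = vcs
                InstallDir = "/opt/VRTSweb/VERITAS"
                TimeForOnline = 5
                )

        VCSWeb requires webip
        webip requires csgnic
EOF
grep -c "requires" /tmp/main.cf.example
```

The two `requires` lines at the end encode the dependency table: VCSWeb on top of webip, webip on top of csgnic.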

Page 22: havcs-410-101 a-2-10-srt-pg_2


Lab 4 Synopsis: Using the VCS Simulator
This lab uses the VERITAS Cluster Server Simulator and the Cluster Manager Java Console. You are provided with a preconfigured main.cf file to learn about managing the cluster.

Step-by-step instructions for this lab are located on the following page:
• “Lab 4: Using the VCS Simulator,” page B-21

Solutions for this exercise are located on the following page:
• “Lab 4 Solutions: Using the VCS Simulator,” page C-35

Obtaining Classroom Information
Use the following table to record the values for your classroom.

Attribute                    Sample Value    Your Value
Port                         15559
VCS user account/password    oper/oper

Lab 4: Using the VCS Simulator
1. Start the Simulator Java GUI: hasimgui &
2. Add a cluster.
3. Copy the preconfigured main.cf file to the new directory.
4. Start the cluster from the Simulator GUI.
5. Launch the Cluster Manager Java Console.
6. Log in using the VCS account oper with password oper. This account demonstrates different privilege levels in VCS.
See the next slide for classroom values.

Page 23: havcs-410-101 a-2-10-srt-pg_2


File Locations

Type of File                          Location
Lab main.cf file                      cf_files_dir
Simulator configuration directory     sim_config_dir

Page 24: havcs-410-101 a-2-10-srt-pg_2


Starting the Simulator on UNIX

4 Add /opt/VRTScssim/bin to your PATH environment variable after any /opt/VRTSvcs/bin entries, if it is not already present.

5 Set the VCS_SIMULATOR_HOME environment variable to /opt/VRTScssim, if it is not already set.

6 Start the Simulator GUI.

7 Add a cluster.

8 Use these values to define the new simulated cluster:
– Cluster Name: vcs_operations
– System Name: S1
– Port: 15559
– Platform: Solaris
– WAC Port: -1

9 In a terminal window, change to the simulator configuration directory for the new simulated cluster named vcs_operations.

10 Copy the main.cf, types.cf, and OracleTypes.cf files provided by your instructor into the vcs_operations simulation configuration directory.

11 From the Simulator GUI, start the vcs_operations cluster, launch the VCS Java Console for the vcs_operations simulated cluster, and log in as oper with password oper.

Note: While you may use admin/password to log in, the point of using oper is to demonstrate the differences in privileges between VCS user accounts.
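Steps 4 and 5 can be sketched in shell. The paths come from the lab text; the append-only-if-missing guard is an illustrative assumption:

```shell
# Sketch: simulator environment setup (steps 4-5)
VCS_SIMULATOR_HOME=${VCS_SIMULATOR_HOME:-/opt/VRTScssim}   # set only if unset
export VCS_SIMULATOR_HOME
case ":$PATH:" in
    *":$VCS_SIMULATOR_HOME/bin:"*) : ;;                    # already present
    *) PATH="$PATH:$VCS_SIMULATOR_HOME/bin" ;;             # appended last, so it
esac                                                       # follows VRTSvcs entries
export PATH
echo "$VCS_SIMULATOR_HOME"
```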

Page 25: havcs-410-101 a-2-10-srt-pg_2


Viewing Status and Attributes

1 How many systems are members of the cluster?

2 Determine the status of all service groups.

Service Group     Status on S1    Status on S2    Status on S3
AppSG
OracleSG
ClusterService

3 Which service groups have service group operator privileges set for the oper account?

4 Which resources in the AppSG service group have the Critical resource attribute enabled?

5 Which resource is the top-most parent in the OracleSG service group?

6 Which immediate child resources does the Oracle resource in the OracleSG service group depend on?

Page 26: havcs-410-101 a-2-10-srt-pg_2


1 Attempt to take the ClusterService group offline on S1.

What happens?

2 Attempt to take the AppSG service group offline on S1.

What happens?

3 Attempt to take the OracleSG service group offline on S1.

What happens?

4 Take all service groups that you have privileges for offline everywhere.

5 Bring the AppSG service group online on S2.

6 Bring the OracleSG service group online on S1.

7 Switch service group AppSG to S1.

8 Switch the OracleSG service group to S2.

9 Bring all service groups that you have privileges for online on S3.

Manipulating Service Groups

Page 27: havcs-410-101 a-2-10-srt-pg_2


1 Attempt to take the OraListener resource in OracleSG offline on S3.

What happens to the OracleSG service group?

2 Bring the OraListener resource online on S3.

3 Attempt to take the OraMount resource offline on system S3.

What happens?

4 Attempt to bring only the OraListener resource online on S1.

What happens?

5 Fault the Oracle (oracle) resource in the OracleSG service group.

6 What happens to the service group and resource?

7 View the log entries to see the sequence of events.

8 Attempt to switch the OracleSG service group back to S3.

What happens?

9 Clear the fault on the Oracle resource in the OracleSG service group.

10 Switch the OracleSG service group back to S3.

11 Save and close the configuration, log off from the GUI, and stop the simulator.

Manipulating Resources

Page 28: havcs-410-101 a-2-10-srt-pg_2


Lab 5 Synopsis: Preparing Application Services
The purpose of this lab is to prepare the loopy process service for high availability.

Step-by-step instructions for this lab are located on the following page:
• “Lab 5: Preparing Application Services,” page B-29

Solutions for this exercise are located on the following page:
• “Lab 5 Solutions: Preparing Application Services,” page C-51

Lab Assignments
Use the design worksheet to gather and record the values needed to complete the preparation steps.

Lab 5: Preparing Application Services
[Slide: two systems, each with a NIC carrying an IP address and a disk/LUN. One partner creates disk group bobDG1 with volume bobVol1 mounted at /bob1 and runs /bob1/loopy; the other creates sueDG1 with volume sueVol1 mounted at /sue1 and runs /sue1/loopy. The loopy script is a simple loop: while true; do echo “…”; done.]
See the next slide for classroom values.

Page 29: havcs-410-101 a-2-10-srt-pg_2


Resource Definition      Sample Value                                                                Your Value
Service Group            nameSG1
Resource Name            nameNIC1
Resource Type            NIC
Required Attributes
Device                   Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0
NetworkHosts*            192.168.xx.1 (HP-UX only)
Critical?                No (0)
Enabled?                 Yes (1)

Resource Definition      Sample Value                                                                Your Value
Service Group            nameSG1
Resource Name            nameIP1
Resource Type            IP
Required Attributes
Device                   Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth1; VA: bge0
Address                  192.168.xx.5* (see table)
Optional Attributes
Netmask                  255.255.255.0
Critical?                No (0)
Enabled?                 Yes (1)

Page 30: havcs-410-101 a-2-10-srt-pg_2


1 Create a disk group using the convention specified in the worksheet.

2 Create a 2 GB volume and a vxfs file system.

3 Create a mount point, mount the file system on your cluster system, and verify that it is mounted.
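The three storage steps can be sketched with VxVM/VxFS commands roughly as follows. The disk name and the platform-specific mkfs/mount forms are assumptions; substitute your worksheet values and check vxdisk list for real device names:

```shell
# Sketch of the storage steps (worksheet names; disk device is a placeholder)
vxdg init bobDG1 bobDG1d1=c1t1d0                 # 1: create the disk group
vxassist -g bobDG1 make bobVol1 2g               # 2: create a 2 GB volume
mkfs -F vxfs /dev/vx/rdsk/bobDG1/bobVol1         #    vxfs file system (Linux: mkfs -t vxfs)
mkdir -p /bob1                                   # 3: create the mount point
mount -F vxfs /dev/vx/dsk/bobDG1/bobVol1 /bob1   #    (Linux: mount -t vxfs)
df -k /bob1                                      #    verify it is mounted
```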

1 Verify that an IP address exists on the base interface for the public network.

2 Configure a virtual IP address on the public network interface using the IP address from the design worksheet.

System IP Address

train1 192.168.xx.51

train2 192.168.xx.52

train3 192.168.xx.53

train4 192.168.xx.54

train5 192.168.xx.55

train6 192.168.xx.56

train7 192.168.xx.57

train8 192.168.xx.58

train9 192.168.xx.59

train10 192.168.xx.60

train11 192.168.xx.61

train12 192.168.xx.62
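Step 2 of the networking preparation can be sketched as below. The syntax shown is the Solaris logical-interface form, and the interface name and address are worksheet placeholders:

```shell
# Sketch: plumb the virtual IP from the table above (Solaris syntax;
# interface and address are placeholders from the worksheet)
ifconfig eri0 addif 192.168.xx.51 netmask 255.255.255.0 up
ifconfig -a        # verify both the base and the virtual address are configured
```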

Configuring Storage for an Application

Configuring Networking for an Application

Page 31: havcs-410-101 a-2-10-srt-pg_2


A script named loopy is used as the example application for this lab exercise.

1 Obtain the location of the loopy script from your instructor.

loopy script location:

__________________________________________________________

2 Copy this file to a file named loopy on the file system you created.

3 Start the loopy application in the background.

4 Verify that the loopy application is working correctly.

Setting up the Application
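The steps above can be sketched end to end with a stand-in loopy script. The loop body follows the slide's while-true/echo pattern; the /tmp paths and message text are assumptions, since the real script and its location come from your instructor:

```shell
# Sketch: create a stand-in loopy, start it in the background, verify output
cat > /tmp/loopy <<'EOF'
#!/bin/sh
while true
do
    echo "loopy is alive"
    sleep 2
done
EOF
chmod +x /tmp/loopy
/tmp/loopy > /tmp/loopy.out 2>&1 &     # step 3: start in the background
LOOPY_PID=$!
sleep 3                                # give it time to write a few lines
kill "$LOOPY_PID"                      # stop it once verified
grep "loopy is alive" /tmp/loopy.out | head -1   # step 4: verify it ran
```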

Page 32: havcs-410-101 a-2-10-srt-pg_2


Complete the following steps to migrate the application to the other system.

1 Stop all resources used in this service to prepare to manually migrate the service.

a Stop your loopy process.

b Stop all storage resources.

c Unconfigure the virtual IP address.

2 On the other cluster system, import your disk group and bring up the remaining storage resources and the virtual IP address.

3 Start the loopy application and verify that it is running.

4 After you have verified that all resources are working properly on the second system, stop all resources.

Manually Migrating the Application
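The migration sequence maps onto commands roughly as follows. This is an outline, not exact syntax: the Solaris/VxVM command forms and the worksheet names (bobDG1, /bob1, the virtual address) are assumptions to be replaced with your values:

```shell
# Sketch of the manual migration (worksheet placeholders; Solaris syntax)
# On the first system -- release everything:
kill $(pgrep -f /bob1/loopy)               # 1a: stop your loopy process
umount /bob1                               # 1b: stop the storage resources
vxvol -g bobDG1 stopall
vxdg deport bobDG1
ifconfig eri0 removeif 192.168.xx.51       # 1c: unconfigure the virtual IP
# On the other cluster system -- bring everything up:
vxdg import bobDG1                         # 2: import the disk group
vxvol -g bobDG1 startall
mount -F vxfs /dev/vx/dsk/bobDG1/bobVol1 /bob1
ifconfig eri0 addif 192.168.xx.51 netmask 255.255.255.0 up
/bob1/loopy &                              # 3: start loopy and verify it runs
```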

Page 33: havcs-410-101 a-2-10-srt-pg_2


Lab 6 Synopsis: Starting and Stopping VCS
The following procedure demonstrates how the cluster configuration changes states during startup and shutdown, and shows how the .stale file works.

Step-by-step instructions for this lab are located on the following page:
• “Lab 6: Starting and Stopping VCS,” page B-37

Solutions for this exercise are located on the following page:
• “Lab 6 Solutions: Starting and Stopping VCS,” page C-63

Note: Complete this section with your lab partner.

1 Verify that there is no .stale file in the configuration directory.

2 Open the cluster configuration and verify that the .stale file has been created.

3 Try to stop VCS.

4 Stop VCS forcibly and leave the applications running.

Lab 6: Starting and Stopping VCS
[Slide: cluster vcs1, systems train1 and train2; the command shown is # hastop -all -force.]

Page 34: havcs-410-101 a-2-10-srt-pg_2


5 Start VCS on each system in the cluster and check the cluster status.

Why are all systems in the STALE_ADMIN_WAIT state?

6 Verify that the .stale file is present.

7 Return all systems to a running state (from one system in the cluster). View the build process to see the LOCAL_BUILD and REMOTE_BUILD system states.

8 Verify that there is no .stale file.
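The sequence above maps onto the standard VCS commands (a sketch; the system name train1 is a classroom sample value):

```shell
# hastart                 # run on each system; with a .stale file present,
                          # systems wait in the STALE_ADMIN_WAIT state
# hastatus -sum           # check cluster status and system states
# hasys -force train1     # force a LOCAL_BUILD from train1's on-disk main.cf;
                          # the other systems then perform a REMOTE_BUILD
# hastatus -sum           # verify that all systems reach the RUNNING state
```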


Lab 7 Synopsis: Online Configuration of a Service Group

The purpose of this lab is to create a service group while VCS is running using either the Cluster Manager graphical user interface or the command-line interface.

Step-by-step instructions for this lab are located on the following page:
• “Lab 7: Online Configuration of a Service Group,” page B-41

Solutions for this exercise are located on the following page:
• “Lab 7 Solutions: Online Configuration of a Service Group,” page C-67

Lab 7: Online Configuration of a Service Group

Use the Java GUI to:
• Create a service group.
• Add resources to the service group from the bottom of the dependency tree.
• Substitute the name you used to create the disk group and volume.


Fill in the design worksheet with values appropriate for your cluster and use the information to create a service group.

1 Create the service group using the values in the table.

2 Save the cluster configuration and view the configuration file to verify your changes.
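If you use the command-line interface instead of the GUI, creating the service group from the sample values in the table might look like this (a sketch; names and system list are the classroom sample values):

```shell
# haconf -makerw                                   # open the cluster configuration
# hagrp -add nameSG1
# hagrp -modify nameSG1 SystemList train1 0 train2 1
# hagrp -modify nameSG1 FailOverPolicy Priority
# hagrp -modify nameSG1 AutoStartList train1
# haconf -dump -makero                             # save and close the configuration
```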

Creating a Service Group

Service Group Definition Sample Value Your Value

Group nameSG1

Required Attributes

FailOverPolicy Priority

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1


Add NIC, IP, DiskGroup, Volume, and Process resources to the service group using the information from the design worksheets.

After each resource is added:
• Bring each resource online.
• Save the cluster configuration.

Adding Resources to a Service Group

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameNIC1

Resource Type NIC

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

NetworkHosts* 192.168.xx.1 (HP-UX only)

Critical? No (0)

Enabled? Yes (1)
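From the command line, adding the NIC resource in the table above might look like this (a sketch; it assumes the configuration is already open with haconf -makerw, and uses the Solaris sample device eri0):

```shell
# hares -add nameNIC1 NIC nameSG1
# hares -modify nameNIC1 Device eri0       # substitute your platform's interface
# hares -modify nameNIC1 Critical 0
# hares -modify nameNIC1 Enabled 1
# hares -online nameNIC1 -sys train1
# haconf -dump                             # save after each resource is added
```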


Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameIP1

Resource Type IP

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Address 192.168.xx.* (see the table below)

Optional Attributes

Netmask 255.255.255.0

Critical? No (0)

Enabled? Yes (1)

System IP Address

train1 192.168.xx.51

train2 192.168.xx.52

train3 192.168.xx.53

train4 192.168.xx.54

train5 192.168.xx.55

train6 192.168.xx.56

train7 192.168.xx.57

train8 192.168.xx.58

train9 192.168.xx.59

train10 192.168.xx.60

train11 192.168.xx.61

train12 192.168.xx.62


Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameDG1

Resource Type DiskGroup

Required Attributes

DiskGroup nameDG1

Optional Attributes

StartVolumes 1

StopVolumes 1

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameVol1

Resource Type Volume

Required Attributes

Volume nameVol1

DiskGroup nameDG1

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameMount1

Resource Type Mount

Required Attributes

MountPoint /name1

BlockDevice /dev/vx/dsk/nameDG1/nameVol1 (no spaces)

FSType vxfs

FsckOpt -y

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameProcess1

Resource Type Process

Required Attributes

PathName /bin/sh

Optional Attributes

Arguments /name1/loopy name 1

Critical? No (0)

Enabled? Yes (1)


After you have verified that all resources are online, link the resources as shown in the worksheet.

1 Test the service group by switching it between systems.

2 Set each resource to critical.

3 Save the cluster configuration and view the configuration file to verify your changes.

4 Close the cluster configuration after all students working in your cluster are finished.

Linking Resources in the Service Group

Resource Dependency Definition

Service Group nameSG1

Parent Resource Requires Child Resource

nameVol1 nameDG1

nameMount1 nameVol1

nameIP1 nameNIC1

nameProcess1 nameMount1

nameProcess1 nameIP1
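The dependency table above translates directly into hares -link commands, where the first argument is the parent and the second is the child it requires (a sketch using the sample names):

```shell
# hares -link nameVol1 nameDG1           # parent requires child
# hares -link nameMount1 nameVol1
# hares -link nameIP1 nameNIC1
# hares -link nameProcess1 nameMount1
# hares -link nameProcess1 nameIP1
# hagrp -switch nameSG1 -to train2       # test the group on the other system
```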

Testing the Service Group


Lab 8 Synopsis: Offline Configuration of a Service Group

The purpose of this lab is to add a service group by copying and editing the definition in main.cf for nameSG1.

Step-by-step instructions for this lab are located on the following page:
• “Lab 8: Offline Configuration of a Service Group,” page B-57

Solutions for this exercise are located on the following page:
• “Lab 8 Solutions: Offline Configuration of a Service Group,” page C-89

Lab Assignments

Complete the following worksheet for the resources managed by the service groups you create in this lab. Then follow the procedure to configure the resources.

Lab 8: Offline Configuration of a Service Group

[Slide: dependency diagrams for the nameSG1 and nameSG2 service groups, each containing Process, IP, NIC, Mount, Volume (AppVol), and DiskGroup (AppDG) resources named nameProcess1/2, nameIP1/2, nameNIC1/2, nameMount1/2, nameVol1/2, and nameDG1/2]

Working together, follow the offline configuration procedure. Alternately, work alone and use the GUI to create a new service group.


Object Sample Value Your Value

Your system host name your_sys

Use the same system as previous labs

Partner system host name their_sys

Use the same system as previous labs

Name prefix for your objects

name

Disk assignment for disk group

Solaris: c#t#d#; AIX: hdisk##; HP-UX: c#t#d#; Linux: sd##

Disk group name nameDG2

Volume name nameVol2 (2Gb)

Mount point /name2

Application script location


Use the values in the table to prepare resources for VCS.

1 Create a disk group using the convention specified in the worksheet.

2 Create a 2 GB volume and a vxfs file system.

3 Create a mount point, mount the file system on your cluster system, and verify that it is mounted.

4 Copy the loopy script to this file system.

5 Start the loopy application and verify that it is working correctly.

6 Stop the resources to prepare to place them under VCS control in the next section of the lab.
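Steps 1 through 4 can be sketched as follows (a Solaris sketch; the disk name c1t1d0 and the loopy script location are hypothetical placeholders, so substitute the values from your worksheet):

```shell
# vxdisksetup -i c1t1d0                          # initialize the assigned disk
# vxdg init nameDG2 nameDG201=c1t1d0             # create the disk group
# vxassist -g nameDG2 make nameVol2 2g           # create a 2 GB volume
# mkfs -F vxfs /dev/vx/rdsk/nameDG2/nameVol2     # create a vxfs file system
# mkdir /name2
# mount -F vxfs /dev/vx/dsk/nameDG2/nameVol2 /name2
# cp /path/to/loopy /name2                       # copy the application script
```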

Prepare Resources


In the design worksheet, record information needed to create a new service group using the offline process described in the next section.

Completing the Design Worksheet

Service Group Definition Sample Value Your Value

Group nameSG2

Required Attributes

FailOverPolicy Priority

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameNIC2

Resource Type NIC

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

NetworkHosts* 192.168.xx.1 (HP-UX only)

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameIP2

Resource Type IP

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Address 192.168.xx.* (see the table below)

Optional Attributes

Netmask 255.255.255.0

Critical? No (0)

Enabled? Yes (1)

System IP Address

train1 192.168.xx.71

train2 192.168.xx.72

train3 192.168.xx.73

train4 192.168.xx.74

train5 192.168.xx.75

train6 192.168.xx.76

train7 192.168.xx.77

train8 192.168.xx.78

train9 192.168.xx.79

train10 192.168.xx.80

train11 192.168.xx.81

train12 192.168.xx.82


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameDG2

Resource Type DiskGroup

Required Attributes

DiskGroup nameDG2

Optional Attributes

StartVolumes 1

StopVolumes 1

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameVol2

Resource Type Volume

Required Attributes

Volume nameVol2

DiskGroup nameDG2

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameMount2

Resource Type Mount

Required Attributes

MountPoint /name2

BlockDevice /dev/vx/dsk/nameDG2/nameVol2 (no spaces)

FSType vxfs

FsckOpt -y

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameProcess2

Resource Type Process

Required Attributes

PathName /bin/sh

Optional Attributes

Arguments /name2/loopy name 2

Critical? No (0)

Enabled? Yes (1)


1 Working with your lab partner, verify that the cluster configuration is saved and closed.

2 Make a test subdirectory of the configuration directory.

3 Create copies of the main.cf and types.cf files in the test subdirectory.

Linux: Also copy the vcsApacheTypes.cf file.

4 One student at a time, modify the main.cf file in the test directory on one system in the cluster.

a Copy the first student’s nameSG1 service group structure to create a nameSG2 service group, and rename all of the resources within the new nameSG2 service group to end with 2 instead of 1, as shown in the rename table later in this section.

Resource Dependency Definition

Service Group nameSG2

Parent Resource Requires Child Resource

nameVol2 nameDG2

nameMount2 nameVol2

nameIP2 nameNIC2

nameProcess2 nameMount2

nameProcess2 nameIP2

Modifying a VCS Configuration File


b Copy and modify the dependency section.

c Repeat this for the other student’s service group.

5 Edit the attributes of each copied resource to match the design worksheet values shown earlier in this section.

6 Verify the cluster configuration and fix any errors found.

7 Stop VCS on all systems, but leave the applications still running.

8 Copy the main.cf file from the test subdirectory into the configuration directory.

9 Start the cluster from the system where you edited the configuration file and start the other system in the stale state.

10 Bring the new service group online on your system. Students can bring their own service groups online.

11 Verify the status of the cluster.
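Steps 6 through 11 can be sketched as follows (a sketch; the configuration directory is the standard /etc/VRTSvcs/conf/config location):

```shell
# cd /etc/VRTSvcs/conf/config
# hacf -verify test                   # syntax-check main.cf in the test subdirectory
# hastop -all -force                  # stop VCS, leaving applications running
# cp test/main.cf .
# hastart                             # on the system with the edited configuration
# hastart -stale                      # on the other system
# hagrp -online nameSG2 -sys train1
# hastatus -sum
```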

Existing Name Change To New Name

nameProcess1 nameProcess2

nameIP1 nameIP2

nameNIC1 nameNIC2

nameMount1 nameMount2

nameVol1 nameVol2

nameDG1 nameDG2


Lab 9 Synopsis: Creating a Parallel Service Group

The purpose of this lab is to add a parallel service group to monitor the NIC resource and replace the NIC resources in the failover service groups with Proxy resources.

Step-by-step instructions for this lab are located on the following page:
• “Lab 9: Creating a Parallel Service Group,” page B-73

Solutions for this exercise are located on the following page:
• “Lab 9 Solutions: Creating a Parallel Service Group,” page C-109

Lab 9: Creating a Parallel Service Group

[Slide: dependency diagrams for nameSG1 and nameSG2 (nameProcess, nameIP, nameMount, nameVol (DBVol), and nameDG (DBDG) resources), with nameProxy1 and nameProxy2 in place of the NIC resources, and the parallel NetworkSG group containing the NetworkNIC and NetworkPhantom resources]


Work with your lab partner to create a parallel service group containing network resources using the information in the design worksheet.

Creating a Parallel Network Service Group

Service Group Definition Sample Value Your Value

Group NetworkSG

Required Attributes

Parallel 1

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1 train2


Use the values in the following tables to create NIC and Phantom resources and then bring them online. Remember to save the cluster configuration.

Adding Resources

Resource Definition Sample Value Your Value

Service Group NetworkSG

Resource Name NetworkNIC

Resource Type NIC

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group NetworkSG

Resource Name NetworkPhantom

Resource Type Phantom

Required Attributes

Critical? No (0)

Enabled? Yes (1)
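Creating the parallel group and its two resources from the command line might look like this (a sketch using the sample values; the Phantom resource gives the group a visible ONLINE state because NIC is a persistent resource):

```shell
# haconf -makerw
# hagrp -add NetworkSG
# hagrp -modify NetworkSG SystemList train1 0 train2 1
# hagrp -modify NetworkSG Parallel 1            # must be set while the group is offline
# hagrp -modify NetworkSG AutoStartList train1 train2
# hares -add NetworkNIC NIC NetworkSG
# hares -modify NetworkNIC Device eri0          # substitute your platform's interface
# hares -modify NetworkNIC Enabled 1
# hares -add NetworkPhantom Phantom NetworkSG
# hares -modify NetworkPhantom Enabled 1
# hagrp -online NetworkSG -sys train1
# hagrp -online NetworkSG -sys train2
# haconf -dump
```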


Working on your own, use the values in the tables to replace the NIC resources with Proxy resources and create new links.

Replacing NIC Resources with Proxy Resources

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameProxy1

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameProxy2

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name csgProxy

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)


1 Use the values in the tables to replace the NIC resources with Proxy resources and create new links.

2 Switch each service group (nameSG1, nameSG2, ClusterService) to ensure that they can run on each system.

3 Set all resources to critical.

4 Save and close the cluster configuration.

Linking Resources and Testing the Service Group

Resource Dependency Definition

Service Group nameSG1

Parent Resource Requires Child Resource

nameIP1 nameProxy1

Resource Dependency Definition

Service Group nameSG2

Parent Resource Requires Child Resource

nameIP2 nameProxy2

Resource Dependency Definition

Service Group ClusterService

Parent Resource Requires Child Resource

webip csgProxy
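Replacing a NIC resource with a Proxy, for example in nameSG1, might look like this (a sketch; the same pattern applies to nameSG2 and ClusterService):

```shell
# hares -unlink nameIP1 nameNIC1               # remove the old dependency first
# hares -delete nameNIC1                       # remove the NIC resource
# hares -add nameProxy1 Proxy nameSG1
# hares -modify nameProxy1 TargetResName NetworkNIC
# hares -modify nameProxy1 Enabled 1
# hares -link nameIP1 nameProxy1               # relink the IP to the Proxy
```

The Proxy mirrors the state of NetworkNIC, so only one NIC resource per interface is actually monitored in the cluster.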


Lab 10 Synopsis: Configuring Notification

The purpose of this lab is to configure notification.

Step-by-step instructions for this lab are located on the following page:
• “Lab 10: Configuring Notification,” page B-85

Solutions for this exercise are located on the following page:
• “Lab 10 Solutions: Configuring Notification,” page C-125

Lab 10: Configuring Notification

[Slide: service groups nameSG1, ClusterService, and nameSG2; the ClusterService group contains the NotifierMngr resource, and the resfault, nofailover, and resadminwait triggers are configured on each system (optional lab)]

SMTP Server: ___________________________________


1 Work with your lab partner to add a NotifierMngr type resource to the ClusterService service group using the information in the design worksheet.

2 Bring the resource online and test the service group by switching it between systems.

3 Set the notifier resource to critical.

4 Save and close the cluster configuration and view the configuration file to verify your changes.

Note: In the next lab, you will see the effects of configuring notification and triggers when you test various resource fault scenarios.

Configuring the NotifierMngr Resource

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name notifier

Resource Type NotifierMngr

Required Attributes

SmtpServer localhost

SmtpRecipients root Warning

PathName /xxx/xxx (AIX only)

Critical? No (0)

Enabled? Yes (1)


Use the following procedure to configure triggers for notification. In this lab, each student creates a local copy of the trigger script on their own system. If you are working alone in the cluster, copy your completed triggers to the other system.

1 Create a text file in the /opt/VRTSvcs/bin/triggers directory named resfault. Add the following lines to the file:

#!/bin/sh
echo `date` > /tmp/resfault.msg
echo message from the resfault trigger >> /tmp/resfault.msg
echo Resource $2 has faulted on System $1 >> /tmp/resfault.msg
echo Please check the problem. >> /tmp/resfault.msg
/usr/lib/sendmail root </tmp/resfault.msg
rm /tmp/resfault.msg

2 Create a nofailover trigger using the same script, replacing resfault with nofailover.

3 Create a resadminwait trigger using the same script, replacing resfault with resadminwait.

4 Ensure that all trigger files are executable.

5 If you are working alone, copy all triggers to the other system.
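VCS calls the resfault trigger with the system name as the first argument and the resource name as the second. You can check the script logic locally before a real fault occurs by running it by hand (a sketch; the file names /tmp/resfault_demo and /tmp/resfault_demo.msg are hypothetical, and the sendmail step is replaced by a file write so the dry run has no side effects):

```shell
# Simulate how VCS invokes a trigger: $1 is the system, $2 is the resource.
cat > /tmp/resfault_demo <<'EOF'
#!/bin/sh
echo "Resource $2 has faulted on System $1" > /tmp/resfault_demo.msg
EOF
chmod +x /tmp/resfault_demo
/tmp/resfault_demo train1 nameIP1
cat /tmp/resfault_demo.msg
```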

Optional Lab: Configuring Triggers


Lab 11 Synopsis: Configuring Resource Fault Behavior

The purpose of this lab is to observe how VCS responds to faults in a variety of scenarios.

Step-by-step instructions for this lab are located on the following page:
• “Lab 11: Configuring Resource Fault Behavior,” page B-93

Solutions for this exercise are located on the following page:
• “Lab 11 Solutions: Configuring Resource Fault Behavior,” page C-133

This part of the lab exercise explores the default behavior of VCS. Each student works independently in this lab.

1 Verify that all resources in the nameSG1 service group are currently set to critical; if not, set them to critical.

2 Set the IP and Process resources to not critical in the nameSG1 service group.

Non-Critical Resource Faults

Lab 11: Configuring Resource Fault Behavior

[Slide: nameSG1 and nameSG2 with the attribute settings Critical=0/1, FaultPropagation=0/1, ManageFaults=NONE/ALL, and RestartLimit=1]

Note: Network interfaces for virtual IP addresses are unconfigured to force the IP resource to fault. In your classroom, the interface you specify is: ______

Replace the variable interface in the lab steps with this value.


3 Change the monitor interval for the IP resource type to 10 seconds and the offline monitor interval for the IP resource type to 30 seconds.

4 Verify that your nameSG1 service group is currently online on your system.

5 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

What happens?

6 Clear any faults.

7 Bring the IP and Process resources back online on your system.

8 Set the IP and Process resources to critical in the nameSG1 service group.
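Steps 3 through 7 can be sketched as follows (a Solaris sketch; the interface eri0 is a classroom sample value, and the logical interface syntax varies by platform):

```shell
# hatype -modify IP MonitorInterval 10          # monitor online IP resources every 10s
# hatype -modify IP OfflineMonitorInterval 30   # monitor offline IP resources every 30s
# ifconfig eri0:1 unplumb                       # fault the IP resource outside of VCS
# hares -clear nameIP1 -sys train1              # clear the fault
# hares -online nameIP1 -sys train1
```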

1 Verify that all resources in the nameSG1 service group are currently set to critical.

2 Verify that your nameSG1 service group is currently online on your system.

3 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

What happens?

4 Without clearing faults from the last failover, unconfigure the virtual IP address on their system.

What happens?

5 Clear the nameIP1 resource on all systems and bring the nameSG1 service group online on your system.

Critical Resource Faults


1 Verify that all resources in the nameSG1 service group are currently set to critical.

2 Verify that your nameSG1 Service group is currently online on your system.

3 Freeze the nameSG1 service group.

4 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

What happens?

5 Bring up the virtual IP address outside of VCS.

What happens?

6 Unconfigure the virtual IP address outside of VCS to fault the IP resource again. While the resource is faulted, unfreeze the service group.

7 Did unfreezing the service group cause a failover or any resources to come offline? Explain why or why not.

8 Clear the fault and bring the resource online.

Faults within Frozen Service Groups


1 Verify that all resources in the nameSG1 service group are currently set to critical.

2 Set the FaultPropagation attribute for the nameSG1 service group to off (0).

3 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

What happens?

4 Clear the faulted resource and bring the resource back online.

5 Set the ManageFaults attribute for the nameSG1 service group to NONE and set the FaultPropagation attribute back to one (1).

6 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

What happens?

7 Recover the resource from the ADMIN_WAIT state.

8 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

What happens?

9 Recover the resource from the ADMIN_WAIT state by faulting the service group.

10 Clear the faulted nameIP1 resource and switch the nameSG1 service group back to your system.

11 Set ManageFaults back to ALL for the nameSG1 service group and save the cluster configuration.
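The attribute changes and ADMIN_WAIT recovery in this section map onto these commands (a sketch using the sample names):

```shell
# hagrp -modify nameSG1 FaultPropagation 0
# hagrp -modify nameSG1 ManageFaults NONE
# hagrp -clearadminwait nameSG1 -sys train1          # recover; the agent re-monitors
# hagrp -clearadminwait -fault nameSG1 -sys train1   # recover by faulting the group
# hagrp -modify nameSG1 ManageFaults ALL
# hagrp -modify nameSG1 FaultPropagation 1
```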

Effects of ManageFaults and FaultPropagation


This section illustrates failover behavior of a resource type using restart limits.

1 Verify that all resources in the nameSG1 service group are set to critical.

2 Set the RestartLimit Attribute for the Process resource type to 1.

3 Stop the loopy process running in the nameSG1 service group by sending a kill signal.

What happens?

4 Stop the loopy process running in the nameSG1 service group by sending a kill signal.

Note: The effects of stopping loopy can take up to 60 seconds to be detected.

What happens?

5 Clear the faulted resource and switch the nameSG1 service group back to your system.

6 When all students have completed the lab, save and close the configuration.
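The restart-limit behavior can be sketched as follows (a sketch; it assumes loopy is the only matching process on the system):

```shell
# hatype -modify Process RestartLimit 1
# kill `pgrep -f loopy`      # first kill: the agent restarts the process in place
# kill `pgrep -f loopy`      # second kill: the restart limit is exhausted,
                             # so the resource faults and the group fails over
```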

RestartLimit Behavior


Lab 13 Synopsis: Testing Communication Failures

The purpose of this lab is to configure a low-priority link and then pull network cables and observe how VCS responds.

Step-by-step instructions for this lab are located on the following page:
• “Lab 13 Details: Testing Communication Failures,” page B-101

Solutions for this exercise are located on the following page:
• “Lab 13 Solutions: Testing Communication Failures,” page C-149

Lab 13: Testing Communication Failures

[Slide: two trainxx systems with the injeopardy trigger (optional lab)]

1. Configure the InJeopardy trigger (optional).
2. Configure a low-priority link.
3. Test failures.


Use the following procedure to configure triggers for jeopardy notification. In this lab, students create a local copy of the trigger script on their own systems.

1 Create a text file in the /opt/VRTSvcs/bin/triggers directory named injeopardy. Add the following lines to the file:

#!/bin/sh
echo `date` > /tmp/injeopardy.msg
echo message from the injeopardy trigger >> /tmp/injeopardy.msg
echo System $1 is in Jeopardy >> /tmp/injeopardy.msg
echo Please check the problem. >> /tmp/injeopardy.msg
/usr/lib/sendmail root </tmp/injeopardy.msg
rm /tmp/injeopardy.msg

2 Make the trigger file executable.

3 If you are working alone, copy the trigger to the other system.

4 Continue with the next lab sections. The “Multiple LLT Link Failures—Jeopardy” section of this lab shows the effects of configuring the InJeopardy trigger.

Optional Lab: Configuring the InJeopardy Trigger


Working with your lab partner, use the procedures to create a low-priority link and then fault communication links and observe what occurs in a cluster environment when fencing is not configured.

1 Save and close the cluster configuration.

2 Shut down VCS, leaving the applications running on all systems in the cluster.

3 Unconfigure GAB and LLT on each system in the cluster.

Adding a Low-Priority Link

Object Sample Value Your Value

Public Ethernet interface for link low-pri

Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Cluster interconnect link 1 Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth1; VA: bge2

Cluster interconnect link 2 Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth2; VA: bge3

Host name for sysname file for your_sys

train1

Host name for sysname file for their_sys

train2


4 Edit the /etc/llttab LLT configuration file on each system to add a directive for a low-priority LLT link on the public network.

Solaris Mobile: Skip this step for mobile classrooms. There is only one public interface and it is already configured as a low-priority link.
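The edited /etc/llttab might look like this on Solaris (a sketch; the node name, cluster ID 10, and the qfe/eri device paths are classroom assumptions, and the link-lowpri directive is the line being added):

```
set-node train1
set-cluster 10
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
link-lowpri eri0 /dev/eri:0 - ether - -
```

LLT uses a low-priority link only for heartbeats (not cluster state traffic) unless all high-priority links fail.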

5 Start LLT and GAB on each system.

6 Verify GAB membership.

7 Start VCS on each system.

Note: For Solaris mobile classrooms, skip this section.

1 Copy the lltlink_enable and lltlink_disable utilities from the location provided by your instructor into the /tmp directory.

_____________________________________________________________

2 Change the NIC resource type MonitorInterval attribute to 3600 seconds temporarily for communications testing. This prevents the NetworkNIC resource from faulting during this lab when the low-priority LLT link is pulled.

3 Throughout this lab, use the lltlink_disable command to simulate failure of an LLT link where you are instructed to remove a link.

Notes:
– Use lltlink_enable to restore the LLT link.
– The utilities prompt you to select an interface.
– These classroom utilities are provided to enable you to simulate disconnecting and reconnecting Ethernet cables without risk of damaging connectors.
– Run the utility from one system only, unless otherwise specified.

Single LLT Link Failure


4 Using the lltlink_disable utility, remove one LLT link and watch for the link to expire in the console or system log file.

5 Restore communications using lltlink_enable.

1 Verify the status of GAB.

2 Remove all but one LLT link and watch for the link to expire in the console.

Solaris Mobile: Remove only the one high-priority LLT link (dmfe1).

3 Verify the status of GAB.

4 Restore communications by replacing the LLT link cables.

5 Verify the status of GAB.

1 Verify the status of GAB from each system.

2 Remove all but one LLT link and watch for the link to expire in the console.

Solaris Mobile: Remove only the one high-priority LLT link (dmfe1).

3 From each system, verify that the links are down by checking the status of GAB.

4 Remove the last LLT link and watch for the link to expire in the console.

Multiple LLT Link Failures—Jeopardy

Multiple LLT Link Failures—Network Partition


5 What is the status of service groups running on each system?

6 Recover from the network partition.

7 Change the NIC resource type MonitorInterval attribute back to 60 seconds.


Lab 14 Synopsis: Configuring I/O Fencing

Use the lab instructions in one of the following appendixes.

Step-by-step instructions for this lab are located on the following page:
• “Lab 14: Configuring I/O Fencing,” page B-111

Solutions for this exercise are located on the following page:
• “Lab 14 Solutions: Configuring I/O Fencing,” page C-163

Lab 14: Configuring I/O Fencing

Work with your lab partner to configure fencing.

[Slide: two trainxx systems sharing three coordinator disks and the nameDG1 and nameDG2 disk groups]

Coordinator Disks
Disk 1: ___________________
Disk 2: ___________________
Disk 3: ___________________
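The core of the fencing setup might be sketched as follows (a sketch for VCS 4.x; the disk names disk1 through disk3 are hypothetical placeholders for the coordinator disks recorded above, and the cluster attribute UseFence=SCSI3 must also be set in main.cf):

```shell
# vxdg init vxfencoorddg disk1 disk2 disk3   # disk group over the 3 coordinator disks
# vxdg deport vxfencoorddg                   # deport it; the fencing driver uses it
# echo vxfencoorddg > /etc/vxfendg           # record the coordinator disk group name
# /etc/init.d/vxfen start                    # start the fencing driver on each system
```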


Appendix B Lab Details


Lab 2: Validating Site Preparation


In this lab, you work with your partner to prepare the systems for installing VCS.

Brief instructions for this lab are located on the following page:
• “Lab 2 Synopsis: Validating Site Preparation,” page A-2

Solutions for this exercise are located on the following page:
• “Lab 2 Solutions: Validating Site Preparation,” page C-3

Lab Assignments
Fill in the table with the applicable values for your lab cluster.

Object: Sample Value (record your value alongside)
• Your system host name (your_sys): train1
• Partner system host name (their_sys): train2
• Name prefix for your objects: bob
• Interconnect link 1: Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth2; VA: bge2

Lab 2: Validating Site Preparation
• Visually inspect the classroom lab site.
• Complete and validate the design worksheet.
• Use the lab appendix best suited to your experience level:
  • Appendix A: Lab Synopses
  • Appendix B: Lab Details
  • Appendix C: Lab Solutions
See the next slide for lab assignments.


• Interconnect link 2: Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth3; VA: bge3
• Public network interface (interface): Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth1; VA: bge0
• Admin IP address for your_sys: 192.168.xx.xxx
• Admin IP address for their_sys: 192.168.xx.xxx



1 Verify that the Ethernet network interfaces for the two cluster interconnect links are cabled together using crossover cables.Note: In actual implementations, each link should use a completely separate infrastructure (separate NIC and separate hub or switch). For simplicity of configuration in the classroom environment, the two interfaces used for the cluster interconnect are on the same NIC.

2 Verify that the public interface is cabled, as shown in the diagram.

Virtual Academy: Skip this step.

3 Determine the host name of the local system.

4 Determine the base IP address configured on the public network interface for both your system and your partner’s system.

5 Verify that the public IP address of each system in your cluster is listed in the /etc/hosts file.

Verifying the Network Configuration

[Diagram: Four Node—UNIX classroom network]
Classroom LAN: 192.168.XX, where XX=27, 28, or 29
Systems train1 through train12 connect through LAN hubs/switches, with shared access to a SAN disk array and a SAN tape library.
• trainN: 192.168.XX.(100+N), from train1 (192.168.XX.101) through train12 (192.168.XX.112)
• Software Share: 192.168.XX.100


6 Test connectivity to your partner’s system on the public network.

1 Check the PATH environment variable. If necessary, add the /sbin, /usr/sbin, /opt/VRTS/bin, and /opt/VRTSvcs/bin directories to your PATH environment variable.

2 Check the VERITAS licenses to determine whether a VERITAS Cluster Server license is installed.
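Step 1 can be sketched as a small Bourne-shell snippet that appends each directory to the PATH only if it is not already present:

```shell
# Append the VCS administrative directories to PATH if they are missing.
for dir in /sbin /usr/sbin /opt/VRTS/bin /opt/VRTSvcs/bin
do
    case ":$PATH:" in
        *:"$dir":*) ;;               # already on the PATH; skip it
        *) PATH="$PATH:$dir" ;;
    esac
done
export PATH
```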

When you install any Storage Foundation product or VERITAS Volume Replicator, the VRTSalloc package (the VERITAS Volume Manager Intelligent Storage Provisioning feature) requires that the following Red Hat packages are installed:• compat-gcc-c++-7.3-2.96.128• compat-libstdc++-7.3-2.96.128

Version 7.3-2.96.128 is provided with Red Hat Enterprise Linux 3 Update 2 (i686).

To determine whether these library versions are installed, type:
# rpm -qi compat-gcc-c++

# rpm -qi compat-libstdc++

Other Checks

Checking Packages—Linux Only


Verify that ssh configuration files are set up in order to install VCS on Linux or to run remote commands without prompts for passwords.

If you do not configure ssh, you are required to type in the root passwords for all systems for every remote command issued during the following services preparation lab and the installation procedure.

To configure ssh:

1 Log on to your system.

2 Generate a DSA key pair on this system by running the following command:

ssh-keygen -t dsa

3 Accept the default location of ~/.ssh/id_dsa.

4 When prompted, do not enter a passphrase.

5 Change the permissions of the .ssh directory by typing:

# chmod 755 ~/.ssh

6 The file ~/.ssh/id_dsa.pub contains a line beginning with ssh-dss and ending with the name of the system on which it was created.

a Copy this line to the /root/.ssh/authorized_keys2 file on all systems where VCS is to be installed.

b Ensure that you copy the line to the other systems in your cluster.

c To ensure easy accessibility, include all of the ssh-dss lines in the authorized_keys2 file on each system in the cluster. This allows commands to be run from any system to any system.

Configuring Secure Shell—Linux Only
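The key-generation and distribution steps above can be sketched as follows. The `-N ""` flag requests an empty passphrase, matching steps 2 through 4, and train2 is a stand-in for your partner's host name; run the key generation once, since ssh-keygen asks before overwriting an existing key:

```shell
# Steps 2-4: generate the DSA key pair non-interactively at the
# default location, with no passphrase.
ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa

# Step 5: set the permissions on the .ssh directory.
chmod 755 ~/.ssh

# Step 6: append the public key to authorized_keys2 on every cluster
# system (you are prompted for the root password once per system).
for host in train2
do
    cat ~/.ssh/id_dsa.pub | \
        ssh root@"$host" 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys2'
done
```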


If you do not want to use ssh with automatic login using saved passphrases on a regular basis, run the following commands at the command line. This is in effect only for this session.

exec /usr/bin/ssh-agent $SHELL

ssh-add

To save your passphrase during your GNOME session, follow these steps:

1 The openssh-askpass-gnome package should be loaded on your system.

To confirm this, type:

rpm -q openssh-askpass-gnome

If it is not installed, see your instructor.

2 If you do not have a $HOME/.Xclients file (you should not have one after installation), run switchdesk to create it. In your $HOME/.Xclients file, edit the following:

exec $HOME/.Xclients-default

Change the line so that it reads:

exec /usr/bin/ssh-agent $HOME/.Xclients-default

3 From the Red Hat icon, select Preferences > More Preferences > Sessions.

a Click the Startup Programs tab, click Add, and enter /usr/bin/ssh-add in the Startup Command text area.

b Set the priority to a number higher than any existing commands to ensure that it is executed last. A good priority number for ssh-add is 70 or higher. The higher the priority number, the lower the priority. If you have other programs listed, this one should have the lowest priority.

c Click OK to save your settings, and exit the GNOME Control Center.

4 Log out and then log back into GNOME; in other words, restart X.

After GNOME is started, a dialog box is displayed, prompting for your passphrases. Enter the passphrase requested. If you have both DSA and RSA key pairs configured, you are prompted for both. From this point on, you should not be prompted for a password by ssh, scp, or sftp.

For more information, see the Linux Customization Guide.


1 Open a console window so you can observe messages during later labs.

a Select Run from the main menu or the GNOME foot icon.

b Type this command:

xterm -C -fg white -bg black -sl 2000 &

This opens a console window with a white foreground, a black background, and a scroll line buffer of 2000 lines.

2 Open a System Log Display tool.

From the Red Hat icon, select System Tools > System Logs.

Setting Up a Console Window—Linux Only


Lab 3: Installing VCS


In this lab, you work with your lab partner to install VCS on both systems.

Brief instructions for this lab are located on the following page:
• “Lab 3 Synopsis: Installing VCS,” page A-6

Solutions for this exercise are located on the following page:
• “Lab 3 Solutions: Installing VCS,” page C-13

Obtaining Classroom Information
Use the following table to collect information you need to install VCS. Your instructor may also ask you to install VERITAS Volume Manager and VERITAS File System.

Lab 3: Installing VCS

[Slide: train1 and train2 form cluster vcs1. Record the software location, subnet, interconnect links 1 and 2, and public interface for each system. For VCS 4.x, run # ./installer; for pre-4.0, run # ./installvcs.]


Cluster Definition These values define cluster properties and are required to install VCS.

Attributes/Properties Sample Value Your Values

Node names, cluster name, and cluster ID:
  train1, train2: vcs1, ID 1
  train3, train4: vcs2, ID 2
  train5, train6: vcs3, ID 3
  train7, train8: vcs4, ID 4
  train9, train10: vcs5, ID 5
  train11, train12: vcs6, ID 6

Cluster interconnect:
• Ethernet interface for interconnect link #1: Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth1; VA: bge2
• Ethernet interface for interconnect link #2: Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth2; VA: bge3
Public network interface (interface): Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0


Web GUI IP Address (one per cluster):
  train1, train2: 192.168.xxx.91
  train3, train4: 192.168.xxx.92
  train5, train6: 192.168.xxx.93
  train7, train8: 192.168.xxx.94
  train9, train10: 192.168.xxx.95
  train11, train12: 192.168.xxx.96
Subnet Mask: 255.255.255.0
Network interface: Solaris: eri0; Sol Mob: dmfe0; AIX: en0; HP-UX: lan0; Linux: eth0; VA: bge0
NetworkHosts (HP-UX only): see instructor

Installation software location (install_dir): _______________
License: _______________
Administrator account Name/Password: admin / password



1 Obtain the location of the installation software from your instructor.

Installation software location:

____________________________________________________________

2 This step is to be performed from only one system in the cluster. The install script installs and configures all systems in the cluster.

a Change to the install directory.

b Run the installer script (VERITAS Product Installer) located in the directory specified above.

Notes:
› For VCS 4.x, install Storage Foundation HA (which includes VCS, Volume Manager, and File System).
› Use the information in the previous table or design worksheet to respond to the installation prompts.
› Sample prompts and input are provided at the end of the lab solution in Appendix C.
› For versions of VCS before 4.0, use installvcs.

c If a license key is needed, obtain one from your instructor and record it here.
License Key: _________________________________

d Install all optional packages (including Web console and Simulator).

e Accept the default of Y to configure VCS.

f Do not configure a third heartbeat link at this time.

g Do not configure a low-priority heartbeat link at this time.

h Do not configure VERITAS Security Services.

i Do not set any user names or passwords.

j Retain the default admin user account and password.

Installing VERITAS Cluster Server Software


k Configure the Cluster Server Cluster Manager.

l Do not configure SMTP Notification.

m Do not configure SNMP Notification.

n Select the option to install all packages simultaneously on all systems.

o Do not set up enclosure-based naming for Volume Manager.

p Start Storage Foundation Enterprise HA processes.

q Do not set up a default disk group.

3 If you did not install the Java GUI package as part of the installer (CPI) process (or installvcs for earlier versions of VCS), install the VRTScscm Java GUI package on each system in the cluster. The location of this package is in the pkgs directory under the install location directory given to you by your instructor.


1 If your instructor indicates that additional software, such as VCS patches or updates, is required, obtain the location of the installation software from your instructor.

Installation software location:

_______________________________________install_dir

2 Install any VCS patches or updates, as directed by your instructor. Use the operating system-specific command, as shown in the following examples.

Solaris: pkgadd -d /install_dir/pkgs VRTSxxxx

HP-UX: swinstall -s /install_dir/pkgs VRTSxxxx

AIX: installp -a -d /install_dir/pkgs/VRTSxxxx.rte.bff VRTSxxxx.rte

Linux: rpm -ihv VRTSxxxx-x.x.xx.xx-GA_RHEL.i686.rpm

3 Install any other software indicated by your instructor. For example, if your classroom uses VCS 3.5, you may be directed to install VERITAS Volume Manager and VERITAS File System.

Installing Other Software


1 Verify that VCS is now running using hastatus.

If hastatus -sum shows the cluster systems in a running state and a ClusterService service group is online on one of your cluster systems, VCS has been properly installed and configured.

2 Perform additional verification (generally only necessary if there is a problem displayed by hastatus -sum).

a Verify that all packages are loaded.

b Verify that LLT is running.

c Verify that GAB is running.

Viewing VERITAS Cluster Server Installation Results
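The verification steps above map to commands along these lines (package query shown for Solaris; on Linux use rpm -qa | grep -i vrts, on AIX use lslpp -L):

```shell
# Overall cluster state: systems should be RUNNING and the
# ClusterService group online on one node.
hastatus -sum

# a. Verify that the VERITAS packages are loaded (Solaris shown).
pkginfo | grep -i vrts

# b. Verify LLT: each node should show both links UP.
lltstat -nvv

# c. Verify GAB: look for port a (GAB) and port h (HAD) memberships.
gabconfig -a
```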


View the configuration files set up by the VCS installation procedure.

1 Explore the LLT configuration.

a Verify that the cluster ID, system names, and network interfaces specified during install are present in the /etc/llttab file.

b Verify the system names in the /etc/llthosts file.

2 Explore the GAB configuration.

Verify that the number of systems in the cluster matches the value for the -n flag set in the /etc/gabtab file.

3 Explore the VCS configuration files.

Verify the cluster name, system names, and IP address for the Cluster Manager in the /etc/VRTSvcs/conf/config/main.cf file.

Exploring the Default VCS Configuration
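As a point of reference, the three files typically look like the following for a two-node Solaris cluster named vcs1 with cluster ID 1; your interface and host names will differ:

```shell
# Sample /etc/llttab (cluster ID, local node, interconnect links):
#   set-node train1
#   set-cluster 1
#   link qfe0 /dev/qfe:0 - ether - -
#   link qfe1 /dev/qfe:1 - ether - -
#
# Sample /etc/llthosts (node ID to node name map):
#   0 train1
#   1 train2
#
# Sample /etc/gabtab (-n matches the number of systems in the cluster):
#   /sbin/gabconfig -c -n2

# Inspect the files and the cluster definition in main.cf:
cat /etc/llttab /etc/llthosts /etc/gabtab
more /etc/VRTSvcs/conf/config/main.cf
```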


Verify GUI connectivity with the Java GUI and the Web GUI. Both GUIs can connect to the cluster with the default user name admin and the default password, password.

1 Use a Web browser to connect to the Web GUI.
– The URL is http://ipaddress:8181/vcs.
– The IP address is given in the design worksheet and was entered during installation to configure the Cluster Manager.

2 Start the Java GUI and connect to the cluster using these values:
– Cluster alias: nameCluster
– Host name: ip_address (used during installation)
– Failover retries: 12 (retain default)

3 Browse the cluster configuration.

Verifying Connectivity with the GUIs


Lab 4: Using the VCS Simulator


This lab uses the VERITAS Cluster Server Simulator and the Cluster Manager Java Console. You are provided with a preconfigured main.cf file to learn about managing the cluster.

Brief instructions for this lab are located on the following page:
• “Lab 4 Synopsis: Using the VCS Simulator,” page A-18

Solutions for this exercise are located on the following page:
• “Lab 4 Solutions: Using the VCS Simulator,” page C-35

Obtaining Classroom Information
Use the following table to record the values for your classroom.

Attribute Sample Value Your Value

Port 15559

VCS user account/password

oper/oper

Lab 4: Using the VCS Simulator
1. Start the Simulator Java GUI: hasimgui &
2. Add a cluster.
3. Copy the preconfigured main.cf file to the new directory.
4. Start the cluster from the Simulator GUI.
5. Launch the Cluster Manager Java Console.
6. Log in using the VCS account oper with password oper. This account demonstrates different privilege levels in VCS.
See the next slide for classroom values.


File Locations

Type of File: Location
• Lab main.cf file: cf_files_dir
• Local Simulator config directory: sim_config_dir


1 Add /opt/VRTScssim/bin to your PATH environment variable after any /opt/VRTSvcs/bin entries, if it is not already present.

2 Set the VCS_SIMULATOR_HOME environment variable to /opt/VRTScssim, if it is not already set.

3 Start the Simulator GUI.

4 Add a cluster.

5 Use these values to define the new simulated cluster:
– Cluster Name: vcs_operations
– System Name: S1
– Port: 15559
– Platform: Solaris
– WAC Port: -1

6 In a terminal window, change to the simulator configuration directory for the new simulated cluster named vcs_operations.

Use this directory wherever the sim_config_dir variable appears elsewhere in the lab.

7 Copy the main.cf, types.cf, and OracleTypes.cf files provided by your instructor into the vcs_operations simulation configuration directory.

Source location of main.cf, types.cf, and OracleTypes.cf files:

___________________________________________cf_files_dir

8 From the Simulator GUI, start the vcs_operations cluster.

9 Launch the VCS Java Console for the vcs_operations simulated cluster.

Starting the Simulator on UNIX
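Steps 1 through 3 can be sketched as follows:

```shell
# Steps 1-2: put the Simulator on the PATH (after the VCS entries)
# and point VCS_SIMULATOR_HOME at the install directory.
case ":$PATH:" in
    *:/opt/VRTScssim/bin:*) ;;                 # already present
    *) PATH="$PATH:/opt/VRTScssim/bin" ;;
esac
export PATH

: "${VCS_SIMULATOR_HOME:=/opt/VRTScssim}"      # set only if not already set
export VCS_SIMULATOR_HOME

# Step 3: start the Simulator GUI in the background.
hasimgui &
```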


10 Log in as oper with password oper.

Note: While you may use admin/password to log in, the point of using oper is to demonstrate the differences in privileges between VCS user accounts.


1 How many systems are members of the cluster?

2 Determine the status of all service groups.

3 Which service groups have service group operator privileges set for the oper account?

4 Which resources in the AppSG service group have the Critical resource attribute enabled?

5 Which resource is the top-most parent in the OracleSG service group?

6 Which immediate child resources does the Oracle resource in the OracleSG service group depend on?

Viewing Status and Attributes

Service Group Status on S1 Status on S2 Status on S3

AppSG

OracleSG

ClusterService


1 Attempt to take the ClusterService group offline on S1.

What happens?

2 Attempt to take the AppSG service group offline on S1.

What happens?

3 Attempt to take the OracleSG service group offline on S1.

What happens?

4 Take all service groups that you have privileges for offline everywhere.

5 Bring the AppSG service group online on S2.

6 Bring the OracleSG service group online on S1.

7 Switch service group AppSG to S1.

8 Switch the OracleSG service group to S2.

9 Bring all service groups that you have privileges for online on S3.

Manipulating Service Groups


1 Attempt to take the OraListener resource in OracleSG offline on S3.

What happens to the OracleSG service group?

2 Bring the OraListener resource online on S3.

3 Attempt to take the OraMount resource offline on system S3.

What happens?

4 Attempt to bring only the OraListener resource online on S1.

What happens?

5 Fault the Oracle (oracle) resource in the OracleSG service group.

6 What happens to the service group and resource?

7 View the log entries to see the sequence of events.

8 Attempt to switch the OracleSG service group back to S3.

What happens?

9 Clear the fault on the Oracle resource in the OracleSG service group.

10 Switch the OracleSG service group back to S3.

11 Save and close the configuration.

12 Log off from the GUI.

13 Stop the simulator.

Manipulating Resources


Lab 5: Preparing Application Services


The purpose of this lab is to prepare the loopy process service for high availability.

Brief instructions for this lab are located on the following page:
• “Lab 5 Synopsis: Preparing Application Services,” page A-24

Solutions for this exercise are located on the following page:
• “Lab 5 Solutions: Preparing Application Services,” page C-51

Lab Assignments
Fill in the table with the applicable values for your lab cluster.

Object: Sample Value (record your value alongside)
• Your system host name (your_sys): train1
• Partner system host name (their_sys): train2
• Name prefix for your objects: name
• Disk assignment for disk group (disk_dev): Solaris: c#t#d#; AIX: hdisk##; HP-UX: c#t#d#; Linux: sd##
• Disk group name: nameDG1

[Slide: each lab pair prepares its own resources; for example, disk1 in disk group bobDG1 with volume bobVol1 mounted at /bob1, a NIC, a virtual IP address, and the loopy script (a "while true; do echo “…”; done" loop) at /bob1/loopy, while the partner uses disk2, sueDG1, sueVol1, /sue1, and /sue1/loopy. See the next slide for classroom values.]


• Volume name: nameVol1
• Mount point: /name1
• Public network interface (interface): Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0
• IP address (ipaddress):
  train1: 192.168.xxx.51     train7: 192.168.xxx.57
  train2: 192.168.xxx.52     train8: 192.168.xxx.58
  train3: 192.168.xxx.53     train9: 192.168.xxx.59
  train4: 192.168.xxx.54     train10: 192.168.xxx.60
  train5: 192.168.xxx.55     train11: 192.168.xxx.61
  train6: 192.168.xxx.56     train12: 192.168.xxx.62
• Application script location (class_sw_dir): _______________



1 Verify disk availability for Volume Manager.

2 Determine whether any disks are already in use in disk groups.

3 Initialize a disk for Volume Manager using the disk device from the worksheet.

4 Create a disk group with the name from the worksheet using the initialized disk.

5 Create a 2 GB volume in the disk group.

6 Create a vxfs file system on the volume.

7 Create a mount point on each system in the cluster.

8 Mount the file system on your cluster system.

9 Verify that the file system is mounted on your system.

Configuring Storage for an Application
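Using Solaris-style Volume Manager commands as an illustration (the disk c1t1d0 and the bob prefix are placeholder names; other platforms use different device names and mkfs syntax), the steps look like:

```shell
# Steps 1-2: list disks and any disk groups already using them.
vxdisk -o alldgs list

# Step 3: initialize the assigned disk for Volume Manager.
vxdisksetup -i c1t1d0

# Step 4: create the disk group on the initialized disk.
vxdg init bobDG1 bobDG101=c1t1d0

# Step 5: create a 2 GB volume in the disk group.
vxassist -g bobDG1 make bobVol1 2g

# Step 6: create a VxFS file system on the volume.
mkfs -F vxfs /dev/vx/rdsk/bobDG1/bobVol1

# Steps 7-9: create the mount point (repeat on each system), mount, verify.
mkdir -p /bob1
mount -F vxfs /dev/vx/dsk/bobDG1/bobVol1 /bob1
mount | grep bob1
```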


Complete the following steps to set up a virtual IP address for the application.

1 Verify that an IP address exists on the base interface for the public network.

2 Configure a virtual IP address on the public network interface. Use the IP address from the design worksheet.

3 Verify that the virtual IP address is configured.

Configuring Networking for an Application
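On Solaris, for example, the steps above can be sketched as follows (eri0 and the address are worksheet placeholders):

```shell
# Step 1: confirm the base IP address on the public interface.
ifconfig eri0

# Step 2: bring up the virtual IP address on a logical interface.
ifconfig eri0:1 plumb
ifconfig eri0:1 192.168.xx.51 netmask 255.255.255.0 up

# Step 3: verify the virtual IP address.
ifconfig eri0:1
```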


A script named loopy is used as the example application for this lab exercise.

1 Obtain the location of the loopy script from your instructor.

loopy script location:

__________________________________________________________

2 Copy or type this code to a file named loopy on the file system you created previously in this lab.

3 Verify that you have a console window open to see the display from the script.

4 Start the loopy application in the background.

5 Verify that the loopy application is working correctly.

Setting up the Application
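The loopy script itself is a trivial shell loop; the course slide elides the message text, so the echo line below is an assumption:

```shell
#!/bin/sh
# /bob1/loopy -- a trivial long-running "application" for the labs.
# The message text is an assumption; the slide shows only echo "...".
while true
do
    echo "loopy running on `hostname`"
    sleep 5
done
```

Start it in the background (for example, /bob1/loopy &) so its output appears on the console, and verify with ps that it is running.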


Complete the following steps to migrate the application to the other system.

1 Stop your loopy process by sending a kill signal. Verify that the process is stopped.

2 Remove the virtual IP address configured earlier in this lab. Verify that the IP address is no longer configured.

3 Unmount your file system and verify that it is no longer mounted.

4 Stop the volume and verify that it is disabled.

5 Deport your disk group and verify that it is deported.

6 Log in to the other system.

7 Update VxVM so that the disk group is visible.

8 Import your disk group and verify that it imported.

9 Start your volume and verify that it is enabled.

10 Verify that your mount point directory exists. Create it if it does not exist.

11 Mount your file system and verify that it is mounted.

12 Configure your virtual IP address and verify that it is configured.

13 Start the loopy application and verify that it is running.

Manually Migrating the Application
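A Solaris-flavored sketch of the migration, using the bob placeholder names from the worksheet (substitute the actual loopy process ID in the kill step):

```shell
# On the first system:
kill $loopy_pid                          # 1. stop loopy (verify with ps)
ifconfig eri0:1 unplumb                  # 2. remove the virtual IP address
umount /bob1                             # 3. unmount the file system
vxvol -g bobDG1 stopall                  # 4. stop the volume
vxdg deport bobDG1                       # 5. deport the disk group

# On the other system:
vxdctl enable                            # 7. rescan so the disk group is visible
vxdg import bobDG1                       # 8. import the disk group
vxvol -g bobDG1 startall                 # 9. start the volume
mkdir -p /bob1                           # 10. ensure the mount point exists
mount -F vxfs /dev/vx/dsk/bobDG1/bobVol1 /bob1   # 11. mount
ifconfig eri0:1 plumb                    # 12. configure the virtual IP address
ifconfig eri0:1 192.168.xx.51 netmask 255.255.255.0 up
/bob1/loopy &                            # 13. restart the application
```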


Complete the following steps to bring the application offline on the other system so that it is ready to be placed under VCS control.

1 While still logged into the other system, stop your loopy process by sending a kill signal. Verify that the process is stopped.

2 Remove the virtual IP address configured earlier in this lab. Verify that the IP address is no longer configured.

3 Unmount your file system and verify that it is no longer mounted.

4 Stop the volume and verify that it is disabled.

5 Deport your disk group and verify that it is deported.

Bringing the Services Offline


Lab 6: Starting and Stopping VCS


The following procedure demonstrates how the cluster configuration changes state during startup and shutdown, and shows how the .stale file works.

Brief instructions for this lab are located on the following page:• “Lab 6 Synopsis: Starting and Stopping VCS,” page A-29

Solutions for this exercise are located on the following page:• “Lab 6 Solutions: Starting and Stopping VCS,” page C-63

Note: Complete this section with your lab partner.

1 Change to the /etc/VRTSvcs/conf/config directory.

2 Verify that there is no .stale file in the /etc/VRTSvcs/conf/config directory. This file should not exist yet.

3 Open the cluster configuration.

4 Verify that the .stale file has been created in the directory, /etc/VRTSvcs/conf/config.

5 Attempt to stop VCS using the hastop -all command.

Lab 6: Starting and Stopping VCS
[Slide: on cluster vcs1, run # hastop -all -force from either train1 or train2]


6 Stop the cluster using the hastop -all -force command from one system only to stop VCS forcibly and leave the applications running.

7 Start VCS on each system in the cluster.

8 Verify the status of the cluster.

9 Why are all systems in the STALE_ADMIN_WAIT state?

10 Verify that the .stale file is present in the /etc/VRTSvcs/conf/config directory. This file should exist.

11 Return all systems to a running state (from one system in the cluster).

12 Watch the console during the build process to see the LOCAL_BUILD and REMOTE_BUILD system states.

13 Check the status of the cluster.

14 Verify that there is no .stale file in the /etc/VRTSvcs/conf/config directory. This file should have been removed.
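The command sequence for this lab can be sketched as follows; run hastart on every system, and substitute your own system name in the hasys -force step:

```shell
cd /etc/VRTSvcs/conf/config
ls -l .stale                 # steps 2/4: absent before, present after opening
haconf -makerw               # step 3: open the configuration (creates .stale)
hastop -all -force           # step 6: stop VCS but leave applications running
hastart                      # step 7: run on each system in the cluster
hastatus -sum                # steps 8-9: systems sit in STALE_ADMIN_WAIT
hasys -force train1          # step 11: build from train1's on-disk configuration
hastatus -sum                # step 13: systems return to RUNNING
ls -l .stale                 # step 14: the file has been removed
```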


Lab 7: Online Configuration of a Service Group


The purpose of this lab is to create a service group while VCS is running using either the Cluster Manager graphical user interface or the command-line interface.

Brief instructions for this lab are located on the following page:• “Lab 7 Synopsis: Online Configuration of a Service Group,” page A-31

Solutions for this exercise are located on the following page:• “Lab 7 Solutions: Online Configuration of a Service Group,” page C-67

Classroom-Specific Values
Fill in this table with the applicable values for your lab cluster.

Object: Sample Value (record your value alongside)
• Service group prefix (name): name
• Your system host name (your_sys): train1
• Partner system host name (their_sys): train2

Lab 7: Online Configuration of a Service Group
Use the Java GUI to:
• Create a service group.
• Add resources to the service group from the bottom of the dependency tree.
• Substitute the name you used to create the disk group and volume.


Fill in the design worksheet with values appropriate for your cluster and use the information to create a service group.

1 If you are using the GUI, start Cluster Manager and log in to the cluster.

2 Open the cluster configuration.

3 Create the service group.

4 Modify the SystemList to allow the service group to run on the two systems specified in the design worksheet.

5 Modify the AutoStartList attribute to allow the service group to start on your system.

6 Verify that the service group can autostart and that it is a failover service group.

7 Save the cluster configuration and view the configuration file to verify your changes.

Creating a Service Group

Service Group Definition: Sample Value
• Group name: nameSG1
Required Attributes:
• FailOverPolicy: Priority
• SystemList: train1=0 train2=1
Optional Attributes:
• AutoStartList: train1


Complete the following steps to add NIC, IP, DiskGroup, Volume, and Process resources to the service group using the information from the design worksheet.

1 Add the resource to the service group.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

4 Enable the resource.

Adding Resources to a Service Group

Adding a NIC Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameNIC1

Resource Type NIC

Required Attributes

Device    Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

NetworkHosts* 192.168.xx.1 (HP-UX only)

Critical? No (0)

Enabled? Yes (1)


5 Verify that the resource is online. Because this is a persistent resource, you do not need to bring it online.

6 Save the cluster configuration and view the configuration file to verify your changes.
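A CLI sketch of adding the NIC resource, assuming the Solaris sample values (nameNIC1, eri0) from the worksheet:

```shell
hares -add nameNIC1 NIC nameSG1
hares -modify nameNIC1 Critical 0
hares -modify nameNIC1 Device eri0    # substitute your platform's interface
hares -modify nameNIC1 Enabled 1

# NIC is a persistent resource, so you never bring it online;
# it should report ONLINE once monitoring starts (step 5)
hares -state nameNIC1

haconf -dump                          # save and verify (step 6)
```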


Adding an IP Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameIP1

Resource Type IP

Required Attributes

Device    Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Address    192.168.xx.* (*see the following IP address table)

Optional Attributes

Netmask 255.255.255.0

Critical? No (0)

Enabled? Yes (1)

System IP Address

train1 192.168.xx.51

train2 192.168.xx.52

train3 192.168.xx.53

train4 192.168.xx.54

train5 192.168.xx.55

train6 192.168.xx.56

train7 192.168.xx.57

train8 192.168.xx.58

train9 192.168.xx.59

train10 192.168.xx.60

train11 192.168.xx.61

train12 192.168.xx.62


1 Add the resource to the service group.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

4 Enable the resource.

5 Bring the resource online on your system.

6 Verify that the resource is online.

7 Save the cluster configuration and view the configuration file to verify your changes.
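The same steps for the IP resource can be sketched at the CLI; the address shown (192.168.27.51) is the train1 sample value, so use the address for your system from the table.

```shell
hares -add nameIP1 IP nameSG1
hares -modify nameIP1 Critical 0
hares -modify nameIP1 Device eri0
hares -modify nameIP1 Address 192.168.27.51    # your assigned address
hares -modify nameIP1 NetMask 255.255.255.0
hares -modify nameIP1 Enabled 1

# Bring the virtual IP online on your system and verify (steps 5-6)
hares -online nameIP1 -sys train1
hares -state nameIP1
ifconfig -a                                    # OS-level check

haconf -dump
```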


1 Add the resource to the service group using either the GUI or CLI.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

4 Enable the resource.

5 Bring the resource online on your system.

6 Verify that the resource is online in VCS and at the O/S level.

7 Save the cluster configuration and view the configuration file to verify your changes.

Adding a DiskGroup Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameDG1

Resource Type DiskGroup

Required Attributes

DiskGroup nameDG1

Optional Attributes

StartVolumes 1

StopVolumes 1

Critical? No (0)

Enabled? Yes (1)
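Assuming the worksheet values above, a CLI sketch of adding the DiskGroup resource, including the operating-system-level verification the steps call for:

```shell
hares -add nameDG1 DiskGroup nameSG1
hares -modify nameDG1 Critical 0
hares -modify nameDG1 DiskGroup nameDG1
hares -modify nameDG1 Enabled 1

# Bring the disk group online and verify in VCS
hares -online nameDG1 -sys train1
hares -state nameDG1

# OS-level verification: the disk group should now be imported
vxdg list

haconf -dump
```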


1 Add the resource to the service group using either the GUI or CLI.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

4 Enable the resource.

5 Bring the resource online on your system.

6 Verify that the resource is online in VCS and at the operating system level.

7 Save the cluster configuration and view the configuration file to verify your changes.

Adding a Volume Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameVol1

Resource Type Volume

Required Attributes

Volume nameVol1

DiskGroup nameDG1

Critical? No (0)

Enabled? Yes (1)


1 Add the resource to the service group using either the GUI or CLI.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

4 Enable the resource.

5 Bring the resource online on your system.

6 Verify that the resource is online in VCS and at the operating system level.

7 Save the cluster configuration and view the configuration file to verify your changes.

Adding a Mount Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameMount1

Resource Type Mount

Required Attributes

MountPoint /name1

BlockDevice /dev/vx/dsk/nameDG1/nameVol1 (no spaces)

FSType vxfs

FsckOpt -y

Critical? No (0)

Enabled? Yes (1)
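A CLI sketch of adding the Mount resource with the worksheet sample values. Note the % escape before -y: at the VCS command line, a leading % prevents an attribute value that begins with a dash from being parsed as an option.

```shell
hares -add nameMount1 Mount nameSG1
hares -modify nameMount1 Critical 0
hares -modify nameMount1 MountPoint /name1
hares -modify nameMount1 BlockDevice /dev/vx/dsk/nameDG1/nameVol1
hares -modify nameMount1 FSType vxfs
hares -modify nameMount1 FsckOpt %-y   # % keeps -y from being read as an option
hares -modify nameMount1 Enabled 1

hares -online nameMount1 -sys train1
hares -state nameMount1
df -k /name1        # OS-level check that the file system is mounted
```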


1 Add the resource to the service group using either the GUI or CLI.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

4 Enable the resource.

5 Ensure that you have the console or a terminal window open for loopy output.

6 Bring the resource online on your system.

7 Verify that the resource is online in VCS and at the operating system level.

8 Save the cluster configuration and view the configuration file to verify your changes.

Adding a Process Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameProcess1

Resource Type Process

Required Attributes

PathName /bin/sh

Optional Attributes

Arguments /name1/loopy name 1

Critical? No (0)

Enabled? Yes (1)
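A CLI sketch of adding the Process resource; the Arguments value contains spaces, so it must be quoted. With a console or terminal window open, you should see loopy output when the resource comes online.

```shell
hares -add nameProcess1 Process nameSG1
hares -modify nameProcess1 Critical 0
hares -modify nameProcess1 PathName /bin/sh
hares -modify nameProcess1 Arguments "/name1/loopy name 1"
hares -modify nameProcess1 Enabled 1

hares -online nameProcess1 -sys train1
hares -state nameProcess1
ps -ef | grep loopy       # OS-level check that loopy is running

haconf -dump
```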


1 Link resource pairs together based on the design worksheet.

2 Verify that the resources are linked properly.

3 Save the cluster configuration and view the configuration file to verify your changes.

Linking Resources in the Service Group

Resource Dependency Definition

Service Group nameSG1

Parent Resource Requires Child Resource

nameVol1 nameDG1

nameMount1 nameVol1

nameIP1 nameNIC1

nameProcess1 nameMount1

nameProcess1 nameIP1


Complete the following steps to test the service group on each system in the service group SystemList.

1 Test the service group by switching away from your system in the cluster.

2 Verify that the service group came online properly on their system.

3 Test the service group by switching it back to your system in the cluster.

4 Verify that the service group came online properly on your system.

Testing the Service Group
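The switch test above can be sketched at the CLI, assuming your system is train1 and your partner's is train2:

```shell
hagrp -switch nameSG1 -to train2   # switch away from your system
hagrp -state nameSG1               # verify ONLINE on the partner system
hagrp -switch nameSG1 -to train1   # switch it back
hagrp -state nameSG1               # verify ONLINE on your system
```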


1 Set each resource to critical.

2 Save the cluster configuration and view the configuration file to verify your changes.

3 Close the cluster configuration after all students working in your cluster are finished.

Setting Resources to Critical
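A CLI sketch of marking every resource critical and closing the configuration, using the resource names from this lab:

```shell
# Set each resource in the group to critical
for res in nameNIC1 nameIP1 nameDG1 nameVol1 nameMount1 nameProcess1
do
        hares -modify $res Critical 1
done

# Save and close the configuration when all students are finished
haconf -dump -makero
```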


group nameSG1 (
        SystemList = { train1 = 0, train2 = 1 }
        AutoStartList = { train1 }
        )

        DiskGroup nameDG1 (
                DiskGroup = nameDG1
                )

        IP nameIP1 (
                Device = eri0
                Address = "192.168.27.51"
                )

        Mount nameMount1 (
                MountPoint = "/name1"
                BlockDevice = "/dev/vx/dsk/nameDG1/nameVol1"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Process nameProcess1 (
                PathName = "/bin/sh"
                Arguments = "/name1/loopy name 1"
                )

        NIC nameNIC1 (
                Device = eri0
                )

Partial Sample Configuration File


        Volume nameVol1 (
                Volume = nameVol1
                DiskGroup = nameDG1
                )

        nameIP1 requires nameNIC1
        nameMount1 requires nameVol1
        nameProcess1 requires nameIP1
        nameProcess1 requires nameMount1
        nameVol1 requires nameDG1


Lab 8: Offline Configuration of a Service Group


The purpose of this lab is to add a service group by copying and editing the definition in main.cf for nameSG1.

Brief instructions for this lab are located on the following page:
• “Lab 8 Synopsis: Offline Configuration of a Service Group,” page A-38

Solutions for this exercise are located on the following page:
• “Lab 8 Solutions: Offline Configuration of a Service Group,” page C-89

Lab Assignments

Complete the following worksheet for the resources managed by the service groups you create in this lab. Then follow the procedure to configure the resources.

Object                                 Sample Value                              Your Value
Your system host name (your_sys)       Use the same system as previous labs
Partner system host name (their_sys)   Use the same system as previous labs
Name prefix for your objects           name
Disk assignment for disk group         Solaris: c#t#d#; AIX: hdisk##;
                                       HP-UX: c#t#d#; Linux: sd##

Lab 8: Offline Configuration of a Service Group

[Diagram: dependency trees for the two service groups. nameSG1 contains nameProcess1, nameMount1, nameVol1, nameDG1, nameIP1, and nameNIC1; nameSG2 contains the corresponding resources ending in 2.]

Working together, follow the offline configuration procedure. Alternately, work alone and use the GUI to create a new service group.


Disk group name               nameDG2
Volume name                   nameVol2
Mount point                   /name2
Application script location   class_sw_dir


Use the values in the table to prepare resources for VCS.

1 Verify disk availability for Volume Manager.

2 Initialize a disk for Volume Manager using the disk device from the worksheet.

3 Create a disk group with the name from the worksheet using the initialized disk.

4 Create a 2 GB volume in the disk group.

5 Create a vxfs file system on the volume.

6 Create a mount point on each system in the cluster.

7 Mount the file system on your cluster system.

8 Verify that the file system is mounted on your system.

9 Copy the loopy script to your file system created in this lab.

10 Start the new loopy application.

11 Verify that the new loopy application is working correctly.

Prepare Resources
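The preparation steps above can be sketched with Solaris and Volume Manager commands; the disk device c1t1d0 and the class_sw_dir script location are placeholders, so substitute the values from your worksheet.

```shell
vxdisk list                                   # 1: verify disk availability
vxdisksetup -i c1t1d0                         # 2: initialize the disk (example device)
vxdg init nameDG2 nameDG201=c1t1d0            # 3: create the disk group
vxassist -g nameDG2 make nameVol2 2g          # 4: create a 2 GB volume
mkfs -F vxfs /dev/vx/rdsk/nameDG2/nameVol2    # 5: create the vxfs file system
mkdir /name2                                  # 6: mount point (repeat on each system)
mount -F vxfs /dev/vx/dsk/nameDG2/nameVol2 /name2   # 7: mount on your system
df -k /name2                                  # 8: verify the mount
cp class_sw_dir/loopy /name2                  # 9: class_sw_dir from the worksheet
/bin/sh /name2/loopy name 2 &                 # 10: start the new loopy application
ps -ef | grep loopy                           # 11: verify it is running
```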


12 Stop the resources to prepare to place them under VCS control in the next section of the lab.

a Stop the loopy process by sending a kill signal. Verify that the process is stopped.

b Unmount your file system and verify that it is no longer mounted.

c Stop the volume and verify that it is disabled.

d Deport your disk group and verify that it is deported.
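The teardown in step 12 can be sketched as follows (same sample names as above; the kill pipeline is one common way to find the loopy PID):

```shell
# a: stop loopy and verify it is gone
kill `ps -ef | grep '/name2/loopy' | grep -v grep | awk '{print $2}'`
ps -ef | grep loopy

# b: unmount the file system and verify
umount /name2
df -k | grep name2          # should print nothing

# c: stop the volume and verify it is disabled
vxvol -g nameDG2 stop nameVol2
vxprint -g nameDG2 nameVol2

# d: deport the disk group and verify
vxdg deport nameDG2
vxdg list                   # nameDG2 should no longer be listed
```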


In the design worksheet, record information needed to create a new service group using the offline process described in the next section.

Completing the Design Worksheet

Service Group Definition Sample Value Your Value

Group nameSG2

Required Attributes

FailOverPolicy Priority

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameNIC2

Resource Type NIC

Required Attributes

Device    Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

NetworkHosts* 192.168.xx.1 (HP-UX only)

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameIP2

Resource Type IP

Required Attributes

Device    Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Address    192.168.xx.* (*see the following IP address table)

Optional Attributes

Netmask 255.255.255.0

Critical? No (0)

Enabled? Yes (1)

System IP Address

train1 192.168.xx.71

train2 192.168.xx.72

train3 192.168.xx.73

train4 192.168.xx.74

train5 192.168.xx.75

train6 192.168.xx.76

train7 192.168.xx.77

train8 192.168.xx.78

train9 192.168.xx.79

train10 192.168.xx.80

train11 192.168.xx.81

train12 192.168.xx.82


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameDG2

Resource Type DiskGroup

Required Attributes

DiskGroup nameDG2

Optional Attributes

StartVolumes 1

StopVolumes 1

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameVol2

Resource Type Volume

Required Attributes

Volume nameVol2

DiskGroup nameDG2

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameMount2

Resource Type Mount

Required Attributes

MountPoint /name2

BlockDevice /dev/vx/dsk/nameDG2/nameVol2 (no spaces)

FSType vxfs

FsckOpt -y

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameProcess2

Resource Type Process

Required Attributes

PathName /bin/sh

Optional Attributes

Arguments /name2/loopy name 2

Critical? No (0)

Enabled? Yes (1)


Resource Dependency Definition

Service Group nameSG2

Parent Resource Requires Child Resource

nameVol2 nameDG2

nameMount2 nameVol2

nameIP2 nameNIC2

nameProcess2 nameMount2

nameProcess2 nameIP2


Note: You may choose to use the GUI to create the nameSG2 service group. If so, skip this section and complete the “Alternate Lab” section instead.

1 Working with your lab partner, verify that the cluster configuration is saved and closed.

2 Change to the VCS configuration directory.

3 Make a subdirectory named test.

4 Copy the main.cf and types.cf files into the test subdirectory.

Linux: Also copy the vcsApacheTypes.cf file.

5 Change to the test directory.

6 Edit the main.cf file in the test directory on one system in the cluster.

a For each student’s service group, copy the nameSG1 service group structure to create a new nameSG2 service group.

b Rename all of the resources within the new nameSG2 service group to end with 2 instead of 1, as shown in the following table.

c Copy and modify the dependency section.

Modifying a VCS Configuration File

Existing Name Change To New Name

nameProcess1 nameProcess2

nameIP1 nameIP2

nameNIC1 nameNIC2

nameMount1 nameMount2

nameVol1 nameVol2

nameDG1 nameDG2


7 Edit the attributes of each copied resource to match the design worksheet values shown earlier in this section.

8 Verify the cluster configuration and fix any errors found.

9 Stop VCS on all systems, but leave the applications still running.

10 Verify that the loopy applications are still running.

11 Copy the main.cf file from the test subdirectory into the configuration directory.

12 Start the cluster from the system where you edited the configuration file.

13 Start the cluster in the stale state on the other system in the cluster (where the configuration was not edited).

14 Verify the status of the cluster.

15 View the build process to see the LOCAL_BUILD and REMOTE_BUILD system states.

16 Bring the new service group online on your system. Students can bring their own service groups online.

17 Verify the status of the cluster.
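The offline procedure above can be sketched as a command sequence; paths assume the standard configuration directory /etc/VRTSvcs/conf/config, and group names follow the worksheet.

```shell
cd /etc/VRTSvcs/conf/config
mkdir test
cp main.cf types.cf test          # on Linux, also copy vcsApacheTypes.cf
cd test
vi main.cf                        # copy nameSG1 to nameSG2 and rename resources

hacf -verify .                    # 8: check syntax; fix any errors reported
hastop -all -force                # 9: stop VCS, leaving applications running
ps -ef | grep loopy               # 10: loopy applications still running
cp main.cf /etc/VRTSvcs/conf/config   # 11: install the edited file
hastart                           # 12: on the system where you edited main.cf

# 13: on the other system, wait for a remote build:
hastart -stale

hastatus -summary                 # 14-15: watch LOCAL_BUILD/REMOTE_BUILD states
hagrp -online nameSG2 -sys train1 # 16: bring the new group online
hastatus -summary                 # 17: verify the cluster status
```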


Use the information in the design worksheet in the previous section to create a new service group, using the GUI to copy resources from the nameSG1 service group.

1 Start Cluster Manager and log in to the cluster.

2 Open the cluster configuration.

3 Create the service group.

4 Modify the SystemList to allow the service group to run on the two systems specified in the design worksheet.

5 Modify the AutoStartList attribute to allow the service group to start on your system.

6 Verify that the service group can autostart and that it is a failover service group.

7 Save the cluster configuration and view the configuration file to verify your changes.

8 Copy all resources from the nameSG1 service group to nameSG2.

Note: When you paste a copied resource or resource tree, the Name Clashes window is displayed, which enables you to rename each resource you are pasting.

Change the resource names as shown in the table:

Alternate Lab: Using the GUI to Create the Service Group


9 Set each resource to not critical.

10 Modify each resource to set the attribute values as specified in the worksheet.

11 Save the cluster configuration and view the configuration file to verify your changes.

12 Enable each resource.

13 Bring the nameSG2 resources online, starting from the bottom of the dependency tree.

14 Save and close the cluster configuration.

Note: In the GUI, the Close configuration action saves the configuration automatically.

Existing Name Change To New Name

nameProcess1 nameProcess2

nameIP1 nameIP2

nameNIC1 nameNIC2

nameMount1 nameMount2

nameVol1 nameVol2

nameDG1 nameDG2


group nameSG2 (
        SystemList = { train1 = 0, train2 = 1 }
        AutoStartList = { train1 }
        )

        DiskGroup nameDG2 (
                DiskGroup = nameDG2
                )

        IP nameIP2 (
                Device = eri0
                Address = "192.168.27.71"
                )

        Mount nameMount2 (
                MountPoint = "/name2"
                BlockDevice = "/dev/vx/dsk/nameDG2/nameVol2"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Process nameProcess2 (
                PathName = "/bin/sh"
                Arguments = "/name2/loopy name 2"
                )

        NIC nameNIC2 (
                Device = eri0
                )

Partial Sample Configuration File


        Volume nameVol2 (
                Volume = nameVol2
                DiskGroup = nameDG2
                )

        nameIP2 requires nameNIC2
        nameMount2 requires nameVol2
        nameProcess2 requires nameIP2
        nameProcess2 requires nameMount2
        nameVol2 requires nameDG2


Lab 9: Creating a Parallel Service Group


The purpose of this lab is to add a parallel service group to monitor the NIC resource and replace the NIC resources in the failover service groups with Proxy resources.

Brief instructions for this lab are located on the following page:
• “Lab 9 Synopsis: Creating a Parallel Service Group,” page A-47

Solutions for this exercise are located on the following page:
• “Lab 9 Solutions: Creating a Parallel Service Group,” page C-109

Work with your lab partner to create a parallel service group containing network resources using the information in the design worksheet.

Creating a Parallel Network Service Group

Service Group Definition Sample Value Your Value

Group NetworkSG

Required Attributes

Parallel 1

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1 train2

Lab 9: Creating a Parallel Service Group

[Diagram: the parallel NetworkSG service group contains the NetworkNIC and NetworkPhantom resources. In the failover service groups nameSG1 and nameSG2, the NIC resources are replaced by Proxy resources (nameProxy1 and nameProxy2) that target NetworkNIC.]


1 Open the cluster configuration.

2 Create the service group.

3 Modify the SystemList to allow the service group to run on the systems specified in the design worksheet.

4 Modify the AutoStartList attribute to allow the service group to start on both systems.

5 Modify the Parallel attribute to allow the service group to run on both systems.

6 View the service group attribute settings.
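A CLI sketch of creating the parallel group with the worksheet sample values; note that the Parallel attribute is set before any resources are added:

```shell
haconf -makerw
hagrp -add NetworkSG
hagrp -modify NetworkSG SystemList train1 0 train2 1
hagrp -modify NetworkSG AutoStartList train1 train2
hagrp -modify NetworkSG Parallel 1    # run simultaneously on both systems
hagrp -display NetworkSG              # view the attribute settings (step 6)
```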


Use the values in the following tables to create NIC and Phantom resources.

1 Add the NIC resource to the service group.

2 Set the resource to not critical.

3 Set the required attributes for this resource, and any optional attributes, if needed.

Adding Resources

Resource Definition Sample Value Your Value

Service Group NetworkSG

Resource Name NetworkNIC

Resource Type NIC

Required Attributes

Device    Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group NetworkSG

Resource Name NetworkPhantom

Resource Type Phantom

Required Attributes

Critical? No (0)

Enabled? Yes (1)


4 Enable the resource.

5 Verify that the resource is online. Because it is a persistent resource, you do not need to bring it online.

6 Add the Phantom resource to the service group.

7 Set the resource to not critical.

8 Enable the resource.

9 Verify that the status of the NetworkSG service group now shows as online.

10 Save the cluster configuration and view the configuration file.


Use the values in the tables to replace the NIC resources with Proxy resources and create new links.

Replacing NIC Resources with Proxy Resources

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameProxy1

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameProxy2

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name csgProxy

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)


1 Delete all NIC resources in the ClusterService, nameSG1, and nameSG2 service groups.

Note: Only one student can delete the ClusterService NIC resource.

2 Add a Proxy resource to each failover service group using the service group naming convention:
– nameProxy1
– nameProxy2
– csgProxy

3 Set the value for each Proxy TargetResName attribute to NetworkNIC.

4 Set the resources to not critical.

5 Enable the resources.

6 Verify that the Proxy resources are in an online state.

7 Save the cluster configuration.
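For one of the groups, the replacement can be sketched at the CLI as follows (repeat for nameProxy2 in nameSG2 and csgProxy in ClusterService; only one student deletes the ClusterService NIC):

```shell
hares -delete nameNIC1                          # 1: remove the NIC resource
hares -add nameProxy1 Proxy nameSG1             # 2: add the Proxy resource
hares -modify nameProxy1 TargetResName NetworkNIC   # 3: point it at the NIC
hares -modify nameProxy1 Critical 0             # 4
hares -modify nameProxy1 Enabled 1              # 5
hares -state nameProxy1                         # 6: should report ONLINE
haconf -dump                                    # 7: save the configuration
```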


Use the values in the following tables to link the Proxy resources into each service group and test the results.

1 Link the Proxy resources as children of the corresponding IP resources of each service group.

2 Switch each service group (nameSG1, nameSG2, ClusterService) to ensure that they can run on each system.

3 Set all resources to critical.

4 Save and close the cluster configuration.

Linking Resources and Testing the Service Group

Resource Dependency Definition

Service Group nameSG1

Parent Resource Requires Child Resource

nameIP1 nameProxy1

Resource Dependency Definition

Service Group nameSG2

Parent Resource Requires Child Resource

nameIP2 nameProxy2

Resource Dependency Definition

Service Group ClusterService

Parent Resource Requires Child Resource

webip csgProxy


include "types.cf"

cluster vcs (
        UserNames = { admin = ElmElgLimHmmKumGlj }
        ClusterAddress = "192.168.27.51"
        Administrators = { admin }
        CounterInterval = 5
        )

system train1 (
        )

system train2 (
        )

group ClusterService (
        SystemList = { train1 = 0, train2 = 1 }
        AutoStartList = { train1, train2 }
        OnlineRetryLimit = 3
        Tag = CSG
        )

        IP webip (
                Device = eri0
                Address = "192.168.27.42"
                NetMask = "255.255.255.0"
                )

        Proxy csgProxy (
                TargetResName = NetworkNIC
                )

        VRTSWebApp VCSweb (

Sample Configuration File


                Critical = 0
                AppName = vcs
                InstallDir = "/opt/VRTSweb/VERITAS"
                TimeForOnline = 5
                )

        VCSweb requires webip
        webip requires csgProxy

group NetworkSG (
        SystemList = { train1 = 0, train2 = 1 }
        Parallel = 1
        AutoStartList = { train1, train2 }
        )

        NIC NetworkNIC (
                Device = eri0
                )

        Phantom NetworkPhantom (
                )

group nameSG1 (
        SystemList = { train1 = 0, train2 = 1 }
        AutoStartList = { train1 }
        )

        DiskGroup nameDG1 (
                DiskGroup = nameDG1
                )

        IP nameIP1 (
                Device = eri0
                Address = "192.168.27.51"
                )


        Mount nameMount1 (
                MountPoint = "/name1"
                BlockDevice = "/dev/vx/dsk/nameDG1/nameVol1"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Process nameProcess1 (
                PathName = "/bin/sh"
                Arguments = "/name1/loopy name 1"
                )

        Proxy nameProxy1 (
                TargetResName = NetworkNIC
                )

        Volume nameVol1 (
                Volume = nameVol1
                DiskGroup = nameDG1
                )

        nameIP1 requires nameProxy1
        nameMount1 requires nameVol1
        nameProcess1 requires nameIP1
        nameProcess1 requires nameMount1
        nameVol1 requires nameDG1

group nameSG2 (
        SystemList = { train1 = 0, train2 = 1 }
        AutoStartList = { train1 }
        )

        DiskGroup nameDG2 (


                DiskGroup = nameDG2
                )

        IP nameIP2 (
                Device = eri0
                Address = "192.168.27.71"
                )

        Mount nameMount2 (
                MountPoint = "/name2"
                BlockDevice = "/dev/vx/dsk/nameDG2/nameVol2"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Process nameProcess2 (
                PathName = "/bin/sh"
                Arguments = "/name2/loopy name 2"
                )

        Proxy nameProxy2 (
                TargetResName = NetworkNIC
                )

        Volume nameVol2 (
                Volume = nameVol2
                DiskGroup = nameDG2
                )

        nameIP2 requires nameProxy2
        nameMount2 requires nameVol2
        nameProcess2 requires nameIP2
        nameProcess2 requires nameMount2
        nameVol2 requires nameDG2


Lab 10: Configuring Notification


The purpose of this lab is to configure notification.

Brief instructions for this lab are located on the following page:
• “Lab 10 Synopsis: Configuring Notification,” page A-52

Solutions for this exercise are located on the following page:
• “Lab 10 Solutions: Configuring Notification,” page C-125

Lab 10: Configuring Notification

[Diagram: the ClusterService service group contains the NotifierMngr resource; the resfault, nofailover, and resadminwait triggers are configured alongside the nameSG1 and nameSG2 service groups (Optional Lab).]

SMTP Server: ___________________________________


Work with your lab partner to add a NotifierMngr type resource to the ClusterService service group using the information in the design worksheet.

1 Open the cluster configuration.

2 Add the resource to the service group.

3 Set the resource to not critical.

4 Set the required attributes for this resource and any optional attributes, if needed.

5 Enable the resource.

6 Link the notifier resource to csgProxy.

7 Bring the resource online on the system running the ClusterService service group.

Configuring the NotifierMngr Resource

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name notifier

Resource Type NotifierMngr

Required Attributes

SmtpServer localhost

SmtpRecipients root Warning

PathName /xxx/xxx (AIX only)

Critical? No (0)

Enabled? Yes (1)


8 Verify that the resource is online.

9 Save the cluster configuration.
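A CLI sketch of the notifier steps, using the worksheet sample values; SmtpRecipients is an association attribute, so the recipient and severity are supplied as a key/value pair.

```shell
haconf -makerw
hares -add notifier NotifierMngr ClusterService
hares -modify notifier Critical 0
hares -modify notifier SmtpServer localhost
hares -modify notifier SmtpRecipients root Warning   # recipient, then severity
hares -modify notifier Enabled 1

hares -link notifier csgProxy
hares -online notifier -sys train1   # the system running ClusterService
hares -state notifier                # verify ONLINE (step 8)

haconf -dump                         # step 9
```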


1 Test the service group by switching it to the other system in the cluster.

2 Verify that the service group came online properly on the other system.

3 Test the service group by switching it back to the original system in the cluster.

4 Verify that the service group came online properly on the original system.

5 Set the notifier resource to critical.

6 Save and close the cluster configuration and view the configuration file to verify your changes.

Note: In the next lab, you will see the effects of configuring notification and triggers when you test various resource fault scenarios.

Testing the Service Group

Page 160: havcs-410-101 a-2-10-srt-pg_2

B–90 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Use the following procedure to configure triggers for notification. In this lab, each student creates a local copy of the trigger script on their own system. If you are working alone in the cluster, copy your completed triggers to the other system.

1 Create a text file in the /opt/VRTSvcs/bin/triggers directory named resfault. Add the following lines to the file:

#!/bin/sh
echo `date` > /tmp/resfault.msg
echo message from the resfault trigger >> /tmp/resfault.msg
echo Resource $2 has faulted on System $1 >> /tmp/resfault.msg
echo Please check the problem. >> /tmp/resfault.msg
/usr/lib/sendmail root </tmp/resfault.msg
rm /tmp/resfault.msg

2 Create a text file in the /opt/VRTSvcs/bin/triggers directory named nofailover. Add the following lines to the file.

#!/bin/sh
echo `date` > /tmp/nofailover.msg
echo message from the nofailover trigger >> /tmp/nofailover.msg
echo no failover for service group $2 >> /tmp/nofailover.msg
echo Please check the problem. >> /tmp/nofailover.msg
/usr/lib/sendmail root </tmp/nofailover.msg
rm /tmp/nofailover.msg

Optional Lab: Configuring Triggers

Page 161: havcs-410-101 a-2-10-srt-pg_2

Lab 10: Configuring Notification B–91Copyright © 2005 VERITAS Software Corporation. All rights reserved.


3 Create a text file in the /opt/VRTSvcs/bin/triggers directory named resadminwait. Add the following lines to the file.

#!/bin/sh
echo `date` > /tmp/resadminwait.msg
echo message from the resadminwait trigger >> /tmp/resadminwait.msg
echo Resource $2 on System $1 is in adminwait for Reason $3 >> /tmp/resadminwait.msg
echo Please check the problem. >> /tmp/resadminwait.msg
/usr/lib/sendmail root </tmp/resadminwait.msg
rm /tmp/resadminwait.msg

4 Ensure that all trigger files are executable.

5 If you are working alone, copy all triggers to the other system.
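A sketch of steps 4 and 5, assuming a two-node classroom cluster where the partner system is reachable as train2 (a classroom example) and rcp is available:

```shell
cd /opt/VRTSvcs/bin/triggers
chmod 755 resfault nofailover resadminwait
# if working alone, copy the triggers to the other system:
rcp resfault nofailover resadminwait train2:/opt/VRTSvcs/bin/triggers/
```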

Page 162: havcs-410-101 a-2-10-srt-pg_2

B–92 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Page 163: havcs-410-101 a-2-10-srt-pg_2

Lab 11: Configuring Resource Fault Behavior B–93Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Lab 11: Configuring Resource Fault Behavior

Page 164: havcs-410-101 a-2-10-srt-pg_2

B–94 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

The purpose of this lab is to observe how VCS responds to faults in a variety of scenarios.

Brief instructions for this lab are located on the following page:
• “Lab 11 Synopsis: Configuring Resource Fault Behavior,” page A-55

Solutions for this exercise are located on the following page:
• “Lab 11 Solutions: Configuring Resource Fault Behavior,” page C-133

Lab 11: Configuring Resource Fault Behavior

nameSG1 nameSG2

Attribute settings explored in this lab: Critical=0 / Critical=1, FaultPropagation=0 / FaultPropagation=1, ManageFaults=NONE / ManageFaults=ALL, RestartLimit=1

Note: Network interfaces for virtual IP addresses are unconfigured to force the IP resource to fault. In your classroom, the interface you specify is: ______

Replace the variable interface in the lab steps with this value.

Page 165: havcs-410-101 a-2-10-srt-pg_2

Lab 11: Configuring Resource Fault Behavior B–95Copyright © 2005 VERITAS Software Corporation. All rights reserved.


This part of the lab exercise explores the default behavior of VCS. Each student works independently in this lab.

1 Open the cluster configuration.

2 Verify that all resources in the nameSG1 service group are currently set to critical; if not, set them to critical.

3 Set the IP and Process resources to not critical in the nameSG1 service group.

4 Change the monitor interval for the IP resource type to 10 seconds and the offline monitor interval for the IP resource type to 30 seconds.

5 Save the cluster configuration.

6 Verify that your nameSG1 service group is currently online on your system. If it is not, bring it online or switch it to your system.

7 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

8 Clear any faults.

9 Bring the IP and Process resources back online on your system.

10 Set the IP and Process resources to critical in the nameSG1 service group.

11 Save the cluster configuration.

Non-Critical Resource Faults
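Steps 3, 4, and 7 above might look like the following from the command line. This is a sketch only: nameIP1 is the IP resource name used later in this lab, nameProcess1 is an assumed name for the Process resource, and the interface argument is the classroom-dependent value you recorded:

```shell
hares -modify nameIP1 Critical 0          # step 3
hares -modify nameProcess1 Critical 0     # step 3 (resource name assumed)
hatype -modify IP MonitorInterval 10      # step 4
hatype -modify IP OfflineMonitorInterval 30
haconf -dump
ifconfig interface down                   # step 7, Solaris example: fault the IP outside VCS
```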

Page 166: havcs-410-101 a-2-10-srt-pg_2

B–96 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

1 Verify that all resources in the nameSG1 service group are currently set to critical.

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

3 Verify that your nameSG1 service group is currently online on your system. If it is not online locally, bring it online or switch it to your system.

4 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

5 Without clearing faults from the last failover, unconfigure the virtual IP address on the system where the service group is now online.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

6 Clear the nameIP1 resource on all systems and bring the nameSG1 service group online on your system.

Critical Resource Faults

Page 167: havcs-410-101 a-2-10-srt-pg_2

Lab 11: Configuring Resource Fault Behavior B–97Copyright © 2005 VERITAS Software Corporation. All rights reserved.


1 Verify that all resources in the nameSG1 service group are currently set to critical.

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

3 Verify that your nameSG1 service group is currently online on your system. If it is not online locally, bring it online or switch it to your system.

4 Freeze the nameSG1 service group.

5 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

6 Bring up the virtual IP address outside of VCS. What happens?

7 Unconfigure the virtual IP address outside of VCS to fault the IP resource again. While the resource is faulted, unfreeze the service group.

8 Did unfreezing the service group cause a failover or any resources to come offline? Explain why or why not.

9 Clear the fault and bring the resource online.

Faults within Frozen Service Groups

Page 168: havcs-410-101 a-2-10-srt-pg_2

B–98 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

This section illustrates service group failover behavior using the ManageFaults and FaultPropagation attributes.

1 Verify that all resources in the nameSG1 service group are currently set to critical.

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

3 Set the FaultPropagation attribute for the nameSG1 service group to off (0).

4 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

5 Clear the faulted resource and bring the resource back online.

6 Set the ManageFaults attribute for the nameSG1 service group to NONE and set the FaultPropagation attribute back to one (1).

7 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

8 Recover the resource from the ADMIN_WAIT state by bringing up the IP address outside of VCS and clearing the AdminWait attribute without a fault.

Effects of ManageFaults and FaultPropagation

Page 169: havcs-410-101 a-2-10-srt-pg_2

Lab 11: Configuring Resource Fault Behavior B–99Copyright © 2005 VERITAS Software Corporation. All rights reserved.


9 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

10 Recover the resource from the ADMIN_WAIT state by faulting the service group.

11 Clear the faulted nameIP1 resource and switch the nameSG1 service group back to your system.

12 Set ManageFaults back to ALL for the nameSG1 service group and save the cluster configuration.
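A command-line sketch of this section, assuming a classroom system named train1; hagrp -clearadminwait (with the optional -fault flag) is the documented way to take a resource out of the ADMIN_WAIT state:

```shell
hagrp -modify nameSG1 FaultPropagation 0     # step 3
hagrp -modify nameSG1 ManageFaults NONE      # step 6
hagrp -modify nameSG1 FaultPropagation 1     # step 6
# step 8: clear ADMIN_WAIT without generating a fault
hagrp -clearadminwait nameSG1 -sys train1
# step 10: clear ADMIN_WAIT and fault the resource
hagrp -clearadminwait -fault nameSG1 -sys train1
hagrp -modify nameSG1 ManageFaults ALL       # step 12
```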

Page 170: havcs-410-101 a-2-10-srt-pg_2

B–100 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

This section illustrates failover behavior of a resource type using restart limits.

1 Verify that all resources in the nameSG1 service group are set to critical.

2 Set all resources to critical and save the cluster configuration.

3 Set the RestartLimit Attribute for the Process resource type to 1.

4 Stop the loopy process running in the nameSG1 service group by sending a kill signal.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

5 Stop the loopy process running in the nameSG1 service group by sending a kill signal.

Note: The effects of stopping loopy can take up to 60 seconds to be detected.

a What happens to the resources?

b Does the service group fail over?

c Did you receive e-mail notification?

6 Clear the faulted resource and switch the nameSG1 service group back to your system.

7 When all students have completed the lab, save and close the configuration.

RestartLimit Behavior
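Steps 3 and 4 above can be sketched as follows; the process ID shown to kill is whatever ps reports for loopy on your system:

```shell
hatype -modify Process RestartLimit 1    # step 3
haconf -dump
# step 4: find and kill the loopy process
ps -ef | grep loopy
kill pid_of_loopy
```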

Page 171: havcs-410-101 a-2-10-srt-pg_2

Lab 13 Details: Testing Communication Failures B–101Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Lab 13 Details: Testing Communication Failures

Page 172: havcs-410-101 a-2-10-srt-pg_2

B–102 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

The purpose of this lab is to configure a low-priority link and then pull network cables and observe how VCS responds.

Brief instructions for this lab are located on the following page:
• “Lab 13 Synopsis: Testing Communication Failures,” page A-60

Solutions for this exercise are located on the following page:
• “Lab 13 Solutions: Testing Communication Failures,” page C-149

Lab 13: Testing Communication Failures

Optional Lab

1. Configure the InJeopardy trigger (optional).
2. Configure a low-priority link.
3. Test failures.

Page 173: havcs-410-101 a-2-10-srt-pg_2

Lab 13 Details: Testing Communication Failures B–103Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Use the following procedure to configure triggers for jeopardy notification. In this lab, students create a local copy of the trigger script on their own systems. If you are working alone in the cluster, copy your completed triggers to the other system.

1 Create a text file in the /opt/VRTSvcs/bin/triggers directory named injeopardy. Add the following lines to the file:

#!/bin/sh
echo `date` > /tmp/injeopardy.msg
echo message from the injeopardy trigger >> /tmp/injeopardy.msg
echo System $1 is in Jeopardy >> /tmp/injeopardy.msg
echo Please check the problem. >> /tmp/injeopardy.msg
/usr/lib/sendmail root </tmp/injeopardy.msg
rm /tmp/injeopardy.msg

2 Make the trigger file executable.

3 If you are working alone, copy the trigger to the other system.

4 Continue with the next lab sections. The “Multiple LLT Link Failures—Jeopardy” section of this lab shows the effects of configuring the InJeopardy trigger.

Optional Lab: Configuring the InJeopardy Trigger

Page 174: havcs-410-101 a-2-10-srt-pg_2

B–104 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Working with your lab partner, use the procedures to create a low-priority link and then fault communication links and observe what occurs in a cluster environment when fencing is not configured.

1 Save and close the cluster configuration.

2 Shut down VCS, leaving the applications running on all systems in the cluster.

3 Unconfigure GAB on each system in the cluster.

4 Unconfigure LLT on each system in the cluster.

Adding a Low-Priority Link

Object Sample Value Your Value

Public Ethernet interface for low-pri link: Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Cluster interconnect link 1: Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth1; VA: bge2

Cluster interconnect link 2: Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth2; VA: bge3

Host name for sysname file for your_sys

train1

Host name for sysname file for their_sys

train2

Page 175: havcs-410-101 a-2-10-srt-pg_2

Lab 13 Details: Testing Communication Failures B–105Copyright © 2005 VERITAS Software Corporation. All rights reserved.


5 Edit the /etc/llttab LLT configuration file on each system to add a directive for a low-priority LLT link on the public network.

Solaris MobileSkip this step for mobile classrooms. There is only one public interface and it is already configured as a low-priority link.

6 Start LLT on each system.

7 Verify that LLT is running.

8 Start GAB on each system.

9 Verify GAB membership.

10 Start VCS on each system.

11 Verify that VCS is running.
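On Solaris, for example, the low-priority link directive added in step 5 and the restart sequence in steps 6 through 11 might look like the following. Device names are platform- and classroom-dependent; the startup commands assume a two-node cluster:

```shell
# /etc/llttab: add a low-priority link on the public interface
link-lowpri eri0 /dev/eri:0 - ether - -

lltconfig -c          # step 6: start LLT
lltstat -nvv          # step 7: verify LLT
gabconfig -c -n 2     # step 8: start GAB, seeding with 2 nodes
gabconfig -a          # step 9: verify GAB membership
hastart               # step 10: start VCS
hastatus -summary     # step 11: verify VCS
```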

Page 176: havcs-410-101 a-2-10-srt-pg_2

B–106 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Note: For Solaris mobile classrooms, skip this section.

1 Copy the lltlink_enable and lltlink_disable utilities from the location provided by your instructor into the /tmp directory.

_____________________________________________________________

2 Change to the /tmp directory.

cd /tmp

3 Change the NIC resource type MonitorInterval attribute to 3600 seconds temporarily for communications testing. This prevents the NetworkNIC resource from faulting during this lab when the low-priority LLT link is pulled.

4 Throughout this lab, use the lltlink_disable command to simulate failure of an LLT link where you are instructed to remove a link.

Notes:
– Use lltlink_enable to restore the LLT link.
– The utilities prompt you to select an interface.
– These classroom utilities are provided to enable you to simulate disconnecting and reconnecting Ethernet cables without risk of damaging connectors.
– Run the utility from one system only, unless otherwise specified.

5 Using the lltlink_disable utility, remove one LLT link and watch for the link to expire in the console or system log file.

6 Verify that the link is down.

7 Restore communications using the lltlink_enable utility.

8 Verify that the link is now up and communications are restored.

Single LLT Link Failure

Page 177: havcs-410-101 a-2-10-srt-pg_2

Lab 13 Details: Testing Communication Failures B–107Copyright © 2005 VERITAS Software Corporation. All rights reserved.


1 Verify the status of GAB.

2 Use lltlink_disable to remove all but one LLT link and watch for the link to expire in the console.

Solaris MobileRemove only the one high-priority LLT link (dmfe1).

3 Verify that the links are down.

4 Verify the status of GAB.

5 Restore communications using lltlink_enable.

6 Verify that the link is now up and communications are restored.

7 Verify the status of GAB.

Multiple LLT Link Failures—Jeopardy

Page 178: havcs-410-101 a-2-10-srt-pg_2

B–108 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

1 Verify the status of GAB from each system.

2 Remove all but one LLT link and watch for the link to expire in the console.

Solaris MobileRemove only the one high-priority LLT link (dmfe1).

3 Verify that the links are down from each system.

4 Verify the status of GAB from each system.

5 Remove the last LLT link and watch for the link to expire in the console.

6 Verify that all links are down from each system.

7 Verify the status of GAB from each system.

8 What is the status of service groups running on each system?

9 Recover from the network partition.

a Stop HAD on one system but leave services running.

Note: If you have more than two systems in the cluster, you must stop HAD on all systems on either side of the network partition.

b If you physically unplugged cables, restore communications by reconnecting the LLT link cables.

Note: If you used lltlink_disable to simulate link failure, skip this step.

c Verify that the LLT connections are up.

Multiple LLT Link Failures—Network Partition

Page 179: havcs-410-101 a-2-10-srt-pg_2

Lab 13 Details: Testing Communication Failures B–109Copyright © 2005 VERITAS Software Corporation. All rights reserved.


d Verify that GAB has proper membership.

e Start VCS on the system where you stopped it.

f Verify that each service group is autoenabled.

10 Change the NIC resource type MonitorInterval attribute back to 60 seconds.

Page 180: havcs-410-101 a-2-10-srt-pg_2

B–110 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Page 181: havcs-410-101 a-2-10-srt-pg_2

Lab 14: Configuring I/O Fencing B–111Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Lab 14: Configuring I/O Fencing

Page 182: havcs-410-101 a-2-10-srt-pg_2

B–112 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

The purpose of this lab is to set up I/O fencing in a two-node cluster and simulate node and communication failures.

Brief instructions for this lab are located on the following page:
• “Lab 14 Synopsis: Configuring I/O Fencing,” page A-66

Solutions for this exercise are located on the following page:
• “Lab 14 Solutions: Configuring I/O Fencing,” page C-163

Lab 14: Configuring I/O Fencing

Work with your lab partner to configure fencing.


Coordinator Disks

nameDG1, nameDG2

Disk 1:___________________

Disk 2:___________________

Disk 3:___________________

Page 183: havcs-410-101 a-2-10-srt-pg_2

Lab 14: Configuring I/O Fencing B–113Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Lab Assignments

Working with your lab partner, use the following procedure and the information provided in the table to configure fencing for your cluster.

Object Sample Value Your Value

Disk assignments for coordinator disk group

cXtXdXsX
cXtXdXsX
cXtXdXsX

Disk group name oddfendg or evenfendg

/etc/vxfendg oddfendg or evenfendg

UseFence cluster attribute SCSI3

Page 184: havcs-410-101 a-2-10-srt-pg_2

B–114 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

1 Configure a disk group for the coordinator disks.

a Initialize three disks for use in the disk group.

b Display your cluster ID. Your cluster ID determines your coordinator disk group name.

c Initialize the disk group.
› If your cluster ID is odd, use oddfendg for the disk group name.
› If your cluster ID is even, use evenfendg for the disk group name.

Note: Replace the placeholder string "______fendg" with the appropriate odd or even coordinator disk name throughout the remainder of this lab.

d Deport the disk group.

2 Optional for the classroom: Use the vxfentsthdw utility to verify that the shared storage disks support SCSI-3 persistent reservations.

Notes:
– For the purposes of this lab, you do not need to test the disks. The disks used in this lab support SCSI-3 persistent reservations. The complete steps are given here as a guide for real-world use.

– To see how the command is used, you can run vxfentsthdw on a disk not in use; this will enable you to continue with the lab while the vxfentsthdw is running.

– Create a test disk group with one disk and run vxfentsthdw on that test disk group.

– Use the -r option to perform read-only testing of data disks.

3 Enter the coordinator disk group name in the /etc/vxfendg fencing configuration file on each system in the cluster.

4 Start the fencing driver on each system using the vxfen init script.

Configuring Disks and Fencing Driver

Page 185: havcs-410-101 a-2-10-srt-pg_2

Lab 14: Configuring I/O Fencing B–115Copyright © 2005 VERITAS Software Corporation. All rights reserved.


5 Verify that the /etc/vxfentab file has been created on each system and it contains a list of the coordinator disks.

6 Verify the setup of the coordinator disks.

a Verify that port b GAB membership is listed for both nodes.

b Verify that registrations are assigned to the coordinator disks.

c How many keys are present for each disk and why?
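Steps 1 through 6 of this section can be sketched as follows, assuming an odd cluster ID (use evenfendg otherwise) and placeholder disk names; the vxfen startup script path and the vxfenadm key-reading syntax can vary by platform and VCS version:

```shell
vxdisksetup -i disk_1                 # repeat for each of the three coordinator disks
vxdg init oddfendg disk_1 disk_2 disk_3
vxdg deport oddfendg
echo oddfendg > /etc/vxfendg          # on each system
/etc/init.d/vxfen start               # on each system (platform-dependent path)
gabconfig -a                          # port b membership should list both nodes
vxfenadm -g all -f /etc/vxfentab      # view registrations on the coordinator disks
```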

Page 186: havcs-410-101 a-2-10-srt-pg_2

B–116 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

1 Verify that you have a Storage Foundation Enterprise license installed on each system for fencing support using vxlicrep.

2 Working together, verify that the cluster configuration is saved and closed.

3 Change to the VCS configuration directory.

4 Make a subdirectory named test, if one does not already exist.

5 Copy the main.cf and types.cf files into the test subdirectory.

6 Change to the test directory.

7 Edit the main.cf file on that one system to set UseFence to SCSI3.

8 Verify the cluster configuration and correct any errors found.

9 Stop VCS and shut down the applications. The disk groups must be reimported for fencing to take effect.

10 Copy the main.cf file from the test subdirectory into the configuration directory.

11 Start the cluster from the system where you edited the configuration file.

12 Start the cluster in the stale state on the other system in the cluster (where the configuration was not edited).

13 Verify the status of the cluster.

14 Verify that the UseFence cluster attribute is set.

Configuring VCS for Fencing
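The UseFence change in steps 7 and 8 amounts to one line added to the cluster definition in the test copy of main.cf (cluster name and other attributes are site-specific and omitted here), followed by a syntax check:

```
cluster clus_name (
        ...
        UseFence = SCSI3
        )
```

hacf -verify . run in the test directory checks the edited configuration for errors.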

Page 187: havcs-410-101 a-2-10-srt-pg_2

Lab 14: Configuring I/O Fencing B–117Copyright © 2005 VERITAS Software Corporation. All rights reserved.


1 If the service groups with disk groups did not come online at cluster startup, bring them online now. This imports the disk groups, which initiates fencing on the data disks. Each student can perform these steps on their service groups.

2 Verify registrations and reservations on the data disks.

Verifying Data Disks for I/O Fencing

Page 188: havcs-410-101 a-2-10-srt-pg_2

B–118 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

In most cases, the following sections require that you work together with your lab partner to observe how fencing protects data in a variety of failure situations. Steps you can perform on your own are indicated within the procedure.

Scenario 1: Manual Concurrency Violation

Students can try this scenario on their own. Using vxdg with the -C option, try to import on one system a disk group that is already imported on the other system.

1 On the system where nameDG1 is not imported, attempt to manually import it clearing the host locks.

2 Were you successful? Describe why or why not.
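The forced import in step 1 can be sketched as follows; with fencing active, the attempt is expected to fail because the other node still holds SCSI-3 registrations and reservations on the data disks:

```shell
# -C clears host locks; fencing should still block the import
vxdg -C import nameDG1
```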

Scenario 2: Response to System Failure

Work with your lab partner to observe how VCS responds to system failures.

1 Verify that the nameSG1 and nameSG2 service groups are online on your system if two students are working on the cluster. If you are working alone, ensure that you have a service group online on each system. This scenario requires that disk groups be imported on each system. Switch them, if necessary.

2 Verify the registrations on the coordinator disks for both systems.

3 Verify the registrations and reservations on the data disks for the disk groups imported on each system.

4 Fail one of the systems by removing power or hard booting the system. Observe the failure.

5 Verify the registrations on the coordinator disks for the remaining system.

6 Verify that the service groups that were running on the failed system have failed over to the remaining system.

Testing Communication Failures

Page 189: havcs-410-101 a-2-10-srt-pg_2

Lab 14: Configuring I/O Fencing B–119Copyright © 2005 VERITAS Software Corporation. All rights reserved.


7 Verify that the registrations and reservations on the data disks are now for the remaining system.

8 Boot the failed system and observe it rejoin cluster membership. Verify cluster membership and verify that the coordinator disks have registrations for both systems again.

Scenario 3: Response to Interconnect Failures

Work with your lab partner to observe how VCS responds to cluster interconnect failures.

1 If you did not already perform this step in the “Testing Communication Failures” lab, copy the lltlink_enable and lltlink_disable utilities from the location provided by your instructor into the /tmp directory.

_____________________________________________________________

2 Change to the /tmp directory.

3 Change the NIC resource type MonitorInterval attribute to 3600 seconds temporarily for the purposes of communications testing. This prevents the NetworkNIC resource from faulting during this lab when the low-priority LLT link is pulled.

4 Verify that the nameSG1 and nameSG2 service groups are online on your system if two students are working on the cluster. If you are working alone, ensure that you have a service group online on each system. This scenario requires that one disk group be imported on each system. Switch the service groups, if necessary.

5 Verify the registrations on the coordinator disks for both systems.

6 Verify the registrations and reservations on the data disks for the disk groups imported on each system.

7 Using the lltlink_disable utility, remove all cluster interconnect links from one system. Watch for the link to expire in the console.

Page 190: havcs-410-101 a-2-10-srt-pg_2

B–120 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

8 Observe LLT and GAB timeouts and membership change.

9 What happens to the systems?

10 On one system, view the registrations for the coordinator disks.

11 What happens to the service groups?

12 Verify that the registrations and reservations on the data disks are now for the remaining system.

13 When the system that rebooted is running, check the status of GAB and HAD.

14 Verify that the coordinator disks have registrations for the remaining system only.

15 Recover the system that rebooted.

a Shut down the system.

b Reconnect the cluster interconnects.

c Reboot the system.

16 Verify that cluster membership has been established for both systems and both systems are now registered with the coordinator disks.

17 Set the NIC resource type MonitorInterval attribute back to 60 seconds.

Page 191: havcs-410-101 a-2-10-srt-pg_2

Lab 14: Configuring I/O Fencing B–121Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Note: Do not complete this section unless directed by your instructor.

1 Verify that the cluster configuration is saved and closed.

2 Stop VCS and all service groups.

3 Unconfigure the fencing driver.

4 From one system, import and remove the coordinator disk group.

5 Use the offline configuration procedure to set the UseFence cluster attribute to the value NONE in the main.cf file and restart the cluster with the new configuration. Note: You cannot set UseFence dynamically while VCS is running.

a Change to the configuration directory.

b Copy the main.cf file into the test subdirectory.

c Edit the main.cf file in the test directory on one system in the cluster to set the value of UseFence to NONE.

6 Verify the cluster configuration and correct any errors found.

7 Copy the main.cf file back into the /etc/VRTSvcs/conf/config directory.

8 Start the cluster from the system where you edited the configuration file.

9 Start the cluster in the stale state on the other system in the cluster (where the configuration was not edited).

10 Verify the status of the cluster.

Optional: Removing the Fencing Configuration

Page 192: havcs-410-101 a-2-10-srt-pg_2

B–122 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Page 193: havcs-410-101 a-2-10-srt-pg_2

Appendix C
Lab Solutions

Page 194: havcs-410-101 a-2-10-srt-pg_2

C–2 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Page 195: havcs-410-101 a-2-10-srt-pg_2

Lab 2 Solutions: Validating Site Preparation C–3Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Lab 2 Solutions: Validating Site Preparation

Page 196: havcs-410-101 a-2-10-srt-pg_2

C–4 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

In this lab, you work with your partner to prepare the systems for installing VCS.

Brief instructions for this lab are located on the following page:
• “Lab 2 Synopsis: Validating Site Preparation,” page A-2

Step-by-step instructions for this lab are located on the following page:
• “Lab 2: Validating Site Preparation,” page B-3

Lab AssignmentsFill in the following table with the applicable values for your lab cluster.

Object Sample Value Your Value

Your system host name your_sys

train1

Partner system host name their_sys

train2

name prefix for your objects

bob

Interconnect link 1: Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth2; VA: bge2

Visually inspect the classroom lab site. Complete and validate the design worksheet. Use the lab appendix best suited to your experience level:
• Appendix A: Lab Synopses
• Appendix B: Lab Details
• Appendix C: Lab Solutions

Lab 2: Validating Site Preparation

System Definition sample values: train1 and train2. See the next slide for lab assignments.

Page 197: havcs-410-101 a-2-10-srt-pg_2

Lab 2 Solutions: Validating Site Preparation C–5Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Interconnect link 2: Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth3; VA: bge3

Public network interface: Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth1; VA: bge0

Admin IP address for your_sys

192.168.xx.xxx

Admin IP address for their_sys

192.168.xx.xxx

Object Sample Value Your Value

Page 198: havcs-410-101 a-2-10-srt-pg_2

C–6 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

1 Verify that the Ethernet network interfaces for the two cluster interconnect links are cabled together using crossover cables.

Note: In actual implementations, each link should use a completely separate infrastructure (separate NIC and separate hub or switch). For simplicity of configuration in the classroom environment, the two interfaces used for the cluster interconnect are on the same NIC.

2 Verify that the public interface is cabled, as shown in the diagram.

Virtual Academy: Skip this step.

3 Determine the host name of the local system.

hostname

4 Determine the base IP address configured on the public network interface for both your system and your partner’s system.

ifconfig public_interface

Verifying the Network Configuration

Four Node—UNIX classroom diagram: Classroom LAN 192.168.XX, where XX=27, 28, or 29; SAN disk array and SAN tape library; systems train1 (192.168.XX.101) through train12 (192.168.XX.112) connected through hubs/switches; software share at 192.168.XX.100.


5 Verify that the public IP address of each system in your cluster is listed in the /etc/hosts file.

cat /etc/hosts

6 Test connectivity to your partner’s system on the public network.

ping public_IP_address
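Steps 5 and 6 can also be combined into a small script. This is a sketch, not part of the lab: the `check_nodes` function and its parameterized hosts-file argument are illustrative (so the logic can be tried against a copy of the file), and train1/train2 are the classroom sample names.

```shell
# Sketch: for each cluster node, confirm it has an entry in the given
# hosts file; the hosts-file path is a parameter so the check can be
# exercised against a test copy rather than only /etc/hosts.
check_nodes() {
  hosts_file=$1; shift
  for node in "$@"; do
    if grep -qw "$node" "$hosts_file"; then
      echo "$node: found in $hosts_file"
    else
      echo "$node: MISSING from $hosts_file"
      continue
    fi
    # Uncomment on a live system to test connectivity as in step 6:
    # ping "$node"
  done
}

check_nodes /etc/hosts train1 train2
```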


1 Check the PATH environment variable. If necessary, add the /sbin, /usr/sbin, /opt/VRTS/bin, and /opt/VRTSvcs/bin directories to your PATH environment variable.

echo $PATH | grep VRTSvcs

If you are using the Bourne shell (sh, ksh, or bash), use the following command:
PATH=/sbin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvcs/bin:$PATH; export PATH

If you are using the C shell (csh or tcsh), use the following command:
% setenv PATH /sbin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvcs/bin:$PATH
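For Bourne-style shells, the PATH update can also be written so that it is safe to re-run (for example, from a login script); a sketch, not part of the lab:

```shell
# Sketch: prepend each VCS-related directory to PATH only if it is
# absent, so sourcing this fragment repeatedly never duplicates entries.
# Looping from last to first preserves the order shown above:
# /sbin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvcs/bin:$PATH
for d in /opt/VRTSvcs/bin /opt/VRTS/bin /usr/sbin /sbin; do
  case ":$PATH:" in
    *:"$d":*) ;;            # already present; skip
    *) PATH=$d:$PATH ;;
  esac
done
export PATH
```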

2 Check the VERITAS licenses to determine whether a VERITAS Cluster Server license is installed.

vxlicrep -s

License Key = P2EE-TCBU-FSUN-NDOR-3JEP-CWEO
Product Name = VERITAS Cluster Server
License Type = DEMO
Demo End Date = Sat Nov 8 00:00:00 2003 (15.3 days from now)
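If you need to check the license type from a script rather than by eye, the report can be parsed; a sketch (`license_type` is a hypothetical helper, and the only assumption is the "Field = Value" layout shown in the sample above):

```shell
# Sketch: read vxlicrep-style "Field = Value" output on stdin and print
# the License Type that follows the matching Product Name line.
license_type() {
  awk -v prod="$1" '
    /Product Name/ && index($0, prod) { found = 1 }
    found && /License Type/          { sub(/.*= */, ""); print; exit }
  '
}

# Example against the sample report shown above:
vxlicrep_sample='License Key = P2EE-TCBU-FSUN-NDOR-3JEP-CWEO
Product Name = VERITAS Cluster Server
License Type = DEMO'
printf '%s\n' "$vxlicrep_sample" | license_type "VERITAS Cluster Server"
# prints: DEMO
```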

Other Checks


When you install any Storage Foundation product or VERITAS Volume Replicator, the VRTSalloc package (the VERITAS Volume Manager Intelligent Storage Provisioning feature) requires that the following Red Hat packages are installed:
• compat-gcc-c++-7.3-2.96.128
• compat-libstdc++-7.3-2.96.128

Version 7.3-2.96.128 is provided with Red Hat Enterprise Linux 3 Update 2 (i686).

To determine whether these library versions are installed, type:
# rpm -qi compat-gcc-c++

# rpm -qi compat-libstdc++
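The two queries can be wrapped in a loop that reports anything missing; a sketch (`check_pkgs` is illustrative, and the only assumption is that rpm -q exits nonzero for packages that are not installed):

```shell
# Sketch: report, for each required package, whether rpm believes it
# is installed; rpm -q exits nonzero for packages that are absent.
check_pkgs() {
  for pkg in "$@"; do
    if rpm -q "$pkg" >/dev/null 2>&1; then
      echo "$pkg: installed"
    else
      echo "$pkg: MISSING"
    fi
  done
}

check_pkgs compat-gcc-c++ compat-libstdc++
```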

Checking Packages—Linux Only


Verify that ssh configuration files are set up in order to install VCS on Linux or to run remote commands without prompts for passwords.

If you do not configure ssh, you are required to type in the root passwords for all systems for every remote command issued during the following services preparation lab and the installation procedure.

1 Log on to your system.

2 Generate a DSA key pair on this system by running the following command:

ssh-keygen -t dsa

3 Accept the default location of ~/.ssh/id_dsa.

4 When prompted, do not enter a passphrase.

5 Change the permissions of the .ssh directory by typing:

# chmod 755 ~/.ssh

6 The file ~/.ssh/id_dsa.pub contains a line beginning with ssh-dss and ending with the name of the system on which it was created.

a Copy this line to the /root/.ssh/authorized_keys2 file on all systems where VCS is to be installed.

b Ensure that you copy the line to the other systems in your cluster.

c To ensure easy accessibility, include all of the ssh-dss lines in the authorized_keys2 file on each system in the cluster. This allows commands to be run from any system to any system.
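The key distribution in step 6 can be made safe to repeat by appending the key line only when it is not already present; a sketch (`add_key` and its parameterized file arguments are illustrative; on the lab systems the files would be ~/.ssh/id_dsa.pub and /root/.ssh/authorized_keys2):

```shell
# Sketch: append the public-key line to an authorized_keys2 file only
# if that exact line is not already there, so re-runs never duplicate.
add_key() {
  pubkey_file=$1
  auth_file=$2
  mkdir -p "$(dirname "$auth_file")"
  if ! grep -qxF "$(cat "$pubkey_file")" "$auth_file" 2>/dev/null; then
    cat "$pubkey_file" >> "$auth_file"
  fi
}
```

On a live system this would be run once per node after copying the id_dsa.pub file over (for example, with scp).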

Configuring Secure Shell—Linux Only


If you do not want to use ssh with automatic login using saved passphrases on a regular basis, run the following commands at the command line. This is in effect only for this session.

exec /usr/bin/ssh-agent $SHELL

ssh-add

To save your passphrase during your GNOME session, follow these steps.

1 The openssh-askpass-gnome package should be loaded on your system. To confirm this, type:

rpm -q openssh-askpass-gnome

If it is not installed, see your instructor.

2 If you do not have a $HOME/.Xclients file (you should not have one after installation), run switchdesk to create it. In your $HOME/.Xclients file, edit the following:

exec $HOME/.Xclients-default

Change the line so that it reads:

exec /usr/bin/ssh-agent $HOME/.Xclients-default

3 From the Red Hat icon, select Preferences—>More Preferences—>Sessions.

a Click the Startup Programs tab, click Add, and enter /usr/bin/ssh-add in the Startup Command text area.

b Set the priority to a number higher than any existing commands to ensure that it is executed last. A good priority number for ssh-add is 70 or higher. The higher the priority number, the lower the priority. If you have other programs listed, this one should have the lowest priority.

c Click OK to save your settings, and exit the GNOME Control Center.

4 Log out and then log back into GNOME; in other words, restart X.

After GNOME is started, a dialog box is displayed, prompting for your passphrases. Enter the passphrase requested. If you have both DSA and RSA key pairs configured, you are prompted for both. From this point on, you should not be prompted for a password by ssh, scp, or sftp.

For more information, see the Linux Customization Guide.


1 Open a console window so you can observe messages during later labs.

a Select Run from the main menu or Gnome foot icon.

b Type this command:

xterm -C -fg white -bg black -sl 2000 &

This opens a console window with a white foreground, a black background, and a scroll line buffer of 2000 lines.

2 Open a System Log Display tool.

From the Red Hat icon, select System Tools—>System Logs.

Setting Up a Console Window—Linux Only


Lab 3 Solutions: Installing VCS


In this lab, work with your lab partner to install VCS on both systems.

Brief instructions for this lab are located on the following page:• “Lab 3 Synopsis: Installing VCS,” page A-6

Step-by-step instructions for this lab are located on the following page:• “Lab 3: Installing VCS,” page B-11

Obtaining Classroom Information
Use the following table to collect information you need to install VCS. Your instructor may also ask you to install VERITAS Volume Manager and VERITAS File System.

Lab 3: Installing VCS

[Slide: two systems, train1 and train2, form cluster vcs1.
For VCS 4.x, run # ./installer; for pre-4.0, run # ./installvcs.
Worksheet blanks: Link 1: ______  Link 2: ______  Public: ______
Subnet: _______  Software location: _______________________________]


Cluster Definition
These values define cluster properties and are required to install VCS.

Attributes/Properties                      Sample Value             Your Values
Node names, cluster name, and cluster ID   train1 train2 vcs1 1

Classroom assignments:
train1 train2: vcs1, ID 1
train3 train4: vcs2, ID 2
train5 train6: vcs3, ID 3
train7 train8: vcs4, ID 4
train9 train10: vcs5, ID 5
train11 train12: vcs6, ID 6

Cluster interconnect: Ethernet interface for interconnect link #1
Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth1; VA: bge2

Ethernet interface for interconnect link #2
Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth2; VA: bge3

Public network interface
Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0


Web GUI IP Address:
train1 train2: 192.168.xxx.91
train3 train4: 192.168.xxx.92
train5 train6: 192.168.xxx.93
train7 train8: 192.168.xxx.94
train9 train10: 192.168.xxx.95
train11 train12: 192.168.xxx.96

Subnet Mask: 255.255.255.0
Network interface: Solaris: eri0; Sol Mob: dmfe0; AIX: en0; HP-UX: lan0; Linux: eth0; VA: bge0
Network Hosts (HP-UX only): see instructor

Installation software location: install_location

License

Administrator account
Name: admin
Password: password



1 Obtain the location of the installation software from your instructor.

Installation software directory:

_____________________________________________________________ install_dir

2 This step is to be performed from only one system in the cluster. The install script installs and configures all systems in the cluster.

a Change to the install directory.

cd install_dir

b Run the installer script (VERITAS Product Installer) located in the directory specified above.

Notes:
› For VCS 4.x, install Storage Foundation HA (which includes VCS, Volume Manager, and File System).
› Use the information in the previous table or design worksheet to respond to the installation prompts.
› Sample prompts and input are provided at the end of the lab.
› For versions of VCS before 4.0, use installvcs.

c If a license key is needed, obtain one from your instructor and record it here.

License Key: _________________________________

d Install all optional packages (including Web console and Simulator).

e Accept the default of Y to configure VCS.

f Do not configure a third heartbeat link at this time.

g Do not configure a low-priority heartbeat link at this time.

h Do not configure VERITAS Security Services.

i Do not set any user names or passwords.

Installing VERITAS Cluster Server Software


j Retain the default admin user account and password.

k Configure the Cluster Server Cluster Manager.

l Do not configure SMTP Notification.

m Do not configure SNMP Notification.

n Select the option to install all packages simultaneously on all systems.

o Do not set up enclosure-based naming for Volume Manager.

p Start Storage Foundation Enterprise HA processes.

q Do not set up a default disk group.

3 If you did not install the Java GUI package as part of the installer (VPI) process (or installvcs for earlier versions of VCS), install the VRTScscm Java GUI package on each system in the cluster. The location of this package is in the pkgs directory under the install location directory given to you by your instructor.

Solaris
pkgadd -d /install_dir/cluster_server/pkgs VRTScscm

HP-UX
swinstall -s /install_dir/cluster_server/pkgs VRTScscm

AIX
installp -a -d /install_dir/cluster_server/pkgs/VRTScscm.rte.bff VRTScscm.rte

Linux
rpm -ihv VRTScscm-4.1.00.0-GA_GENERIC.noarch.rpm


1 If your instructor indicates that additional software, such as VCS patches or updates, is required, obtain the location of the installation software from your instructor.

Installation software location:

_______________________________________ install_dir

2 Install any VCS patches or updates, as directed by your instructor. Use the operating system-specific command, as shown in the following examples.

Solaris
pkgadd -d /install_dir/pkgs VRTSxxxx

HP-UX
swinstall -s /install_dir/pkgs VRTSxxxx

AIX
installp -a -d /install_dir/pkgs/VRTSxxxx.rte.bff VRTSxxxx.rte

Linux
rpm -ihv VRTSxxxx-x.x.xx.xx-GA_RHEL.i686.rpm

3 Install any other software indicated by your instructor. For example, if your classroom uses VCS 3.5, you may be directed to install VERITAS Volume Manager and VERITAS File System.

Installing Other Software


1 Verify that VCS is now running using hastatus.

hastatus -sum

If hastatus -sum shows the cluster systems in a running state and a ClusterService service group is online on one of your cluster systems, VCS has been properly installed and configured.

2 Perform additional verification (generally only necessary if there is a problem displayed by hastatus -sum).

a Verify that all packages are loaded.

Solaris
pkginfo | grep -i vrts

AIX
lslpp -L | grep -i vrts

HP-UX
swlist | grep -i vrts

Linux
rpm -qa | grep VRTS

b Verify that LLT is running.

lltconfig

c Verify that GAB is running.

gabconfig -a
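The gabconfig -a output can also be checked from a script: a healthy node shows memberships for port a (GAB) and port h (the VCS engine). A sketch (`gab_ok` is illustrative, and the only assumption is the usual "Port X gen ... membership ..." line format):

```shell
# Sketch: given gabconfig -a style output, confirm that memberships
# are present for both port a (GAB) and port h (the VCS engine).
gab_ok() {
  out=$1
  if echo "$out" | grep -q '^Port a ' &&
     echo "$out" | grep -q '^Port h '; then
    echo "membership OK"
  else
    echo "membership incomplete"
  fi
}
```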

Viewing VERITAS Cluster Server Installation Results


View the configuration files set up by the VCS installation procedure.

1 Explore the LLT configuration.

a Verify that the cluster ID, system names, and network interfaces specified during install are present in the /etc/llttab file.

cat /etc/llttab

b Verify the system names in the /etc/llthosts file.

cat /etc/llthosts

2 Explore the GAB configuration.

Verify that the number of systems in the cluster matches the value for the -n flag set in the /etc/gabtab file.

cat /etc/gabtab
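Steps 1b and 2 can be cross-checked mechanically: /etc/llthosts has one line per system, and the -n value in /etc/gabtab should match that count. A sketch (`check_seed` and its parameterized paths are illustrative, so the logic can be tried on copies of the files):

```shell
# Sketch: compare the node count in an llthosts-style file (one line
# per system) with the -n seed value in a gabtab-style file.
check_seed() {
  llthosts=$1
  gabtab=$2
  nodes=$(wc -l < "$llthosts")
  seed=$(sed -n 's/.*-n *\([0-9][0-9]*\).*/\1/p' "$gabtab")
  if [ "$nodes" -eq "${seed:-0}" ]; then
    echo "OK: $nodes systems, seed $seed"
  else
    echo "MISMATCH: $nodes systems, seed $seed"
  fi
}
```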

3 Explore the VCS configuration files.

Verify the cluster name, system names, and IP address for the Cluster Manager in the /etc/VRTSvcs/conf/config/main.cf file.

cat /etc/VRTSvcs/conf/config/main.cf

Exploring the Default VCS Configuration


Verify GUI connectivity with the Java GUI and the Web GUI. Both GUIs can connect to the cluster using the default user name admin and the default password password.

1 Use a Web browser to connect to the Web GUI.
– The URL is http://ipaddress:8181/vcs.
– The IP address is given in the design worksheet and was entered during installation to configure the Cluster Manager.

2 Start the Java GUI and connect to the cluster using these values:
– Cluster alias: name
– Host name: ip_address (used during installation)
– Failover retries: 12 (retain default)

hagui &
Select File—>New Cluster.

3 Browse the cluster configuration.

Verifying Connectivity with the GUIs


VERITAS Storage Foundation and High Availability Solutions 4.1

VERITAS Licensing utilities are not installed on this system.

Product menu cannot be displayed until VERITAS Licensing utilities are installed.

Selection Menu:

I) Install/Upgrade a Product C) Configure an Installed Product

L) License a Product P) Perform a Preinstallation Check

U) Uninstall a Product D) View a Product Description

Q) Quit ?) Help

Enter a Selection: [I,C,L,P,U,D,Q,?] I

VERITAS Storage Foundation and High Availability Solutions 4.1

1) VERITAS Cluster Server

2) VERITAS File System

3) VERITAS Volume Manager

4) VERITAS Volume Replicator

5) VERITAS Storage Foundation, Storage Foundation for Oracle, Storage

Foundation for DB2, and Storage Foundation for Sybase

6) VERITAS Storage Foundation Cluster File System

7) VERITAS Storage Foundation for Oracle RAC

B) Back to previous menu

Select a product to install: [1-7,b,q] 5

Sample Installation Answers


Enter the system names separated by spaces on which to install SF: train1 train2

Checking system communication:

Checking OS version on train1 ............................... SunOS 5.9

Verifying communication with train2 ................... ping successful

Attempting rsh with train2 ............................. rsh successful

Attempting rcp with train2 ............................. rcp successful

Checking OS version on train2 ............................... SunOS 5.9

Creating log directory on train2 ................................. Done

Logs for installer are being created in /var/tmp/installer131114423.

Using /usr/bin/rsh and /usr/bin/rcp to communicate with remote systems.

Initial system check completed successfully.

VERITAS Infrastructure package installation:

Installing VERITAS Infrastructure packages on train1:

Checking VRTScpi package ................................ not installed

. . .

Each system requires a SF product license before installation. License keys for additional product features should also be added at this time.


Some license keys are node locked and are unique per system. Other license keys, such as demo keys and site license keys, are registered on all systems and must be entered on the first system.

SF Licensing Verification:

Checking SF license key on train1 ........................ not licensed

Enter a SF license key for train1: [?] RRPE-BDP6-DRME-NRFS-O47X-CNNN-3C

Registering VERITAS Storage Foundation Enterprise HA DEMO key on train1

Do you want to enter another license key for train1? [y,n,q,?] (n) n

Registering RRPE-BDP6-DRME-NRFS-O47X-CNNN-3C on train2

Checking SF license key on train2 ... Storage Foundation Enterprise HA Demo

Do you want to enter another license key for train2? [y,n,q,?] (n) n

SF licensing completed successfully.

installer can install the following optional SF packages:

VRTSobgui VERITAS Enterprise Administrator

VRTSvmman VERITAS Volume Manager Manual Pages

. . .

1) Install all of the optional packages

2) Install none of the optional packages

3) View package descriptions and select optional packages

Select the optional packages to be installed on all systems? [1-3,q,?] (1) 1

. . .


Installation requirement checks completed successfully.

Press [Return] to continue:

It is possible to install SF packages without performing configuration.

SF cannot be started without proper configuration.

It is optional to configure SF now. If you choose to configure SF later, you can either do so manually or run the installsf -configure command.

Are you ready to configure SF? [y,n,q] (y) y

installer will now ask sets of SF configuration-related questions.

To configure VCS for SF the following is required:

A unique Cluster name

A unique Cluster ID number between 0-255

Two or more NIC cards per system used for heartbeat links

One or more heartbeat links are configured as private links

One heartbeat link may be configured as a low priority link

All systems are being configured to create one cluster

Enter the unique cluster name: [?] vcs1

Enter the unique Cluster ID number between 0-255: [b,?] 1

Discovering NICs on train3 ........ discovered eri0 qfe0 qfe1 qfe2 qfe3


Enter the NIC for the first private heartbeat NIC on train3: [b,?] qfe0

Enter the NIC for the second private heartbeat NIC on train3: [b,?] qfe1

Would you like to configure a third private heartbeat link? [y,n,q,b,?] (n) n

Do you want to configure an additional low priority heartbeat link?[y,n,q,b,?] (n) n

Are you using the same NICs for private heartbeat links on all systems?[y,n,q,b,?] (y) y

Cluster information verification:

Cluster Name: vcs1

Cluster ID Number: 1

Private Heartbeat NICs for train1: link1=qfe0 link2=qfe1

Private Heartbeat NICs for train2: link1=qfe0 link2=qfe1

Is this information correct? [y,n,q] (y) y

Storage Foundation can be configured to utilize VERITAS Security Services.

Running VCS in Secure Mode guarantees that all inter-system communication is

encrypted and that users are verified with security credentials.

When running VCS in Secure Mode, NIS and system usernames and passwords are used to verify identity. VCS usernames and passwords are no longer utilized when a cluster is running in Secure Mode.

Before configuring a cluster to operate using VERITAS Security Services, another system must already have VERITAS Security Services installed and be operating as a Root Broker. Refer to the Cluster Server Installation Guide for more information on configuring a VxSS Root Broker.


Would you like to configure SF to use VERITAS Security Services? [y,n,q] (n) n

VERITAS STORAGE FOUNDATION 4.1 INSTALLATION PROGRAM

The following information is required to add VCS users:

A user name

A password for the user

User privileges (Administrator, Operator, or Guest)

Do you want to set the username and/or password for the Admin user

(default username = 'admin', password='password')? [y,n,q] (n) n

Do you want to add another user to the cluster? [y,n,q] (y) n

VCS User verification:

User: admin Privilege: Administrators

Passwords are not displayed

Is this information correct? [y,n,q] (y) Y


The following information is required to configure Cluster Manager:

A public NIC used by each system in the clusterA Virtual IP address and netmask for Cluster Manager

Do you want to configure Cluster Manager (Web Console) [y,n,q] (y) y

Active NIC devices discovered on train3: eri0

Enter the NIC for Cluster Manager (Web Console) to use on train3: [b,?] (qfe1) eri0

Is eri0 to be the public NIC used by all systems [y,n,q,b,?] (y) y

Enter the Virtual IP address for Cluster Manager: [b,?] 192.168.XXX.XXX

Enter the netmask for IP 192.168.XXX.XXX: [b,?] (255.255.255.0) 255.255.255.0

Cluster Manager (Web Console) verification:

NIC: eri0

IP: 192.168.27.91

Netmask: 255.255.255.0

Is this information correct? [y,n,q] (y)

The following information is required to configure SMTP notification:

The domain-based hostname of the SMTP server

The e-mail address of each SMTP recipient

A minimum severity level of messages to send to each recipient

Do you want to configure SMTP notification? [y,n,q] (y) n

The following information is required to configure SNMP notification:


System names of SNMP consoles to receive VCS trap messages

SNMP trap daemon port numbers for each console

A minimum severity level of messages to send to each console

Do you want to configure SNMP notification? [y,n,q] (y) n

SF can be installed on systems consecutively or simultaneously. Installing on systems consecutively takes more time but allows for better error handling.

Would you like to install Storage Foundation Enterprise HA on all systems

simultaneously? [y,n,q,?] (y) y

Installing Storage Foundation Enterprise HA 4.1 on all systems simultaneously:

Copying VRTSperl.tar.gz to train2 ................. Done 1 of 123 steps

Installing VRTSperl 4.0.12 on train1 .............. Done 2 of 123 steps

. . .

The enclosure-based naming scheme is a feature of Volume Manager. It allows one to reference disks using a symbolic name that is more meaningful than the operating system's normal device access name. This symbolic name is typically derived from the array name.

Do you want to set up the enclosure-based naming scheme? [y,n,q,?] (n) n

Do you want to start Storage Foundation Enterprise HA processes now? [y,n,q] (y) y

Note: The vxconfigd daemon will be started, which can take a while depending

upon the hardware configuration.


Disabling enclosure-based naming on train1 ....................... Done

Starting vxconfigd for VxVM on train1 ......................... Started

Disabling enclosure-based naming on train2 ....................... Done

Starting vxconfigd for VxVM on train2 ......................... Started

Starting Cluster Server:

Starting LLT on train1 ........................................ Started

Starting LLT on train2 ........................................ Started

Starting GAB on train1 ........................................ Started

Starting GAB on train2 ........................................ Started

Starting Cluster Server on train1 ............................. Started

Starting Cluster Server on train2 ............................. Started

Confirming Cluster Server startup ................... 2 systems RUNNING

Volume Manager default disk group configuration:

Many Volume Manager commands affect the contents or configuration of a disk group. Such commands require that the user specify a disk group. This is accomplished by using the -g option of a command or setting the VXVM_DEFAULTDG environment variable. An alternative to these two methods is to configure the default disk group of a system.

Do you want to set up the default disk group for each system? [y,n,q,?] (y) n

Volume Manager default disk group setup and daemon startup


You declined to set up the default disk group for train1.

Starting vxcached on train1 ................................... Started

You declined to set up the default disk group for train2.

Starting vxcached on train2 ................................... Started

Storage Foundation Enterprise HA was started successfully.

Press [Return] to continue:

Installation of Storage Foundation Enterprise HA 4.1 has completed successfully.

The installation summary is saved at:

/opt/VRTS/install/logs/installer131114527.summary

The installer log is saved at:

/opt/VRTS/install/logs/installer131114527.log

The installation response file is saved at:

/opt/VRTS/install/logs/installer131114527.response

Reboot all systems on which VxFS was installed or upgraded.

shutdown -y -i6 -g0

See the VERITAS File System Administrator's Guide for information on using VxFS.


include "types.cf"

cluster vcs (

UserNames = { admin = ElmElgLimHmmKumGlj }

CredRenewFrequency = 0

ClusterAddress = "192.168.27.91"

Administrators = { admin }

CounterInterval = 5

)

system train1 (

)

system train2 (

)

group ClusterService (

SystemList = { train1 = 0, train2 = 1 }

AutoStartList = { train1, train2 }

OnlineRetryLimit = 3

)

IP webip (

Device = eri0

Address = "192.168.27.42"

NetMask = "255.255.255.0"

)

NIC csgnic (

Device = eri0

)

VRTSWebApp VCSweb (

Sample Configuration File


Critical = 0

AppName = vcs

InstallDir = "/opt/VRTSweb/VERITAS"

TimeForOnline = 5

)

VCSweb requires webip

webip requires csgnic


Lab 4 Solutions: Using the VCS Simulator


This lab uses the VERITAS Cluster Server Simulator and the Cluster Manager Java Console. You are provided with a preconfigured main.cf file to learn about managing the cluster.

Brief instructions for this lab are located on the following page:• “Lab 4 Synopsis: Using the VCS Simulator,” page A-18

Step-by-step instructions for this lab are located on the following page:• “Lab 4: Using the VCS Simulator,” page B-21

Obtaining Classroom Information
Use the following table to record the values for your classroom.

Attribute                    Sample Value   Your Value
Port                         15559
VCS user account/password    oper/oper

Lab 4: Using the VCS Simulator
1. Start the Simulator Java GUI: hasimgui &
2. Add a cluster.
3. Copy the preconfigured main.cf file to the new directory.
4. Start the cluster from the Simulator GUI.
5. Launch the Cluster Manager Java Console.
6. Log in using the VCS account oper with password oper. This account demonstrates different privilege levels in VCS.

See the next slide for classroom values.


File Locations

Type of File                       Location
Lab main.cf file                   cf_files_dir
Local Simulator config directory   sim_config_dir


1 Add /opt/VRTScssim/bin to your PATH environment variable after any /opt/VRTSvcs/bin entries, if it is not already present.

PATH=$PATH:/opt/VRTScssim/bin
export PATH

2 Set the VCS_SIMULATOR_HOME environment variable to /opt/VRTScssim, if it is not already set.

VCS_SIMULATOR_HOME=/opt/VRTScssim
export VCS_SIMULATOR_HOME

3 Start the Simulator GUI.

hasimgui &

Starting the Simulator on UNIX


4 Add a cluster.

Click Add Cluster.

5 Use these values to define the new simulated cluster:
– Cluster Name: vcs_operations
– System Name: S1
– Port: 15559
– Platform: Solaris
– WAC Port: -1

6 In a terminal window, change to the simulator configuration directory for the new simulated cluster named vcs_operations.

cd /opt/VRTScssim/vcs_operations/conf/config

Use this directory in place of the sim_config_dir variable elsewhere in the lab.


7 Copy the main.cf, types.cf, and OracleTypes.cf files provided by your instructor into the vcs_operations simulation configuration directory.

Source location of main.cf, types.cf, and OracleTypes.cf files:

___________________________________________ cf_files_dir

cp cf_files_dir/main.cf /opt/VRTScssim/vcs_operations/conf/config
cp cf_files_dir/types.cf /opt/VRTScssim/vcs_operations/conf/config
cp cf_files_dir/OracleTypes.cf /opt/VRTScssim/vcs_operations/conf/config
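The three copies can also be done in one loop that fails fast if a file is missing; a sketch (`copy_cfs` is illustrative, and cf_files_dir stands for the instructor-supplied source directory, as in the lab):

```shell
# Sketch: copy the lab's three configuration files from the source
# directory into the simulated cluster's config directory, stopping
# with an error if any source file is absent.
copy_cfs() {
  src=$1
  dest=$2
  for f in main.cf types.cf OracleTypes.cf; do
    if [ ! -f "$src/$f" ]; then
      echo "missing: $src/$f"
      return 1
    fi
    cp "$src/$f" "$dest/"
  done
}

# e.g.: copy_cfs cf_files_dir /opt/VRTScssim/vcs_operations/conf/config
```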

8 From the Simulator GUI, start the vcs_operations cluster.

Select vcs_operations under Cluster Name.
Click Start Cluster.


9 Launch the VCS Java Console for the vcs_operations simulated cluster.

Select vcs_operations under Cluster Name.
Click Launch Console.

10 Log in as oper with password oper.

Note: While you may use admin/password to log in, the point of using oper is to demonstrate the differences in privileges between VCS user accounts.


11 Notice the cluster name is now VCS. This is the cluster name specified in the new main.cf file you copied into the config directory.


1 How many systems are members of the cluster?

Three.
With the Cluster object name selected in the left-hand frame of the Cluster Manager, click on the Status tab in the right-hand frame. Notice the Systems-> indicator and count the number of named columns, that is, one for each cluster member.

2 Determine the status of all service groups.

With the Cluster object name selected in the left-hand frame of the Cluster Manager, click on the Status tab in the right frame. The service groups with their names as labels are shown.

3 Which service groups have service group operator privileges set for the oper account?

AppSG and OracleSG. For each service group:
a Click on the service group name in the left-hand frame of the Cluster Manager.
b Click on the Properties tab.
c Click on the Show all attributes button.
d Scroll down and observe the value of the Operators service group attribute. You may have to click on the -> symbol in the Value column of the display panel. The value should be oper, which is the user name with which you are logged in to Cluster Manager.
e Close the Show all attributes panel.

Viewing Status and Attributes

Service Group     Status on S1   Status on S2   Status on S3
AppSG             Online         Offline        Offline
OracleSG          Offline        Online         Offline
ClusterService    Online         Offline        Offline


4 Which resources in the AppSG service group have the Critical resource attribute enabled?

AppNIC, AppIP, and AppMount. Mouse over each resource to see its status.
Alternately:
a Click on the AppSG service group name in the left-hand frame of the Cluster Manager.
b Click on the Resources tab in the right-hand frame. For each resource shown in the dependency tree, right-click on the resource and observe whether the Critical menu item is checked or not. A checked Critical menu item indicates that the resource is set to critical.

5 Which resource is the top-most parent in the OracleSG service group?

OraListener.
a Click on the OracleSG service group name in the left-hand frame of the Cluster Manager.
b Click on the Resources tab in the right-hand frame.
c Observe the top-most parent resource in the resource dependency tree.

6 Which immediate child resources does the Oracle resource in the OracleSG service group depend on?

OraMount.
a Click on the OracleSG service group name in the left-hand frame of the Cluster Manager.
b Click on the Resources tab in the right-hand frame.
c Observe the child resources in the resource dependency tree for the dependent parent resource named Oracle.


1 Attempt to take the ClusterService group offline on S1.

Right-click on the ClusterService service group name in the left-hand panel of the Cluster Manager.

What happens?

There is no Offline menu selection. You cannot take the service group offline because you do not have privileges for this service group.

2 Attempt to take the AppSG service group offline on S1.

Right-click on the AppSG service group, select Offline, and click S1.

What happens?

The Offline selection is displayed for this service group and you can take the group offline because you have privileges for this service group.

3 Attempt to take the Oracle service group offline on S1.

Right-click on the OracleSG service group, select Offline, and click S1.

What happens?

OracleSG is currently online on system S2, so it is already offline on S1.

4 Take all service groups that you have privileges for offline everywhere.

For each service group for which you have privileges (AppSG and OracleSG) and that is not already offline everywhere:
a Right-click the service group.
b Select the Offline menu option and click All Systems.

Note: The Simulator attempts to represent a real-world cluster environment, so resources may take some time to change state (offline/online). Wait for service groups to show as fully offline or online before attempting further operations.

Manipulating Service Groups


5 Bring the AppSG service group online on S2.

a Right-click on the AppSG service group.
b Select the Online menu option and click S2.

6 Bring the OracleSG service group online on S1.

a Right-click on the OracleSG service group.
b Select the Online menu option and click S1.

7 Switch service group AppSG to S1.

a Right-click on the AppSG service group.
b Select Switch To and click S1.

8 Switch the OracleSG service group to S2.

a Right-click on the OracleSG service group.
b Select the Switch To menu option and click S2.

9 Bring all service groups that you have privileges for online on S3.

a Right-click on the AppSG service group.
b Select the Switch To menu option and click S3.
c Right-click on the OracleSG service group.
d Select the Switch To menu option and click S3.


1 Attempt to take the OraListener resource in OracleSG offline on system S3.

a Click on the OracleSG service group.
b Click on the Resources tab in the right-hand frame.
c Right-click on the OraListener resource, select the Offline menu option, and click on system name S3.

What happens to the OracleSG service group?

The status shows as partially online on system S3. OracleSG does not fail over; taking a resource offline does not cause failover.

2 Bring the OraListener resource online on system S3.

Right-click on the OraListener resource, select the Online menu option, and click on system name S3.

3 Attempt to take the OraMount resource offline on system S3.

a Click on the S3 system name in the lower portion of the right-hand frame.
b Right-click on the OraMount resource, select the Offline menu option, and click on system name S3.

What happens?

You cannot take OraMount offline because a dependent resource (Oracle) is online.

Manipulating Resources


4 Attempt to bring only the OraListener resource online on S1.

Right-click on the OraListener resource, select the Online menu option, and click on system S1.

What happens?

You cannot bring OraListener online on S1 because OracleSG is a failover service group that is already online on S3.

5 Fault the Oracle resource in the OracleSG service group.

Right-click on the Oracle resource and select Fault resource.

6 What happens to the service group and resource?

– The resource is marked faulted (red x).
– The service group is shown as faulted on S3 in the bottom row showing each system. The S3 icon is surrounded by a red box.
– The service group is brought offline on S3.
– The service group is failed over to another system and brought online.

7 View the log entries to see the sequence of events.

Click the exclamation point icon in the toolbar.

8 Attempt to switch the OracleSG service group back to S3.

What happens?

a Right-click the OracleSG service group.
b Choose Switch To from the menu.

S3 is not available. You cannot switch a group to a system where it is faulted.

9 Clear the fault on the Oracle resource in the OracleSG service group.

a Right-click on Oracle.
b Choose Clear Fault from the menu.
c Choose S3.
The fault is now cleared.


10 Switch the OracleSG service group back to S3.

a Right-click on the OracleSG service group.
b Choose Switch To from the menu.
c Choose S3.
The group should return to S3.

11 Save and close the configuration.

Select File—>Close configuration.

12 Log off from the GUI.

Select File—>Log Out.

13 Stop the simulator from the Simulator Java Console.

Select the vcs_operations cluster and click Stop Cluster.


Lab 5 Solutions: Preparing Application Services


The purpose of this lab is to prepare the loopy process service for high availability.

Brief instructions for this lab are located on the following page:• “Lab 5 Synopsis: Preparing Application Services,” page A-24

Step-by-step instructions for this lab are located on the following page:• “Lab 5: Preparing Application Services,” page B-29

Lab Assignments

Fill in the table with the applicable values for your lab cluster.

Object                            Sample Value                      Your Value
Your system host name             your_sys (train1)
Partner system host name          their_sys (train2)
Name prefix for your objects      name
Disk assignment for disk group    disk_dev (Solaris: c#t#d#; AIX: hdisk##; HP-UX: c#t#d#; Linux: sd##)
Disk group name                   nameDG1

[Slide: Lab 5: Preparing Application Services. The diagram shows two systems, each with a disk/LUN, a NIC, a virtual IP address, and a loopy script running a while true; do echo "…"; done loop: disk1 holds disk group bobDG1 with volume bobVol1 mounted at /bob1 and the script /bob1/loopy; disk2 holds disk group sueDG1 with volume sueVol1 mounted at /sue1 and the script /sue1/loopy. See next slide for classroom values.]


Volume name                       nameVol1
Mount point                       /name1
Public network interface          interface (Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0)
IP address                        ipaddress (per system, below)
Application script location       class_sw_dir

train1 192.168.xxx.51
train2 192.168.xxx.52
train3 192.168.xxx.53
train4 192.168.xxx.54
train5 192.168.xxx.55
train6 192.168.xxx.56
train7 192.168.xxx.57
train8 192.168.xxx.58
train9 192.168.xxx.59
train10 192.168.xxx.60
train11 192.168.xxx.61
train12 192.168.xxx.62


1 Verify disk availability for Volume Manager.

vxdisk list

2 Determine whether any disks are already in use in disk groups.

vxdisk -o alldgs list

3 Initialize a disk for Volume Manager using the disk device from the worksheet.

vxdisksetup -i disk_device

4 Create a disk group with the name from the worksheet using the initialized disk.

vxdg init nameDG1 nameDG101=disk_device

5 Create a 2 GB volume in the disk group.

vxassist -g nameDG1 make nameVol1 2g

6 Create a file system on the volume.

Solaris
mkfs -F vxfs /dev/vx/rdsk/nameDG1/nameVol1
HP-UX
mkfs -F vxfs /dev/vx/rdsk/nameDG1/nameVol1
AIX
mkfs -V vxfs /dev/vx/rdsk/nameDG1/nameVol1
Linux
mkfs -t vxfs /dev/vx/rdsk/nameDG1/nameVol1

7 Create a mount point on each system in the cluster.

All
mkdir /name1
Solaris, AIX
rsh their_sys mkdir /name1
HP-UX
remsh their_sys mkdir /name1
Linux
ssh their_sys mkdir /name1

Configuring Storage for an Application


8 Mount the file system on your cluster system.

Solaris
mount -F vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1
HP-UX
mount -F vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1
AIX
mount -V vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1
Linux
mount -t vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1

9 Verify that the file system is mounted on your system.

mount | grep name1
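The storage steps above can be gathered into a single script for repeat runs. This is only a sketch: RUN=echo makes it a dry run that prints each command for review, and the disk, group, volume, and mount-point names are hypothetical examples, not values from this guide.

```shell
# Dry-run sketch of the storage setup (Solaris mkfs/mount forms shown).
# RUN=echo prints commands instead of executing them; clear RUN only on
# a real lab system. All names below are hypothetical worksheet values.
RUN=${RUN:-echo}
DISK=c1t1d0 DG=bobDG1 VOL=bobVol1 MNT=/bob1
$RUN vxdisksetup -i $DISK
$RUN vxdg init $DG ${DG}01=$DISK
$RUN vxassist -g $DG make $VOL 2g
$RUN mkfs -F vxfs /dev/vx/rdsk/$DG/$VOL
$RUN mkdir -p $MNT
$RUN mount -F vxfs /dev/vx/dsk/$DG/$VOL $MNT
```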


Complete the following steps to set up a virtual IP address for the application.

1 Verify that an IP address exists on the base interface for the public network.

Solaris, AIX, Linux
ifconfig -a
HP-UX
netstat -i

2 Configure a virtual IP address on the public network interface. Use the IP address from the design worksheet.

Solaris
ifconfig interface addif ipaddress up
AIX
ifconfig interface inet ipaddress netmask mask alias
HP-UX
ifconfig interface inet ipaddress
Linux
ifconfig interface add ipaddress

3 Verify that the virtual IP address is configured.

ifconfig -a
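The per-OS ifconfig forms from step 2 can be selected automatically. This is a sketch with a hypothetical interface, address, and netmask; RUN=echo keeps it a dry run so the chosen command is printed, not executed.

```shell
# Dry-run sketch: pick the ifconfig form matching the local OS.
# IFACE/ADDR/MASK are hypothetical stand-ins for worksheet values.
RUN=${RUN:-echo}
IFACE=eri0 ADDR=192.168.1.51 MASK=255.255.255.0
case $(uname -s) in
  SunOS) $RUN ifconfig $IFACE addif $ADDR up ;;
  AIX)   $RUN ifconfig $IFACE inet $ADDR netmask $MASK alias ;;
  HP-UX) $RUN ifconfig $IFACE inet $ADDR ;;
  Linux) $RUN ifconfig $IFACE add $ADDR ;;
esac
```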

Configuring Networking for an Application


A script named loopy is used as the example application for this lab exercise.

1 Obtain the location of the loopy script from your instructor.

loopy script location:

__________________________________________________________class_sw_dir

2 Copy or type this code into a file named loopy on the file system you created previously in this lab.

cp /class_sw_dir/loopy /name1/loopy

3 Verify that you have a console window open to see the display from the script.

4 Start the loopy application in the background.

/name1/loopy name 1 &

5 Verify that the loopy application is working correctly.

Solaris, AIX, HP-UX
View the console and verify that loopy is echoing nameSG1 in the message.
Linux
Use the System Log Viewer.

Setting up the Application
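The guide does not list the loopy script itself; the while/echo fragment on the lab slide suggests something like the hypothetical reconstruction below. The message text and sleep interval are assumptions, not the instructor-supplied contents.

```shell
# Hypothetical reconstruction of the instructor-supplied loopy script:
# it simply prints a heartbeat until killed, which is all the lab needs
# in order to observe the service running.
loopy() {
    # $1 = name prefix, $2 = instance number
    while true
    do
        echo "$1 $2 is looping"
        sleep 2
    done
}
```

Saved to /name1/loopy and started as /name1/loopy name 1 &, it runs until stopped with kill, as in the migration steps that follow.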


Complete the following steps to migrate the application to the other system.

1 Stop your loopy process by sending a kill signal. Verify that the process is stopped.

ps -ef | grep "loopy name 1"
kill -9 pid
ps -ef | grep "loopy name 1"

2 Remove the virtual IP address configured earlier in this lab. Verify that the IP address is no longer configured.

Solaris
ifconfig -a
ifconfig virtual_interface unplumb
ifconfig -a
AIX
ifconfig -a
ifconfig interface ipaddress delete
ifconfig -a
HP-UX
netstat -in
ifconfig interface inet 0.0.0.0
netstat -i
Linux
ifconfig -a
ifconfig interface:instance down
ifconfig -a

3 Unmount your file system and verify that it is no longer mounted.

umount /name1
mount | grep name1

4 Stop the volume and verify that it is disabled.

vxvol -g nameDG1 stop nameVol1
vxprint | grep nameVol1

Manually Migrating the Application


5 Deport your disk group and verify that it is deported.

vxdg deport nameDG1
vxdisk -o alldgs list

6 Log in to the other system.

Solaris, AIX, HP-UX
rlogin their_sys
Linux
ssh their_sys
Virtual Academy
Use the Operations pull-down menu to connect to the other system.

7 Update VxVM so that the disk group is visible.

vxdctl enable

8 Import your disk group and verify that it imported.

vxdg import nameDG1
vxdisk list

9 Start your volume and verify that it is enabled.

vxvol -g nameDG1 start nameVol1
vxprint | grep nameVol1

10 Verify that your mount point directory exists. Create it if it does not exist.

ls -d /name1
mkdir /name1

11 Mount your file system.

Solaris
mount -F vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1
HP-UX
mount -F vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1
AIX
mount -V vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1
Linux
mount -t vxfs /dev/vx/dsk/nameDG1/nameVol1 /name1


12 Verify that it is mounted.

mount | grep name

13 Configure your virtual IP address and verify that it is configured.

Solaris
ifconfig interface addif ipaddress up
ifconfig -a
AIX
ifconfig interface inet ipaddress netmask mask alias
ifconfig -a
HP-UX
ifconfig interface inet ipaddress
netstat -in
Linux
ifconfig interface add ipaddress
ifconfig -a

14 Start the loopy application.

/name1/loopy name 1 &

15 Verify that it is running.

Solaris, AIX, HP-UX
Watch the console on the other system, and ensure that loopy is echoing your name in the message.
Linux
Use the System Log Viewer.


Complete the following steps to bring the application offline on the other system so that it is ready to be placed under VCS control.

1 While still logged into the other system, stop your loopy process by sending a kill signal. Verify that the process is stopped.

ps -ef | grep "loopy name 1"
kill -9 pid
ps -ef | grep "loopy name 1"

2 Remove the virtual IP address configured earlier in this lab. Verify that the IP address is no longer configured.

Solaris
ifconfig -a
ifconfig virtual_interface unplumb
ifconfig -a
AIX
ifconfig -a
ifconfig interface ipaddress delete
ifconfig -a
HP-UX
netstat -in
ifconfig interface inet 0.0.0.0
netstat -in
Linux
ifconfig -a
ifconfig interface:instance down
ifconfig -a

3 Unmount your file system and verify that it is no longer mounted.

umount /name1
mount | grep name1

4 Stop the volume and verify that it is disabled.

vxvol -g nameDG1 stop nameVol1
vxprint | grep nameVol1

Bringing the Services Offline


5 Deport your disk group and verify that it is deported.

vxdg deport nameDG1
vxdisk -o alldgs list
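The offline sequence above can be scripted for repeat runs. This is a dry-run sketch: RUN=echo prints the commands for review, the names are hypothetical worksheet values, and the Solaris-style virtual interface in the first command is an assumption.

```shell
# Dry-run sketch: take the application services offline in order
# (after the loopy process has been killed, as in step 1).
# All names are hypothetical examples; clear RUN on a real system.
RUN=${RUN:-echo}
DG=bobDG1 VOL=bobVol1 MNT=/bob1
$RUN ifconfig eri0:1 unplumb     # Solaris virtual interface (assumed)
$RUN umount $MNT
$RUN vxvol -g $DG stop $VOL
$RUN vxdg deport $DG
```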


Lab 6 Solutions: Starting and Stopping VCS


The following procedure demonstrates how the cluster configuration changes states during startup and shutdown, and shows how the .stale file works.

Brief instructions for this lab are located on the following page:• “Lab 6 Synopsis: Starting and Stopping VCS,” page A-29

Step-by-step instructions for this lab are located on the following page:• “Lab 6: Starting and Stopping VCS,” page B-37

Note: Complete this section with your lab partner.

1 Change to the /etc/VRTSvcs/conf/config directory.

cd /etc/VRTSvcs/conf/config

2 Verify that there is no .stale file in the /etc/VRTSvcs/conf/config directory. This file should not exist yet.

ls -al .

3 Open the cluster configuration.

haconf -makerw

Lab 6: Starting and Stopping VCS

[Slide diagram: cluster vcs1 with systems train1 and train2; the command hastop -all -force is shown.]


4 Verify that the .stale file has been created in the directory, /etc/VRTSvcs/conf/config.

ls -al .

5 Try to stop VCS using the hastop -all command.

hastop -all

The command should return an error asking to close the configuration or stop with the -force option.

6 Stop the cluster using the hastop -all -force command from one system only to stop VCS forcibly and leave the applications running.

hastop -all -force

7 Start VCS on each system in the cluster.

hastart

8 Verify the status of the cluster.

hastatus -summary

9 Why are all systems in the STALE_ADMIN_WAIT state?

The cluster configuration was left open when VCS was stopped.

10 Verify that the .stale file is present in the /etc/VRTSvcs/conf/config directory. This file should exist.

ls -al /etc/VRTSvcs/conf/config

11 Return all systems to a running state (from one system in the cluster).

hacf -verify /etc/VRTSvcs/conf/config
hasys -force your_sys
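The .stale behavior can be illustrated without a cluster. In the sketch below, check_stale is a hypothetical helper (not a VCS command), and a temporary directory stands in for the real /etc/VRTSvcs/conf/config.

```shell
# Sketch only: report whether a VCS configuration directory was left
# open. A temp directory stands in for /etc/VRTSvcs/conf/config here.
check_stale() {
    if [ -f "$1/.stale" ]; then
        echo "stale: VCS will start in STALE_ADMIN_WAIT"
    else
        echo "clean: VCS can auto-build from $1/main.cf"
    fi
}
demo=$(mktemp -d)
check_stale "$demo"        # no .stale marker yet
touch "$demo/.stale"       # haconf -makerw creates this marker
check_stale "$demo"        # configuration left open
```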


12 View the build process to see the LOCAL_BUILD and REMOTE_BUILD system states.

Solaris, AIX, HP-UXWatch the console to see the build process.

LinuxUse the System Log Viewer to watch the build process.

Virtual AcademyUse dmesg or tail /var/adm/messages to see the VCS build states.

13 Check the status of the cluster.

hastatus -summary

Any service groups that were online at the time that the hastop -all -force command was run should still be online now that VCS has been restarted.

14 Verify that there is no .stale file in the /etc/VRTSvcs/conf/config directory. This file should have been removed.

ls -al /etc/VRTSvcs/conf/config


Lab 7 Solutions: Online Configuration of a Service Group


The purpose of this lab is to create a service group while VCS is running using either the Cluster Manager graphical user interface or the command-line interface.

Brief instructions for this lab are located on the following page:• “Lab 7 Synopsis: Online Configuration of a Service Group,” page A-31

Step-by-step instructions for this lab are located on the following page:• “Lab 7: Online Configuration of a Service Group,” page B-41

Classroom-Specific Values

Fill in this table with the applicable values for your lab cluster.

Object                     Sample Value         Your Value
Service group prefix       name
Your system host name      your_sys (train1)
Partner system host name   their_sys (train2)

Lab 7: Online Configuration of a Service Group

Use the Java GUI to:
• Create a service group.
• Add resources to the service group from the bottom of the dependency tree.
• Substitute the name you used to create the disk group and volume.


Fill in the design worksheet with values appropriate for your cluster and use the information to create a service group.

1 If you are using the GUI, start Cluster Manager and log in to the cluster.

hagui &

2 Open the cluster configuration.

GUI: Select File—>Open configuration.

CLI: haconf -makerw

3 Create the service group.

GUI: Right-click your cluster name in the left panel and select Add Service Group.

CLI: hagrp -add nameSG1

4 Modify the SystemList to allow the service group to run on the two systems specified in the design worksheet.

GUI: Select each system and click the right arrow button.

CLI: hagrp -modify nameSG1 SystemList your_sys 0 their_sys 1

Creating a Service Group

Service Group Definition Sample Value Your Value

Group nameSG1

Required Attributes

FailOverPolicy Priority

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1


5 Modify the AutoStartList attribute to allow the service group to start on your system.

GUI: Click the Startup box for your system; then click OK to create the service group.

CLI: hagrp -modify nameSG1 AutoStartList your_sys

6 Verify that the service group can autostart and that it is a failover service group.

GUI: Right-click the service group, select Properties, and click Show all attributes.

CLI: hagrp -display nameSG1

7 Save the cluster configuration and view the configuration file to verify your changes.

GUI: Select File—>Save configuration.

CLI: haconf -dump

view /etc/VRTSvcs/conf/config/main.cf


Complete the following steps to add NIC, IP, DiskGroup, Volume, and Process resources to the service group using the information from the design worksheet.

1 Add the resource to the service group.

GUI:
a Right-click the service group and select Add Resource.
b Type the name from the table.
c Select the resource type from the list.

CLI: hares -add nameNIC1 NIC nameSG1

2 Set the resource to not critical.

GUI: Clear Critical.

CLI: hares -modify nameNIC1 Critical 0

Adding Resources to a Service Group

Adding an NIC Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameNIC1

Resource Type NIC

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

NetworkHosts* 192.168.xx.1 (HP-UX only)

Critical? No (0)

Enabled? Yes (1)


3 Set the required attributes for this resource, and any optional attributes, if needed.

GUI: For each attribute in the table:
a Click Edit.
b Double-click in the Value field.
c Type the values you entered in your table.

CLI:
Solaris
hares -modify nameNIC1 Device interface
AIX
hares -modify nameNIC1 Device interface
HP-UX
hares -modify nameNIC1 Device interface
hares -modify nameNIC1 NetworkHosts other_system1 other_system2
Linux
hares -modify nameNIC1 Device interface

4 Enable the resource.

GUI: Check Enabled and click OK to complete resource configuration.

CLI: hares -modify nameNIC1 Enabled 1

5 Verify that the resource is online. Because this is a persistent resource, you do not need to bring it online.

GUI: Verify that the resource icon is blue.

CLI: hares -display nameNIC1

6 Save the cluster configuration and view the configuration file to verify your changes.

GUI: Select File—>Close configuration.

CLI: haconf -dump -makero

view /etc/VRTSvcs/conf/config/main.cf


Adding an IP Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameIP1

Resource Type IP

Required Attributes

Device Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0

Address 192.168.xx.* (see table)

Optional Attributes

Netmask 255.255.255.0

Critical? No (0)

Enabled? Yes (1)

System IP Address

train1 192.168.xx.51

train2 192.168.xx.52

train3 192.168.xx.53

train4 192.168.xx.54

train5 192.168.xx.55

train6 192.168.xx.56

train7 192.168.xx.57

train8 192.168.xx.58

train9 192.168.xx.59

train10 192.168.xx.60

train11 192.168.xx.61

train12 192.168.xx.62


1 Add the resource to the service group.

GUI:
a Right-click the service group and select Add Resource.
b Type the name from the table.
c Select the resource type from the list.

CLI: hares -add nameIP1 IP nameSG1

2 Set the resource to not critical.

GUI: Clear Critical.

CLI: hares -modify nameIP1 Critical 0

3 Set the required attributes for this resource, and any optional attributes, if needed.

GUI: For each attribute in the table:
a Click Edit.
b Double-click in the Value field.
c Type the values you entered in your table.

CLI:
hares -modify nameIP1 Device interface
hares -modify nameIP1 Address xxx.xxx.xxx.xxx

4 Enable the resource.

GUI: Check Enabled and click OK to complete resource configuration.

CLI: hares -modify nameIP1 Enabled 1

5 Bring the resource online on your system.

GUI: Right-click the resource and select Online—>your_sys.

CLI: hares -online nameIP1 -sys your_sys


6 Verify that the resource is online.

GUI: Verify that the resource icon is blue.

CLI: hares -display nameIP1

Solaris, AIX, Linux
ifconfig -a
HP-UX
netstat -in

7 Save the cluster configuration and view the configuration file to verify your changes.

GUI: Select File—>Save configuration.

CLI: haconf -dump

view /etc/VRTSvcs/conf/config/main.cf


1 Add the resource to the service group using either the GUI or CLI.

hares -add nameDG1 DiskGroup nameSG1

2 Set the resource to not critical.

hares -modify nameDG1 Critical 0

3 Set the required attributes for this resource, and any optional attributes, if needed.

hares -modify nameDG1 DiskGroup nameDG1

4 Enable the resource.

hares -modify nameDG1 Enabled 1

5 Bring the resource online on your system.

hares -online nameDG1 -sys your_sys

Adding a DiskGroup Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameDG1

Resource Type DiskGroup

Required Attributes

DiskGroup nameDG1

Optional Attributes

StartVolumes 1

StopVolumes 1

Critical? No (0)

Enabled? Yes (1)


6 Verify that the resource is online in VCS and at the O/S level.

hares -display nameDG1
vxprint -g nameDG1

7 Save the cluster configuration and view the configuration file to verify your changes.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf


1 Add the resource to the service group using either the GUI or CLI.

hares -add nameVol1 Volume nameSG1

2 Set the resource to not critical.

hares -modify nameVol1 Critical 0

3 Set the required attributes for this resource, and any optional attributes, if needed.

hares -modify nameVol1 Volume nameVol1
hares -modify nameVol1 DiskGroup nameDG1

4 Enable the resource.

hares -modify nameVol1 Enabled 1

5 Bring the resource online on your system.

hares -online nameVol1 -sys your_sys

6 Verify that the resource is online in VCS and at the operating system level.

hares -display nameVol1
vxprint -g nameDG1

Adding a Volume Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameVol1

Resource Type Volume

Required Attributes

Volume nameVol1

DiskGroup nameDG1

Critical? No (0)

Enabled? Yes (1)


7 Save the cluster configuration and view the configuration file to verify your changes.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf


1 Add the resource to the service group using either the GUI or CLI.

hares -add nameMount1 Mount nameSG1

2 Set the resource to not critical.

hares -modify nameMount1 Critical 0

3 Set the required attributes for this resource, and any optional attributes, if needed.

hares -modify nameMount1 MountPoint /name1
hares -modify nameMount1 BlockDevice /dev/vx/dsk/nameDG1/nameVol1
hares -modify nameMount1 FSType vxfs
hares -modify nameMount1 FsckOpt %-y

4 Enable the resource.

hares -modify nameMount1 Enabled 1

5 Bring the resource online on your system.

hares -online nameMount1 -sys your_sys

Adding a Mount Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameMount1

Resource Type Mount

Required Attributes

MountPoint /name1

BlockDevice /dev/vx/dsk/nameDG1/nameVol1 (no spaces)

FSType vxfs

FsckOpt -y

Critical? No (0)

Enabled? Yes (1)


6 Verify that the resource is online in VCS and at the operating system level.

hares -display nameMount1
mount

7 Save the cluster configuration and view the configuration file to verify your changes.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf


1 Add the resource to the service group using either the GUI or CLI.

hares -add nameProcess1 Process nameSG1

2 Set the resource to not critical.

hares -modify nameProcess1 Critical 0

3 Set the required attributes for this resource, and any optional attributes, if needed.

hares -modify nameProcess1 PathName /bin/sh
hares -modify nameProcess1 Arguments "/name1/loopy name 1"

Note: If you are using the GUI to configure the resource, you do not need to include the quotation marks.
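The note above can be demonstrated with plain shell word-splitting; this sketch uses only the shell's positional parameters, nothing VCS-specific:

```shell
# The Arguments value contains spaces, so on the command line it must be
# quoted to reach hares as a single word rather than three.
set -- "/name1/loopy name 1"   # quoted: one word
echo "quoted: $#"              # prints: quoted: 1
set -- /name1/loopy name 1     # unquoted: three words
echo "unquoted: $#"            # prints: unquoted: 3
```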

4 Enable the resource.

hares -modify nameProcess1 Enabled 1

5 Ensure that you have the console or a terminal window open for loopy output.

Adding a Process Resource

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameProcess1

Resource Type Process

Required Attributes

PathName /bin/sh

Optional Attributes

Arguments /name1/loopy name 1

Critical? No (0)

Enabled? Yes (1)


6 Bring the resource online on your system.

hares -online nameProcess1 -sys your_sys

7 Verify that the resource is online in VCS and at the operating system level.

hares -display nameProcess1

8 Save the cluster configuration and view the configuration file to verify your changes.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf


1 Link resource pairs together based on the design worksheet.

hares -link nameIP1 nameNIC1
hares -link nameVol1 nameDG1
hares -link nameMount1 nameVol1
hares -link nameProcess1 nameIP1
hares -link nameProcess1 nameMount1

2 Verify that the resources are linked properly.

hares -dep

3 Save the cluster configuration and view the configuration file to verify your changes.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf

Linking Resources in the Service Group

Resource Dependency Definition

Service Group nameSG1

Parent Resource Requires Child Resource

nameVol1 nameDG1

nameMount1 nameVol1

nameIP1 nameNIC1

nameProcess1 nameMount1

nameProcess1 nameIP1


Complete the following steps to test the service group on each system in the service group SystemList.

1 Test the service group by switching away from your system in the cluster.

hagrp -switch nameSG1 -to their_sys

2 Verify that the service group came online properly on their system.

hastatus -summary

3 Test the service group by switching it back to your system in the cluster.

hagrp -switch nameSG1 -to your_sys

4 Verify that the service group came online properly on your system.

hastatus -summary

Testing the Service Group


1 Set each resource to critical.

hares -modify nameNIC1 Critical 1
hares -modify nameIP1 Critical 1
hares -modify nameDG1 Critical 1
hares -modify nameVol1 Critical 1
hares -modify nameMount1 Critical 1
hares -modify nameProcess1 Critical 1
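The hares commands in step 1 follow a single pattern, so a small loop can generate them for review before execution (resource names as used in this lab):

```shell
# Print one hares command per resource; pipe the output to sh to run them.
for res in nameNIC1 nameIP1 nameDG1 nameVol1 nameMount1 nameProcess1; do
    echo "hares -modify $res Critical 1"
done
```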

2 Save the cluster configuration and view the configuration file to verify your changes.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf

3 Close the cluster configuration after all students working in your cluster are finished.

haconf -dump -makero

Setting Resources to Critical


group nameSG1 (

SystemList = { train1 = 0, train2 = 1 }

AutoStartList = { train1 }

)

DiskGroup nameDG1 (

DiskGroup = nameDG1

)

IP nameIP1 (

Device = eri0

Address = "192.168.27.51"

)

Mount nameMount1 (

MountPoint = "/name1"

BlockDevice = "/dev/vx/dsk/nameDG1/nameVol1"

FSType = vxfs

FsckOpt = "-y"

)

Process nameProcess1 (

PathName = "/bin/sh"

Arguments = "/name1/loopy name 1"

)

NIC nameNIC1 (

Device = eri0

)

Volume nameVol1 (

Volume = nameVol1

DiskGroup = nameDG1

)

Partial Sample Configuration File


nameIP1 requires nameNIC1

nameMount1 requires nameVol1

nameProcess1 requires nameIP1

nameProcess1 requires nameMount1

nameVol1 requires nameDG1


Lab 8 Solutions: Offline Configuration of a Service Group


The purpose of this lab is to add a service group by copying and editing the definition in main.cf for nameSG1.

Brief instructions for this lab are located on the following page:
• “Lab 8 Synopsis: Offline Configuration of a Service Group,” page A-38

Step-by-step instructions for this lab are located on the following page:
• “Lab 8: Offline Configuration of a Service Group,” page B-57

Lab 8: Offline Configuration of a Service Group

(Diagram: failover service group nameSG1, containing nameProcess1, nameIP1, nameNIC1, nameMount1, nameVol1, and nameDG1, alongside the new service group nameSG2, containing nameProcess2, nameIP2, nameNIC2, nameMount2, nameVol2, and nameDG2, with application objects AppVol and AppDG)

Working together, follow the offline configuration procedure. Alternately, work alone and use the GUI to create a new service group.



Complete the following worksheet for the resources managed by the service groups you create in this lab. Then follow the procedure to configure the resources.

Object Sample Value Your Value

Your system host name your_sys

Use the same system as previous labs

Partner system host name their_sys

Use the same system as previous labs

Name prefix for your objects

name

Disk assignment for disk group

Solaris: c#t#d#
AIX: hdisk##
HP-UX: c#t#d#
Linux: sd##

Disk group name nameDG2

Volume name nameVol2

Mount point /name2

Network interface
Solaris: eri0
Sol Mob: dmfe0
AIX: en1
HP-UX: lan0
Linux: eth0
VA: bge0

IP Address
train1 192.168.xxx.71
train2 192.168.xxx.72
train3 192.168.xxx.73
train4 192.168.xxx.74
train5 192.168.xxx.75
train6 192.168.xxx.76
train7 192.168.xxx.77
train8 192.168.xxx.78
train9 192.168.xxx.79
train10 192.168.xxx.80
train11 192.168.xxx.81
train12 192.168.xxx.82

Application script location
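The worksheet IP addresses follow a simple rule: the last octet is 70 plus the train system number. A sketch of the rule (the 192.168.xxx subnet is classroom-specific):

```shell
# Derive the virtual IP address for a given trainN system.
n=4                              # for train4
echo "192.168.xxx.$((70 + n))"   # prints: 192.168.xxx.74
```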


Use the values in the table to prepare resources for VCS.

1 Verify disk availability for Volume Manager.

vxdisk list

2 Initialize a disk for Volume Manager using the disk device from the worksheet.

vxdisksetup -i disk_device

3 Create a disk group with the name from the worksheet using the initialized disk.

vxdg init nameDG2 nameDG201=disk_device

4 Create a 2 GB volume in the disk group.

vxassist -g nameDG2 make nameVol2 2g

5 Create a VxFS file system on the volume.

Solaris
mkfs -F vxfs /dev/vx/rdsk/nameDG2/nameVol2

HP-UX
mkfs -F vxfs /dev/vx/rdsk/nameDG2/nameVol2

AIX
mkfs -V vxfs /dev/vx/rdsk/nameDG2/nameVol2

Linux
mkfs -t vxfs /dev/vx/rdsk/nameDG2/nameVol2
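Only the file-system-type flag differs across the four platforms. A hedged sketch that selects the flag from uname output (the uname values shown are assumptions; confirm them on your systems):

```shell
# Pick the mkfs type flag per platform; -F covers both Solaris and HP-UX.
case "$(uname -s)" in
    AIX)   flag=-V ;;
    Linux) flag=-t ;;
    *)     flag=-F ;;
esac
echo mkfs "$flag" vxfs /dev/vx/rdsk/nameDG2/nameVol2
```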

6 Create a mount point on each system in the cluster.

All
mkdir /name2

Solaris, AIX
rsh their_sys mkdir /name2

HP-UX
remsh their_sys mkdir /name2

Linux
ssh their_sys mkdir /name2

Prepare Resources


7 Mount the file system on your cluster system.

Solaris
mount -F vxfs /dev/vx/dsk/nameDG2/nameVol2 /name2

HP-UX
mount -F vxfs /dev/vx/dsk/nameDG2/nameVol2 /name2

AIX
mount -V vxfs /dev/vx/dsk/nameDG2/nameVol2 /name2

Linux
mount -t vxfs /dev/vx/dsk/nameDG2/nameVol2 /name2

8 Verify that the file system is mounted on your system.

mount

9 Copy the loopy script to your file system created in this lab.

cp /class_sw_dir/loopy /name2/loopy

10 Start the new loopy application.

/name2/loopy name 2 &

11 Verify that the new loopy application is working correctly.

View the console and verify that the new loopy process is echoing nameSG2 in the message.

12 Stop the resources to prepare to place them under VCS control in the next section of the lab.

a Stop the loopy process by sending a kill signal. Verify that the process is stopped.

ps -ef | grep "loopy name 2"
kill -9 pid
ps -ef | grep "loopy name 2"

b Unmount your file system and verify that it is no longer mounted.

umount /name2
mount


c Stop the volume and verify that it is disabled.

vxvol -g nameDG2 stop nameVol2
vxprint | grep nameVol2

d Deport your disk group and verify that it is deported.

vxdg deport nameDG2
vxdisk -o alldgs list
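The stop-and-verify pattern from sub-step a can be exercised anywhere by substituting a stand-in process for loopy; this sketch uses sleep so it is safe to run outside the lab:

```shell
# Start a stand-in process, stop it with SIGKILL, then verify it is gone.
sleep 60 &                       # stand-in for: /name2/loopy name 2 &
pid=$!
kill -9 "$pid"                   # same signal as the lab's kill -9 pid
wait "$pid" 2>/dev/null || true  # reap the process so the check below is reliable
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "stopped"
```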


Record information needed to create a new service group in the design worksheet.

Completing the Design Worksheet

Service Group Definition Sample Value Your Value

Group nameSG2

Required Attributes

FailOverPolicy Priority

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameNIC2

Resource Type NIC

Required Attributes

Device
Solaris: eri0
Sol Mob: dmfe0
AIX: en1
HP-UX: lan0
Linux: eth0
VA: bge0

NetworkHosts* 192.168.xx.1 (HP-UX only)

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameIP2

Resource Type IP

Required Attributes

Device
Solaris: eri0
Sol Mob: dmfe0
AIX: en1
HP-UX: lan0
Linux: eth0
VA: bge0

Address 192.168.xx.* (see table)

Optional Attributes

Netmask 255.255.255.0

Critical? No (0)

Enabled? Yes (1)

System IP Address

train1 192.168.xx.71

train2 192.168.xx.72

train3 192.168.xx.73

train4 192.168.xx.74

train5 192.168.xx.75

train6 192.168.xx.76

train7 192.168.xx.77

train8 192.168.xx.78

train9 192.168.xx.79

train10 192.168.xx.80

train11 192.168.xx.81

train12 192.168.xx.82


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameDG2

Resource Type DiskGroup

Required Attributes

DiskGroup nameDG2

Optional Attributes

StartVolumes 1

StopVolumes 1

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameVol2

Resource Type Volume

Required Attributes

Volume nameVol2

DiskGroup nameDG2

Critical? No (0)

Enabled? Yes (1)


Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameMount2

Resource Type Mount

Required Attributes

MountPoint /name2

BlockDevice /dev/vx/dsk/nameDG2/nameVol2 (no spaces)

FSType vxfs

FsckOpt -y

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameProcess2

Resource Type Process

Required Attributes

PathName /bin/sh

Optional Attributes

Arguments /name2/loopy name 2

Critical? No (0)

Enabled? Yes (1)


Resource Dependency Definition

Service Group nameSG2

Parent Resource Requires Child Resource

nameVol2 nameDG2

nameMount2 nameVol2

nameIP2 nameNIC2

nameProcess2 nameMount2

nameProcess2 nameIP2


Note: You may choose to use the GUI to create the nameSG2 service group. If so, skip this section and complete the “Alternate Lab” section instead.

1 Working with your lab partner, verify that the cluster configuration is saved and closed.

haconf -dump -makero

2 Change to the VCS configuration directory.

cd /etc/VRTSvcs/conf/config

3 Make a subdirectory named test.

mkdir test

4 Copy the main.cf and types.cf files into the test subdirectory.

All
cp main.cf types.cf test

Linux
Also copy the vcsApacheTypes.cf file.

5 Change to the test directory.

cd test

6 Edit the main.cf file in the test directory on one system in the cluster.

a For each student’s service group, copy the nameSG1 service group structure to create nameSG2.

b Rename all of the resources within the nameSG1 service group to end with 2 instead of 1, as shown in the following table.

Modifying a VCS Configuration File

Existing Name Change To New Name

nameProcess1 nameProcess2

nameIP1 nameIP2


Partial Example:

# vi main.cf

group nameSG2 (
SystemList = { train3 = 0, train4 = 1 }
AutoStartList = { train3 }
)

DiskGroup nameDG2 (
DiskGroup = nameDG2
)
...

c Copy and modify the dependency section.

nameIP2 requires nameNIC2
nameProcess2 requires nameIP2
nameProcess2 requires nameMount2
...
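The copy-and-rename edit in step 6 can also be done mechanically; this sketch applies the suffix bump with sed to an abbreviated block (it assumes every group and resource name starts with the "name" prefix and ends in the digit being changed, as in this lab's naming convention):

```shell
# Duplicate an abbreviated nameSG1 block and change the trailing 1 to 2.
cat > /tmp/sg1.block <<'EOF'
group nameSG1 (
    SystemList = { train1 = 0, train2 = 1 }
)
nameIP1 requires nameNIC1
EOF
# Only identifiers beginning with "name" are touched; train1 is left alone.
sed 's/\(name[A-Za-z]*\)1/\12/g' /tmp/sg1.block > /tmp/sg2.block
cat /tmp/sg2.block
```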

7 Edit the attributes of each copied resource to match the design worksheet values shown earlier in this section.

nameNIC1 nameNIC2

nameMount1 nameMount2

nameVol1 nameVol2

nameDG1 nameDG2



8 Verify the cluster configuration and fix any errors found.

hacf -verify /etc/VRTSvcs/conf/config/test

9 Stop VCS on all systems, but leave the applications still running.

hastop -all -force

10 Verify that the loopy applications are still running.

View the console window. (loopy 2 is not running; it was stopped in an earlier section.)

11 Copy the main.cf file from the test subdirectory into the configuration directory.

cp main.cf ../main.cf

12 Start the cluster from the system where you edited the configuration file.

hastart

13 Start the cluster in the stale state on the other system in the cluster (where the configuration was not edited).

hastart -stale

14 Verify the status of the cluster.

hastatus -summary

15 View the build process to see the LOCAL_BUILD and REMOTE_BUILD system states.

Solaris, AIX, HP-UXWatch the console during the build process to see the system states.

LinuxUse the System Log Viewer to watch the build process.

Virtual AcademyUse dmesg or tail /var/adm/messages to see the VCS build states.


16 Bring the new service group online on your system. Students can bring their own service groups online.

hagrp -online nameSG2 -sys your_sys

17 Verify the status of the cluster.

hastatus -summary


Use the information in the design worksheet in the previous section to create a new service group using the GUI to copy resources from the nameSG1 service group.

1 Start Cluster Manager and log in to the cluster.

hagui &

2 Open the cluster configuration.

GUI: Select File—>Open configuration.

3 Create the service group.

GUI: Right-click your cluster name in the left panel and select Add Service Group.

4 Modify the SystemList to allow the service group to run on the two systems specified in the design worksheet.

GUI: Select each system and click the right arrow button.

5 Modify the AutoStartList attribute to allow the service group to start on your system.

GUI: Click the Startup box for your system then click OK to create the service group.

6 Verify that the service group can autostart and that it is a failover service group.

GUI: Right-click the service group, select Properties, and click Show all attributes.

7 Save the cluster configuration and view the configuration file to verify your changes.

GUI: Select File—>Save configuration.

view /etc/VRTSvcs/conf/config/main.cf

Alternate Lab: Using the GUI to Create the Service Group


8 Copy all resources from the nameSG1 service group to nameSG2.

a Click the nameSG1 service group in the left pane.

b Select the Resources tab to display the resource icons.

c Right-click the top-most resource in the dependency tree, nameProcess1.

d Select Copy—>Self and Child Nodes.

e Click the new nameSG2 service group in the left pane.

f Select the Resources tab to display the resource view. There are no resources yet in nameSG2.

g Right-click anywhere in the right pane display area of the Resources tab.

h Select Paste.

The Name Clashes window is displayed, which enables you to rename each resource you are pasting.

i Change the resource names as follows:

Existing Name Change To New Name

nameProcess1 nameProcess2

nameIP1 nameIP2

nameNIC1 nameNIC2

nameMount1 nameMount2

nameVol1 nameVol2

nameDG1 nameDG2


j Click Apply.

k Click OK.

9 Set each resource to not critical.

Right-click the resource and clear Critical.

10 Modify each resource to set the attribute values as specified in the worksheet.

Right-click a resource and select View—>Properties View.

11 Save the cluster configuration and view the configuration file to verify your changes.

Select File—>Save configuration.

view /etc/VRTSvcs/conf/config/main.cf

12 Enable each resource.

Right-click the resource and select Enabled.

13 Bring the nameSG2 resources online, starting from the bottom of the dependency tree.

Right-click the resource, select Online, and choose your system.

14 Save and close the cluster configuration.

Select File—>Close configuration.

Note: In the GUI, the Close configuration action saves the configuration automatically.


group nameSG2 (

SystemList = { train1 = 0, train2 = 1 }

AutoStartList = { train1 }

)

DiskGroup nameDG2 (

DiskGroup = nameDG2

)

IP nameIP2 (

Device = eri0

Address = "192.168.27.71"

)

Mount nameMount2 (

MountPoint = "/name2"

BlockDevice = "/dev/vx/dsk/nameDG2/nameVol2"

FSType = vxfs

FsckOpt = "-y"

)

Process nameProcess2 (

PathName = "/bin/sh"

Arguments = "/name2/loopy name 2"

)

NIC nameNIC2 (

Device = eri0

)

Partial Sample Configuration File


Volume nameVol2 (

Volume = nameVol2

DiskGroup = nameDG2

)

nameIP2 requires nameNIC2

nameMount2 requires nameVol2

nameProcess2 requires nameIP2

nameProcess2 requires nameMount2

nameVol2 requires nameDG2


Lab 9 Solutions: Creating a Parallel Service Group


The purpose of this lab is to add a parallel service group to monitor the NIC resource and replace the NIC resources in the failover service groups with Proxy resources.

Brief instructions for this lab are located on the following page:
• “Lab 9 Synopsis: Creating a Parallel Service Group,” page A-47

Step-by-step instructions for this lab are located on the following page:
• “Lab 9: Creating a Parallel Service Group,” page B-73

Lab 9: Creating a Parallel Service Group

(Diagram: parallel service group NetworkSG containing NetworkNIC and NetworkPhantom; failover groups nameSG1 and nameSG2 replace their NIC resources with Proxy resources nameProxy1 and nameProxy2; application objects DBVol and DBDG)


Work with your lab partner to create a parallel service group containing network resources using the information in the design worksheet.

1 Open the cluster configuration.

haconf -makerw

2 Create the service group.

hagrp -add NetworkSG

3 Modify the SystemList to allow the service group to run on the systems specified in the design worksheet.

hagrp -modify NetworkSG SystemList your_sys 0 their_sys 1

4 Modify the AutoStartList attribute to allow the service group to start on both systems.

hagrp -modify NetworkSG AutoStartList your_sys their_sys

5 Modify the Parallel attribute to allow the service group to run on both systems.

hagrp -modify NetworkSG Parallel 1

6 View the service group attribute settings.

hagrp -display NetworkSG

Creating a Parallel Network Service Group

Service Group Definition Sample Value Your Value

Group NetworkSG

Required Attributes

Parallel 1

SystemList train1=0 train2=1

Optional Attributes

AutoStartList train1 train2


Use the values in the following tables to create NIC and Phantom resources.

1 Add the NIC resource to the service group.

hares -add NetworkNIC NIC NetworkSG

2 Set the resource to not critical.

hares -modify NetworkNIC Critical 0

Adding Resources

Resource Definition Sample Value Your Value

Service Group NetworkSG

Resource Name NetworkNIC

Resource Type NIC

Required Attributes

Device
Solaris: eri0
Sol Mob: dmfe0
AIX: en1
HP-UX: lan0
Linux: eth0
VA: bge0

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group NetworkSG

Resource Name NetworkPhantom

Resource Type Phantom

Required Attributes

Critical? No (0)

Enabled? Yes (1)


3 Set the required attributes for this resource, and any optional attributes, if needed.

All
hares -modify NetworkNIC Device interface

HP-UX
hares -modify NetworkNIC NetworkHosts other_system1 other_system2

4 Enable the resource.

hares -modify NetworkNIC Enabled 1

5 Verify that the resource is online. Because it is a persistent resource, you do not need to bring it online.

hares -display NetworkNIC

6 Add the Phantom resource to the service group.

hares -add NetworkPhantom Phantom NetworkSG

7 Set the resource to not critical.

hares -modify NetworkPhantom Critical 0

8 Enable the resource.

hares -modify NetworkPhantom Enabled 1

9 Verify that the status of the NetworkSG service group now shows as online.

hastatus -sum

10 Save the cluster configuration and view the configuration file.

haconf -dump
view /etc/VRTSvcs/conf/config/main.cf


Use the values in the tables to replace the NIC resources with Proxy resources and create new links.

Replacing NIC Resources with Proxy Resources

Resource Definition Sample Value Your Value

Service Group nameSG1

Resource Name nameProxy1

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group nameSG2

Resource Name nameProxy2

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name csgProxy

Resource Type Proxy

Required Attributes

TargetResName NetworkNIC

Critical? No (0)

Enabled? Yes (1)


1 Delete all NIC resources in the ClusterService, nameSG1, and nameSG2 service groups.

hares -delete nameNIC1
hares -delete nameNIC2
hares -delete csgnic

Note: Only one student can delete the ClusterService NIC resource.

2 Add a proxy resource to each failover service group using the service group naming convention:
– nameProxy1
– nameProxy2
– csgProxy

hares -add nameProxy1 Proxy nameSG1
hares -add nameProxy2 Proxy nameSG2
hares -add csgProxy Proxy ClusterService

3 Set the value for each Proxy TargetResName attribute to NetworkNIC.

hares -modify nameProxy1 TargetResName NetworkNIC
hares -modify nameProxy2 TargetResName NetworkNIC
hares -modify csgProxy TargetResName NetworkNIC

4 Set the resources to not critical.

hares -modify nameProxy1 Critical 0
hares -modify nameProxy2 Critical 0
hares -modify csgProxy Critical 0

5 Enable the resources.

hares -modify nameProxy1 Enabled 1
hares -modify nameProxy2 Enabled 1
hares -modify csgProxy Enabled 1

6 Verify that the Proxy resources are in an online state.

hares -display nameProxy1
hares -display nameProxy2
hares -display csgProxy


7 Save the cluster configuration.

haconf -dump


Use the values in the following tables to replace the NIC resources with Proxy resources and create new links.

1 Link the Proxy resources as children of the corresponding IP resources of each service group.

hares -link nameIP1 nameProxy1
hares -link nameIP2 nameProxy2
hares -link webip csgProxy

2 Switch each service group (nameSG1, nameSG2, ClusterService) to ensure that they can run on each system.

hagrp -switch nameSG1 -to other_system
hagrp -switch nameSG2 -to other_system
hagrp -switch ClusterService -to other_system

Linking Resources and Testing the Service Group

Resource Dependency Definition

Service Group nameSG1

Parent Resource Requires Child Resource

nameIP1 nameProxy1

Resource Dependency Definition

Service Group nameSG2

Parent Resource Requires Child Resource

nameIP2 nameProxy2

Resource Dependency Definition

Service Group ClusterService

Parent Resource Requires Child Resource

webip csgProxy


3 Set all resources to critical.

hares -display | grep Critical | grep 0
haconf -makerw
hares -modify resource_name Critical 1
. . .
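The display-then-modify sequence in step 3 can be scripted; this sketch filters a report into ready-to-run commands. The four-column layout (Resource, Attribute, System, Value) is an assumption about hares -display output, so the filter is demonstrated on an inline sample rather than live output:

```shell
# Turn "Critical ... 0" report lines into hares -modify commands.
awk '$2 == "Critical" && $4 == 0 { print "hares -modify", $1, "Critical 1" }' <<'EOF'
nameIP1 Critical global 0
nameNIC1 Critical global 1
EOF
```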

4 Save and close the cluster configuration.

haconf -dump -makero


include "types.cf"

cluster vcs (

UserNames = { admin = ElmElgLimHmmKumGlj }

ClusterAddress = "192.168.27.51"

Administrators = { admin }

CounterInterval = 5

)

system train1 (

)

system train2 (

)

group ClusterService (

SystemList = { train1 = 0, train2 = 1 }

AutoStartList = { train1, train2 }

OnlineRetryLimit = 3

Tag = CSG

)

IP webip (

Device = eri0

Address = "192.168.27.42"

NetMask = "255.255.255.0"

)

Proxy csgProxy (

TargetResName = NetworkNIC

)

Sample Configuration File


VRTSWebApp VCSweb (

Critical = 0

AppName = vcs

InstallDir = "/opt/VRTSweb/VERITAS"

TimeForOnline = 5

)

VCSweb requires webip

webip requires csgProxy

group NetworkSG (

SystemList = { train1 = 0, train2 = 1 }

Parallel = 1

AutoStartList = { train1, train2 }

)

NIC NetworkNIC (

Device = eri0

)

Phantom NetworkPhantom (

)

group nameSG1 (

SystemList = { train1 = 0, train2 = 1 }

AutoStartList = { train1 }

)

DiskGroup nameDG1 (

DiskGroup = nameDG1

)


IP nameIP1 (

Device = eri0

Address = "192.168.27.51"

)

Mount nameMount1 (

MountPoint = "/name1"

BlockDevice = "/dev/vx/dsk/nameDG1/nameVol1"

FSType = vxfs

FsckOpt = "-y"

)

Process nameProcess1 (

PathName = "/bin/ksh"

Arguments = "/name1/loopy name 1"

)

Proxy nameProxy1 (

TargetResName = NetworkNIC

)

Volume nameVol1 (

Volume = nameVol1

DiskGroup = nameDG1

)

nameIP1 requires nameProxy1

nameMount1 requires nameVol1

nameProcess1 requires nameIP1

nameProcess1 requires nameMount1

nameVol1 requires nameDG1


group nameSG2 (

SystemList = { train1 = 0, train2 = 1 }

AutoStartList = { train1 }

)

DiskGroup nameDG2 (

DiskGroup = nameDG2

)

IP nameIP2 (

Device = eri0

Address = "192.168.27.71"

)

Mount nameMount2 (

MountPoint = "/name2"

BlockDevice = "/dev/vx/dsk/nameDG2/nameVol2"

FSType = vxfs

FsckOpt = "-y"

)

Process nameProcess2 (

PathName = "/bin/ksh"

Arguments = "/name2/loopy name 2"

)

Proxy nameProxy2 (

TargetResName = NetworkNIC

)

Volume nameVol2 (

Volume = nameVol2

DiskGroup = nameDG2

)


nameIP2 requires nameProxy2

nameMount2 requires nameVol2

nameProcess2 requires nameIP2

nameProcess2 requires nameMount2

nameVol2 requires nameDG2


Lab 10 Solutions: Configuring Notification


The purpose of this lab is to configure notification.

Brief instructions for this lab are located on the following page:
• “Lab 10 Synopsis: Configuring Notification,” page A-52

Step-by-step instructions for this lab are located on the following page:
• “Lab 10: Configuring Notification,” page B-85

Lab 10: Configuring Notification

[Slide: the ClusterService, nameSG1, and nameSG2 service groups; the NotifierMngr resource; the resfault, nofailover, and resadminwait triggers; optional lab]

SMTP Server: ___________________________________


Work with your lab partner to add a NotifierMngr type resource to the ClusterService service group using the information in the design worksheet.

1 Open the cluster configuration.

haconf -makerw

2 Add the resource to the service group.

hares -add notifier NotifierMngr ClusterService

3 Set the resource to not critical.

hares -modify notifier Critical 0

4 Set the required attributes for this resource and any optional attributes, if needed.

Solaris, HP-UX, Linux
hares -modify notifier SmtpServer localhost
hares -modify notifier SmtpRecipients -add root Warning

AIX
hares -modify notifier SmtpServer localhost
hares -modify notifier SmtpRecipients -add root Warning
hares -modify notifier PathName /xxx/xxx
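If the commands above succeed, the resource definition written to main.cf should look roughly like the following sketch (values taken from the worksheet; exact attribute ordering may differ, and PathName appears only on AIX):

```
NotifierMngr notifier (
    Critical = 0
    SmtpServer = localhost
    SmtpRecipients = { root = Warning }
    PathName = "/xxx/xxx"
)
```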

Configuring the NotifierMngr Resource

Resource Definition Sample Value Your Value

Service Group ClusterService

Resource Name notifier

Resource Type NotifierMngr

Required Attributes

SmtpServer localhost

SmtpRecipients root Warning

PathName /xxx/xxx (AIX only)

Critical? No (0)

Enabled? Yes (1)


5 Enable the resource.

hares -modify notifier Enabled 1

6 Link the notifier resource to csgProxy.

hares -link notifier csgProxy

7 Bring the resource online on the system running the ClusterService service group.

hares -online notifier -sys your_system

8 Verify that the resource is online.

hares -display notifier
ps -ef | grep notifier

9 Save the cluster configuration.

haconf -dump


1 Test the service group by switching it to the other system in the cluster.

hagrp -switch ClusterService -to other_sys

2 Verify that the service group came online properly on the other system.

hastatus -sum

3 Test the service group by switching it back to the original system in the cluster.

hagrp -switch ClusterService -to original_sys

4 Verify that the service group came online properly on the original system.

hastatus -sum

5 Set the notifier resource to critical.

hares -modify notifier Critical 1

6 Save and close the cluster configuration and view the configuration file to verify your changes.

haconf -dump -makero
view /etc/VRTSvcs/conf/config/main.cf

Note: In the next lab, you will see the effects of configuring notification and triggers when you test various resource fault scenarios.

Testing the Service Group


Use the following procedure to configure triggers for notification. In this lab, each student creates a local copy of the trigger script on their own system. If you are working alone in the cluster, copy your completed triggers to the other system.

1 Create a text file in the /opt/VRTSvcs/bin/triggers directory named resfault. Add the following lines to the file:

#!/bin/sh
echo `date` > /tmp/resfault.msg
echo message from the resfault trigger >> /tmp/resfault.msg
echo Resource $2 has faulted on System $1 >> /tmp/resfault.msg
echo Please check the problem. >> /tmp/resfault.msg
/usr/lib/sendmail root < /tmp/resfault.msg
rm /tmp/resfault.msg
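VCS passes trigger arguments positionally: for resfault, the system name arrives as $1 and the resource name as $2, which is how the script above uses them. A self-contained sketch of that calling convention, using a hypothetical demo script in /tmp (not the real trigger, and with no sendmail step):

```shell
# Write a minimal trigger-like script that echoes its arguments
# the same way the resfault script above interpolates $1 and $2.
cat > /tmp/resfault_demo <<'EOF'
#!/bin/sh
echo "Resource $2 has faulted on System $1"
EOF
chmod 755 /tmp/resfault_demo

# Simulate how VCS would invoke it: system name first, then resource name.
/tmp/resfault_demo train1 nameIP1
# prints: Resource nameIP1 has faulted on System train1
```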

2 Create a text file in the /opt/VRTSvcs/bin/triggers directory named nofailover. Add the following lines to the file.

#!/bin/sh
echo `date` > /tmp/nofailover.msg
echo message from the nofailover trigger >> /tmp/nofailover.msg
echo no failover for service group $2 >> /tmp/nofailover.msg
echo Please check the problem. >> /tmp/nofailover.msg
/usr/lib/sendmail root < /tmp/nofailover.msg
rm /tmp/nofailover.msg

Optional Lab: Configuring Triggers


3 Create a text file in the /opt/VRTSvcs/bin/triggers directory named resadminwait. Add the following lines to the file.

#!/bin/sh
echo `date` > /tmp/resadminwait.msg
echo message from the resadminwait trigger >> /tmp/resadminwait.msg
echo Resource $2 on System $1 is in adminwait for Reason $3 >> /tmp/resadminwait.msg
echo Please check the problem. >> /tmp/resadminwait.msg
/usr/lib/sendmail root < /tmp/resadminwait.msg
rm /tmp/resadminwait.msg

4 Ensure that all trigger files are executable.

chmod 744 resfault
chmod 744 nofailover
chmod 744 resadminwait
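The three chmod commands can equally be written as a loop. A self-contained sketch, using a temporary directory with empty stand-in files (the real triggers live in /opt/VRTSvcs/bin/triggers):

```shell
# Create stand-in trigger files in a scratch directory for illustration.
dir=$(mktemp -d)
cd "$dir"
touch resfault nofailover resadminwait

# Mode 744: owner read/write/execute; group and others read-only.
for f in resfault nofailover resadminwait; do
    chmod 744 "$f"
done

ls -l resfault | cut -c1-10
# prints: -rwxr--r--
```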

5 If you are working alone, copy all triggers to the other system.

Solaris, AIX, HP-UX
rcp resfault their_sys:/opt/VRTSvcs/bin/triggers
rcp nofailover their_sys:/opt/VRTSvcs/bin/triggers
rcp resadminwait their_sys:/opt/VRTSvcs/bin/triggers

Linux
scp file their_sys:directory


Lab 11 Solutions: Configuring Resource Fault Behavior


The purpose of this lab is to observe how VCS responds to faults in a variety of scenarios.

Brief instructions for this lab are located on the following page:
• “Lab 11 Synopsis: Configuring Resource Fault Behavior,” page A-55

Step-by-step instructions for this lab are located on the following page:
• “Lab 11: Configuring Resource Fault Behavior,” page B-93

Lab 11: Configuring Resource Fault Behavior

[Slide: the nameSG1 and nameSG2 service groups with the attribute settings Critical=0/1, FaultPropagation=0/1, ManageFaults=NONE/ALL, and RestartLimit=1]

Note: Network interfaces for virtual IP addresses are unconfigured to force the IP resource to fault. In your classroom, the interface you specify is: ______

Replace the variable interface in the lab steps with this value.


This part of the lab exercise explores the default behavior of VCS.

1 Open the cluster configuration.

haconf -makerw

2 Verify that all resources in the nameSG1 service group are currently set to critical; if not, set them to critical.

hares -display -attribute Critical -group nameSG1
hares -modify nameResource1 Critical 1
haconf -dump

3 Set the IP and Process resources to not critical in the nameSG1 service group.

hares -modify nameIP1 Critical 0
hares -modify nameProcess1 Critical 0

4 Change the monitor interval for the IP resource type to 10 seconds and the offline monitor interval for the IP resource type to 30 seconds.

hatype -modify IP MonitorInterval 10
hatype -modify IP OfflineMonitorInterval 30

5 Save the cluster configuration.

haconf -dump

6 Verify that your nameSG1 service group is currently online on your system. If it is not, bring it online or switch it to your system.

hastatus -sum
hagrp -switch nameSG1 -to your_sys

Non-Critical Resource Faults


7 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

Solaris
ifconfig interface removeif 192.168.xx.xx

HP-UX
ifconfig interface inet 0.0.0.0

AIX
ifconfig interface ipaddress delete

Linux
ifconfig interface down

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should fault.
The nameProcess1 resource should go offline.

b Does the service group fail over?

There should be no failover.

c Did you receive e-mail notification?

The notifier and the resfault trigger should send e-mail.

8 Clear any faults.

hares -clear nameIP1

9 Bring the IP and Process resources back online on your system.

hares -online nameIP1 -sys your_sys
hares -online nameProcess1 -sys your_sys

10 Set the IP and process resource to critical in the nameSG1 service group.

hares -modify nameIP1 Critical 1
hares -modify nameProcess1 Critical 1

11 Save the cluster configuration.

haconf -dump


1 Verify that all resources in the nameSG1 service group are currently set to critical.

hares -display -attribute Critical -group nameSG1

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

hares -modify nameResource1 Critical 1
haconf -dump

3 Verify that your nameSG1 service group is currently online on your system. If it is not online locally, bring it online or switch it to your system.

hastatus -sum
hagrp -switch nameSG1 -to your_sys

4 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

Solaris
ifconfig interface removeif 192.168.xx.xx

HP-UX
ifconfig interface inet 0.0.0.0

AIX
ifconfig interface ipaddress delete

Linux
ifconfig interface down

a What happens to the resources?

Note: The effects of stopping loopy may take up to 60 seconds to be detected.

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should fault.
The nameProcess1 resource should go offline.
All other resources come offline.

Critical Resource Faults


b Does the service group fail over?

The group should fail over to the other system (their_sys).

c Did you receive e-mail notification?

The notifier sends two e-mail messages—one for the faulted resource and one for the faulted service group. The resfault trigger should send e-mail if configured.

5 Without clearing faults from the last failover, unconfigure the virtual IP address on their system.

Solaris
rsh their_sys ifconfig interface removeif 192.168.xx.xx

HP-UX
rsh their_sys ifconfig interface inet 0.0.0.0

AIX
rsh their_sys ifconfig interface ipaddress delete

Linux
ssh -l root their_sys ifconfig interface down

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should fault.
The nameProcess1 resource should go offline.
All other resources are brought offline.

b Does the service group fail over?

The group cannot fail over because there are no failover targets left. The group stays offline.

c Did you receive e-mail notification?

The notifier sends two e-mail messages—one for the faulted resource and one for the faulted service group. The resfault and nofailover triggers should send e-mail, if configured.

6 Clear the nameIP1 resource on all systems and bring the nameSG1 service group online on your system.

hares -clear nameIP1
hagrp -online nameSG1 -sys your_sys


1 Verify that all resources in the nameSG1 service group are currently set to critical.

hares -display -attribute Critical -group nameSG1

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

hares -modify nameResource1 Critical 1
haconf -dump

3 Verify that your nameSG1 service group is currently online on your system. If it is not online locally, bring it online or switch it to your system.

hastatus -sum
hagrp -switch nameSG1 -to your_sys

4 Freeze the nameSG1 service group.

hagrp -freeze nameSG1

5 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

Solaris
ifconfig interface removeif 192.168.xx.xx

HP-UX
ifconfig interface inet 0.0.0.0

AIX
ifconfig interface ipaddress delete

Linux
ifconfig interface down

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should fault and show the state as PARTIAL|FAULTED.
The nameProcess1 resource should stay online.

Faults within Frozen Service Groups


b Does the service group fail over?

There is no failover.

c Did you receive e-mail notification?

The notifier and the resfault trigger should send e-mail.

6 Bring up the virtual IP address outside of VCS.

Solaris
ifconfig interface addif ipaddress up

HP-UX
ifconfig interface inet ipaddress

AIX
ifconfig interface inet ipaddress netmask mask alias

Linux
ifconfig interface add ipaddress up

What happens?

hares -display -group nameSG1 | grep " State"

The resource fault should clear on its own when the agent next probes the resource (after the offline monitor interval), because the IP address is now online. You can probe the resource manually to check the state more quickly.

7 Unconfigure the virtual IP address outside of VCS to fault the IP resource again. While the resource is faulted, unfreeze the service group.

Solaris ifconfig interface removeif 192.168.xx.xx

HPifconfig interface inet 0.0.0.0

AIXifconfig interface ipaddress delete

Linuxifconfig interface down

Wait for the resource to fault.

hagrp -unfreeze nameSG1


8 Did unfreezing the service group cause a failover or any resources to come offline? Explain why or why not.

No. The failover decision is made at the time of the fault.

9 Clear the fault and bring the resource online.

hares -clear nameIP1
hares -online nameIP1 -sys your_sys


This section illustrates service group failover behavior using the ManageFaults and FaultPropagation attributes.

1 Verify that all resources in the nameSG1 service group are currently set to critical.

hares -display -attribute Critical -group nameSG1

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

hares -modify nameResource1 Critical 1
haconf -dump

3 Set the FaultPropagation attribute for the nameSG1 service group to off (0).

hagrp -modify nameSG1 FaultPropagation 0

4 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

Solaris
ifconfig interface removeif 192.168.xx.xx

HP-UX
ifconfig interface inet 0.0.0.0

AIX
ifconfig interface ipaddress delete

Linux
ifconfig interface down

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should fault.
The service group is in the PARTIAL|FAULTED state.
The nameProcess1 resource should stay online.

b Does the service group fail over?

There is no failover.

Effects of ManageFaults and FaultPropagation


c Did you receive e-mail notification?

The notifier and the resfault trigger should send e-mail.

5 Clear the faulted resource and bring the resource back online.

hares -clear nameIP1
hares -online nameIP1 -sys your_sys

6 Set the ManageFaults attribute for the nameSG1 service group to NONE and set the FaultPropagation attribute back to one (1).

hagrp -modify nameSG1 ManageFaults NONE
hagrp -modify nameSG1 FaultPropagation 1

7 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

Solaris
ifconfig interface removeif 192.168.xx.xx

HP-UX
ifconfig interface inet 0.0.0.0

AIX
ifconfig interface ipaddress delete

Linux
ifconfig interface down

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should be in the ADMIN_WAIT state.
The nameProcess1 resource should stay online.

b Does the service group fail over?

There is no failover.

c Did you receive e-mail notification?

The resadminwait trigger should send e-mail.


8 Recover the resource from the ADMIN_WAIT state by bringing up the IP address outside of VCS and clearing the AdminWait attribute without a fault.

Note: The ADMIN_WAIT state can be cleared automatically if a monitor interval has run.

Solaris
ifconfig interface addif ipaddress up

HP-UX
ifconfig interface inet ipaddress

AIX
ifconfig interface inet ipaddress netmask mask alias

Linux
ifconfig interface add ipaddress up

hagrp -clearadminwait nameSG1 -sys your_sys

9 Unconfigure the interface corresponding to the virtual IP address—outside of VCS.

Solaris
ifconfig interface removeif 192.168.xx.xx

HP-UX
ifconfig interface inet 0.0.0.0

AIX
ifconfig interface ipaddress delete

Linux
ifconfig interface down

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The nameIP1 resource should be in the ONLINE|ADMIN_WAIT state.
The nameProcess1 resource should stay online.

b Does the service group fail over?

There is no failover.

c Did you receive e-mail notification?

The resadminwait trigger should send e-mail.


10 Recover the resource from the ADMIN_WAIT state by faulting the service group.

Note: The ADMIN_WAIT state can be cleared automatically if a monitor interval has run.

hagrp -clearadminwait -fault nameSG1 -sys your_sys

The group should now fail over to their_sys.

11 Clear the faulted nameIP1 resource and switch the nameSG1 service group back to your system.

hares -clear nameIP1
hagrp -switch nameSG1 -to your_sys

12 Set ManageFaults back to ALL for the nameSG1 service group and save the cluster configuration.

hagrp -modify nameSG1 ManageFaults ALL
haconf -dump


This section illustrates failover behavior of a resource type using restart limits.

1 Verify that all resources in the nameSG1 service group are currently set to critical.

hares -display -attribute Critical -group nameSG1

2 Set all resources to critical, if they are not already set, and save the cluster configuration.

hares -modify nameResource1 Critical 1
haconf -dump

3 Set the RestartLimit attribute for the Process resource type to 1.

hatype -modify Process RestartLimit 1
haconf -dump

4 Stop the loopy process running in the nameSG1 service group by sending a kill signal.

ps -ef | grep /name1/loopy
kill pid
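The pattern in this step (identify the PID, then signal it) can be sketched with a stand-in process; here a background sleep substitutes for the loopy script so the snippet is safe to run anywhere:

```shell
# Start a stand-in long-running process in the background.
sleep 300 &
pid=$!

# Send the default SIGTERM, as the plain 'kill pid' in the step above does.
kill "$pid"

# Reap the child so no zombie is left behind.
wait "$pid" 2>/dev/null

echo "signaled $pid"
```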

a What happens to the resources?

hares -display -group nameSG1 | grep " State"

The loopy process should be restarted automatically on the same system.

b Does the service group fail over?

There is no failover.

c Did you receive e-mail notification?

There is no notification of restart. However, there should be a log entry.

RestartLimit Behavior


5 Stop the loopy process running in the nameSG1 service group by sending a kill signal.

ps -ef | grep /name1/loopy
kill pid

a What happens to the resources?

Note: It can take approximately 60 seconds to see the effects of stopping the loopy process.

hares -display -group nameSG1 | grep " State"

The resource is faulted because the RestartLimit has been exceeded.

b Does the service group fail over?

The group fails over.

c Did you receive e-mail notification?

The notifier sends two e-mail messages—one for the faulted resource and one for the faulted service group. The resfault trigger should send e-mail if configured.

6 Clear the faulted resource and switch the nameSG1 service group back to your system.

hares -clear nameProcess1
hagrp -switch nameSG1 -to your_sys

7 When all students have completed the lab, save and close the configuration.

haconf -dump -makero


Lab 13 Solutions: Testing Communication Failures


The purpose of this lab is to configure a low-priority link and then pull network cables and observe how VCS responds.

Brief instructions for this lab are located on the following page:
• “Lab 13 Synopsis: Testing Communication Failures,” page A-60

Step-by-step instructions for this lab are located on the following page:
• “Lab 13 Details: Testing Communication Failures,” page B-101

Lab 13: Testing Communication Failures

[Slide: a two-node cluster (trainxx systems) with the injeopardy trigger; optional lab]

1. Configure the InJeopardy trigger (optional).
2. Configure a low-priority link.
3. Test failures.


Use the following procedure to configure triggers for jeopardy notification. In this lab, students create a local copy of the trigger script on their own systems. If you are working alone in the cluster, copy your completed triggers to the other system.

1 Create a text file in the /opt/VRTSvcs/bin/triggers directory named injeopardy. Add the following lines to the file:

#!/bin/sh
echo `date` > /tmp/injeopardy.msg
echo message from the injeopardy trigger >> /tmp/injeopardy.msg
echo System $1 is in Jeopardy >> /tmp/injeopardy.msg
echo Please check the problem. >> /tmp/injeopardy.msg
/usr/lib/sendmail root < /tmp/injeopardy.msg
rm /tmp/injeopardy.msg

2 Make the trigger file executable.

chmod 744 injeopardy

3 If you are working alone, copy the trigger to the other system.

Solaris, AIX
rcp injeopardy their_sys:/opt/VRTSvcs/bin/triggers/injeopardy

HP-UX
rcp injeopardy their_sys:/opt/VRTSvcs/bin/triggers/injeopardy

Linux
scp injeopardy their_sys:/opt/VRTSvcs/bin/triggers/injeopardy

4 Continue with the next lab sections. The “Multiple LLT Link Failures—Jeopardy” section of this lab shows the effects of configuring the InJeopardy trigger.

Optional Lab: Configuring the InJeopardy Trigger


Working with your lab partner, use the procedures to create a low-priority link and then fault communication links and observe what occurs in a cluster environment when fencing is not configured.

Adding a Low Priority Link

Object: Public Ethernet interface for link low-pri
Sample Value: Solaris: eri0; Sol Mob: dmfe0; AIX: en1; HP-UX: lan0; Linux: eth0; VA: bge0
Your Value: ______

Object: Cluster interconnect link 1
Sample Value: Solaris: qfe0; Sol Mob: dmfe0; AIX: en2; HP-UX: lan1; Linux: eth1; VA: bge2
Your Value: ______

Object: Cluster interconnect link 2
Sample Value: Solaris: qfe1; Sol Mob: dmfe1; AIX: en3; HP-UX: lan2; Linux: eth2; VA: bge3
Your Value: ______

Object: Host name for sysname file for your_sys
Sample Value: train1
Your Value: ______

Object: Host name for sysname file for their_sys
Sample Value: train2
Your Value: ______


1 Save and close the cluster configuration.

haconf -dump -makero

2 Shut down VCS, leaving the applications running on all systems in the cluster.

hastop -all -force

3 Unconfigure GAB on each system in the cluster.

gabconfig -U

4 Unconfigure LLT on each system in the cluster.

lltconfig -U


5 Edit the /etc/llttab LLT configuration file on each system to add a directive for a low-priority LLT link on the public network.

Solaris Mobile
Skip this step for mobile classrooms. There is only one public interface, and it is already configured as a low-priority link.

Solaris Example
set-cluster 1
set-node train1
link tag1 /dev/qfe:0 - ether - -
link tag2 /dev/qfe:1 - ether - -
link-lowpri tag3 /dev/eri:0 - ether - -

AIX Example
set-cluster 1
set-node train1
link tag1 /dev/en:2 - ether - -
link tag2 /dev/en:3 - ether - -
link-lowpri tag3 /dev/en:1 - ether - -

HP-UX Example
set-cluster 10
set-node train1
link tag1 /dev/lan:1 - ether - -
link tag2 /dev/lan:2 - ether - -
link-lowpri tag3 /dev/lan:0 - ether - -

Linux Example
set-cluster 1
set-node train1
link tag1 eth1 - ether - -
link tag2 eth2 - ether - -
link-lowpri tag3 eth0 - ether - -

Virtual Academy Example
set-cluster 1
set-node train1
link tag1 bge2 - ether - -
link tag2 bge3 - ether - -
link-lowpri tag3 bge0 - ether - -


6 Start LLT on each system.

lltconfig -c

7 Verify that LLT is running.

lltconfig

8 Start GAB on each system.

sh /etc/gabtab

Alternatively, you can start GAB using gabconfig. However, sourcing the gabtab is preferred to ensure any changes to /etc/gabtab you may have made are tested.

gabconfig -c -n 2

9 Verify GAB membership.

gabconfig -a

10 Start VCS on each system.

hastart

11 Verify that VCS is running.

hastatus -sum


Note: For Solaris mobile classrooms, skip this section.

1 Copy the lltlink_enable and lltlink_disable utilities from the location provided by your instructor into the /tmp directory.

_____________________________________________________________

2 Change to the /tmp directory.

cd /tmp

3 Change the NIC resource type MonitorInterval attribute to 3600 seconds temporarily for communications testing. This prevents the NetworkNIC resource from faulting during this lab when the low-priority LLT link is pulled.

a Open the cluster configuration.

haconf -makerw

b Modify the MonitorInterval attribute.

hatype -modify NIC MonitorInterval 3600

c Save and close the cluster configuration.

haconf -dump -makero

4 Throughout this lab, use the lltlink_disable command to simulate failure of an LLT link where you are instructed to remove a link.

Notes:
– Use lltlink_enable to restore the LLT link.
– The utilities prompt you to select an interface.
– These classroom utilities are provided to enable you to simulate disconnecting and reconnecting Ethernet cables without risk of damaging connectors.
– Run the utility from one system only, unless otherwise specified.

Single LLT Link Failure


5 Using the lltlink_disable utility, remove one LLT link and watch for the link to expire in the console or system log file.

Use the lltlink_disable utility to simulate failure of an LLT link (private or low-priority). Type:

./lltlink_disable

Select a link from the displayed list.

6 Verify that the link is down.

lltstat -nvv

7 Restore communications using the lltlink_enable utility.

Replace the removed cable. To use the lltlink_enable utility, type:

./lltlink_enable

Select a link from the displayed list.

8 Verify that the link is now up and communications are restored.

lltstat -nvv


1 Verify the status of GAB.

gabconfig -a

All nodes should have regular membership.

2 Use lltlink_disable to remove all but one LLT link and watch for the link to expire in the console.

Use lltlink_disable to remove all but one LLT link from operation (private or low-priority).

./lltlink_disable
Select the first LLT link from the list.

./lltlink_disable
Select the next LLT link from the list.

Solaris MobileRemove only the one high-priority LLT link (dmfe1).

3 Verify that the links are down.

lltstat -nvv

4 Verify the status of GAB.

gabconfig -a

One node should have jeopardy membership.

5 Restore communications using lltlink_enable.

Replace removed cables.

./lltlink_enable
Select the first LLT link to restore.

./lltlink_enable
Select the second LLT link to restore.

Multiple LLT Link Failures—Jeopardy


6 Verify that the link is now up and communications are restored.

lltstat -nvv

7 Verify the status of GAB.

gabconfig -a

All nodes should have regular membership.


1 Verify the status of GAB from each system.

gabconfig -a

All nodes should have regular membership.

2 Remove all but one LLT link and watch for the link to expire in the console or system log.

Disable all but one LLT link (private or low priority). For each link, type:

./lltlink_disable

Solaris Mobile: Disable only the one high-priority LLT link (dmfe1).

3 Verify that the links are down from each system.

lltstat -nvv

4 Verify the status of GAB from each system.

gabconfig -a

One node should have jeopardy membership.

5 Remove the last LLT link and watch for the link to expire in the console.

Disable the last LLT link using lltlink_disable.

./lltlink_disable

6 Verify that all links are down from each system.

lltstat -nvv

7 Verify the status of GAB from each system.

gabconfig -a

Each side of the cluster should only have membership for its node.

Multiple LLT Link Failures—Network Partition


8 What is the status of service groups running on each system?

hastatus -sum

The jeopardy condition should have autodisabled the service groups on the systems where they are not running. A split-brain condition has been avoided.
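Autodisabled groups can be spotted quickly by filtering the hastatus -sum output. This sketch uses an illustrative sample of the group-state section (group lines begin with B, with AutoDisabled as the fifth column); verify the column layout on your VCS version before relying on it.

```shell
#!/bin/sh
# Sketch: list service groups VCS has autodisabled, from saved
# `hastatus -sum` output. Sample layout is illustrative; on a cluster
# node you would run `hastatus -sum > /tmp/hastatus.out`.
cat > /tmp/hastatus.out <<'EOF'
-- GROUP STATE
-- Group        System   Probed  AutoDisabled  State
B  nameSG1      train11  Y       N             ONLINE
B  nameSG1      train12  Y       Y             OFFLINE
B  nameSG2      train12  Y       Y             OFFLINE
EOF
# Field 1 "B" marks a group-state row; field 5 is the AutoDisabled flag.
awk '$1 == "B" && $5 == "Y" { print $2, "autodisabled on", $3 }' /tmp/hastatus.out
```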

9 Recover from the network partition.

a Stop HAD on one system but leave services running.

Note: If you have more than two systems in the cluster, you must stop HAD on all systems on either side of the network partition.

hastop -local -force

b If you physically unplugged cables, restore communications by reconnecting the LLT link cables.

Note: If you used lltlink_disable to simulate link failure, skip this step.

c Verify that the LLT connections are up.

lltstat -nvv

d Verify that GAB has proper membership.

gabconfig -a

All nodes should have regular membership.

e Start VCS on the system where you stopped it.

hastart

f Verify that each service group is autoenabled.

hastatus -sum


10 Change the NIC resource type MonitorInterval attribute back to 60 seconds.

a Open the cluster configuration.

haconf -makerw

b Modify the MonitorInterval attribute.

hatype -modify NIC MonitorInterval 60

c Save and close the cluster configuration.

haconf -dump -makero


Lab 14 Solutions: Configuring I/O Fencing C–163Copyright © 2005 VERITAS Software Corporation. All rights reserved.


Lab 14 Solutions: Configuring I/O Fencing


The purpose of this lab is to set up I/O fencing in a two-node cluster and simulate node and communication failures.

Brief instructions for this lab are located on the following page:
• “Lab 14 Synopsis: Configuring I/O Fencing,” page A-66

Step-by-step instructions for this lab are located on the following page:
• “Lab 14: Configuring I/O Fencing,” page B-111

Lab 14: Configuring I/O Fencing

Work with your lab partner to configure fencing.

[Slide: two-node cluster (trainxx systems) with data disk groups nameDG1 and nameDG2 and three coordinator disks]

Coordinator Disks (record your disk assignments):

Disk 1:___________________

Disk 2:___________________

Disk 3:___________________


Lab Assignments

Working with your lab partner, use the following procedure and the information provided in the table to configure fencing for your cluster.

Object                                        Sample Value                 Your Value

Disk assignments for coordinator disk group   cXtXdXsX cXtXdXsX cXtXdXsX   __________

Disk group name                               oddfendg or evenfendg        __________

/etc/vxfendg                                  oddfendg or evenfendg        __________

UseFence cluster attribute                    SCSI3                        __________


1 Configure a disk group for the coordinator disks.

a Initialize three disks for use in the disk group.

vxdisksetup -i coor_disk1
vxdisksetup -i coor_disk2
vxdisksetup -i coor_disk3

b Display your cluster ID. Your cluster ID determines your coordinator disk group name.

cat /etc/llttab

c Initialize the disk group.

› If your cluster ID is odd, use oddfendg for the disk group name.

vxdg init oddfendg coor_disk1 coor_disk2 coor_disk3

› If your cluster ID is even, use evenfendg for the disk group name.

vxdg init evenfendg coor_disk1 coor_disk2 coor_disk3

Note: Replace the placeholder string "______fendg" with the appropriate odd or even coordinator disk group name throughout the remainder of this lab.
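The odd/even choice above can be scripted from the set-cluster line in /etc/llttab. This sketch reads a sample copy of the file so that it can run anywhere; on a cluster node you would point it at /etc/llttab itself.

```shell
#!/bin/sh
# Sketch: pick the coordinator disk group name from the cluster ID
# found on the "set-cluster" line of llttab. Sample file is illustrative.
cat > /tmp/llttab.sample <<'EOF'
set-node train11
set-cluster 7
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
EOF
cid=$(awk '$1 == "set-cluster" { print $2 }' /tmp/llttab.sample)
# Odd cluster ID -> oddfendg; even -> evenfendg (per the lab convention).
if [ $((cid % 2)) -eq 1 ]; then
    fendg=oddfendg
else
    fendg=evenfendg
fi
echo "$fendg"
```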

d Deport the disk group.

vxdg deport ______fendg

Configuring Disks and Fencing Driver


2 Optional for the classroom: Use the vxfentsthdw utility to verify that the shared storage disks support SCSI-3 persistent reservations.

Notes:
– For the purposes of this lab, you do not need to test the disks. The disks used in this lab support SCSI-3 persistent reservations. The complete steps are given here as a guide for real-world use.
– To see how the command is used, you can run vxfentsthdw on a disk not in use; this enables you to continue with the lab while vxfentsthdw is running.
– Create a test disk group with one disk and run vxfentsthdw on that test disk group:

vxfentsthdw -g testdg

– Use the -r option to perform read-only testing of data disks.

3 Enter the coordinator disk group name in the /etc/vxfendg fencing configuration file on each system in the cluster.

echo "______fendg" > /etc/vxfendg

4 Start the fencing driver on each system using the vxfen init script.

/etc/init.d/vxfen start

5 Verify that the /etc/vxfentab file has been created on each system and it contains a list of the coordinator disks.

cat /etc/vxfentab

6 Verify the setup of the coordinator disks.

a Verify that port b GAB membership is listed for both nodes.

gabconfig -a

GAB should show port a, b, and h membership for nodes 0 and 1.
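Port membership can also be confirmed by filtering gabconfig -a output for the expected ports. The sample output below is illustrative; membership 01 indicates nodes 0 and 1.

```shell
#!/bin/sh
# Sketch: check that ports a (GAB), b (fencing), and h (HAD) each show
# membership for nodes 0 and 1 in saved `gabconfig -a` output.
cat > /tmp/gab.out <<'EOF'
GAB Port Memberships
===============================================================
Port a gen a36e0003 membership 01
Port b gen a36e0006 membership 01
Port h gen fd570002 membership 01
EOF
for p in a b h; do
    if grep -q "^Port $p .* membership 01" /tmp/gab.out; then
        echo "port $p: nodes 0 and 1"
    else
        echo "port $p: missing membership"
    fi
done
```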


b Verify that registrations are assigned to the coordinator disks.

vxfenadm -g all -f /etc/vxfentab

c How many keys are present for each disk and why?

There should be A------- keys for LLT node 0 and B------- keys for LLT node 1 on each coordinator disk for each path to that coordinator disk.

Example:

Device Name: /dev/rdsk/c1t9d0s2

Total Number Of Keys: 2

key[0]:

Key Value [Numeric Format]: 65,45,45,45,45,45,45,45

Key Value [Character Format]: A-------

key[1]:

Key Value [Numeric Format]: 66,45,45,45,45,45,45,45

Key Value [Character Format]: B-------

Device Name: /dev/rdsk/c1t10d0s2

Total Number Of Keys: 2

key[0]:

Key Value [Numeric Format]: 65,45,45,45,45,45,45,45

Key Value [Character Format]: A-------

key[1]:

Key Value [Numeric Format]: 66,45,45,45,45,45,45,45

Key Value [Character Format]: B-------

Device Name: /dev/rdsk/c1t11d0s2

Total Number Of Keys: 2

key[0]:

Key Value [Numeric Format]: 65,45,45,45,45,45,45,45

Key Value [Character Format]: A-------

key[1]:

Key Value [Numeric Format]: 66,45,45,45,45,45,45,45

Key Value [Character Format]: B-------
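The numeric and character key formats shown above are two views of the same bytes: each number is an ASCII code (65 is A, 86 is V, 67 is C, 83 is S, and 45 is the dash). A small sketch of the conversion:

```shell
#!/bin/sh
# Sketch: decode a SCSI-3 key from vxfenadm's numeric form into its
# character form. Zero bytes are skipped, so 65,86,67,83,0,... -> AVCS.
decode_key() {
    echo "$1" | awk -F, '{
        s = ""
        for (i = 1; i <= NF; i++) if ($i > 0) s = s sprintf("%c", $i)
        print s
    }'
}
decode_key 65,45,45,45,45,45,45,45    # A-------
decode_key 65,86,67,83,0,0,0,0        # AVCS
```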


1 On each system, verify that you have a Storage Foundation Enterprise license installed for fencing support using vxlicrep.

vxlicrep

Check for this output:

Product Name = VERITAS Storage Foundation Enterprise
PGR#VERITAS Volume Manager = Enabled

2 Working together, verify that the cluster configuration is saved and closed.

haconf -dump -makero

3 Change to the VCS configuration directory.

cd /etc/VRTSvcs/conf/config

4 Make a subdirectory named test, if one does not already exist.

mkdir test

5 Copy the main.cf and types.cf files into the test subdirectory.

cp main.cf types.cf test

6 Change to the test directory.

cd test

Configuring VCS for Fencing


7 Edit the main.cf file in the test directory on one system in the cluster to set the value of UseFence to SCSI3.

Partial Example:

# vi main.cf

cluster vcs (
    UserNames = { admin = ElmElgLimHmmKumGlj }
    ClusterAddress = "192.168.27.51"
    Administrators = { admin }
    CounterInterval = 5
    UseFence = SCSI3
    . . .
    )

8 Verify the cluster configuration and correct any errors found.

hacf -verify /etc/VRTSvcs/conf/config/test

9 Stop VCS and shut down the applications. The disk groups must be reimported for fencing to take effect.

hastop -all

10 Copy the main.cf file from the test subdirectory into the configuration directory.

cp main.cf ../main.cf

11 Start the cluster from the system where you edited the configuration file.

hastart

12 Start the cluster in the stale state on the other system in the cluster (where the configuration was not edited).

hastart -stale

13 Verify the status of the cluster.

hastatus -summary

14 Verify that the UseFence cluster attribute is set.

haclus -value UseFence


1 If the service groups with disk groups did not come online at cluster startup, bring them online now. This imports the disk groups, which initiates fencing on the data disks. Each student can perform these steps on their service groups.

hastatus -sum
hagrp -online nameSG1 -sys your_sys
hagrp -online nameSG2 -sys your_sys

2 Verify registrations and reservations on the data disks.

There should be AVCS keys on disk groups imported on LLT node 0 and BVCS keys on disk groups imported on LLT node 1.

# vxfenadm -g /dev/rdsk/data_disk1

Reading SCSI Registration Keys...

Device Name: /dev/rdsk/c1t0d0s2
Total Number Of Keys: 1
key[0]:

Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS

# vxfenadm -r /dev/rdsk/data_disk2

Reading SCSI Reservation Information...

Device Name: /dev/rdsk/c1t0d0s2
Total Number Of Keys: 1
Key[0]:

Reservation Type: SCSI3_RESV_WRITEEXCLUSIVEREGISTRANTSONLY

Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS

Verifying Data Disks for I/O Fencing


In most cases, the following sections require that you work together with your lab partner to observe how fencing protects data in a variety of failure situations. Steps you can perform on your own are indicated within the procedure.

Scenario 1: Manual Concurrency Violation

Students can try this scenario on their own. Try to import a disk group that is already imported on one system onto the other system, using vxdg with the -C option.

1 On the system where nameDG1 is not imported, attempt to import it manually, clearing the host locks.

vxdg -C import nameDG1

2 Were you successful? Describe why or why not.

This command should fail because the node where the disk group is not imported does not have rights to write to the disks, and therefore cannot import the disk group and update the private region header information. The error message should say:

VxVM vxdg ERROR V-5-1-587 Disk group nameDG1: import failed: No valid disk found containing disk group

This indicates that data corruption from a possible concurrency violation has been prevented.

Scenario 2: Response to System Failure

Work with your lab partner to observe how VCS responds to system failures.

1 Verify that the nameSG1 and nameSG2 service groups are online on your system if two students are working on the cluster. If you are working alone, ensure that you have a service group online on each system. This scenario requires that disk groups be imported on each system. Switch them, if necessary.

hastatus -sum

2 Verify the registrations on the coordinator disks for both systems.

vxfenadm -g all -f /etc/vxfentab

There should be registrations for both systems.

Testing Communication Failures


3 Verify the registrations and reservations on the data disks for the disk groups imported on each system.

# vxdisk list

# vxfenadm -g /dev/rdsk/data_disk

Reading SCSI Registration Keys...

Device Name: /dev/rdsk/c1t0d0s2
Total Number Of Keys: 1
key[0]:

Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS

# vxfenadm -r /dev/rdsk/data_disk

Reading SCSI Reservation Information...

Device Name: /dev/rdsk/c1t0d0s2
Total Number Of Keys: 1
Key[0]:

Reservation Type: SCSI3_RESV_WRITEEXCLUSIVEREGISTRANTSONLY

Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS

4 Fail one of the systems by removing power or hard booting the system. Observe the failure.

LLT and GAB should time out heartbeats from the failed system. The remaining system should fence off the failed system, ejecting its keys from the disks.


5 Verify the registrations on the coordinator disks for the remaining system.

There should be registrations for only the remaining system.

# vxfenadm -g all -f /etc/vxfentab

Device Name: /dev/rdsk/c1t9d0s2
Total Number Of Keys: 1
key[0]:

Key Value [Numeric Format]: 65,45,45,45,45,45,45,45
Key Value [Character Format]: A-------

Device Name: /dev/rdsk/c1t10d0s2
Total Number Of Keys: 1
key[0]:

Key Value [Numeric Format]: 65,45,45,45,45,45,45,45
Key Value [Character Format]: A-------

Device Name: /dev/rdsk/c1t11d0s2
Total Number Of Keys: 1
key[0]:

Key Value [Numeric Format]: 65,45,45,45,45,45,45,45
Key Value [Character Format]: A-------

6 Verify that the service groups that were running on the failed system have failed over to the remaining system.

hastatus -sum

7 Verify that the registrations and reservations on the data disks are now for the remaining system.

# vxdisk list

# vxfenadm -g /dev/rdsk/data_disk

Reading SCSI Registration Keys...

Device Name: /dev/rdsk/c1t0d0s2
Total Number Of Keys: 1
key[0]:

Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS


# vxfenadm -r /dev/rdsk/data_disk

Reading SCSI Reservation Information...

Device Name: /dev/rdsk/c1t0d0s2
Total Number Of Keys: 1
Key[0]:

Reservation Type: SCSI3_RESV_WRITEEXCLUSIVEREGISTRANTSONLY

Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS

8 Boot the failed system and observe it rejoin cluster membership. Verify cluster membership and verify that the coordinator disks have registrations for both systems again.

gabconfig -a
vxfenadm -g all -f /etc/vxfentab

Scenario 3: Response to Interconnect Failures

Work with your lab partner to observe how VCS responds to cluster interconnect failures.

1 If you did not already perform this step in the “Testing Communication Failures” lab, copy the lltlink_enable and lltlink_disable utilities from the location provided by your instructor into the /tmp directory.

_____________________________________________________________

2 Change to the /tmp directory.

cd /tmp

3 Change the NIC resource type MonitorInterval attribute to 3600 seconds temporarily for the purposes of communications testing. This prevents the NetworkNIC resource from faulting during this lab when the low-priority LLT link is pulled.

a Open the cluster configuration.

haconf -makerw


b Modify the MonitorInterval attribute.

hatype -modify NIC MonitorInterval 3600

c Save and close the cluster configuration.

haconf -dump -makero

4 Verify that the nameSG1 and nameSG2 service groups are online on your system if two students are working on the cluster. If you are working alone, ensure that you have a service group online on each system. This scenario requires that one disk group be imported on each system. Switch the service groups, if necessary.

hastatus -sum

5 Verify the registrations on the coordinator disks for both systems.

vxfenadm -g all -f /etc/vxfentab

There should be registrations for both systems.

6 Verify the registrations and reservations on the data disks for the disk groups imported on each system.

vxdisk list

vxfenadm -g /dev/rdsk/data_disk
vxfenadm -g /dev/rdsk/data_disk
. . .

7 Using the lltlink_disable utility, remove all cluster interconnect links from one system. Watch for the link to expire in the console.

For each LLT link, type:

./lltlink_disable

Select an LLT link.

8 Observe LLT and GAB timeouts and membership change.

lltstat -nvv
gabconfig -a


9 What happens to the systems?

One side of the cluster should panic and reboot. When the rebooted system is back up, VCS cannot start there because it cannot seed.

10 On one system, view the registrations for the coordinator disks.

vxfenadm -g all -f /etc/vxfentab

Only one system’s keys are displayed on the coordinator disks. The other keys have been rejected.

11 What happens to the service groups?

hastatus -sum

The service groups that were running on the system that rebooted have failed over to the running system.

12 Verify that the registrations and reservations on the data disks are now for the remaining system.

vxdisk list

vxfenadm -g /dev/rdsk/data_disk
. . .

Only one system’s keys are shown on the data disks. The other keys have been rejected.

13 When the system that rebooted is running, check the status of GAB and HAD.

gabconfig -a

This system is not listed in the GAB, Fence, or HAD membership. It is waiting to seed.

14 Verify that the coordinator disks have registrations for the remaining system only.

vxfenadm -g all -f /etc/vxfentab


15 Recover the system that rebooted.

a Shut down the system.

shutdown -y

b If you physically unplugged the Ethernet cables for the LLT links, reconnect the cluster interconnects.

If you used lltlink_disable to simulate interconnect failure, skip this step.

c Reboot the system.

16 Verify that cluster membership has been established for both systems and both systems are now registered with the coordinator disks.

gabconfig -a

vxfenadm -g all -f /etc/vxfentab

17 Set the monitor interval for the NIC resource type back to 60.

haconf -makerw
hatype -modify NIC MonitorInterval 60
haconf -dump -makero


Note: Do not complete this section unless directed by your instructor.

1 Verify that the cluster configuration is saved and closed.

haconf -dump -makero

2 Stop VCS and all service groups.

hastop -all

3 Unconfigure the fencing driver.

/etc/init.d/vxfen stop

4 From one system, import and remove the coordinator disk group.

vxdg import ______fendg
vxdg destroy ______fendg

5 Use the offline configuration procedure to set the UseFence cluster attribute to the value NONE in the main.cf file and restart the cluster with the new configuration. Note: You cannot set UseFence dynamically while VCS is running.

a Change to the configuration directory.

cd /etc/VRTSvcs/conf/config

b Copy the main.cf file into the test subdirectory.

cp main.cf test

c Edit the main.cf file in the test directory on one system in the cluster to set the value of UseFence to NONE.
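As a sketch of this offline edit, the UseFence change can be made with sed on a scratch copy. The paths and file contents below are illustrative only; always verify the result with hacf -verify, as the next step does.

```shell
#!/bin/sh
# Sketch: flip UseFence from SCSI3 to NONE in a scratch copy of main.cf.
# Illustrative paths; on a cluster you would work under
# /etc/VRTSvcs/conf/config/test and then run hacf -verify.
mkdir -p /tmp/vcstest
cat > /tmp/vcstest/main.cf <<'EOF'
cluster vcs (
    UserNames = { admin = ElmElgLimHmmKumGlj }
    CounterInterval = 5
    UseFence = SCSI3
    )
EOF
sed 's/UseFence = SCSI3/UseFence = NONE/' /tmp/vcstest/main.cf \
    > /tmp/vcstest/main.cf.new
grep "UseFence" /tmp/vcstest/main.cf.new
```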

Optional: Removing the Fencing Configuration


Partial Example:

# vi main.cf

cluster vcs (
    UserNames = { admin = ElmElgLimHmmKumGlj }
    ClusterAddress = "192.168.27.51"
    Administrators = { admin }
    CounterInterval = 5
    UseFence = NONE
    . . .
    )

6 Verify the cluster configuration and correct any errors found.

hacf -verify /etc/VRTSvcs/conf/config/test

7 Copy the main.cf file back into the /etc/VRTSvcs/conf/config directory.

cp main.cf ..

8 Start the cluster from the system where you edited the configuration file.

hastart

9 Start the cluster in the stale state on the other system in the cluster (where the configuration was not edited).

hastart -stale

10 Verify the status of the cluster.

hastatus -summary


Appendix D
Job Aids


D–2 VERITAS Cluster Server for UNIX, Fundamentals Copyright © 2005 VERITAS Software Corporation. All rights reserved.

Cluster System States

STALE States
• STALE_ADMIN_WAIT: The system has a stale configuration and no other system is in the RUNNING state.
• STALE_DISCOVER_WAIT: The system joined the cluster with an invalid configuration file and is waiting for information from peers.
• STALE_PEER_WAIT: The system has no valid configuration file, but another system is doing a build from disk.

WAIT States
• ADMIN_WAIT: This state can occur under these circumstances:
  – A .stale flag exists and the main.cf file has a syntax problem.
  – The system is in local build and receives a disk error while reading main.cf.
  – The system is in remote build and the last running system fails.
• CURRENT_DISCOVER_WAIT: The system has joined a cluster and its configuration file is valid.
• CURRENT_PEER_WAIT: The system has a valid configuration file and another system is building a configuration from disk.

BUILD States
• LOCAL_BUILD: The system is building a configuration from disk.
• REMOTE_BUILD: The system is building a configuration from a peer.

Startup States and Transitions

[Figure: startup state transition diagram. After hastart, a system moves from INITING to CURRENT_DISCOVER_WAIT (valid configuration on disk) or STALE_DISCOVER_WAIT (stale configuration on disk), and then through LOCAL_BUILD (no peer), REMOTE_BUILD (peer in RUNNING, or peer starts LOCAL_BUILD), CURRENT_PEER_WAIT, STALE_PEER_WAIT, or STALE_ADMIN_WAIT toward the RUNNING state; a disk error or a peer in ADMIN_WAIT leads to ADMIN_WAIT, and UNKNOWN results if the only peer in the RUNNING state crashes.]


EXITING States
• LEAVING: The system is leaving the cluster gracefully. When agents have been stopped, the system transitions to the EXITING state.
• EXITING: The system is leaving the cluster.
• EXITED: The system has left the cluster.
• EXITING_FORCIBLY: The hastop -local -force command has caused the system to exit the cluster. Agents are stopped but applications continue to run.

OTHER States
• RUNNING: The system is an active member of the cluster.
• FAULTED: The system is leaving the cluster unexpectedly (ungracefully).
• INITING: The system has joined the cluster.
• UNKNOWN: The system has no entry in the configuration and has not joined the cluster.

Shutdown States and Transitions

[Figure: shutdown state diagram. hastop takes a RUNNING system to LEAVING, then to EXITING and EXITED as resources are taken offline and agents are stopped; hastop -local -force moves it to EXITING_FORCIBLY; an unexpected exit results in FAULTED.]


Resource States and Transitions

The diagram shows resource states and the transitions between those states.

[Figure: resource state diagram showing the UP, DOWN, and UNKNOWN states with online, offline, fault, and clean transitions.]


Service Group Configuration Procedure

Use this procedure to create a service group.

Note: When you switch a service group to another system, keep the service group running on that system for the duration of the OfflineMonitorInterval (the default is five minutes) to ensure that the agents properly report all resources offline on other systems.

Configuring a Service Group

[Flow chart: Add Service Group; Set SystemList; Set Opt Attributes; Add/Test Resource (More? loop; see the Resource Flow Chart); Link Resources; Set Critical Res; Test Switching; Test Failover; Success? If yes, Done; if no, Check Logs/Fix and retest.]


Resource Configuration Procedure

Use this procedure to configure and test resources.

*Note: Some resources do not need to be disabled and reenabled. Only resources whose agents have open and close entry points, such as MultiNICA, require you to disable and enable them again after fixing the problem. By contrast, a Mount resource does not need to be disabled if, for example, you incorrectly specify the MountPoint attribute.

Configuring a Resource

[Flow chart with steps: Add Resource; Set Non-Critical; Modify Attributes; Bring Online; Online?; Faulted?; Waiting to Go Online; Check Log; Verify Offline (OS) Everywhere; Flush Group; Clear Resource; Disable Resource*; Enable Resource*; Done.]


List of Notifier Events and Traps

The following tables specify which events generate traps, e-mail notification, or both. Note that SevereError indicates the highest severity level, and Information, the lowest. Traps specific to the Global Cluster option are ranked from Critical, the highest severity, to Normal, the lowest.

Clusters

• Global service group is online/partial on multiple clusters (Global Cluster option) [Critical]: A concurrency violation has occurred for the global service group.
• Attributes for global service groups are mismatched (Global Cluster option) [Major]: The attributes ClusterList, AutoFailover, and Parallel are mismatched for the same global service group on different clusters.
• Remote cluster has faulted (Global Cluster option) [Major]: The trap for this event includes information on how to take over the global service groups running on the remote cluster before the cluster faulted.
• Heartbeat is down [Warning]: The connector on the local cluster has lost its heartbeat connection to the remote cluster.
• Remote cluster is in RUNNING state (Global Cluster option) [Normal]: The local cluster has a complete snapshot of the remote cluster, indicating the remote cluster is in the RUNNING state.
• Heartbeat is “alive” (Global Cluster option) [Normal]: Self-explanatory.
• User has logged on to VCS [Information]: A user logon has been recognized because a user logged on via Cluster Manager, or because a haxxx command was invoked.

Agents

• Agent is faulted [Warning]: The agent has faulted on one node in the cluster.
• Agent is restarting [Information]: VCS is restarting the agent.


Resources

• Resource state is unknown [Warning]: VCS cannot identify the state of the resource.
• Resource monitoring has timed out [Warning]: The monitoring mechanism for the resource has timed out.
• Resource is not going offline [Warning]: VCS cannot take the resource offline.
• Cluster resource health is declined [Warning]: Used by agents to give additional information on the state of a resource; the health of the resource declined while it was online.
• Resource went online by itself [Warning (not for first probe)]: The resource was brought online on its own.
• Resource has faulted [Error]: Self-explanatory.
• Resource is being restarted by agent [Information]: The resource is being restarted by its agent.
• Cluster resource health is improved [Information]: Used by agents to give extra information about the state of a resource; the health of the resource improved while it was online.

Systems

• VCS is being restarted by hashadow [Warning]: Self-explanatory.
• VCS is in jeopardy [Warning]: One node running VCS is in jeopardy.
• VCS is up on the first node in the cluster [Information]: Self-explanatory.
• VCS has faulted [Information]: Self-explanatory.
• A node running VCS has joined the cluster [Information]: Self-explanatory.
• VCS has exited manually [Information]: VCS has exited gracefully from one node on which it was previously running.
• VCS is up but is not in the cluster [Information]: VCS is running on one node but the node is not visible.


Service Groups

• Service group has faulted [Error]: Self-explanatory.
• Service group has a concurrency violation [SevereError]: A failover service group has come online on more than one node in the cluster.
• Service group has faulted and cannot be failed over anywhere [SevereError]: The specified service group has faulted on all nodes where the group could be brought online, and there are no nodes to which the group can fail over.
• Service group is online [Information]: Self-explanatory.
• Service group is offline [Information]: Self-explanatory.
• Service group is autodisabled [Information]: VCS has autodisabled the specified group because one node exited the cluster.
• Service group is restarting [Information]: Self-explanatory.
• Service group is being switched [Information]: The service group is being taken offline on one node and being brought online on another.
• Service group is restarting in response to a persistent resource going online [Information]: Self-explanatory.


Example Bundled Agent Reference Guide Entries

NIC Agent

Description: Monitors the configured NIC. If a network link fails, or if a problem arises with the device card, the resource is marked OFFLINE. The NIC listed in the Device attribute must have an administration IP address, which is the default IP address assigned to the physical interface of a host on a network. This agent does not configure network routes or administration IP addresses.

Entry Point: Monitor—Tests the network card and network link. Pings the network hosts or the broadcast address of the interface to generate traffic on the network. Counts the number of packets passing through the device before and after the address is pinged. If the count decreases or remains the same, the resource is marked OFFLINE.

State Definitions:
ONLINE—Indicates that the NIC is working.
OFFLINE—Indicates that the NIC has failed.
UNKNOWN—Indicates that the device is not configured or is configured incorrectly.
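The monitor's decision rule can be sketched in isolation: after generating ping traffic, compare the interface packet count before and after. How the counts are obtained varies by OS (for example, kstat on Solaris), so this illustrative sketch takes them as arguments rather than reading a real interface.

```shell
#!/bin/sh
# Sketch of the NIC monitor's decision logic only: the resource is
# ONLINE when the packet count increased after pinging, and OFFLINE
# when the count decreased or stayed the same.
nic_state() {
    before=$1
    after=$2
    if [ "$after" -gt "$before" ]; then
        echo ONLINE
    else
        echo OFFLINE
    fi
}
nic_state 1000 1042    # ONLINE  (traffic observed after the ping)
nic_state 1000 1000    # OFFLINE (count unchanged)
```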

Required Attribute:
Device (string-scalar): Name of the NIC.

Optional Attributes:
NetworkHosts (string-vector): List of hosts on the network. If network hosts are specified, the agent pings them to determine whether the network connection is alive. Enter the IP address of the host instead of the host name to prevent the monitor from timing out (DNS problems cause the ping to hang); for example, 166.96.15.22. If network hosts are not specified, the monitor tests the NIC by pinging the broadcast address on the NIC. If more than one network host is listed, the monitor returns ONLINE if at least one of the hosts is alive.

NetworkType (string-scalar): Type of network. VCS currently supports only Ethernet (ether).

PingOptimize (integer-scalar): Number of monitor cycles used to detect whether the configured interface is inactive. A value of 1 optimizes broadcast pings and requires two monitor cycles. A value of 0 performs a broadcast ping during each monitor cycle and detects the inactive interface within the cycle. Default is 1.


Requirements for NIC
• Verify that each NIC has the correct administrative IP address and subnet mask.
• Verify that each NIC does not have built-in failover support. If it does, disable it. (If necessary, refer to the NIC documentation.)

Type Definition

type NIC (

static str ArgList[] = { Device, NetworkType, NetworkHosts, PingOptimize }

NameRule = group.Name + "_" + resource.Device

static int OfflineMonitorInterval = 60

static str Operations = None

str Device

str NetworkType

int PingOptimize = 1

str NetworkHosts[]

)

Sample NIC Configurations
• Sample 1: Without Network Hosts (Using Default Ping Mechanism)

NIC NIC_le0 (
    Device = le0
    PingOptimize = 1
    )

• Sample 2: With Network Hosts

NIC NIC_le0 (
    Device = le0
    NetworkHosts = { "166.93.2.1", "166.99.1.2" }
    )


Mount Agent

Description Brings online, takes offline, and monitors a file system mount point.

Entry Points
Online—Mounts a block device on the directory. If the mount process fails, the agent attempts to run the fsck command on the raw device to remount the block device.
Offline—Unmounts the file system.
Monitor—Determines whether the file system is mounted. Checks mount status using the stat and statvfs commands.
Clean—See description on the following pages.
Info—See description on the following pages.

State Definitions
ONLINE—Indicates that the block device is mounted on the specified mount point.
OFFLINE—Indicates that the block device is not mounted on the specified mount point.
UNKNOWN—Indicates that a problem exists with the configuration.

Required Attributes

BlockDevice (string-scalar)—Device for mount point.

FsckOpt (string-scalar)—Options for the fsck command. "-y" or "-n" must be included as arguments to fsck; otherwise, the resource cannot come online. VxFS file systems perform a log replay before a full fsck operation (enabled by "-y") takes place. Refer to the manual page for the fsck command for more information.

FSType (string-scalar)—Type of file system; for example, vxfs or ufs.

MountPoint (string-scalar)—Directory for mount point.

Optional Attributes

MountOpt (string-scalar)—Options for the mount command.

SnapUmount (integer-scalar)—If set to 1, this attribute automatically unmounts VxFS snapshots when the file system is unmounted. Default is 0 (no).


Info Entry Point (4.x only)

The Mount info entry point executes the command:

df -k mount_point

The output displays Mount resource information:

Size Used Avail Use%

Initiate the info entry point by setting the InfoInterval timing to a value greater than 0. For example:

haconf -makerw
hatype -modify Mount InfoInterval 60

In this case, the info entry point is executed every 60 seconds. The command to retrieve information about the Mount resource is:

hares -value mountres ResourceInfo

Output includes the following information:

Size 2097152
Used 139484
Available 1835332
Used% 8%

Type Definition

type Mount (
    static str ArgList[] = { MountPoint, BlockDevice, FSType,
        MountOpt, FsckOpt, SnapUmount }
    NameRule = resource.MountPoint
    str MountPoint
    str BlockDevice
    str FSType
    str MountOpt
    str FsckOpt
    int SnapUmount = 0
    )


Sample Configuration

Mount export1 (
    MountPoint = "/export1"
    BlockDevice = "/dev/dsk/c1t1d0s3"
    FSType = "vxfs"
    FsckOpt = "-n"
    MountOpt = "ro"
    )
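In a complete service group, the Mount resource typically depends on the resource that provides its block device. A minimal sketch in main.cf syntax, assuming a hypothetical Volume resource named export1_vol (the volume and disk group names are also hypothetical):

```
Volume export1_vol (
    Volume = vol01
    DiskGroup = datadg
    )

export1 requires export1_vol
```

The requires statement ensures that VCS brings the volume online before mounting the file system, and unmounts the file system before taking the volume offline.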


Process Agent

Type Definition

type Process (

static str ArgList[] = { PathName, Arguments }

NameRule = resource.PathName

str PathName

str Arguments

)

Sample Process Configurations
• Sample 1

Process usr_lib_sendmail (
    PathName = "/usr/lib/sendmail"
    Arguments = "bd q1h"
    )

Description Starts, stops, and monitors a process specified by the user.

Entry Points
Online—Starts the process with optional arguments.
Offline—Terminates the process with a SIGTERM. If the process does not exit, VCS sends a SIGKILL.
Monitor—Checks whether the process is alive by scanning the process table for the name of the executable path name and argument list.

Required Attribute

PathName (string-scalar)—Defines the complete path name of the executable program, including the program name. If a process is controlled by a script, PathName defines the complete path to the shell. The path name must not exceed 80 characters.

Optional Attribute

Arguments (string-scalar)—Passes arguments to the process. If a process is controlled by a script, the script is passed as an argument. Multiple arguments must be separated by a single space. A string cannot accommodate more than one space between arguments, nor allow for leading or trailing whitespace characters. Arguments must not exceed 80 characters (total).


• Sample 2

include "types.cf"

cluster ProcessCluster (
    ...
    )

group ProcessGroup (
    SystemList = { sysa, sysb }
    AutoStartList = { sysa }
    )

Process Process1 (
    PathName = "/usr/local/bin/myprog"
    Arguments = "arg1 arg2"
    )

Process Process2 (
    PathName = "/bin/csh"
    Arguments = "/tmp/funscript/myscript"
    )

// resource dependency tree
//
// group ProcessGroup
// {
// Process Process1
// Process Process2
// }


SCSI-3 Persistent Reservations

Disk registration and reservations are performed by the VERITAS fencing driver using a relatively new technology known as SCSI-3 persistent reservations (SCSI-3 PR, or just PR).

PR uses the concepts of registration and reservation. Participating systems register a key with a device (controlling registration is discussed later). Registered systems can then set a reservation mode on these devices. The VERITAS fencing implementation uses a mode called Write Exclusive Registrants Only. This mode ensures that only members registered with the device can write. Other nodes can potentially read to allow for off-host backup schemes.

Current SCSI-3 PR specifications enable VCS to support 32 nodes with multiple paths from each node.

SCSI-3 Persistent Reservation Blocking

With SCSI-3 PR technology, blocking write access is as simple as removing a registration from a device. Only registered members can eject the registration of another member. A member wanting to eject another member issues a preempt and abort command that ejects another node from the membership. Nodes not in the membership cannot issue this command.

Viewed another way, this means that after a node is ejected, it cannot, in turn, eject another; ejection is final and atomic.

In the VCS implementation, a node registers the same key for all paths to the device. A single preempt and abort command ejects a node from all paths to the storage device.

Several important concepts are summarized below:
• Only a registered node can eject another.
• Because a node registers the same key down each path, ejecting a single key blocks all I/O paths from the node.
• After a node is ejected, it has no key registered, and it cannot eject others.

The SCSI-3 PR specification describes the method to control access to disks with the registration and reservation mechanism. The method to determine who can register with a disk and who is eligible to eject another node is implementation-specific.


Best Practices

Cluster Interconnect

Using the sysname File

Use the sysname file to specify the local node name. This removes any dependency on the UNIX host name given by the uname -a command. If the host name is changed and no longer matches the llthosts, llttab, and main.cf system name entries, VCS cannot start.
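For example, the sysname file simply contains the node name that VCS uses for the local system. The node name train1 is hypothetical, and the file location below is taken from the design worksheet in Appendix E:

```
echo train1 > /etc/VRTSvcs/comms/sysname
```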

Redundant LLT Links
• Two Ethernet LLT heartbeat links are the recommended minimum.
• No single point of failure should be allowed anywhere in the cluster interconnect, including hubs, NICs, and NIC position within the system.
• No routers can be used in the path of the interconnect.
• Configure the public network as an additional low-priority LLT link.
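These recommendations can be sketched in a Solaris llttab file; the node name, cluster ID, and network device names below are hypothetical:

```
set-node train1
set-cluster 10
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
link-lowpri hme0 /dev/hme:0 - ether - -
```

The two link lines define the private heartbeat links; link-lowpri adds the public network as a low-priority link that carries heartbeats at a reduced rate unless the private links fail.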

Shared Storage

Volume Resources

Volume resources are not required. They provide additional monitoring; however, in environments with many volumes, the additional overhead of monitoring all the volumes may be undesirable.

File Systems

Ensure that all file systems controlled by VCS resources are set to manual control in the operating system configuration files. The operating system should not perform any automatic mounts or unmounts.
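On Solaris, for example, this means removing or disabling the file system's entry in /etc/vfstab so that the operating system does not mount it at boot. The device and mount point below are hypothetical:

```
# Commented out: this file system is mounted by the VCS Mount resource, not at boot.
# /dev/vx/dsk/datadg/vol01  /dev/vx/rdsk/datadg/vol01  /export1  vxfs  -  no  -
```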

SANs/Arrays
• Shared disks on a SAN must reside in the same zone as all of the nodes in the cluster.
• Data residing on shared storage should be mirrored or protected by a hardware-based RAID mechanism.
• Use redundant storage and paths.
• Use multiple single-port HBAs or SCSI controllers rather than multiport interfaces to avoid single points of failure.
• Include all cluster-controlled data in your backup planning and implementation. Periodically test restoration of critical data to ensure that the data can be restored.


Public Network
• Allocate a dedicated administrative IP address to each node of the cluster. This address must not be failed over to any other node.
• Allocate one or more virtual IP addresses for each service group requiring access by way of the public network.
• Map DNS entries to the service group IP addresses for the cluster.
• Note the service group IP addresses in the hosts file.
• When specifying NetworkHosts for the NIC resource, specify one or more highly available IP addresses.

Critical Resources
During configuration, consider initially setting all resources to noncritical. This prevents service groups from failing over if you make errors while setting up a new resource. When configuration and testing are complete, set the resources back to critical so that a resource fault causes the service group to fault and fail over.
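For example, the Critical attribute can be toggled from the command line while the configuration is open for writing; the resource name webip is hypothetical:

```
haconf -makerw
hares -modify webip Critical 0
# ... configure and test the resource ...
hares -modify webip Critical 1
haconf -dump -makero
```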

Deleting a Service Group
Delete all resources before removing a service group. This prevents possible resource faults and error log entries that can occur if a service group with online resources is deleted.

Proxy Resources
If multiple service groups use the same network interface, you can reduce monitoring overhead by configuring one NIC resource for the interface and using Proxy resources, which mirror its status, in the other groups. If you have many NIC resources, consider using Proxy resources to minimize any potential performance impact of monitoring.
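A minimal sketch of a Proxy resource that mirrors the status of a NIC resource configured in another service group; both resource names are hypothetical:

```
Proxy web_nicproxy (
    TargetResName = NIC_le0
    )
```

Only the group containing the real NIC resource incurs the monitoring cost; the Proxy simply reflects that resource's state.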

Outside Services
Minimize reliance on services that are not within the control of the cluster to ensure high availability for your applications. Consider:
• Network name resolution services
• NFS mounts
• NIS

In addition, ensure that external resources, such as DNS and gateways, are highly available.

Multiple Oracle Instance Configurations
The following list describes some best practices for configuring and managing multiple Oracle instances in a VCS environment.
• For each SID to be configured, create UNIX accounts with DBA privileges.
• Ensure that each Oracle instance has a separate disk group and is configured as a separate service group.


• Define the /etc/system parameters such that the allocation of semaphore and shared memory is appropriate on all systems.

• Use a dedicated set of binaries for each Oracle instance, even if each instance uses the same Oracle version.

• If your configuration uses the same Oracle version for all instances, install a version on the root disk or, preferably, on a secondary disk. Locate the pfiles in the default location and define several listener processes to ensure clean failover.

• If your configuration has several 8.1.x instances and just one listener, set up a parallel service group.

• If your configuration has different versions of Oracle, create a separate $ORACLE_HOME for each Oracle version.

• Follow the Optimal Flexible Architecture (OFA) standard (/uxx/SID). In cluster configurations, you can adapt the standard to make it more application-specific, for example, /app/uxx/SID.

• Listeners accompanying different versions of Oracle may not be backward compatible. Therefore, if you want to create a single listener.ora file, you must verify that the listener supports the other versions of Oracle in the cluster. You must also create a separate Envfile for each version of Oracle.

• Ensure that each listener listens to a different virtual address. Also, assign different names to listeners and ensure that they do not listen to the same port.

• If you create a single user named oracle and define the variables required by Oracle, you must redefine at minimum the $ORACLE_HOME and $ORACLE_SID variables every time you want to invoke svrmgrl. VERITAS recommends that you define several Oracle users in the passwd file, with each user having the appropriate environment variables, so you can easily identify which level of Oracle code you are running.

• The pfiles must be coordinated between systems. If you have two instances using the same Oracle version, keep a copy of both initSID.ora files in the default directory so that, if one system fails, $ORACLE_HOME is set on each system.

Testing
• Test services on each failover target system before putting them under VCS control.
• Create a test cluster for performing the initial implementation and testing any changes.
– Test all possible failure scenarios.
– Create and execute an acceptance/solution test plan before deploying a cluster in a production environment and when making any changes.


Training
Provide appropriate training, as follows:
• VERITAS Cluster Server
– System administrators
– Database administrators
– Developers
• VERITAS File System: System administrators
• VERITAS Volume Manager: System administrators
• VERITAS NetBackup: System and backup administrators


New Features in VCS 4.1

The following features are introduced in VCS version 4.1.

Solaris 10 Local Zone Support

Solaris 10 provides a means of virtualizing operating system services, allowing one or more processes to run in isolation from other activity on the system. Such a "sandbox" is called a local zone. Each zone can provide a rich and customized set of services. There is also a global zone, and processes running in this zone have the same set of privileges available on a Solaris system today.

VCS provides high availability to applications running in local zones by extending the failover capability to zones. VCS is installed in a global zone and all VCS agents and engine components run in the global zone. For applications running within local zones, agents run entry points inside the zones. If a zone configured under VCS control faults, VCS fails over the entire service group containing the zone.

VERITAS Security Services (VxSS)

VCS 4.1 is integrated with VERITAS Security Services (VxSS) to provide secure communication between cluster nodes and clients, including the Java and Web consoles. VxSS uses digital certificates and SSL to encrypt communication over the public network.

User Management in the Secure Mode

Change in behavior: If VCS is running in the secure mode, you can add system users to VCS and assign them privileges. You must specify user names in the format username@domain. You cannot assign or change passwords for users when VCS is running in the secure mode.

NFS Lock Failover

VCS 4.1 adds support for failover of NFS 3.0 file locks with the addition of the NFSLock bundled agent. For details, refer to the VERITAS Cluster Server 4.1 Bundled Agents Reference Guide.

JumpStart Compliance

VCS 4.1 is compliant with Solaris JumpStart technology.

Web Console Features

The Web console now includes support for:
• Secure clusters
• SystemList modification
• Static resource type attribute overrides


Java Console Features

The Java console now includes support for:
• Secure clusters
• Static resource type attribute overrides

VCS Login Environment

When non-root users execute ha commands, they are prompted for their VCS user name and password to authenticate themselves. In VCS 4.1, you can use the halogin command to save the authentication information so that you do not have to enter your credentials every time you run a VCS command. You must also set the VCS_HOST environment variable or populate the /etc/.vcshosts file to run commands remotely. Users must have proper cluster- and group-level privileges to execute commands. You cannot remotely run ha commands that require localhost root privileges. See "Logging On to VCS" in the VERITAS Cluster Server User's Guide for more information about the halogin command.
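A hedged sketch of the setup in a Bourne shell; the host name, user name, and the exact halogin argument form are assumptions, so check the halogin manual page on your systems:

```
VCS_HOST=train1; export VCS_HOST
halogin vcsadmin password
```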


New Features in VCS 4.0

The following features are introduced in VCS version 4.0.

Global Cluster Option

The Global Cluster option to VCS enables a collection of VCS clusters to work together to provide wide-area disaster recovery. Previously, the wide-area functionality was available in a separate Global Cluster Manager product. The functionality has now been incorporated into VCS 4.0.

VCS Simulator

VCS Simulator is a tool for simulating any cluster configuration and determining how service groups behave during cluster or system faults. With the simulator, you can designate and fine-tune configuration parameters, view state transitions, and evaluate complex, multinode configurations. The tool is especially valuable because it enables you to design and evaluate a specific configuration without test clusters or changes to existing production configurations.

I/O Fencing

VCS 4.0 provides a new capability, called I/O fencing, to arbitrate cluster membership and ensure data integrity in the event of communication failure among cluster members. The I/O fencing kernel module uses SCSI-3 persistent reservations and designated coordinator disks, as described in the "I/O Fencing" chapter of the VERITAS Cluster Server 4.0 User's Guide.

Fire Drill

Fire drill is a procedure for testing the fault readiness of a configuration. A fire drill on a VCS-controlled application uses a separate fire drill service group that contains a copy of the live application's resources. See the VERITAS Cluster Server 4.0 User's Guide for more information.

Steward

The Steward mechanism minimizes the chances of a wide-area split brain in two-node clusters. The steward process can run on any system outside of the clusters in a Global Cluster configuration. See the VERITAS Cluster Server 4.0 User's Guide for more information.


Web Console Features
• Support for global clustering
• Home portal
• User management

Java Console Features
• Support for global clustering
• VCS Simulator
• Display of agent logs

cpuusage Event Trigger

The new cpuusage event trigger is invoked on systems where CPU usage exceeds the configured threshold value. See the VCS 4.0 User's Guide for more information.

multinicb Event Trigger

The new multinicb event trigger is invoked when a network device under MultiNICB control changes its state. The trigger is also always called in the first monitor cycle. See the VCS 4.0 User's Guide for more information.

Action Entry Point

The action entry point enables agents to perform actions that can be completed within a few seconds and that are outside the scope of the traditional entry points, such as bringing a resource online or taking it offline.

Info Entry Point

The info entry point enables agents to gather specific information for an online resource.

New Bundled Agents

The DNS bundled agent was added in the VCS 4.0 release. For details, refer to the VERITAS Cluster Server Bundled Agents Reference Guide.

New Attributes
• Resource Type Attributes
– ActionTimeout
– FireDrill
– InfoInterval


– InfoTimeout
– LogDbg
– MonitorStatsParam
– SupportedActions
• Resource Attributes
– ComputeStats
– MonitorTimeStats
– ResourceInfo
• Service Group Attributes
– Authority
– ClusterFailoverPolicy
– ClusterList
• System Attributes
– CPUUsage
– CPUUsageMonitoring
– NoAutoDisable
• Cluster Attributes
– AutoStartTimeout
– ClusState
– ClusterAddress
– ConnectorState
– Stewards
– UserFence

New Attribute Category
Heartbeat attributes are introduced in VCS 4.0 with the new global cluster features.
• AgentState
• Arguments
• AYAInterval
• AYARetryLimit
• AYATimeout
• CleanTimeOut
• ClusterList
• InitTimeout
• LogDbg
• State
• StartTimeout
• StopTimeout


Appendix E
Design Worksheet: Template


Cluster Interconnect Configuration

First system:

/etc/VRTSvcs/comms/llttab Sample Value Your Value

set-node (host name)

set-cluster (number in host name of odd system)

link

link

/etc/VRTSvcs/comms/llthosts Sample Value Your Value

/etc/VRTSvcs/comms/sysname Sample Value Your Value


Second system:

Cluster Configuration (main.cf)

/etc/VRTSvcs/comms/llttab Sample Value Your Value

set-node

set-cluster

link

link

/etc/VRTSvcs/comms/llthosts Sample Value Your Value

/etc/VRTSvcs/comms/sysname Sample Value Your Value

Types Definition Sample Value Your Value

Include types.cf

Cluster Definition Sample Value Your Value

Cluster

Required Attributes

UserNames


ClusterAddress

Administrators

Optional Attributes

CounterInterval

System Definition Sample Value Your Value

System

System

Service Group Definition Sample Value Your Value

Group

Required Attributes

FailoverPolicy

SystemList

Optional Attributes

AutoStartList

OnlineRetryLimit


Resource Definition Sample Value Your Value

Service Group

Resource Name

Resource Type

Required Attributes

Optional Attributes

Critical?

Enabled?

Resource Definition Sample Value Your Value

Service Group

Resource Name

Resource Type

Required Attributes


Optional Attributes

Critical?

Enabled?

Resource Definition Sample Value Your Value

Service Group

Resource Name

Resource Type

Required Attributes

Optional Attributes


Critical?

Enabled?

Resource Definition Sample Value Your Value

Service Group

Resource Name

Resource Type

Required Attributes

Optional Attributes

Critical?

Enabled?


Resource Dependency Definition

Service Group

Parent Resource Requires Child Resource