Advanced Concepts for Clustered Data ONTAP 8.3.1

December 2015 | SL10238 Version 1.1

© 2015 NetApp, Inc. All rights reserved. NetApp Proprietary

TABLE OF CONTENTS

1 Introduction
2 Lab Environment
3 Lab Activities
3.1 Lab Preparation
3.1.1 Accessing the Command Line
3.1.2 Accessing System Manager
3.2 Clustered Data ONTAP CLI
3.2.1 Explore the Command Hierarchy
3.2.2 Setting the SVM Context
3.2.3 Node Management CLI
3.2.4 Node-Scoped CLI
3.3 Load-Sharing Mirrors
3.3.1 Namespace Overview
3.3.2 Load-Sharing Mirror Overview
3.3.3 Exercise
3.4 IPspaces, Broadcast Domains, and Subnets
3.4.1 Clustered Data ONTAP 8.3 Networking Overview
3.4.2 Exercise
3.5 Quality of Service (QoS)
3.5.1 Exercise
3.6 SnapMirror
3.6.1 Exercise
3.7 Disaster Recovery for Storage Virtual Machines
3.7.1 Exercise
3.8 Appendix: Additional Administrative Users and Roles
3.8.1 Cluster-Scoped Users and Roles
3.8.2 SVM Users and Roles
3.9 Appendix: Active Directory Authentication Tunneling
3.10 Automated Nondisruptive Upgrades
4 Version History


1 Introduction

This Lab Guide provides the steps to complete the Insight 2015 “Hands-on Lab for Advanced Concepts for Clustered Data ONTAP 8.3.1”.

Lab Objectives

This lab provides an introduction to a number of the more advanced features found in clustered Data ONTAP, including the Command Line Interface (CLI), load-sharing mirrors, IPspaces, Quality of Service (QoS), cluster peering, Disaster Recovery for Storage Virtual Machines (SVM-DR), administrative users and roles, and Active Directory Authentication Tunneling.

Prerequisites

This lab builds on the concepts covered in the “Basic Concepts for Clustered Data ONTAP 8.3” lab, and requires knowledge of the topics covered in that lab. You should already understand the concepts and know how to use OnCommand System Manager, how to configure a Storage Virtual Machine (SVM), and how to create aggregates, volumes, and LIFs. You should also have a basic knowledge of Windows administration. Knowledge of UNIX is not required, but a Linux virtual machine (VM) is provided.

Your starting point for this lab is a cluster named “cluster1”, with two nodes named “cluster1-01” and “cluster1-02”. There are two SVMs, “svm1” and “svm2”, each hosting a variety of volumes.

Before you start the lab, launch System Manager and get familiar with the cluster configuration, including the location, naming, and status of the aggregates, volumes, LIFs, and SVMs.

The terms “Storage Virtual Machine (SVM)” and “Vserver” are used interchangeably in this lab. “SVM” is used to describe virtualized storage systems as a concept. “Vserver” is the term used to refer to SVMs in the clustered Data ONTAP command line and in the System Manager user interface. SVMs configured in this lab follow the naming convention “svmN”, where “N” is a number, and “svm” is shorthand for “Storage Virtual Machine”.


2 Lab Environment

The following figure illustrates the network configuration.

Figure 2-1:

Table 1 shows the host information used in this lab.

Table 1: Host Information

Host Name Operating System Role/Function IP Address

cluster1 clustered Data ONTAP 8.3 cluster 192.168.0.101

cluster1-01 clustered Data ONTAP 8.3 cluster 1, node 1 192.168.0.111

cluster1-02 clustered Data ONTAP 8.3 cluster 1, node 2 192.168.0.112

cluster2 clustered Data ONTAP 8.3 cluster 192.168.0.102

cluster2-01 clustered Data ONTAP 8.3 cluster 2, node 1 192.168.0.121

JUMPHOST Windows 2008 R2 primary desktop for lab 192.168.0.5

rhel1 Red Hat Linux 6.5 Linux server 192.168.0.61

DC1 Windows 2008 R2 Active Directory/DNS 192.168.0.253

Table 2 lists the user IDs and passwords used in this lab.


Table 2: User IDs and Passwords

Host Name User ID Password Comments

JUMPHOST DEMO\Administrator Netapp1!

cluster1 admin Netapp1! Same for individual cluster nodes

cluster2 admin Netapp1! Same for individual cluster nodes

rhel1 root Netapp1!

DC1 DEMO\Administrator Netapp1!


3 Lab Activities

In this lab, you will perform the following tasks:

• Explore the CLI in more detail, and set it to work in an SVM context.
• Navigate the node-scoped CLI.
• Check the cluster and SVM administrative roles, users, and groups.
• Configure load-sharing mirrors to protect the namespace.
• Learn about IPspaces, Broadcast Domains, and Subnets.
• Use QoS to manage tenants and workloads.
• Create intercluster LIFs and create a cluster peering relationship for SnapMirror.
• Create a Disaster Recovery for Storage Virtual Machines (SVM-DR) relationship from one cluster to another, perform a cutover operation, and then revert back to the primary.
• Configure authentication tunneling for cluster administrators (refer to the appendix).
• Add a volume that has a different language setting from the SVM that contains the volume (refer to the appendix).
• Learn about new automated nondisruptive upgrade features in Data ONTAP 8.3.

This is a self-guided lab. You can complete or skip any exercise.

The expected time for you to complete the entire lab is approximately 1 hour and 30 minutes.

Note: Before you begin the lab activities, you should understand how to log into and out of the clustered Data ONTAP system by using the CLI and System Manager.

3.1 Lab Preparation

3.1.1 Accessing the Command Line

PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to run commands from the command line.

1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST as shown in the following screenshot; just double-click the icon to launch it.


Figure 3-1:

If you already have another PuTTY session open, then this step will only bring that session into focus on the display. If your intention is to open another PuTTY session, then right-click the PuTTY taskbar icon and select PuTTY from the context menu.

Once PuTTY launches, you can connect to one of the hosts in the lab by following the next steps. This example shows a user connecting to the Data ONTAP cluster named cluster1.


2. By default, PuTTY should launch into the “Basic options for your PuTTY session” display as shown in the screenshot. If you accidentally navigate away from this view, just click the Session category item to return to this view.

3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to open the connection. A terminal window will open and you will be prompted to log into the host. You can find the correct username and password for the host in the User IDs and Passwords table in the “Lab Environment” section at the beginning of this guide.


Figure 3-2:

If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little intimidating. However, the commands are actually quite easy to use if you remember the following three tips:

• Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP command shell supports tab completion. If you hit the Tab key while entering a portion of a command word, the command shell will examine the context and try to complete the rest of the word for you. If there is insufficient context to make a single match, it will display a list of all the potential matches. Tab completion also usually works with command argument values, but there are some cases where there is simply not enough context for it to know what you want, in which case you will just need to type in the argument value.

• You can recall your previously entered commands by repeatedly pressing the up-arrow key, and you can then navigate up and down the list using the up and down arrow keys. When you find a command you want to modify, you can use the left arrow, right arrow, and Delete keys to navigate around in a selected command to edit it.

• Entering a question mark character “?” causes the CLI to print contextual help information. You can use this character by itself, or while entering a command.


The Clustered Data ONTAP CLI section of this lab guide covers the operation of the clustered Data ONTAP CLI in much greater detail.

Caution: The commands shown in this guide are often so long that they span multiple lines. When you see this, in every case you should include a space character between the text from adjoining lines.

If you intend to copy and paste commands from the guide into the lab, be aware that for multi-line commands you can only copy one line at a time. If you try to copy multiple lines at once, the commands will fail in the lab.

3.1.2 Accessing System Manager

On the Jumphost (the Windows Server desktop you see when you first connect to the lab), open the web browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if you prefer one of those. All three browsers already have System Manager set as the browser home page.

1. Launch Chrome to open System Manager.


Figure 3-3:

The OnCommand System Manager Login window opens.

2. Note the tabs at the top of the browser window. This lab contains multiple clusters, and each tab opens System Manager for a different cluster.
3. Enter the User Name admin, and the Password Netapp1!.
4. Click the Sign In button.



Figure 3-4:

System Manager is now logged in to cluster1, and displays a summary page for the cluster. If you are unfamiliar with System Manager, here is a quick introduction to its layout. Please take a few moments to expand and browse these tabs to familiarize yourself with their contents.

5. Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab accesses configuration settings that apply to the cluster as a whole.

6. The Storage Virtual Machines tab allows you to manage individual Storage Virtual Machines (SVMs, also known as Vservers).

7. The Nodes tab contains configuration settings that are specific to individual controller nodes.



Figure 3-5:

Tip: As you use System Manager in this lab, you may encounter situations where buttons at the bottom of a System Manager pane are beyond the viewing size of the window, and no scroll bar exists to allow you to scroll down to see them. If this happens, you have two options: either increase the size of the browser window (you might need to increase the resolution of your jumphost desktop to accommodate the larger browser window), or in the System Manager window, use the Tab key to cycle through the various fields and buttons, which eventually forces the window to scroll down to the non-visible items.

3.2 Clustered Data ONTAP CLI

This section provides an introduction to the clustered Data ONTAP Command Line Interface, or CLI. Here you will learn about the command hierarchy and the various CLI interfaces (clustershell, nodeshell), and also about a number of the shell's usability features.

When you open an SSH session to a cluster, you usually connect to the cluster management LIF. The cluster management LIF is set up when you first configure the cluster, and it automatically migrates across the cluster if the home port or home node on which it is located goes down. When you log into the cluster management LIF, you are accessing the clustershell.

If you have not already opened a PuTTY session to cluster1, please do so now.


3.2.1 Explore the Command Hierarchy

After logging in, you are placed at the top of the command line hierarchy. The commands are organized into a command hierarchy, with associated commands grouped together in branches. These branches are made up of command directories and commands; this organization is similar to the organization of directories and files within a file system.

Type ? to list the base commands available at the top level of the hierarchy.

cluster1::> ?
  up                      Go up one directory
  cluster>                Manage clusters
  dashboard>              (DEPRECATED)-Display dashboards
  event>                  Manage system events
  exit                    Quit the CLI session
  export-policy           Manage export policies and rules
  history                 Show the history of commands for this CLI session
  job>                    Manage jobs and job schedules
  lun>                    Manage LUNs
  man                     Display the on-line manual pages
  metrocluster>           Manage MetroCluster
  network>                Manage physical and virtual network connections
  qos>                    QoS settings
  redo                    Execute a previous command
  rows                    Show/Set the rows for this CLI session
  run                     Run interactive or non-interactive commands in the nodeshell
  security>               The security directory
  set                     Display/Set CLI session settings
  snapmirror>             Manage SnapMirror
  statistics>             Display operational statistics
  storage>                Manage physical storage, including disks, aggregates, and failover
  system>                 The system directory
  top                     Go to the top-level directory
  volume>                 Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>                Manage Vservers
cluster1::>

Type any base command to move into that branch of the command hierarchy. For example, the volume branch contains all commands related to volumes. The prompt changes to show you the part of the command tree you are working in.

Type ? again. This time the hierarchy shows you the specific subcommands available for that part of the command tree.

cluster1::> volume
cluster1::volume> ?
  aggregate>              Manage Infinite Volume aggregate operations
  autosize                Set/Display the autosize settings of the flexible volume.
  clone>                  Manage FlexClones
  create                  Create a new volume
  delete                  Delete an existing volume
  efficiency>             Manage volume efficiency
  file>                   File related commands
  modify                  Modify volume attributes
  mount                   Mount a volume on another volume with a junction-path
  move>                   Manage volume move operations
  offline                 Take an existing volume offline
  online                  Bring an existing volume online
  qtree>                  Manage qtrees
  quota>                  Manage Quotas, Policies, Rules and Reports
  rename                  Rename an existing volume
  restrict                Restrict an existing volume
  show                    Display a list of volumes
  show-footprint          Display a list of volumes and their data and metadata footprints in their associated aggregate.
  show-space              Display space usage for volume(s)
  size                    Set/Display the size of the volume.
  snapshot>               Manage snapshots
  unmount                 Unmount a volume
cluster1::volume>
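If the file-system analogy helps, the directory-and-command structure shown above can be sketched in a few lines of Python. This is purely an illustrative model (the tree and the function are hypothetical, not NetApp code); the command names are copied from the listings above:

```python
# Illustrative model of the clustershell's command hierarchy: command
# directories are nested dicts, leaf commands are None.
# Hypothetical sketch only -- not NetApp code.
TREE = {
    "volume": {                     # the "volume>" command directory
        "create": None,             # leaf commands
        "show": None,
        "snapshot": {"create": None, "show": None},  # nested directory
    },
    "network": {"interface": {"show": None}},
}

def list_commands(path):
    """List what '?' would print at a given point in the hierarchy."""
    node = TREE
    for part in path:
        node = node[part]           # descend one level, like typing a directory name
    return sorted(node)

print(list_commands([]))            # top level: ['network', 'volume']
print(list_commands(["volume"]))    # ['create', 'show', 'snapshot']
```

In this model, typing a directory name in the clustershell corresponds to descending one level in the tree, and top corresponds to resetting the path to the empty list.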

To show the syntax for a particular command, enter the command and follow it with ?.

cluster1::volume> size ?
   -vserver <vserver name>    Vserver Name
  [-volume] <volume name>     Volume Name
  [[-new-size] <text>]        [+|-]<New Size>
cluster1::volume>

Tab completion works by completing what you are typing, and prompting you with what is expected next while you are still typing part of a command directory or command. It can even provide options for the values required to complete the command.

Try tab completion by backspacing to clear the size command, typing the modify command, and pressing the Tab key. The next option is automatically filled in. Press Tab again to get a list of options, and then type 1 to complete the text svm1. Press Tab again to get the -volume option, and type in the volume name svm1_vol02. Continue using tab completion until you get to -security-style unix. Before you press Enter, backspace to delete the word “unix”, and type ?.

The output should look like this example:

cluster1::volume> modify -vserver svm1 -volume svm1_vol02 -size 1GB -state online -policy default -user 0 -group 0 -security-style ?
  mixed    ntfs    unix

Backspace to delete the modify command, and type .. to move up one level in the command hierarchy, or type top to return to the root of the command tree.

cluster1::volume> top
cluster1::>

Type history to show the commands that you executed in the current session, or use the up arrow to repeat recently executed commands. Use the right and left arrows, and the backspace key, to edit and rerun the commands. Alternatively, you can use the ! <number> syntax to run a previous command in the list.

cluster1::> history
    1  rows 0
    2  volume
    3  top
cluster1::>

You may notice the rows 0 command in the history list output shown in this guide (and not shown in your lab). rows 0 disables output paging on the command console. After you run rows 0, the console stops prompting you to “Press <space> to page down, <return> for next line, or ‘q’ to quit”. We suggest you leave the existing pagination setting in place while you proceed through this lab.

Certain commands require different privilege levels. By default, you are logged in with admin privilege. To enter “advanced” or “diag” privilege, run the set -privilege <level> command, or use set <level> as the shorter version of the command. An * is appended to the prompt to show that you are not in the default privilege level.

Note: There is no access to advanced or diag privilege commands in System Manager.

The best practice is to enter a non-admin privilege level only as needed, then return to admin privilege with the commands set -priv admin or set admin.

cluster1::> set advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*>
cluster1::*> set admin
cluster1::>


You can type abbreviations to run a command. For example, vol show is recognized as volume show. Be aware that command abbreviations are limited. For instance, there are also volume show-footprint and volume show-space commands, so the abbreviation vol sho is not unique to a single command, and therefore not recognized.
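The abbreviation rule is essentially unique-prefix matching, with an exact name always winning. A hypothetical Python sketch (not NetApp code; the command names are taken from the volume directory listing earlier in this section) shows why vol show works but vol sho does not:

```python
# Unique-prefix command matching, sketched hypothetically. The names
# come from the 'volume' command directory shown earlier in this section.
COMMANDS = ["show", "show-footprint", "show-space", "size", "snapshot"]

def resolve(abbrev, commands=COMMANDS):
    """Return the one command the abbreviation names, else None."""
    if abbrev in commands:
        return abbrev               # an exact match is never ambiguous
    matches = [c for c in commands if c.startswith(abbrev)]
    return matches[0] if len(matches) == 1 else None

print(resolve("show"))   # 'show' - exact, even though show-* also begin with it
print(resolve("sho"))    # None   - ambiguous: show, show-footprint, show-space
print(resolve("si"))     # 'size' - unique prefix
```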

You can use pattern matching with wildcards when running commands. For example:

cluster1::> vol show svm2*
Vserver   Volume       Aggregate         State    Type Size  Available Used%
--------- ------------ ----------------- -------- ---- ----- --------- -----
svm2      svm2_root    aggr1_cluster1_02 online   RW   20MB  18.88MB   5%
svm2      svm2_vol01   aggr1_cluster1_01 online   RW   1GB   1023MB    0%
2 entries were displayed.
cluster1::>
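The svm2* pattern follows ordinary shell-style globbing rules. Python's standard fnmatch module implements the same rules, so you can use it to preview what a pattern would select (the volume names below are copied from this lab's volume show output):

```python
from fnmatch import fnmatch

# Volume names from this lab's 'volume show' output.
volumes = ["svm1_root", "svm1_vol01", "svm2_root", "svm2_vol01"]

# 'vol show svm2*' selects the volumes whose name begins with "svm2".
matching = [v for v in volumes if fnmatch(v, "svm2*")]
print(matching)  # ['svm2_root', 'svm2_vol01']
```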

When running commands, you see only certain fields by default. To display all fields, add the -instance parameter.

cluster1::> network interface show -lif cluster_mgmt -instance

                    Vserver Name: cluster1
          Logical Interface Name: cluster_mgmt
                            Role: cluster-mgmt
                   Data Protocol: none
                       Home Node: cluster1-01
                       Home Port: e0c
                    Current Node: cluster1-01
                    Current Port: e0c
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 192.168.0.101
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
                     Subnet Name: -
           Administrative Status: up
                 Failover Policy: broadcast-domain-wide
                 Firewall Policy: mgmt
                     Auto Revert: false
   Fully Qualified DNS Zone Name: none
         DNS Query Listen Enable: false
             Failover Group Name: Default
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -
                  IPspace of LIF: Default
cluster1::>
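As an aside, the -instance output reports the same netmask twice: in dotted form (Netmask: 255.255.255.0) and as a prefix length (Bits in the Netmask: 24). Python's standard ipaddress module can confirm the correspondence; this is an illustration, not a lab step:

```python
import ipaddress

# The cluster_mgmt LIF from the -instance output: 192.168.0.101 with
# netmask 255.255.255.0. strict=False permits a host address here.
net = ipaddress.ip_network("192.168.0.101/255.255.255.0", strict=False)
print(net.prefixlen)   # 24 - the "Bits in the Netmask" field
print(net.netmask)     # 255.255.255.0
print(net)             # 192.168.0.0/24 - the containing network
```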

You will often see a very large number of fields for a particular object. To show a few specific fields, limit the number of displayed fields by using the -fields qualifier.

Remember, you can use ? to show all possible values. Try using wildcards to show only items with “svm1” in the name.

cluster1::> network interface show ?
  [ -by-ipspace | -failover | -instance | -fields <fieldname>, ... ]
  [ -vserver <vserver> ]                                     Vserver Name
  [[-lif] <lif-name>]                                        Logical Interface Name
  [ -role {cluster|data|node-mgmt|intercluster|cluster-mgmt} ]  Role
  [ -data-protocol {nfs|cifs|iscsi|fcp|fcache|none}, ... ]   Data Protocol
  [ -home-node <nodename> ]                                  Home Node
  [ -home-port {<netport>|<ifgrp>} ]                         Home Port
  [ -curr-node <nodename> ]                                  Current Node
  [ -curr-port {<netport>|<ifgrp>} ]                         Current Port
  [ -status-oper {up|down} ]                                 Operational Status
  [ -status-extended <text> ]                                Extended Status
  [ -is-home {true|false} ]                                  Is Home
  [ -address <IP Address> ]                                  Network Address
  [ -netmask <IP Address> ]                                  Netmask
  [ -netmask-length <integer> ]                              Bits in the Netmask
  [ -auto {true|false} ]                                     IPv4 Link Local
  [ -subnet-name <subnet name> ]                             Subnet Name
  [ -status-admin {up|down} ]                                Administrative Status
  [ -failover-policy {system-defined|local-only|sfo-partner-only|ipspace-wide|disabled|broadcast-domain-wide} ]
                                                             Failover Policy
  [ -firewall-policy <policy> ]                              Firewall Policy
  [ -auto-revert {true|false} ]                              Auto Revert
  [ -dns-zone {zone-name|none} ]                             Fully Qualified DNS Zone Name
  [ -listen-for-dns-query {true|false} ]                     DNS Query Listen Enable
  [ -failover-group <failover-group> ]                       Failover Group Name
  [ -wwpn <text> ]                                           FCP WWPN
  [ -address-family {ipv4|ipv6|ipv6z} ]                      Address family
  [ -comment <text> ]                                        Comment
  [ -ipspace <IPspace> ]                                     IPspace of LIF

cluster1::> network interface show svm1* -field home-node
vserver lif                home-node
------- ------------------ -----------
svm1    svm1_admin_lif1    cluster1-01
svm1    svm1_cifs_nfs_lif1 cluster1-01
2 entries were displayed.
cluster1::>

You can set other options to customize the behavior of the CLI. A useful option is to set the default timeout value for CLI sessions. Check the settings on your system and, if they are not set, modify the timeout to be 0. This setting disables the timeout for your CLI session.

cluster1::> system timeout modify 30
cluster1::> system timeout modify 0
cluster1::> system timeout show
CLI session timeout: 0 minutes
cluster1::>

The set command, which you already used to specify the privilege level, has other options shown in the next example. See what happens when you set different options. Remember to set the options back before you continue.

cluster1::> set ?
  [[-privilege] {admin|advanced|diagnostic}]  Privilege Level
  [ -confirmations {on|off} ]                 Confirmation Messages
  [ -showallfields {true|false} ]             Show All Fields
  [ -showseparator <text (size 1..3)> ]       Show Separator
  [ -active-help {true|false} ]               Active Help
  [ -units {auto|raw|B|KB|MB|GB|TB|PB} ]      Data Units
  [ -rows <integer> ]                         Pagination Rows ('0' disables)
  [ -vserver <text> ]                         Default Vserver
  [ -node <text> ]                            Default Node
  [ -stop-on-error {true|false} ]             Stop On Error
cluster1::>

3.2.2 Setting the SVM Context

For many commands, you must specify the SVM by using the -vserver <SVM name> qualifier. This is because objects, such as volume names, only need to be unique within the SVM, but could be repeated across multiple SVMs.

Suppose that you are running a number of commands within the same SVM. In this scenario, you can set a context to a specific SVM so that you do not need to qualify the commands each time.

Without the SVM context, try the volume show command. You should see the root volume for each node (vol0), as well as the volumes in all of the SVMs.

cluster1::> volume show
Vserver     Volume       Aggregate         State    Type Size   Available Used%
----------- ------------ ----------------- -------- ---- ------ --------- -----
cluster1-01 vol0         aggr0_cluster1_01 online   RW   2.85GB 1.16GB    59%
cluster1-02 vol0         aggr0_cluster1_02 online   RW   2.85GB 1.04GB    63%
svm1        svm1_root    aggr1_cluster1_01 online   RW   20MB   18.88MB   5%
svm1        svm1_vol01   aggr1_cluster1_01 online   RW   1GB    972.5MB   5%
svm1        svm1_vol02   aggr1_cluster1_02 online   RW   1GB    972.5MB   5%
svm1        svm1_vol03   aggr2_cluster1_01 online   RW   1GB    972.5MB   5%
svm1        svm1_vol04   aggr2_cluster1_02 online   RW   1GB    972.5MB   5%
svm2        svm2_root    aggr1_cluster1_02 online   RW   20MB   18.88MB   5%
svm2        svm2_vol01   aggr1_cluster1_01 online   RW   1GB    1023MB    0%
9 entries were displayed.
cluster1::>

To display only the volumes in the SVM named svm1, issue volume show -vserver svm1. Alternatively, you can set a temporary context for just svm1. Try this command:

cluster1::> vserver context -vserver svm1
Info: Use 'exit' command to return.
svm1::> vol show
Vserver     Volume       Aggregate         State    Type Size   Available Used%
----------- ------------ ----------------- -------- ---- ------ --------- -----
svm1        svm1_root    aggr1_cluster1_01 online   RW   20MB   18.88MB   5%
svm1        svm1_vol01   aggr1_cluster1_01 online   RW   1GB    972.5MB   5%
svm1        svm1_vol02   aggr1_cluster1_02 online   RW   1GB    972.5MB   5%
svm1        svm1_vol03   aggr2_cluster1_01 online   RW   1GB    972.5MB   5%
svm1        svm1_vol04   aggr2_cluster1_02 online   RW   1GB    972.5MB   5%
5 entries were displayed.
svm1::>

The prompt changes to the SVM that you selected (svm1), and you see only the volumes that belong to svm1. As long as you are in the SVM context, you will not have to use the -vserver <SVM name> qualifier.

List the available commands. You will see a different (restricted) command list. For example, there is no storage command. This is because the SVM shell is running with sufficient privileges to execute only the specific commands that are relevant to an SVM. Once you type exit and return to the cluster prompt, you have full command access over all entities in the cluster.

svm1::> ?
  up                    Go up one directory
  dashboard>            (DEPRECATED)-Display dashboards
  exit                  Quit the CLI session
  export-policy         Manage export policies and rules
  history               Show the history of commands for this CLI session
  job>                  Manage jobs and job schedules
  lun>                  Manage LUNs
  man                   Display the on-line manual pages
  network>              Manage physical and virtual network connections
  redo                  Execute a previous command
  rows                  Show/Set the rows for this CLI session
  security>             The security directory
  set                   Display/Set CLI session settings
  snapmirror>           Manage SnapMirror
  statistics>           Display operational statistics
  system>               The system directory
  top                   Go to the top-level directory
  volume>               Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>              Manage Vservers
svm1::> exit
cluster1::>

3.2.3 Node Management CLI

Each node in the cluster has its own management LIF. Node management LIFs exist so that you can manage individual nodes if they lose contact with the rest of the cluster.

Use the following command to display information about the node management LIFs. Each node has an SVM that owns the management LIF for the node.

cluster1::> network interface show -role node-mgmt
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
cluster1
            cluster1-01_mgmt1  up/up      192.168.0.111/24   cluster1-01   e0c     true
            cluster1-02_mgmt1  up/up      192.168.0.112/24   cluster1-02   e0c     true
2 entries were displayed.
cluster1::>

You can establish an SSH session to any of these node management LIFs. Use your admin/Netapp1! credentials. The prompt is the same as the prompt for the cluster management CLI.
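For instance, a direct session to a node management LIF (using one of the lab addresses shown in the output above) would look like the following sketch:

```
ssh admin@192.168.0.111
Password:
cluster1::>
```

Notice that the prompt is the clustershell prompt, not a node-specific one.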


In addition to its own LIF, each node also has its own root volume. The node root volume is bound to the cluster node. It contains configuration files, logs, and other files associated with a node’s normal operation. A node root volume is part of the physical cluster infrastructure. It is not associated with an SVM, does not hold user data, and does not contain junctions to other volumes.

The CLI you have access to in this lab is exactly the same as if you created an SSH session to the cluster management LIF. The difference is that the node management LIF always resides on its own node because it is an IP address used specifically for managing a particular node. The node management LIF does not fail over to another node if the home node is shut down. For this reason, you should use the cluster management LIF to manage the cluster, because this LIF can, and will, fail over to another node. As long as the cluster is active, you always have a reachable cluster management LIF.

However, suppose that a node is no longer in the cluster. If the node is still up, you can create an SSH session to its node management LIF to run node-specific diagnostics, because these will not be accessible from the cluster management CLI.

3.2.4 Node-Scoped CLI

The node-scoped CLI is also known as the node shell. It provides access to node-specific commands that might be required to perform administrative tasks not available in the cluster management CLI.

Administrative tasks that require the use of the node-scoped CLI are rare. The node-scoped CLI is not typically used to administer a clustered Data ONTAP system. The node-scoped CLI should be used infrequently and with care.

You enter the node CLI from the cluster management CLI. Go to the cluster1 PuTTY session for this section, using the procedure described in the “Before You Begin” section of this lab guide.

You can access the node shell through two methods. The method to use depends on whether you want to run one specific command, or a series of commands.

3.2.4.1 Single Command

Check the names of your nodes by typing node show.

cluster1::> node show
Node        Health Eligibility Uptime        Model       Owner    Location
----------- ------ ----------- ------------- ----------- -------- ---------------
cluster1-01 true   true        02:19:21      SIMBOX
cluster1-02 true   true        02:19:05      SIMBOX
2 entries were displayed.
cluster1::>

Note: See the Model column? Have you noticed any other indication that you are running a Data ONTAP simulator rather than physical hardware?

To run a single specific command for one node, specify that node by using the node run -node <node-name> <command> command.

cluster1::> node run -node cluster1-02 aggr status
           Aggr State           Status                Options
aggr1_cluster1_02 online        raid_dp, aggr         nosnap=on
                                64-bit
aggr0_cluster1_02 online        raid_dp, aggr         root, nosnap=on
                                64-bit
aggr2_cluster1_02 online        raid_dp, aggr         nosnap=on
                                64-bit
cluster1::>

When the command is executed, it displays the aggregates defined on that node and returns you to the cluster prompt.

In this case, node scope syntax is used instead of clustered Data ONTAP syntax, and the output is also formatted differently. The node-scoped CLI does not support tab completion.


The equivalent clustered Data ONTAP CLI command is storage aggregate show.

cluster1::> storage aggregate show -node cluster1-02
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_02
            3.02GB   141.3MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_02
           102.3GB   101.3GB    1% online       2 cluster1-02      raid_dp,
                                                                   normal
aggr2_cluster1_02
           102.3GB   101.3GB    1% online       1 cluster1-02      raid_dp,
                                                                   normal
3 entries were displayed.
cluster1::>

3.2.4.2 Node Shell

If you want to run a number of node-specific commands, start a shell by omitting the command parameter.

cluster1::> node run -node cluster1-02
Type 'exit' or 'Ctrl-D' to return to the CLI
cluster1-02>

The prompt changes to the node of the shell you are in. To return to the cluster management CLI, enter exit, or press Ctrl-D. For now, stay in the node shell.

List the available commands.

cluster1-02> ?
?                 fsecurity         ping              source
acpadmin          halt              ping6             stats
aggr              help              pktt              storage
backup            hostname          priority          sysconfig
cdpd              ic                priv              sysstat
cf                ifconfig          qtree             timezone
clone             ifgrp             quota             traceroute
cna_flash         ifstat            rdfile            traceroute6
coredump          ipspace           reallocate        ups
date              key_manager       restore_backup    uptime
dcb               keymgr            revert_to         version
df                license           route             vfiler
disk              logger            rshstat           vlan
disk_fw_update    man               sasadmin          vmservices
download          maxfiles          sasstat           vol
echo              mt                savecore          wafltop
ems               ndmpcopy          shelfchk          wcc
environment       ndp               sis               wrfile
fcadmin           netstat           smnadmin          ypcat
fcp               options           snap              ypgroup
fcstat            partner           snapmirror        ypmatch
file              passwd            software          ypwhich
flexcache
cluster1-02>

The following list identifies situations in which you should use the node shell:

• When you modify the size of the node root volume. Using the node shell is necessary because the node root volume is considered a 7-Mode volume and can be modified only in the node scope.

• When running the snapshot delta command. The cluster management CLI does not currently include this command. The command is available in System Manager, through a ZAPI, or it can be run from the node shell.

Note: In general, do not perform network configuration or storage provisioning from the node shell. You should only use it for those functions that you cannot perform from the cluster management CLI, or from System Manager.

Exit the node shell.

cluster1-02> exit
logout
cluster1::>


3.3 Load-Sharing Mirrors

3.3.1 Namespace Overview

Flexible volumes containing NAS data are junctioned into the owning SVM in a hierarchy. This hierarchy presents NAS clients with a unified view of the storage, regardless of the physical location of flexible volumes inside the cluster.

When a flexible volume is created within the SVM, the administrator specifies the junction path for the flexible volume. The junction path is a directory location under the root of the SVM where the flexible volume can be accessed. A flexible volume’s name and its junction path do not need to be the same.

Junction paths allow each flexible volume to be browsable, like a directory or folder. NFS clients can access multiple flexible volumes using a single mount point. CIFS clients can access multiple flexible volumes using a single CIFS share.

A namespace consists of a group of volumes connected using junction paths. It is the hierarchy of flexible volumes within a single SVM as presented to NAS clients.

A namespace that exists natively inside a storage system provides a single point of management for the namespace, instead of maintaining separate namespaces for NFS (using automount maps) and CIFS (using DFS). A namespace can reduce or eliminate the reliance on DFS, automount maps, and complex, ad-hoc storage provisioning scripts. A namespace also facilitates nondisruptive operation by separating the physical location of NAS storage from its logical location.
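As a sketch of what this looks like from an NFS client (the data LIF address here is hypothetical, not taken from this lab), a single mount of the SVM root exposes the whole namespace:

```
# One mount of the SVM root ("/") exposes every junctioned volume beneath it;
# each junction appears to the client as an ordinary directory.
mount -t nfs 192.168.0.131:/ /mnt/svm1
ls /mnt/svm1
```

Clients browsing /mnt/svm1 see one directory tree, regardless of which aggregates or nodes actually hold the underlying volumes.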

An SVM’s top-level flexible volume is known as the SVM root volume. The SVM root volume forms the root of the flexible volume hierarchy in an SVM. It is the parent, grandparent, or ancestor of every flexible volume in the SVM’s namespace.

3.3.2 Load-Sharing Mirror Overview

Load-sharing mirrors are used to protect the accessibility of an SVM’s namespace in case an SVM’s root volume becomes inaccessible.

A load-sharing mirror of a source flexible volume is a full, read-only copy of that flexible volume. Load-sharing mirrors provide read-only access to the contents of the source flexible volume even if the source becomes unavailable. A load-sharing mirror can also be promoted to become the read-write volume.

A cluster might have many load-sharing mirrors of a single source flexible volume. When load-sharing mirrors are used, every node in the cluster should have a load-sharing mirror of the source flexible volume. The node that currently hosts the source flexible volume should also have a load-sharing mirror. Identical load-sharing mirrors on the same node yield no performance benefit.

Load-sharing mirrors are updated on demand, or on a schedule that is defined by the cluster administrator. Writes made to the mirrored flexible volume are not visible to readers of that flexible volume until the load-sharing mirrors are updated. Similarly, junctions added in the source flexible volume are not visible to readers until the load-sharing mirrors are updated.

Load-sharing mirrors can only support NAS protocols (CIFS or NFSv3). They do not support NFSv4 clients or SAN client protocol connections (FC, FCoE, or iSCSI).

3.3.3 Exercise

In this exercise, you will create a load-sharing mirror of svm1’s root volume on each node in cluster1. The purpose of this exercise is to illustrate the requirement that load-sharing mirrors must be updated after a new volume is junctioned into svm1’s root volume, and before the volume becomes visible to clients.

1. Create the load sharing mirrors by using the volume create command. Like all volume create commands, this command requires Vserver, volume, and aggregate parameters. The size parameter is specified to match the size of svm1’s root volume. The type parameter is set to DP, which is short for “data protection.”

From the cluster1 CLI:

cluster1::> volume create -vserver svm1 -volume svm1_root_lsm1 -aggregate aggr1_cluster1_01 -size 20MB -type DP
[Job 560] Job is queued: Create svm1_root_lsm1.
[Job 560] Job succeeded: Successful
cluster1::> volume create -vserver svm1 -volume svm1_root_lsm2 -aggregate aggr1_cluster1_02 -size 20MB -type DP
[Job 561] Job is queued: Create svm1_root_lsm2.
[Job 561] Job succeeded: Successful
cluster1::> volume show -vserver svm1 -volume svm1_root_lsm*
Vserver     Volume         Aggregate         State   Type       Size  Available Used%
----------- -------------- ----------------- ------- ---- ---------- ---------- -----
svm1        svm1_root_lsm1 aggr1_cluster1_01 online  DP         20MB    19.89MB    0%
svm1        svm1_root_lsm2 aggr1_cluster1_02 online  DP         20MB    19.89MB    0%
2 entries were displayed.
cluster1::>

2. Run the snapmirror create command to create SnapMirror relationships between the new load-sharing mirror volumes and svm1’s root volume. In this command, specify the source and destination volumes by using the “//svm_name/volume_name” syntax. The source of the relationship is svm1’s root volume; the destination is the load-sharing mirror volumes. The relationship type is LS, which is short for “load sharing.”

Set the update schedule to weekly; this interval is long enough to prevent the relationship from updating while you are completing this exercise. In a production environment, the update schedule is typically set to a shorter time frame.
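For example, an administrator could later switch a relationship to the built-in hourly schedule with snapmirror modify. This is shown only as an illustration; do not run it during this exercise:

```
cluster1::> snapmirror modify -destination-path //svm1/svm1_root_lsm1 -schedule hourly
```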

From the cluster1 CLI:

cluster1::> snapmirror create -source-path //svm1/svm1_root -destination-path //svm1/svm1_root_lsm1 -type LS -schedule weekly
[Job 562] Job is queued: snapmirror create for the relationship with destination "cluster1://svm1/svm1_root_lsm1".
[Job 562] Job succeeded: SnapMirror: done
cluster1::> snapmirror create -source-path //svm1/svm1_root -destination-path //svm1/svm1_root_lsm2 -type LS -schedule weekly
[Job 564] Job is queued: snapmirror create for the relationship with destination "cluster1://svm1/svm1_root_lsm2".
[Job 564] Job succeeded: SnapMirror: done
cluster1::>

3. Initialize the SnapMirror relationships between svm1’s root volume and the newly created load-sharing mirrors. All the mirrors can be updated with a single command, snapmirror initialize-ls-set. This command uses the same //svm_name/volume_name syntax used for the source volume. The destination volumes do not need to be specified because the cluster already knows about the load-sharing mirror relationships.

From the cluster1 CLI:

cluster1::> snapmirror initialize-ls-set -source-path //svm1/svm1_root
[Job 565] Job is queued: snapmirror initialize-ls-set for source "cluster1://svm1/svm1_root".
cluster1::>

4. Create a new volume in svm1. The junction path for this new volume will be “/parent2”. “/parent2” can be thought of as a new directory under the root of svm1’s namespace, which lies at “/”. As with the other volume create commands, specify the SVM (by using the vserver parameter), the volume name, the aggregate in which the volume will initially reside, and its size. In addition, specify the export policy to use for controlling client access to the volume.

From the cluster1 CLI:

cluster1::> volume create -vserver svm1 -volume svm1_vol05 -size 1G -junction-path /parent2 -policy default -aggregate aggr1_cluster1_01
[Job 566] Job is queued: Create svm1_vol05.
[Job 566] Job succeeded: Successful
Notice: Volume svm1_vol05 now has a mount point from volume svm1_root. The load sharing (LS) mirrors of volume svm1_root will be updated according to the SnapMirror schedule in place for volume svm1_root. Volume svm1_vol05 will not be visible in the global namespace until the LS mirrors of volume svm1_root have been updated.
cluster1::>

5. At this point, you have a new volume in svm1, located in the namespace location “/parent2”. However, because you have not updated the load-sharing mirrors of the SVM root volume, this namespace location is not visible.

If you do not yet have a PuTTY session open to the RHEL Linux client named rhel1, open one now (right-click the PuTTY icon on the task bar, and select PuTTY from the context menu, username root, password Netapp1!) and run the following command.

[root@rhel1 ~]# ls /mnt/svm1
parent
[root@rhel1 ~]#

Notice that you can see the volume “parent”, but not “parent2”.

6. To be able to see the new namespace location, the load-sharing mirror set must be updated. You can do this update by using the snapmirror update-ls-set command, which has a command syntax similar to the snapmirror initialize-ls-set command used earlier.

From the cluster1 CLI:

cluster1::> snapmirror update-ls-set -source-path //svm1/svm1_root
[Job 567] Job is queued: snapmirror update-ls-set for source "cluster1://svm1/svm1_root".
cluster1::>

7. Run the snapmirror show command to verify that the mirror relationships have finished their update. Repeat until the mirror state is Snapmirrored and the relationship status is Idle.

From the cluster1 CLI:

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.
cluster1::>

Repeat this command as necessary until the mirror state is Snapmirrored, and the relationship status is Idle.

8. At this point, the new volume should be visible to clients. Go back to the Linux client and run the ls command to verify that the volume can now be accessed, using ls /mnt/svm1.

From the Linux client:

[root@rhel1 ~]# ls /mnt/svm1
parent  parent2
[root@rhel1 ~]#

You should be able to see both the parent and parent2 volumes.


3.4 IPspaces, Broadcast Domains, and Subnets

3.4.1 Clustered Data ONTAP 8.3 Networking Overview

Clustered Data ONTAP 8.3 introduces new networking constructs designed to simplify deployment and configuration: IPspaces, broadcast domains, and subnets.

3.4.1.1 IPspaces

An IPspace is a logical construct that represents a space containing unique IP addresses. With clustered Data ONTAP 8.3, multiple SVMs can have overlapping IP addresses provided that each of those SVMs resides in a different IPspace.

When you create an IPspace, it only needs a name. The ipspace create -ipspace my_ipspace command creates an IPspace called “my_ipspace”.

3.4.1.2 Broadcast Domains

Broadcast domains enable you to group network ports that belong to the same layer 2 network. The ports in the group can then be used by an SVM for data or management traffic.

Broadcast domains simplify the configuration of clustered Data ONTAP by making it easier to ensure that all ports in a failover group reside in the same layer 2 network, and all ports in the same layer 2 network have the same maximum transmission unit (MTU) values.

A broadcast domain resides in an IPspace. During cluster initialization, the system creates two default broadcast domains:

• The “Default” broadcast domain contains ports that are in the “Default” IPspace. These ports are used primarily to serve data. Cluster management and node management ports are also in this broadcast domain.

• The “Cluster” broadcast domain contains ports that are in the “Cluster” IPspace. These ports are used for cluster communication, and include all cluster ports from all nodes in the cluster.

If you create unique IPspaces to separate client traffic, then you must create a broadcast domain in each of those IPspaces. If your cluster does not require separate IPspaces, then all broadcast domains (and all ports) reside in the system-created “Default” IPspace.

When you create a broadcast domain, you need to specify the name of the broadcast domain, an IPspace, an MTU value, and a list of ports.

3.4.1.3 Subnets

Subnets in clustered Data ONTAP 8.3 provide a way to provision blocks of IP addresses at a time. They simplify network configuration by allowing the administrator to specify a subnet during LIF creation, rather than an IP address and netmask. A subnet object in clustered Data ONTAP does not need to encompass an entire IP subnet, or even a maskable range within a subnet.

A subnet is created within a broadcast domain, and it contains a pool of IP addresses. You can allocate IP addresses in a subnet to ports in the broadcast domain when LIFs are created. When you remove the LIFs, the IP addresses are returned to the subnet pool, and are available for future LIFs.

If you specify a gateway when defining a subnet, a default route to that gateway is automatically added to the SVM when you create a LIF using that subnet.
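As an illustration (the subnet name and address range here are hypothetical, not part of this lab), a subnet that includes a gateway would be created like this:

```
cluster2::> network subnet create -subnet-name data-subnet -broadcast-domain Default -ipspace Default -subnet 192.168.0.0/24 -ip-ranges 192.168.0.180-192.168.0.189 -gateway 192.168.0.1
```

Any LIF later created with -subnet-name data-subnet would receive an address from that range, and its SVM would automatically get a default route to 192.168.0.1.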


3.4.2 Exercise

In this exercise, you will use the new networking constructs introduced with Data ONTAP 8.3: IPspaces, broadcast domains, and subnets. You will examine the new network route command, and view the automatically created failover groups.

Note: These steps are performed in the CLI because you can only create an IPspace through the CLI, and the creation of a subnet through System Manager requires a default gateway. In most production environments, System Manager is sufficient.

Tip: The following steps are performed on cluster2, not cluster1. You will need to open a new PuTTY session to cluster2 for this exercise.

To create a new IPspace, you use the network ipspace create command. This command requires only one argument, ipspace, which contains the name of the IPspace you want to create.

1. Create a new IPspace on cluster2. You will use this IPspace in the next steps to create a broadcast domain and a subnet.

cluster2::> network ipspace create -ipspace new-ipspace
cluster2::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster2, svm1-dr             Default
new-ipspace         new-ipspace                   -
3 entries were displayed.
cluster2::>

2. Create a new broadcast domain. You will need to specify a name, an IPspace in which it can reside, a set of physical network ports, and an MTU value.

From the cluster2 CLI:

cluster2::> network port broadcast-domain create -ipspace new-ipspace -broadcast-domain new-broadcast-domain -mtu 1500 -ports cluster2-01:e0g,cluster2-01:e0h
cluster2::> network port broadcast-domain show
IPspace Broadcast                                         Update
Name    Domain Name    MTU  Port List                     Status Details
------- ------------ ------ ----------------------------- --------------
Cluster Cluster       9000  -                             -
Default Default       1500  cluster2-01:e0a               complete
                            cluster2-01:e0b               complete
                            cluster2-01:e0c               complete
                            cluster2-01:e0d               complete
                            cluster2-01:e0e               complete
                            cluster2-01:e0f               complete
new-ipspace
        new-broadcast-domain
                      1500  cluster2-01:e0g               complete
                            cluster2-01:e0h               complete
3 entries were displayed.
cluster2::>

3. Create a new subnet using your newly created IPspace and broadcast domain. The subnet object requires a name, a broadcast domain, an IPspace, a subnet mask, and a range of IP addresses.

From the cluster2 CLI:

cluster2::> network subnet create -subnet-name new-subnet -broadcast-domain new-broadcast-domain -ipspace new-ipspace -subnet 192.168.0.0/24 -ip-ranges 192.168.0.170-192.168.0.179
cluster2::> network subnet show
IPspace: Default
Subnet                     Broadcast                   Avail/
Name      Subnet           Domain    Gateway           Total  Ranges
--------- ---------------- --------- --------------- -------- ---------------
dr-subnet 192.168.0.0/24   Default   -                   7/10 192.168.0.160-192.168.0.169
IPspace: new-ipspace
Subnet                     Broadcast                   Avail/
Name      Subnet           Domain    Gateway           Total  Ranges
--------- ---------------- --------- --------------- -------- ---------------
new-subnet
          192.168.0.0/24   new-broadcast-domain
                                     -                  10/10 192.168.0.170-192.168.0.179
2 entries were displayed.
cluster2::>

4. The network route command is new in clustered Data ONTAP 8.3. Use this command to view routing information without viewing routing groups.

You will not see any changes to the routing table output that are caused by the creation of the IPspace, broadcast domain, and subnet in the previous step because you have not created any SVMs that use the IPspace, broadcast domain, or subnet.

From the cluster2 CLI:

cluster2::> network route show
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
cluster2            0.0.0.0/0       192.168.0.1     20
cluster2::>

5. Because all ports in a layer 2 broadcast domain provide the same network connectivity, LIF failover groups are created automatically in clustered Data ONTAP 8.3 when a broadcast domain is created. Use the network interface failover-groups show command to view automatically created failover groups. The automatically configured failover groups have the same name as the broadcast domain that you created.

From the cluster2 CLI:

cluster2::> network interface failover-groups show
                  Failover
Vserver           Group            Targets
----------------- ---------------- --------------------------------------------
cluster2          Default          cluster2-01:e0a, cluster2-01:e0b,
                                   cluster2-01:e0c, cluster2-01:e0d,
                                   cluster2-01:e0e, cluster2-01:e0f
new-ipspace       new-broadcast-domain
                                   cluster2-01:e0g, cluster2-01:e0h
2 entries were displayed.
cluster2::>

You can create IPspaces by using only the CLI, but you can create subnet objects and broadcast domains by using either the CLI, or System Manager. In this subsection, you will learn about the System Manager capabilities for modifying these objects.

First, examine the options available to modify an existing broadcast domain.

6. In Chrome, click the tab for cluster2, and sign in to System Manager (username admin, password Netapp1!).
7. In the left pane, click the Cluster tab.
8. In the left pane, navigate to cluster2 > Configuration > Network.
9. In the “Network” pane, click the Broadcast Domains tab.
10. Click Refresh to make sure that you are seeing the latest information.
11. In the “Broadcast Domain” list, select the new-broadcast-domain entry.
12. Click Edit.



Figure 3-6:

The Edit Broadcast Domains dialog box opens. Examine the options available to modify the broadcast domain.

13. Click Cancel to close the dialog box.



Figure 3-7:

Next, examine the options available to modify an existing subnet object.

14. Click the Subnets tab in the Network pane.
15. Click Refresh to make sure that you are seeing the latest information.
16. In the Subnets list, select the entry for new-subnet.
17. Click Edit.



Figure 3-8:

The “Edit Subnet” dialog box opens. Examine the options available to modify the subnet.

18. In the Broadcast Domain area of the dialog box, expand Show ports on this domain. Review the various settings.
19. When finished, click Cancel to discard any changes you might have made.



Figure 3-9:

Tip:

Export policies, which restrict which clients can access an exported volume or share, are not covered in this lab, but export policy misconfiguration is a common problem that can easily be misinterpreted as a networking problem. If you are able to reach a data LIF through the network by using a utility such as ping, and you have verified that protocol access is enabled and configured properly, check your export policy configuration to verify that it allows access from the client you are attempting to use.

If you would like to learn more about export policies and how to troubleshoot them, please refer to the “Securing Clustered Data ONTAP” lab.

3.5 Quality of Service (QoS)

Quality of service (QoS) in clustered Data ONTAP allows the cluster administrator to limit the IOPS, or raw throughput, available to an SVM, LUN, volume, or file (such as a VMDK file). QoS can be used to control workloads that excessively consume resources, and to manage tenant service levels natively inside the storage system.
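The System Manager steps in the exercise below also have a CLI equivalent. As a sketch (using the policy name from this exercise; verify the exact syntax against your ONTAP version), the same limit could be created and applied with:

```
cluster1::> qos policy-group create -policy-group 100-KB-sec -vserver svm1 -max-throughput 100KB/s
cluster1::> volume modify -vserver svm1 -volume svm1_vol01 -qos-policy-group 100-KB-sec
```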


3.5.1 Exercise

In this activity, you examine the QoS configuration using System Manager on cluster1. This exercise uses a workload generator to drive I/O to an SVM on cluster1. After the workload generator starts, you will configure QoS and see the reduction of I/O operations serviced to the workload generator.

The workload generator runs directly on the Windows jumphost, and targets I/O to the drive letter “Z:”. The jumphost has the drive letter “Z:” mapped to a CIFS share on svm1 in cluster1. The CIFS share is defined on the volume “svm1_vol01” inside svm1.

Note: This exercise uses the PuTTY session for cluster1.

From the Windows host JUMPHOST:

1. Double-click the workload.bat file on the left side of the desktop to start the workload generator.


Figure 3-10:

2. A Windows command prompt window opens, and starts outputting metrics about the I/O load that it is generating against the share mounted on the jump host Z: drive.

Figure 3-11:

In particular, note the values shown for the “ios:” field that quantifies that I/O load. In this exercise, you will configure QoS to limit these I/O operations, thus reducing the amount of load serviced by the cluster.


3. In System Manager for cluster1, click the browser tab for cluster1.
4. In the left pane, click the Storage Virtual Machines tab.
5. Navigate to cluster1 > svm1 > Policies > QoS Policy Groups.
6. In the “QoS Policy Group” pane, click Create.


Figure 3-12:

The “Create Policy Group” dialog box opens.

7. Set the fields in the window as follows:

• Policy Group Name: 100-KB-sec
• Maximum Throughput: 100 KB/s

8. Click the Create button.


Figure 3-13:


The “Create Policy Group” dialog box closes, and you return to the System Manager window.

9. Your newly created policy should be listed in the “QoS Policy Groups” pane.


Figure 3-14:

10. In the left pane of System Manager, navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Volumes.

11. In the “Volumes” pane, select the svm1_vol01 volume.
12. From the buttons at the top of the Volumes pane, click the Storage QoS button. If your browser window is not wide enough to display all the buttons, you can click the small >> button at the right end of the row to reveal the hidden buttons. If you do not even see the >> button, try widening your browser window.



Figure 3-15:

The “Quality of Service Details” dialog box opens.

13. Select the Manage Storage Quality of Service checkbox.
14. Click the option to assign the volume to an Existing Policy Group.
15. Click Choose.



Figure 3-16:

The “Select Policy Group” dialog box opens.

16. Select the 100-KB-sec policy group you created earlier.
17. Click OK.


Figure 3-17:


The “Select Policy Group” dialog box closes, and you return to the “Quality of Service Details” dialog box.

18. Click OK to apply the policy group to the svm1_vol01 volume. This policy group assignment takes effect as soon as you click OK.


Figure 3-18:

19. Quickly go back to the command prompt window that is outputting the metrics from your load generator, and observe that the reported ios: metric has dropped significantly from its previous level. In the example in the screenshot, the ios: values dropped from the 1500 range down to 100 (note the highlighting in the screenshot).



Figure 3-19:

20. With the workload generator window in focus, press Ctrl-C. When asked if you want to terminate the batch job, answer y.


Figure 3-20:

The workload generator window closes, ending this exercise.
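Note: The same QoS configuration can also be performed from the clustered Data ONTAP CLI. The commands below are a sketch of the equivalent steps, shown for reference only (they are not part of this exercise; confirm the exact option names against the man pages for your Data ONTAP version):

cluster1::> qos policy-group create -policy-group 100-KB-sec -vserver svm1 -max-throughput 100KB
cluster1::> volume modify -vserver svm1 -volume svm1_vol01 -qos-policy-group 100-KB-sec

You can then watch the effect of the limit from the cluster side with the qos statistics performance show command, much as the workload generator window shows it from the client side.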


3.6 SnapMirror

SnapMirror is the asynchronous replication technology used in clustered Data ONTAP. Asynchronous replication refers to data that is replicated (backed up to the same site, or an alternate site) on a periodic interval, rather than as soon as the data is written.

MetroCluster, introduced with clustered Data ONTAP 8.3, provides synchronous replication. Synchronous replication refers to data that is replicated (backed up to the same site, or an alternate site) as soon as the data is written. MetroCluster configuration is outside the scope of this lab.

Clustered Data ONTAP 8.3 provides a number of SnapMirror enhancements, including version-flexible SnapMirror functionality that allows the source of a SnapMirror relationship to be upgraded first (assuming that the source and destination both run clustered Data ONTAP 8.3 or later).

In this lab activity, you create a version-flexible SnapMirror relationship between two volumes in cluster1 and cluster2. To do this, you first set up cluster peering between cluster1 and cluster2 by adding LIFs dedicated to intercluster peering, then establish an authenticated relationship between the clusters. After the cluster peering relationship is created, you will create a SnapMirror relationship between a volume on cluster1 (that serves as the source of the SnapMirror relationship) and another volume on cluster2 (that serves as the disaster recovery (DR) copy).

3.6.1 Exercise

3.6.1.1 Create Intercluster LIFs

Before you set up the authenticated relationship between cluster1 and cluster2, the clusters must be able to communicate with each other. Intercluster LIFs serve this purpose.

Perform the following tasks to create intercluster LIFs.

Attention: In this exercise, you use System Manager both on cluster1 and on cluster2, so pay special attention to which cluster you are connected to during each step.

1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Cluster tab.
3. Navigate to cluster1 > Configuration > Network.
4. In the Network pane, click the Network Interfaces tab.
5. In the Network pane, click the Create button.



Figure 3-21:

The “Create Network Interface” dialog box opens.

6. Set the name to intercluster_lif1.
7. In the Interface Role section, select the Intercluster Connectivity option.
8. In the Port section, expand the Port or Adapters list for cluster1-01 and select port e0c.
9. Click Create.



Figure 3-22:

The dialog box closes and you return to the System Manager window.

10. Your newly created intercluster_lif1 LIF should be listed under the Network Interface tab in the Networks pane.
11. Every node in cluster1 requires an intercluster LIF, and since cluster1 is a two-node cluster, you also need to create an intercluster LIF for cluster1-02. Click the Create button again.



Figure 3-23:

The “Create Network Interface” dialog window opens.

12. Set the name to intercluster_lif2.
13. In the Interface Role section, select the Intercluster Connectivity option.
14. In the Port section, expand the Port or Adapters list for cluster1-02, and select port e0c.
15. Click Create.



Figure 3-24:

The dialog box closes, and you return to the System Manager window. At this point, you have an intercluster LIF on each node in cluster1. When you created both intercluster LIFs, you accepted the default to have Data ONTAP automatically select an IP address from the subnet. Review those LIFs to verify which IP addresses Data ONTAP assigned to the intercluster LIFs.

16. System Manager should still show the Network Interface list in the Network pane. Scroll down to the bottom of the list to see the entries for the new intercluster LIFs that you created. The IP addresses of those LIFs are included in the list entries.

17. If you click a specific LIF, you can see more detail displayed on the bottom of the pane.


In this example, the IP addresses for the intercluster LIFs are 192.168.0.158 and 192.168.0.159. However, because Data ONTAP automatically assigns these addresses, it is possible that the values in your lab are different from the values in the example.

Attention: Record the actual addresses assigned to the intercluster LIFs in your lab because you will need them for a later step of the lab.


Figure 3-25:

After you create the intercluster LIFs for cluster1, create the intercluster LIFs for cluster2. cluster2 contains a single node, so you will create only one intercluster LIF for this cluster.

18. In your Chrome browser, click the browser tab for cluster2.
19. In the left pane, click the Cluster tab.
20. Navigate to cluster2 > Configuration > Network.
21. In the Network pane, click the Network Interfaces tab.
22. In the Network pane, click Create.



Figure 3-26:

The “Create Network Interface” dialog box opens.

23. Set the name to intercluster_lif1 (you can use the same name here that you used on cluster1 because LIF names are scoped to the containing cluster).
24. In the Interface Role section, select the Intercluster Connectivity option.
25. In the Port section, expand the Port or Adapters list for cluster2-01, and select port e0c.
26. Click Create.



Figure 3-27:

The dialog box closes, and you return to the System Manager window.

27. Record the IP address that Data ONTAP automatically assigned to your LIF. In this example, the address is 192.168.0.163, but the value may be different in your lab.



Figure 3-28:

Cluster2 only contains a single node, so this one intercluster LIF is all you need.
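Note: Intercluster LIFs can also be created from the CLI with the network interface create command. The following is a sketch of the equivalent commands; the subnet name "Demo" is illustrative only, so substitute the subnet defined in your environment:

cluster1::> network interface create -vserver cluster1 -lif intercluster_lif1 -role intercluster -home-node cluster1-01 -home-port e0c -subnet-name Demo
cluster1::> network interface show -role intercluster

The show command lets you confirm the automatically assigned IP addresses without browsing the System Manager list.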

Now that all your nodes have intercluster LIFs, it's time to establish the cluster peering relationship.

28. In your Chrome browser, click the browser tab for cluster1.
29. In the left pane, click the Cluster tab.
30. Navigate to cluster1 > Configuration > Peers.
31. In the Peers pane, click Create.



Figure 3-29:

The “Create Cluster Peer” dialog box opens.

32. In the Passphrase box, enter Netapp1!.
33. In the Intercluster IP Addresses box, add the IP address that you noted earlier for the intercluster LIF (intercluster_lif1) from the node cluster2-01.

Caution: In the example shown in this lab, the address was 192.168.0.163, but the address that Data ONTAP assigned to the LIF in your lab may be different.

34. Click the Create button.



Figure 3-30:

The “Confirm Create Cluster Peer” dialog box opens.

35. Click OK.


Figure 3-31:

The dialog box closes, and you return to the System Manager window.

36. An entry for cluster2 now appears in the Peers list, but it is shown as “unavailable” because the authentication status is still pending. You have initiated a cluster peering operation from cluster1, but to complete it, cluster2 must also accept the peering request.



Figure 3-32:

Switch back to cluster2 so that you can accept the cluster peering operation.

37. In your Chrome browser, click the browser tab for cluster2.
38. In the left pane, click the Cluster tab.
39. Navigate to cluster2 > Configuration > Peers.
40. In the “Peers” pane, click Create.



Figure 3-33:

The “Create Cluster Peer” dialog box opens.

41. In the Passphrase box, enter the same password you used earlier, Netapp1!.
42. In the Intercluster IP addresses box, enter the IP addresses that you noted earlier for the intercluster LIFs (intercluster_lif1 and intercluster_lif2) from the nodes cluster1-01 and cluster1-02.

Caution: In the example shown in this lab, those addresses were 192.168.0.158 and 192.168.0.159, but the addresses that Data ONTAP assigned to the LIFs in your lab may be different.

43. When finished entering the values, click the Create button.



Figure 3-34:

The “Confirm Create Cluster Peer” dialog box opens.

44. Click the OK button.


Figure 3-35:

The dialog box closes, and you return to the System Manager window.

45. System Manager takes a few moments to create the peer relationship between cluster1 and cluster2. The authentication status for that relationship should change to “ok” immediately, but the Availability column will remain at “peering”.
46. Wait a few seconds, then click Refresh every 1–2 seconds until the Availability column changes from peering to available.



Figure 3-36:

At this point, the two clusters have an established peering relationship. Next, you can create a SnapMirror relationship.
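Note: The same peering relationship can be established from the CLI with the cluster peer create command, run once on each cluster with the other cluster's intercluster LIF addresses. A sketch of the equivalent commands follows (the addresses are this lab's example values, and each create command prompts you to enter and confirm the passphrase):

cluster1::> cluster peer create -peer-addrs 192.168.0.163
cluster2::> cluster peer create -peer-addrs 192.168.0.158,192.168.0.159
cluster1::> cluster peer show

This sketch assumes the defaults for the remaining options; verify the syntax against the command reference for your Data ONTAP version.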

3.6.1.2 Create a SnapMirror Relationship

Because you created a peering relationship between the two clusters, they are now capable of entering into SnapMirror relationships with each other. In this exercise, you will establish a SnapMirror relationship between an SVM volume on each cluster.

1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Storage Virtual Machines tab.
3. In the left pane, navigate to cluster1 > svm1 > Storage > Volumes.
4. In the Volumes pane, select the entry for svm1_vol01.
5. In the Volumes pane, click the Protect by button on the right side of the button bar. If you don’t see this button, then it may be hidden because your browser window is not wide enough. In this case, use the >> button at the far right side of the button bar to display the hidden buttons.

6. In the drop-down menu for the Protect by button, select Mirror.



Figure 3-37:

The “Create Mirror Relationship” dialog box opens.

7. In the “Destination Volume” section, verify that the Cluster list is set to cluster2, and set the Storage Virtual Machine list to svm1-dr.

8. Note the warning under this list saying that the selected SVM is not peered. Click the Authenticate link at the end of that sentence.



Figure 3-38:

The Authentication dialog box opens.

9. Set the user name to admin, and the password to Netapp1!.
10. Click OK.



Figure 3-39:

The Authentication dialog box closes and the system processes the SVM peering operation. After a few seconds, you return to the “Create Mirror Relationship” dialog box.

11. In the “Destination Volume” section, accept the default values that System Manager populated into the Volume Name box (svm1_svm1_vol01_mirror1) and the Aggregate box (aggr1_cluster2_01).

12. In the Configuration Details section, select the Create version flexible mirror relationship checkbox. This is a new feature introduced in 8.3 that removes the limitation requiring the destination controller to have a clustered Data ONTAP operating system major version number equal to or higher than the major version of the source controller. This allows customers to maintain undisrupted replication during Data ONTAP upgrade cycles.

13. In the Mirror Schedule list, select the daily value.
14. When finished, click Create.



Figure 3-40:

The Create Mirror Relationship wizard begins the process of establishing and initializing the SnapMirror relationship between the volumes.

15. When the status of all the initialization operations indicates success, click OK.



Figure 3-41:

You have now successfully established a SnapMirror relationship. To verify the status of that relationship, you'll need to look at the destination cluster.

16. In Chrome, select the browser tab for cluster2.
17. Select the Storage Virtual Machines tab.
18. Navigate to cluster2 > svm1-dr > Protection.
19. In the Protection pane, select the relationship for source volume svm1_vol01. This should be the only relationship listed.
20. In the lower pane, click the Details tab.
21. Examine the details of this relationship, which indicate that it is healthy and that the last transfer completed just a few moments ago.



Figure 3-42:

This completes the exercise.
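Note: The equivalent volume SnapMirror setup can also be sketched from the CLI. System Manager created the destination volume for you; from the CLI you would first create a data protection (type DP) destination volume, then create and initialize the relationship (the -type XDP option requests a version-flexible mirror). This is a sketch for reference only, not part of the exercise:

cluster2::> volume create -vserver svm1-dr -volume svm1_svm1_vol01_mirror1 -aggregate aggr1_cluster2_01 -type DP -size 128MB
cluster2::> snapmirror create -source-path svm1:svm1_vol01 -destination-path svm1-dr:svm1_svm1_vol01_mirror1 -type XDP -schedule daily
cluster2::> snapmirror initialize -destination-path svm1-dr:svm1_svm1_vol01_mirror1

The exercise you just completed accomplishes the same result through System Manager.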

3.7 Disaster Recovery for Storage Virtual Machines

Traditional volume SnapMirror requires you to set up a separate mirroring relationship for each volume you want to mirror. In cases where you want to mirror many volumes for an SVM, you have to set up many SnapMirror relationships, and even then you have to manually maintain all the configuration for the destination SVM, including setting up LIFs, namespaces, protocols, and so on.

Disaster Recovery for Storage Virtual Machines, also referred to as SVM DR, is a solution that uses SnapMirror to mirror a storage virtual machine's (SVM's) entire set of volumes and its configuration. It simplifies failover by minimizing or completely avoiding manual configuration at the destination SVM through automated setup and change management.

To set up an SVM DR relationship, you create one SnapMirror relationship that replicates the entire SVM's contents, and as you add, remove, or re-junction volumes, SVM DR automatically applies those changes to the destination SVM according to your replication schedule, potentially along with other SVM configuration settings.

When you create an SVM DR relationship, you can choose to replicate all or a subset of the source SVM's configuration to the destination SVM. This choice is controlled through the -identity-preserve command line option.

When -identity-preserve is set to true, SVM DR replicates the source SVM configuration settings listed in the following figure to the destination SVM. Since this mode replicates network identity information, the destination SVM does require access to the same network resources (physical/virtual networks, Active Directory servers, etc.) as the source SVM. This is the identity preserve mode that most customers will likely want to deploy for disaster recovery needs.


Figure 3-43:

When -identity-preserve is set to false, only a subset of the source SVM's configuration data is replicated to the destination SVM, as described in the following figure. This mode is intended for replication to different sites that have different network resources, or to support the creation of additional read-only copies of the SVM within the same environment as the source SVM.


Figure 3-44:

As with traditional volume SnapMirror, SVM DR relationships can be broken off, reversed, and re-synchronized, allowing you to cut over the SVM's services from one cluster to another. If -identity-preserve is set to true, then when you stop the source SVM and start the destination SVM, the destination SVM has the same LIFs, IP addresses, namespace structure, and so on. However, such a switchover is disruptive for both CIFS (which requires an SMB reconnect) and NFS (which requires a re-mount).
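For example, a cutover for an identity-preserve relationship follows this general command sequence. This is a sketch only; the complete procedure, including the steps to later revert back to the original source, is covered in the SVM DR documentation:

cluster1::> vserver stop -vserver svm3
cluster2::> snapmirror break -destination-path svm3-dr:
cluster2::> vserver start -vserver svm3-dr

After the break, the destination SVM serves data in place of the stopped source SVM.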

SVM DR does not replicate iSCSI or FCP configuration in either -identity-preserve mode. The underlying volumes, LUNs, and namespace are still replicated, as are the LIFs if -identity-preserve is set to true, but LUN igroups/portsets will not be replicated, nor will the SVM's iSCSI/FCP protocol configuration. If you want to support iSCSI/FCP through an SVM DR relationship, then you will have to manually configure the iSCSI/FCP protocols, igroups, and portsets on the destination SVM.

3.7.1 Exercise


In this exercise, you will create an identity-preserve "true" SVM DR relationship from the source SVM svm3 on cluster1 to a new SVM named svm3-dr that you will create on cluster2. You will then perform a cutover operation, making svm3-dr the new operational primary, and then revert the primary back to svm3.

Note: This lab utilizes CLI sessions to the storage clusters cluster1 and cluster2, and to the Linux client rhel1. You will be frequently switching between these sessions, so pay attention to the command prompts in this exercise to help you issue the commands on the correct hosts.

1. Open a PuTTY session to each of cluster1 and cluster2, and log in with the username admin and the password Netapp1!.


2. Open a PuTTY session to rhel1, and log in as root with the password Netapp1!.
3. In the PuTTY session for cluster2, display a list of the SVMs on the cluster.

cluster2::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
3 entries were displayed.

cluster2::>

4. Create the destination SVM svm3-dr.

cluster2::> vserver create -vserver svm3-dr -subtype dp-destination
[Job 314] Job is queued: Create svm3-dr.
[Job 314] Job succeeded:

Vserver creation completed

cluster2::>

5. List the SVMs on cluster2.

cluster2::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
svm3-dr     data    dp-destination
                               running    stopped     -          -
4 entries were displayed.

cluster2::>

Notice that the svm3-dr SVM is administratively running but is operationally stopped.

6. On cluster2, initiate an SVM peering relationship between the svm3-dr and svm3.

cluster2::> vserver peer create -vserver svm3-dr -peer-vserver svm3 -applications snapmirror -peer-cluster cluster1

Info: [Job 315] 'vserver peer create' job queued

cluster2::>

7. View the SVM peering status.

cluster2::> vserver peer show
            Peer        Peer         Peering
Vserver     Vserver     State        Applications
----------- ----------- ------------ ------------------
svm1-dr     svm1        peered       snapmirror
svm3-dr     svm3        initiated    snapmirror
2 entries were displayed.

cluster2::>

8. On cluster1, view the SVM peering status.

cluster1::> vserver peer show
            Peer        Peer         Peering
Vserver     Vserver     State        Applications


----------- ----------- ------------ ------------------
svm1        svm1-dr     peered       snapmirror
svm3        svm3-dr     pending      snapmirror
2 entries were displayed.

cluster1::>

9. Accept the pending peering request.

cluster1::> vserver peer accept -vserver svm3 -peer-vserver svm3-dr

Info: [Job 1030] 'vserver peer accept' job queued

cluster1::>

10. View the SVM peering status again.

cluster1::> vserver peer show
            Peer        Peer         Peering
Vserver     Vserver     State        Applications
----------- ----------- ------------ ------------------
svm1        svm1-dr     peered       snapmirror
svm3        svm3-dr     peered       snapmirror
2 entries were displayed.

cluster1::>

11. On cluster2, create the SnapMirror relationship between the source SVM svm3 and the destination SVM svm3-dr.

cluster2::> snapmirror create -source-path svm3: -destination-path svm3-dr: -type DP -throttle unlimited -identity-preserve true -schedule hourly

cluster2::>

If you are familiar with creating volume SnapMirror relationships from the CLI, then this command should look familiar, as it is essentially the same command used for volume SnapMirror, but with a few key differences. Most significant is the format of the values for the -source-path and -destination-path arguments. Path values for volume SnapMirror take the form <svm>:<volume>, whereas for SVM DR, paths take the form <svm>:. One other difference is the inclusion of the -identity-preserve true option, which indicates that this is an identity preserve relationship, meaning that all of the SVM's configuration information should be replicated to the destination SVM. If you were to instead specify -identity-preserve false, then this would instead be an identity discard relationship.

12. Display the state of the cluster's SnapMirror relationships.

cluster2::> snapmirror show
                                                                     Progress
Source          Destination  Mirror         Relationship Total      Last
Path       Type Path         State          Status       Progress Healthy Updated
---------- ---- ------------ -------------- ------------ -------- ------- -------
svm1:svm1_vol01
           XDP  svm1-dr:svm1_svm1_vol01_mirror1
                             Snapmirrored   Idle         -        true    -
svm3:      DP   svm3-dr:     Uninitialized  Idle         -        true    -
2 entries were displayed.

cluster2::>

Data ONTAP has created the relationship, but not yet initialized it (i.e., it has not initiated the first data transfer).

13. Initialize the SnapMirror relationship.

cluster2::> snapmirror initialize -destination-path svm3-dr:

cluster2::>


14. View the status of the SnapMirror relationships again.

cluster2::> snapmirror show
                                                                     Progress
Source          Destination  Mirror         Relationship Total      Last
Path       Type Path         State          Status       Progress Healthy Updated
---------- ---- ------------ -------------- ------------ -------- ------- -------
svm1:svm1_vol01
           XDP  svm1-dr:svm1_svm1_vol01_mirror1
                             Snapmirrored   Idle         -        true    -
svm3:      DP   svm3-dr:     Uninitialized  Transferring -        true    -
2 entries were displayed.

cluster2::>

Data has started transferring for the relationship.

Notice that there is only a single entry displayed for the SVM DR relationship, even though behind the scenes there are multiple SnapMirror relationships in operation for this relationship.

15. Display the status of all the constituents for the SVM disaster recovery relationships.

cluster2::> snapmirror show -expand
                                                                     Progress
Source          Destination  Mirror         Relationship Total      Last
Path       Type Path         State          Status       Progress Healthy Updated
---------- ---- ------------ -------------- ------------ -------- ------- -------
svm1:svm1_vol01
           XDP  svm1-dr:svm1_svm1_vol01_mirror1
                             Snapmirrored   Idle         -        true    -
svm3:      DP   svm3-dr:     Uninitialized  Transferring -        true    -
2 entries were displayed.

cluster2::>

When you initialize an SVM DR relationship, clustered Data ONTAP starts replicating the configuration data first, which includes details of the source SVM's volumes, and then afterward starts replicating the source SVM's constituent volumes. If you issue a snapmirror show -expand command early in the initialization process, then the constituent relationships may not yet exist.

16. Periodically repeat the snapmirror show -expand command until you start seeing output for the constituent relationships.

cluster2::> snapmirror show -expand
                                                                     Progress
Source          Destination  Mirror         Relationship Total      Last
Path       Type Path         State          Status       Progress Healthy Updated
---------- ---- ------------ -------------- ------------ -------- ------- -------
svm1:svm1_vol01
           XDP  svm1-dr:svm1_svm1_vol01_mirror1
                             Snapmirrored   Idle         -        true    -
svm3:      DP   svm3-dr:     Uninitialized  Transferring -        true    -
svm3:chn   DP   svm3-dr:chn  Uninitialized  Idle         -        true    -
svm3:eng   DP   svm3-dr:eng  Uninitialized  Idle         -        true    -
svm3:fin   DP   svm3-dr:fin  Uninitialized  Idle         -        true    -
svm3:mfg   DP   svm3-dr:mfg  Uninitialized  Idle         -        true    -
svm3:prodA DP   svm3-dr:prodA
                             Uninitialized  Idle         -        true    -
svm3:proj1 DP   svm3-dr:proj1
                             Uninitialized  Idle         -        true    -
8 entries were displayed.


cluster2::>
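If you prefer not to re-run the command by hand, the polling in the step above can be scripted. This sketch is not part of the lab: it works on captured snapmirror show text (the sample lines below are illustrative stand-ins for real output) and simply reports which relationships are not yet Idle; the fifth whitespace-separated field is assumed to hold the Relationship Status.

```shell
# Illustrative only: report relationships whose Status field is not "Idle".
# In the lab you would capture this text from "snapmirror show -expand"
# (for example over SSH); here a hard-coded sample stands in for it.
sample='svm3:eng DP svm3-dr:eng Snapmirrored Idle - true -
svm3:fin DP svm3-dr:fin Uninitialized Transferring - true -'

# Field 5 holds the Relationship Status in this flattened form.
busy=$(echo "$sample" | awk '$5 != "Idle" {print $1}')

if [ -z "$busy" ]; then
    echo "all relationships idle"
else
    echo "still busy: $busy"
fi
```

In a real loop you would wrap this in `while`/`sleep` and break once `$busy` is empty.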

17. Periodically issue the snapmirror show command until the relationship status changes to "Idle".

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

cluster2::>

The relationship has completed initialization, meaning that the destination SVM is now a mirrored copyof the source SVM.

18. Examine the status of the constituent relationships.

cluster2::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Snapmirrored
                                      Idle           -         true    -
svm3:chn    DP   svm3-dr:chn  Snapmirrored
                                      Idle           -         true    -
svm3:eng    DP   svm3-dr:eng  Snapmirrored
                                      Idle           -         true    -
svm3:fin    DP   svm3-dr:fin  Snapmirrored
                                      Idle           -         true    -
svm3:mfg    DP   svm3-dr:mfg  Snapmirrored
                                      Idle           -         true    -
svm3:prodA  DP   svm3-dr:prodA
                              Snapmirrored
                                      Idle           -         true    -
svm3:proj1  DP   svm3-dr:proj1
                              Snapmirrored
                                      Idle           -         true    -
svm3:us     DP   svm3-dr:us   Snapmirrored
                                      Idle           -         true    -
9 entries were displayed.

cluster2::>

These are likewise now all "Idle".

19. Display a list of the volumes on cluster2.

cluster2::> vol show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster2  MDV_CRS_5165c5f0174711e4b3b8005056990685_A
                       aggr1_cluster2_01
                                    online     RW         20MB    18.79MB    6%
cluster2  MDV_CRS_5165c5f0174711e4b3b8005056990685_B
                       aggr1_cluster2_01
                                    online     RW         20MB    18.89MB    5%
cluster2-01
          vol0         aggr0        online     RW       7.17GB     4.23GB   40%
svm1-dr   svm1_svm1_vol01_mirror
                       aggr1_cluster2_01
                                    online     DP      128.0MB    121.3MB    5%
svm1-dr   svm1_svm1_vol01_mirror1
                       aggr1_cluster2_01
                                    online     DP      128.0MB    121.4MB    5%
svm1-dr   svm1dr_root  aggr1_cluster2_01
                                    online     RW         20MB    18.88MB    5%
svm3-dr   chn          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   eng          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   fin          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   mfg          aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   prodA        aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   proj1        aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
svm3-dr   svm3_root    aggr1_cluster2_01
                                    online     RW         20MB    18.85MB    5%
svm3-dr   us           aggr1_cluster2_01
                                    online     DP          1GB    972.5MB    5%
14 entries were displayed.

cluster2::>

You can see here that svm3-dr has 8 volumes, which correspond to the 8 volumes on svm3. Also notice the two MDV* volumes at the beginning of the output; these are special volumes that clustered Data ONTAP uses to replicate the SVM DR configuration data from the source SVM to the destination SVM.

20. Display a list of the volume snapshots for svm3-dr.


Note: Since this command output is lengthy, the following CLI examples will focus on just the eng volume, but in your lab feel free to exclude the -volume eng portion of the command so you can see the snapshots for all of svm3-dr's volumes.

cluster2::> snapshot show -vserver svm3-dr -volume eng
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm3-dr  eng
                  daily.2015-10-03_0010                    168KB     0%   37%
                  daily.2015-10-04_0010                     84KB     0%   23%
                  weekly.2015-10-04_0015                   192KB     0%   40%
                  hourly.2015-10-04_1205                   144KB     0%   33%
                  hourly.2015-10-04_1305                   148KB     0%   34%
                  hourly.2015-10-04_1405                   144KB     0%   33%
                  hourly.2015-10-04_1505                   152KB     0%   35%
                  hourly.2015-10-04_1605                   156KB     0%   35%
                  hourly.2015-10-04_1705                   148KB     0%   34%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                             0B     0%    0%
10 entries were displayed.

cluster2::> exit

Notice the "vserverdr" snapshot created by SnapMirror.

21. On cluster1, display the list of snapshots for svm3's volumes.

cluster1::> snapshot show -vserver svm3 -volume eng
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm3     eng
                  daily.2015-10-03_0010                    168KB     0%   34%
                  daily.2015-10-04_0010                     84KB     0%   21%
                  weekly.2015-10-04_0015                   192KB     0%   38%
                  hourly.2015-10-04_1205                   144KB     0%   31%
                  hourly.2015-10-04_1305                   148KB     0%   32%
                  hourly.2015-10-04_1405                   144KB     0%   31%
                  hourly.2015-10-04_1505                   152KB     0%   32%
                  hourly.2015-10-04_1605                   156KB     0%   33%
                  hourly.2015-10-04_1705                   148KB     0%   32%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                           92KB     0%   22%
10 entries were displayed.

cluster1::>

The list of snapshots is the same on both the source and destination volumes.
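The equality claimed above can also be checked mechanically. This is a small sketch, not part of the lab: the two lists below are abbreviated stand-ins for snapshot names captured from each cluster's snapshot show output.

```shell
# Illustrative comparison of source and destination snapshot name lists.
src_snaps='daily.2015-10-03_0010
daily.2015-10-04_0010
weekly.2015-10-04_0015'
dst_snaps='daily.2015-10-03_0010
daily.2015-10-04_0010
weekly.2015-10-04_0015'

# A simple string comparison is enough once both lists are sorted the same way.
if [ "$src_snaps" = "$dst_snaps" ]; then
    echo "snapshot lists match"
else
    echo "snapshot lists differ"
fi
```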

22. On cluster2, initiate a SnapMirror update to transfer any changes made on the source SVM since the last transfer to the destination SVM.

cluster2::> snapmirror update -destination-path svm3-dr:

cluster2::>

23. Periodically view the status of the SnapMirror relationships until it goes idle.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

cluster2::>

24. View the status of the constituent relationships.

cluster2::> snapmirror show -expand


                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Snapmirrored
                                      Transferring   896KB     true    -
svm3:chn    DP   svm3-dr:chn  Snapmirrored
                                      Idle           -         true    -
svm3:eng    DP   svm3-dr:eng  Snapmirrored
                                      Idle           -         true    -
svm3:fin    DP   svm3-dr:fin  Snapmirrored
                                      Idle           -         true    -
svm3:mfg    DP   svm3-dr:mfg  Snapmirrored
                                      Idle           -         true    -
svm3:prodA  DP   svm3-dr:prodA
                              Snapmirrored
                                      Idle           -         true    -
svm3:proj1  DP   svm3-dr:proj1
                              Snapmirrored
                                      Idle           -         true    -
svm3:us     DP   svm3-dr:us   Snapmirrored
                                      Idle           -         true    -
9 entries were displayed.

cluster2::>

25. Display again the list of svm3-dr's volume snapshots.

cluster2::> snapshot show -vserver svm3-dr -volume eng
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm3-dr  eng
                  daily.2015-10-03_0010                    168KB     0%   37%
                  daily.2015-10-04_0010                     84KB     0%   23%
                  weekly.2015-10-04_0015                   192KB     0%   40%
                  hourly.2015-10-04_1205                   144KB     0%   33%
                  hourly.2015-10-04_1305                   148KB     0%   34%
                  hourly.2015-10-04_1405                   144KB     0%   33%
                  hourly.2015-10-04_1505                   152KB     0%   35%
                  hourly.2015-10-04_1605                   156KB     0%   35%
                  hourly.2015-10-04_1705                   148KB     0%   34%
                  vserverdr.0.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175125
                                                          124KB     0%   30%
                  vserverdr.1.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175646
                                                             0B     0%    0%
11 entries were displayed.

cluster2::>

Now there are two vserverdr* snapshots listed. After your first update, SnapMirror maintains two rolling snapshots on the destination volume going forward.
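You can confirm the rolling-snapshot count mechanically. This sketch is illustrative only; the snapshot names below are abbreviated stand-ins for the listing above.

```shell
# Count SnapMirror's rolling "vserverdr" snapshots in a captured listing.
snaps='daily.2015-10-04_0010
weekly.2015-10-04_0015
vserverdr.0.502f4455.2015-10-04_175125
vserverdr.1.502f4455.2015-10-04_175646'

# grep -c counts matching lines; the rolling snapshots all begin "vserverdr".
rolling=$(echo "$snaps" | grep -c '^vserverdr')
echo "rolling snapshots: $rolling"
```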

26. On cluster1, look at the snapshots on the source volumes.

cluster1::> snapshot show -vserver svm3 -volume eng
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm3     eng
                  daily.2015-10-03_0010                    168KB     0%   37%
                  daily.2015-10-04_0010                     84KB     0%   23%
                  weekly.2015-10-04_0015                   192KB     0%   40%
                  hourly.2015-10-04_1205                   144KB     0%   34%
                  hourly.2015-10-04_1305                   148KB     0%   34%
                  hourly.2015-10-04_1405                   144KB     0%   34%
                  hourly.2015-10-04_1505                   152KB     0%   35%
                  hourly.2015-10-04_1605                   156KB     0%   35%
                  hourly.2015-10-04_1705                   152KB     0%   35%
                  vserverdr.1.502f4455-6abf-11e5-b770-005056990685.2015-10-04_175646
                                                           96KB     0%   25%
10 entries were displayed.

cluster1::>

Even after the first update, the source volumes continue to host a single rolling snapshot for SnapMirror.

27. The Linux host rhel1 has had svm3's root namespace volume NFS-mounted since the start of the lab. Display the /etc/fstab entry for this mount. (The /etc/fstab file lists the local disks and NFS file systems that should be automatically mounted at system boot time.)

[root@rhel1 ~]# grep svm3 /etc/fstab
svm3:/    /corp    nfs    defaults    0 0


[root@rhel1 ~]#

28. Display the details of that existing mount.

[root@rhel1 ~]# df /corp
Filesystem  1K-blocks  Used  Available  Use%  Mounted on
svm3:/          19456   128      19328    1%  /corp

[root@rhel1 ~]#

Svm3's namespace root is mounted as /corp on rhel1.

29. List the contents of the /corp directory.

[root@rhel1 ~]# ls /corp
eng  fin  mfg
[root@rhel1 ~]#

You have no problem displaying the contents.

Next you initiate a cut-over. As mentioned in the introduction to this exercise, a cut-over is disruptive to NFS clients in this initial release of SVM disaster recovery, so you should unmount the NFS volume from the rhel1 client before the cut-over.

30. Unmount the /corp mount.

[root@rhel1 ~]# umount /corp
[root@rhel1 ~]#

31. On cluster2, quiesce any running SnapMirror operations.

cluster2::> snapmirror quiesce -destination-path svm3-dr:

cluster2::>

32. Verify that the snapmirror relationship is quiesced.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Snapmirrored
                                      Quiesced       -         true    -
cluster2::>

33. Break off the SnapMirror relationship.

cluster2::> snapmirror break -destination-path svm3-dr:

cluster2::>

34. Display the status of the SnapMirror relationships.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Broken-off
                                      Idle           -         true    -
2 entries were displayed.


cluster2::>

The relationship for svm3-dr is broken-off.

35. Examine the status of the svm3-dr SVM.

cluster2::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
svm3-dr     data    default    running    stopped     svm3_root  aggr1_
                                                                 cluster2_
                                                                 01
4 entries were displayed.

cluster2::>

It is administratively running but operationally stopped, as it should be since you have not cut over yet.

36. Examine the status of svm3-dr's LIFs.

cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.

cluster2::>

The LIFs are configured but down.

37. On cluster1, display the status of the SVMs.

cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_
                                                                 cluster1_
                                                                 01
svm2        data    default    running    running     svm2_root  aggr1_
                                                                 cluster1_
                                                                 02
svm3        data    default    running    running     svm3_root  aggr1_
                                                                 cluster1_
                                                                 01
6 entries were displayed.

cluster1::>

Svm3 is both administratively and operationally running.

38. Check the status of svm3's LIFs.

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.

cluster1::>

The LIFs are both up. If you compare the IP addresses on these LIFs with the ones you saw a couple of steps back for svm3-dr, you'll see that they are the same. This is because you specified the -identity-preserve true option when you established the SVM disaster recovery relationship at the beginning of this exercise.

39. Stop svm3.

cluster1::> vserver stop -vserver svm3
[Job 1033] Job is queued: Vserver Stop.
[Job 1033] Job succeeded: DONE

cluster1::>

40. View svm3's status.

cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_
                                                                 cluster1_
                                                                 01
svm2        data    default    running    running     svm2_root  aggr1_
                                                                 cluster1_
                                                                 02
svm3        data    default    stopped    stopped     svm3_root  aggr1_
                                                                 cluster1_
                                                                 01
6 entries were displayed.

cluster1::>

The SVM is both administratively and operationally stopped.

41. Examine svm3's LIFs.

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.

cluster1::>

The LIFs are also down.

42. On cluster2, view the status of svm3-dr.

cluster2::> vserver show -vserver svm3-dr
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
svm3-dr     data    default    running    stopped     svm3_root  aggr1_
                                                                 cluster2_
                                                                 01
4 entries were displayed.

cluster2::>

It's still administratively running but is operationally down.

43. Start svm3-dr.

cluster2::> vserver start -vserver svm3-dr
[Job 326] Job is queued: Vserver Start.
[Job 326] Job succeeded: DONE

cluster2::>

44. View svm3-dr's status again.

cluster2::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
svm3-dr     data    default    running    running     svm3_root  aggr1_
                                                                 cluster2_
                                                                 01
4 entries were displayed.

cluster2::>

It is now administratively and operationally running.

45. Examine the status of svm3-dr's LIFs.

cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.

cluster2::>

Both LIFs are up and operational.

46. Examine svm3-dr's volumes.

cluster2::> vol show -vserver svm3-dr
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm3-dr   chn          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   eng          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   fin          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   mfg          aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   prodA        aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   proj1        aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
svm3-dr   svm3_root    aggr1_cluster2_01
                                    online     RW         20MB    18.85MB    5%
svm3-dr   us           aggr1_cluster2_01
                                    online     RW          1GB    972.5MB    5%
8 entries were displayed.

cluster2::>

The volumes are all present and writable.
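The cut-over you just performed condenses to the following runbook. The cluster and SVM names are the ones used in this lab; treat this as a sketch of the sequence rather than a definitive procedure.

```
On cluster2 (the DR site):
    snapmirror quiesce -destination-path svm3-dr:
    snapmirror break -destination-path svm3-dr:
On cluster1 (the primary site):
    vserver stop -vserver svm3
On cluster2:
    vserver start -vserver svm3-dr
```

In a real disaster the primary cluster may be unreachable, in which case the vserver stop step is skipped and you proceed directly to breaking the relationship and starting the DR SVM.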

47. On rhel1, mount all /etc/fstab entries that are not currently mounted. Alternatively, you can use the command mount svm3:/ /corp to manually mount /corp.

[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

48. View the details of the mount.

[root@rhel1 ~]# df /corp
Filesystem  1K-blocks  Used  Available  Use%  Mounted on
svm3:/          19456   128      19328    1%  /corp

[root@rhel1 ~]#

49. List the contents of the /corp directory.

[root@rhel1 ~]# ls /corp
eng  fin  mfg
[root@rhel1 ~]#

50. Change directory to /corp/mfg/chn.

[root@rhel1 ~]# cd /corp/mfg/chn
[root@rhel1 ~]#

51. List the directory contents.

[root@rhel1 ~]# ls
[root@rhel1 ~]#

The directory is empty.

52. Create a new volume named prodB and junction it into the namespace at /mfg/chn/prodB.

cluster2::> volume create -vserver svm3-dr -volume prodB -aggregate aggr1_cluster2_01
            -space-guarantee volume -policy default -junction-path /mfg/chn/prodB
[Job 368] Job is queued: Create prodB.
[Job 368] Job succeeded: Successful

cluster2::>

53. On rhel1, list the directory's contents again.

[root@rhel1 ~]# ls
prodB
[root@rhel1 ~]#

54. cd to the prodB folder.

[root@rhel1 ~]# cd prodB
[root@rhel1 ~]#

55. Create a new file named file1.txt.

[root@rhel1 ~]# touch file1.txt
[root@rhel1 ~]#

56. List the directory contents.

[root@rhel1 ~]# ls
file1.txt
[root@rhel1 ~]#

57. On cluster1, display the status of the SnapMirror relationships.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

cluster1::>

There is currently no relationship from svm3-dr to svm3.

58. Create a SnapMirror SVM disaster recovery relationship from the source SVM svm3-dr to the destination SVM svm3.

cluster1::> snapmirror create -source-path svm3-dr: -destination-path svm3: -type DP -throttle unlimited -identity-preserve true -schedule hourly

cluster1::>

59. Display the status of the SnapMirror relationships again.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm3-dr:    DP   svm3:        Broken-off
                                      Idle           -         true    -
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
3 entries were displayed.

cluster1::>

SnapMirror creates the relationship. Since there is an existing relationship between the two SVMs from when it was going the other direction before it was broken off, the Mirror State shows as Broken-off here.

60. Re-sync the relationship.

cluster1::> snapmirror resync -destination-path svm3:

cluster1::>

61. Display the status of the SnapMirror relationships again.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm3-dr:    DP   svm3:        Broken-off
                                      Transferring   -         true    -
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
3 entries were displayed.

cluster1::>

62. Periodically display the status of the constituent relationships until they all show Idle.

cluster1::> snapmirror show -expand
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm3-dr:    DP   svm3:        Broken-off
                                      Transferring   2.47MB    true    -
svm3-dr:chn DP   svm3:chn     Snapmirrored
                                      Idle           -         true    -
svm3-dr:eng DP   svm3:eng     Snapmirrored
                                      Idle           -         true    -
svm3-dr:fin DP   svm3:fin     Snapmirrored
                                      Idle           -         true    -
svm3-dr:mfg DP   svm3:mfg     Snapmirrored
                                      Idle           -         true    -
svm3-dr:prodA
            DP   svm3:prodA   Snapmirrored
                                      Idle           -         true    -
svm3-dr:prodB
            DP   svm3:prodB   Snapmirrored
                                      Idle           -         true    -
svm3-dr:proj1
            DP   svm3:proj1   Snapmirrored
                                      Idle           -         true    -
svm3-dr:us  DP   svm3:us      Snapmirrored
                                      Idle           -         true    -
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
10 entries were displayed.

cluster1::>

If you pay attention to the status of the relationship for the prodB volume while running these commands (and if you are fast enough), you'll see it go from Uninitialized to Transferring to Idle, while the other relationships go from Broken-off to Re-synching to Idle.

63. View the status of the parent relationship.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm3-dr:    DP   svm3:        Snapmirrored
                                      Idle           -         true    -
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
3 entries were displayed.

cluster1::>

Now start the procedure to cut over from svm3-dr back to svm3.

64. Quiesce the SnapMirror relationship.

cluster1::> snapmirror quiesce -destination-path svm3:

cluster1::>

65. Verify the relationship is quiesced.

cluster1::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm3-dr:    DP   svm3:        Snapmirrored
                                      Quiesced       -         true    -
cluster1://svm1/svm1_root
            LS   cluster1://svm1/svm1_root_lsm1
                              Snapmirrored
                                      Idle           -         true    -
                 cluster1://svm1/svm1_root_lsm2
                              Snapmirrored
                                      Idle           -         true    -
3 entries were displayed.

cluster1::>

66. Break the SnapMirror relationship.

cluster1::> snapmirror break -destination-path svm3:


cluster1::>

67. On cluster2, display the status of the svm3-dr SVM.

cluster2::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
svm3-dr     data    default    running    running     svm3_root  aggr1_
                                                                 cluster2_
                                                                 01
4 entries were displayed.

cluster2::>

68. Stop the svm3-dr SVM.

cluster2::> vserver stop -vserver svm3-dr
[Job 328] Job is queued: Vserver Stop.
[Job 328] Job succeeded: DONE

cluster2::>

69. Display the SVM's status again.

cluster2::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster2    admin   -          -          -           -          -
cluster2-01 node    -          -          -           -          -
svm1-dr     data    default    running    running     svm1dr_    aggr1_
                                                      root       cluster2_
                                                                 01
svm3-dr     data    default    stopped    stopped     svm3_root  aggr1_
                                                                 cluster2_
                                                                 01
4 entries were displayed.

cluster2::>

70. Display the status of svm3-dr's LIFs.

cluster2::> net int show -vserver svm3-dr
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3-dr
            svm3_cifs_nfs_lif1
                       up/down    192.168.0.143/24   cluster2-01   e0e     true
            svm3_cifs_nfs_lif2
                       up/down    192.168.0.144/24   cluster2-01   e0c     true
2 entries were displayed.

cluster2::>

The LIFs are down, as you would expect.

71. On cluster1, start the svm3 SVM.

cluster1::> vserver start -vserver svm3
[Job 1037] Job is queued: Vserver Start.
[Job 1037] Job succeeded: DONE

cluster1::>


72. Display the svm3 SVM's status.

cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_
                                                                 cluster1_
                                                                 01
svm2        data    default    running    running     svm2_root  aggr1_
                                                                 cluster1_
                                                                 02
svm3        data    default    running    running     svm3_root  aggr1_
                                                                 cluster1_
                                                                 01
6 entries were displayed.

cluster1::>

Svm3 is up and running.

73. Display the status of svm3's LIFs.

cluster1::> net int show -vserver svm3
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm3
            svm3_cifs_nfs_lif1
                       up/up      192.168.0.143/24   cluster1-01   e0d     true
            svm3_cifs_nfs_lif2
                       up/up      192.168.0.144/24   cluster1-01   e0e     true
2 entries were displayed.

cluster1::>

Svm3's LIFs are both running and operational.

74. Examine svm3's volumes.

cluster1::> vol show -vserver svm3
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm3      chn          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      eng          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      fin          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      mfg          aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      prodA        aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      prodB        aggr1_cluster1_01
                                    online     RW         20MB    972.5MB    5%
svm3      proj1        aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
svm3      svm3_root    aggr1_cluster1_01
                                    online     RW         20MB    18.84MB    5%
svm3      us           aggr1_cluster1_01
                                    online     RW          1GB    972.5MB    5%
9 entries were displayed.

cluster1::>

75. On rhel1, list the status of the /corp mount.

[root@rhel1 prodB]# df /corp
df: `/corp': Stale file handle
df: no filesystems processed


[root@rhel1 prodB]#

The file handle is stale because you did not unmount the NFS file system prior to the latest SVM DR cut-over.

76. Change out of the /corp directory tree so you can unmount the NFS volume.

[root@rhel1 prodB]# cd
[root@rhel1 ~]#

77. Unmount /corp.

[root@rhel1 ~]# umount /corp
[root@rhel1 ~]#

78. Mount /corp again.

[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

79. List the contents of /corp again.

[root@rhel1 ~]# ls /corp
eng  fin  mfg
[root@rhel1 ~]#

80. List the contents of the /corp/mfg/chn/prodB directory to see if the file you created on svm3-dr beforethe last re-sync and cut-over is present.

[root@rhel1 ~]# ls /corp/mfg/chn/prodB
file1.txt
[root@rhel1 ~]#

Yes, the file is there. It's noteworthy that there was no extra work involved in replicating back the configuration changes that were made on svm3-dr (from creating and mounting a new volume) while it was running.

81. On cluster2, re-sync the SnapMirror relationship.

cluster2::> snapmirror resync -destination-path svm3-dr:

cluster2::>

82. Periodically check the status of the SnapMirror relationship until it goes Idle.

cluster2::> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total             Last
Path        Type  Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm1:svm1_vol01
            XDP  svm1-dr:svm1_svm1_vol01_mirror1
                              Snapmirrored
                                      Idle           -         true    -
svm3:       DP   svm3-dr:     Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

cluster2::>

At this point the SVM disaster recovery relationship is back to the state it was in before you initiated any cutover operations.
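For reference, the failback and re-protect sequence you just walked through condenses to the following runbook. The names are the ones used in this lab; treat this as a sketch of the sequence rather than a definitive procedure.

```
On cluster1: create the reversed relationship and resync it
    snapmirror create -source-path svm3-dr: -destination-path svm3: -type DP -identity-preserve true
    snapmirror resync -destination-path svm3:
Once the transfer goes Idle, cut back over:
    snapmirror quiesce -destination-path svm3:
    snapmirror break -destination-path svm3:
On cluster2: vserver stop -vserver svm3-dr
On cluster1: vserver start -vserver svm3
Finally, on cluster2, re-establish protection in the original direction:
    snapmirror resync -destination-path svm3-dr:
```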

This concludes this lab exercise.

3.8 Appendix: Additional Administrative Users and Roles

Clustered Data ONTAP supports the concept of administrative users with roles. Each of these users is associated with a particular role that defines the commands that the user can run when administering the cluster. Clustered Data ONTAP provides a number of predefined roles that can be used; you can also create your own customized roles, if required.

In System Manager, roles and users are grouped separately under the cluster and the SVM. If you use the CLI, you manage roles and users for both scopes with the same commands.

3.8.1 Cluster-Scoped Users and Roles

In this section, you will look at the users and roles that apply to the whole cluster.

1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Cluster tab.
3. In the left pane, navigate to cluster1 > Configuration > Security > Roles.
4. The Roles pane shows a list of the predefined cluster-wide roles that come with clustered Data ONTAP.


Figure 3-45:

Next, take a look at the cluster-wide users.

5. In the left pane, select Users.
6. In the “Users” pane, click Add.



Figure 3-46:

The “Add User” dialog box opens. Use this dialog box to create a new limited-permission administrative user for the cluster.

7. Set the user name to intern, and the password to netapp123.
8. Click Add next to the “User Login Methods” pane.
9. Set the “Application” drop-down list to ssh, and the “Role” drop-down list to readonly.
10. Click OK.



Figure 3-47:

The new user login method you just entered is displayed in the “User Login Methods” list.

11. Click Add at the bottom of the dialog box.



Figure 3-48:

The Add User dialog box closes and you return to the System Manager window.

12. If Chrome prompts you to save the password for this site, click Nope.


Figure 3-49:

13. The newly created “intern” account is now included in the list of accounts displayed in the “Users” pane.



Figure 3-50:

14. Start a new PuTTY session to cluster1, and log in as the user intern, using the password netapp123. Try listing what commands are available. Observe that the volume create and volume move commands, amongst others, are not available to you, because the “readonly” role you assigned to the “intern” account prevents access to commands that modify the cluster configuration.
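The same account can also be created from the cluster CLI instead of System Manager. This is a sketch for reference, not a lab step; the values match the user you just created (the command prompts for the password):

cluster1::> security login create -username intern -application ssh -authmethod password -role readonly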

3.8.2 SVM Users and Roles

In this section, you will look at the users and roles that are local to a single SVM.

1. In your Chrome browser, click the browser tab for cluster1.
2. In the left pane, click the Storage Virtual Machines tab.
3. In the left pane, navigate to cluster1 > svm1 > Configuration > Security > Roles.
4. The “Roles” pane now shows a list of predefined SVM-specific roles. In the Roles pane, select the vsadmin-backup entry.
5. Click Edit.



Figure 3-51:

The “Edit Role” dialog box opens.

6. Scroll down the Role Attributes list to see the commands that are available to a user with this role. Note that this role has full access to some commands, read-only access to others, and no access to the rest.

7. Click Cancel to discard any changes you might have made in this dialog box.



Figure 3-52:

The “Edit Role” dialog box closes and focus returns to the System Manager window. Take a look at the other roles for this SVM and observe how their permissions differ.

8. In the left pane, select Users.
9. In the Users pane, select the vsadmin user.
10. If you look at the “User Login Methods” area at the bottom of the Users pane, you can see that the vsadmin user has the vsadmin role.



Figure 3-53:

11. Open a PuTTY session and connect to cluster1. Try to log in to cluster1 as vsadmin with the password Netapp1!.

login as: vsadmin
Using keyboard-interactive authentication.
Password:
Access denied
[email protected]'s password:

Remember that the user vsadmin is specifically for administering the SVM svm1. To manage an SVM with delegated SVM-scoped administration, you must log in to the management LIF for the SVM; in this case, svm1.

Identify the management LIF for the svm1 SVM.
12. In Chrome, select the browser tab for cluster1.
13. Select the Cluster tab.
14. Navigate to cluster1 > svm1 > Configuration.
15. In the Network pane, select the Network Interfaces tab.
16. In the network interface list, select the entry for svm1_admin_lif1 and observe its assigned IP address.



Figure 3-54:

Tip: Alternatively, use the cluster management CLI and type network interface show when logged in as the cluster administrator to obtain this IP address.
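For example, the output can be narrowed to a single SVM with the standard -vserver filter; the following is a sketch of the command, not a required lab step:

cluster1::> network interface show -vserver svm1

This lists only the LIFs that belong to svm1, including the management LIF and its assigned IP address.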

17. On this system, the management LIF for svm1 is named “svm1-mgmt” and has the IP address 192.168.0.147. There is also a connection entry in PuTTY named “cluster1-svm1”. Using the cluster1-svm1 connection entry in PuTTY, the vsadmin user, and the password Netapp1!, connect to svm1 over SSH.

login as: vsadmin
Using keyboard-interactive authentication.
Password:
svm1::>

18. As the vsadmin user, attempt to modify a network port or create a new aggregate by using the network port modify command and the storage aggregate create command.

svm1::> network port modify
Error: "port" is not a recognized command
svm1::> storage aggregate create
Error: "storage" is not a recognized command

These commands are not available to you as the vsadmin user, because control of logical entities inside svm1 is delegated to vsadmin, while network ports and storage aggregates are physical entities controlled by the cluster administrator.

19. As the vsadmin user, run the volume create -aggregate ? command.

svm1::> volume create -aggregate ?
  <aggregate name>  Aggregate Name
svm1::>


Attention: You can create new volumes as the vsadmin user, but only on specific aggregates. The reason is that when the svm1 SVM was set up, the cluster administrator configured svm1 to allow volume creation on these aggregates. To view this list, run the vserver show -vserver svm1 -fields aggr-list command.

cluster1::> vserver show -vserver svm1 -fields aggr-list
vserver aggr-list
------- -----------------------------------------------------------------------
svm1    aggr1_cluster1_01,aggr1_cluster1_02,aggr2_cluster1_01,aggr2_cluster1_02
cluster1::>
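This delegation is controlled by the cluster administrator. For example (illustrative only, not a lab step), the cluster administrator could restrict svm1 to two aggregates by changing the list:

cluster1::> vserver modify -vserver svm1 -aggr-list aggr1_cluster1_01,aggr1_cluster1_02

After this change, the vsadmin user could create volumes only on those two aggregates.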

20. As the vsadmin user, run the network interface modify command.

svm1::> network interface modify
Error: "modify" is not a recognized command

Attention: You cannot modify network interfaces as the vsadmin user. The vsadmin user has the vsadmin role, which provides read-only access to the “network interface” command directory.

3.9 Appendix: Active Directory Authentication Tunneling

To authorize cluster administrators by using Active Directory, you must set up an authentication tunnel through a CIFS-enabled SVM. You must also create one or more cluster user accounts for the domain users. This functionality requires that CIFS is licensed on the cluster.

This lab environment already has a CIFS-enabled SVM, which is “svm1”. Use svm1 to set up the authentication tunnel.

Before you begin, verify your lab configuration.

1. Verify that no domain authentication tunnel currently exists.

cluster1::> security login domain-tunnel show
This table is currently empty.

2. After you verify that a domain authentication tunnel does not exist, verify that the CIFS-enabled SVM (svm1) is a member of the appropriate domain, “DEMO.NETAPP.COM”.

cluster1::> vserver cifs show -vserver svm1
                                          Vserver: svm1
                         CIFS Server NetBIOS Name: SVM1
                    NetBIOS Domain/Workgroup Name: DEMO
                      Fully Qualified Domain Name: DEMO.NETAPP.COM
Default Site Used by LIFs Without Site Membership:
                             Authentication Style: domain
                CIFS Server Administrative Status: up
                          CIFS Server Description:
                          List of NetBIOS Aliases: -
cluster1::>

3. After you verify that the CIFS-enabled SVM svm1 is a member of the appropriate domain, set up a domain authentication tunnel.

cluster1::> security login domain-tunnel create -vserver svm1

4. With the authentication tunnel configured, a new authentication method, “domain”, is available to you. Use this new authentication method to create a new cluster administrator.

cluster1::> security login create -authmethod domain -username DEMO\Administrator -application ssh
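The same method can also grant more limited access to other domain accounts. For example (the DEMO\Interns account name is hypothetical and not part of this lab), a domain user could be restricted to read-only access:

cluster1::> security login create -authmethod domain -username DEMO\Interns -application ssh -role readonly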


5. You can now log in to the cluster as a domain administrator, using the DOMAIN\username syntax. Open a new PuTTY session as described in the “Before You Begin” section. When prompted for a user name and password, enter DEMO\Administrator as the user name, and Netapp1! as the password.

login as: DEMO\Administrator
Using keyboard-interactive authentication.
Password:
cluster1::>

3.10 Automated Nondisruptive Upgrades

Clustered Data ONTAP 8.3 adds support for automated, nondisruptive software upgrades. These commands bring the clustered Data ONTAP package into the cluster, validate that the cluster is prepared for the upgrade, and then perform the actual upgrade. Underneath, downloads, takeovers, and givebacks are still being performed, but the cluster infrastructure drives the process. The administrator is able to view the progress; pause, resume, or cancel an upgrade; and view the cluster update history. Go to the http://support.netapp.com/ site to obtain the clustered Data ONTAP package.

Automated nondisruptive upgrades are available to update clustered Data ONTAP 8.3 to clustered Data ONTAP 8.3.x. The code to run the automated upgrades is in clustered Data ONTAP 8.3, so a traditional approach is required to get from version 8.2 to version 8.3.

This lab examines the commands used to upgrade the cluster, but does not execute those commands. The commands shown below are entered in the cluster1 CLI.

The cluster image command directory contains the commands and command subdirectories used to perform automated nondisruptive upgrades. Examine the options that are available under this command directory.

cluster1::> cluster image
cluster1::cluster image> ?
  cancel-update               Cancel an update
  package>                    Manage the cluster image package repository
  pause-update                Pause an update
  resume-update               Resume an update
  show                        Display currently running image information
  show-update-history         Display the update history
  show-update-log             Display the update transaction log
  show-update-progress        Display the update progress
  update                      Manage an update
  validate                    Validates the cluster's update eligibility
cluster1::cluster image>

The cluster image package command directory contains the commands used to manage the software packages that contain future versions of clustered Data ONTAP. Examine the options that are available under this directory.

cluster1::cluster image> package
cluster1::cluster image package> ?
  delete            Remove a package from the cluster image package repository
  get               Fetch a package file from a URL into the cluster image
                    package repository
  show              Display currently installed image information
  show-repository   Display information about packages available in the
                    cluster image package repository
cluster1::cluster image package>

Use the cluster image update command to upgrade a cluster once a new package has been added to the cluster package repository. Enter the cluster image command directory, and examine the parameters that are available with the cluster image update command.

cluster1::cluster image package> ..
cluster1::cluster image> update ?
  [-version] <text>                            Update Version
  [[-nodes] <nodename>, ...]                   Node
  [ -estimate-only [true] ]                    Estimate Only
  [ -pause-after {none|all} ]                  Update Pause (default: none)
  [ -ignore-validation-warning {true|false} ]  Ignore Validation (default: false)
  [ -skip-confirmation {true|false} ]          Skip Confirmation (default: false)
  [ -force-rolling [true] ]                    Force Rolling Update
  [ -stabilize-minutes {1..60} ]               Minutes to stabilize (default: 8)
cluster1::cluster image>
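Putting these commands together, a typical automated upgrade would resemble the following sketch. This sequence is illustrative only and is not executed in this lab; the web server URL and the target version 8.3.2 are placeholders:

cluster1::> cluster image package get -url http://webserver/image.tgz
cluster1::> cluster image validate -version 8.3.2
cluster1::> cluster image update -version 8.3.2
cluster1::> cluster image show-update-progress

Running validate before update surfaces any blocking conditions in advance, and show-update-progress lets you monitor the takeovers and givebacks as the update proceeds.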


4 Version History

Version   Date            Document Version History
-------   -------------   -------------------------
1.0       October 2014    Insight 2014
1.0.1     December 2014   Updates for Lab on Demand
1.1       October 2015    Insight 2015


Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

Go further, faster®

© 2015 NetApp, Inc. All rights reserved. No portions of this presentation may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.