TABLE OF CONTENTS
1 Program Summary................................................................................................................................ 4
5.7 Configure VMware vCenter Server 6.7 and vSphere Clustering ...................................................................59
6 Conclusion .......................................................................................................................................... 62
Acknowledgments .................................................................................................................................... 62
LIST OF TABLES
Table 1) Hardware requirements for the base configuration...........................................................................................7
Table 2) Hardware for scaling the solution by adding two hypervisor nodes. .................................................................7
Table 3) Software requirements for the base FlexPod Express implementation. ...........................................................7
Table 4) Software requirements for a VMware vSphere implementation. ......................................................................8
Table 5) Cabling information for Cisco Nexus switch 3172P A.......................................................................................8
Table 6) Cabling information for Cisco Nexus switch 3172P B. ......................................................................................9
Table 7) Cabling information for NetApp AFF A220 storage controller A. ......................................................................9
Table 8) Cabling information for NetApp AFF A220 storage controller B. ......................................................................9
Table 9) Required VLANs.............................................................................................................................................10
Table 10) VMware virtual machines created. ...............................................................................................................10
Table 11) Information required for NFS configuration. .................................................................................................30
2 FlexPod Express with VMware vSphere 6.7 and NetApp AFF A220 © 2018 NetApp, Inc. All Rights Reserved.
Table 12) Information required for iSCSI configuration. ...............................................................................................32
Table 13) Information required for NFS configuration. .................................................................................................33
Table 14) Information required for SVM administrator addition. ...................................................................................33
Table 15) Information required for CIMC configuration. ...............................................................................................34
Table 16) Information required for iSCSI boot configuration.........................................................................................36
Table 17) Information required for configuring ESXi hosts. ..........................................................................................47
LIST OF FIGURES
Figure 1) FlexPod portfolio. ............................................................................................................................................5
Figure 2) FlexPod Express with VMware vSphere 10GbE architecture. ........................................................................6
Figure 3) Reference validation cabling. ..........................................................................................................................8
1 Program Summary
Industry trends indicate a vast data center transformation toward shared infrastructure and cloud
computing. In addition, organizations seek a simple and effective solution for remote and branch offices,
leveraging the technology with which they are familiar in their data center.
FlexPod® Express is a predesigned, best practice data center architecture that is built on the Cisco
Unified Computing System (Cisco UCS), the Cisco Nexus family of switches, and NetApp® storage
technologies. The components in a FlexPod Express system are similar to their FlexPod Datacenter
counterparts, enabling management synergies across the complete IT infrastructure environment on a
smaller scale. FlexPod Datacenter and FlexPod Express are optimal platforms for virtualization and for
bare-metal operating systems and enterprise workloads.
FlexPod Datacenter and FlexPod Express deliver a baseline configuration and have the flexibility to be
sized and optimized to accommodate many different use cases and requirements. Existing FlexPod
Datacenter customers can manage their FlexPod Express system with the tools to which they are
accustomed. New FlexPod Express customers can easily adapt to managing FlexPod Datacenter as their
environment grows.
FlexPod Express is an optimal infrastructure foundation for remote and branch offices and for small to
midsize businesses. It is also an optimal solution for customers who want to provide infrastructure for a
dedicated workload.
FlexPod Express provides an easy-to-manage infrastructure that is suitable for almost any workload.
2 Solution Overview
This FlexPod Express solution is part of the FlexPod Converged Infrastructure Program.
Figure 1) FlexPod portfolio.
2.3 Solution Technology
This solution leverages the latest technologies from NetApp, Cisco, and VMware. This solution features
the new NetApp AFF A220 running ONTAP 9.4, dual Cisco Nexus 3172P switches, and Cisco UCS C220
M5 rack servers that run VMware vSphere 6.7. This validated solution uses 10GbE technology. Guidance
is also provided on how to scale compute capacity by adding two hypervisor nodes at a time so that the
FlexPod Express architecture can adapt to an organization’s evolving business needs.
3 Technology Requirements
A FlexPod Express system requires a combination of hardware and software components. This section also describes the hardware components that are required to add hypervisor nodes to the system in units of two.
Hardware Quantity
AFF A220 HA Pair 1
Table 2 lists the hardware that is required in addition to the base configuration for implementing 10GbE.
Table 2) Hardware for scaling the solution by adding two hypervisor nodes.
Hardware Quantity
Cisco UCS C220 M5 server 2
Table 4 lists the software that is required for all VMware vSphere implementations on FlexPod Express.
Software Version
VMware vCenter server appliance 6.7
Local Device Local Port Remote Device Remote Port
Eth1/25 Cisco Nexus switch 3172P B Eth1/25
5 Deployment Procedures
This document provides details for configuring a fully redundant, highly available FlexPod Express
system. To reflect this redundancy, the components being configured in each step are referred to as
either component A or component B. For example, controller A and controller B identify the two NetApp
storage controllers that are provisioned in this document. Switch A and switch B identify a pair of Cisco
Nexus switches.
In addition, this document describes steps for provisioning multiple Cisco UCS hosts, which are identified
sequentially as server A, server B, and so on.
To indicate that you should include information pertinent to your environment in a step, <<text>>
appears as part of the command structure. See the following example for the vlan create command:
Controller01>vlan create vif0 <<mgmt_vlan_id>>
This document enables you to fully configure the FlexPod Express environment. In this process, various
steps require you to insert customer-specific naming conventions, IP addresses, and virtual local area
network (VLAN) schemes. Table 9 describes the VLANs required for deployment, as outlined in this
guide. This table can be completed based on the specific site variables and used to implement the
document configuration steps.
Note: If you use separate in-band and out-of-band management VLANs, you must create a layer 3
route between them. For this validation, a common management VLAN was used.
The VLAN numbers are needed throughout the configuration of FlexPod Express. The VLANs are
referred to as <<var_xxxx_vlan>>, where xxxx is the purpose of the VLAN (such as iSCSI-A).
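Because the same <<var_xxxx_vlan>> placeholders recur throughout this guide, one practical approach is to record the site-specific values once and substitute them mechanically into commands copied from the document. The following sketch assumes a hypothetical variables file; the VLAN IDs shown are illustrative examples, not values from this validation:

```shell
# Record site-specific VLAN IDs once as sed substitutions
# (the IDs below are hypothetical examples; use the values from Table 9)
cat > flexpod_vars.sed <<'EOF'
s/<<var_nfs_vlan>>/3050/g
s/<<var_iscsi_a_vlan>>/3010/g
s/<<var_iscsi_b_vlan>>/3020/g
s/<<var_mgmt_vlan>>/3437/g
EOF

# Apply the substitutions to any command template copied from this document
echo 'vlan <<var_nfs_vlan>>' | sed -f flexpod_vars.sed
```

The same file can then drive substitution across a whole saved command listing, which helps keep the switch A and switch B configurations consistent.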
Table 10 lists the VMware virtual machines created.
5.1 Cisco Nexus 3172P Deployment Procedure
The following section details the Cisco Nexus 3172P switch configuration used in a FlexPod Express
environment.
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
4. You then see a summary of your configuration, and you are asked if you would like to edit it. If your
configuration is correct, enter n.
Would you like to edit the configuration? (yes/no) [n]: n
5. You are then asked if you would like to use this configuration and save it. If so, enter y.
Use this configuration and save it? (yes/no) [y]: Enter
Note: The default port channel load-balancing hash uses the source and destination IP addresses to
determine the load-balancing algorithm across the interfaces in the port channel. You can achieve
better distribution across the members of the port channel by providing more inputs to the hash
algorithm beyond the source and destination IP addresses. For the same reason, NetApp highly
recommends adding the source and destination TCP ports to the hash algorithm.
2. From configuration mode (config t), enter the following commands to set the global port channel
load-balancing configuration on Cisco Nexus switch A and switch B:
port-channel load-balance src-dst ip-l4port
many ports rather than too few, which allows the default port state to enhance the overall stability of the
network.
Pay close attention to the spanning-tree state when adding servers, storage, and uplink switches,
especially if they do not support bridge assurance. In such cases, you might need to change the port type
to make the ports active.
The Bridge Protocol Data Unit (BPDU) guard is enabled on edge ports by default as another layer of
protection. To prevent loops in the network, this feature shuts down the port if BPDUs from another switch
are seen on this interface.
From configuration mode (config t), run the following commands to configure the default spanning-tree
options, including the default port type and BPDU guard, on Cisco Nexus switch A and switch B:
spanning-tree port type network default
spanning-tree port type edge bpduguard default
Define VLANs
Before individual ports with different VLANs are configured, the layer 2 VLANs must be defined on the
switch. It is also a good practice to name the VLANs for easy troubleshooting in the future.
From configuration mode (config t), run the following commands to define and describe the layer 2
VLANs on Cisco Nexus switch A and switch B:
vlan <<nfs_vlan_id>>
name NFS-VLAN
vlan <<iSCSI_A_vlan_id>>
name iSCSI-A-VLAN
vlan <<iSCSI_B_vlan_id>>
name iSCSI-B-VLAN
vlan <<vmotion_vlan_id>>
name vMotion-VLAN
vlan <<vmtraffic_vlan_id>>
name VM-Traffic-VLAN
vlan <<mgmt_vlan_id>>
name MGMT-VLAN
vlan <<native_vlan_id>>
name NATIVE-VLAN
exit
int eth1/34
description UCS Server A: CIMC
• Providing fast convergence if either the link or a device fails
• Providing link-level resiliency
• Helping provide high availability
The vPC feature requires some initial setup between the two Cisco Nexus switches to function properly. If
you use the back-to-back mgmt0 configuration, use the addresses defined on the interfaces and verify that they can communicate by using the ping <<switch_A/B_mgmt0_ip_addr>> vrf management
command.
From configuration mode (config t), type the following commands to configure the vPC global
configuration for both switches:
int eth1/25-26
channel-group 10 mode active
int Po10
description vPC peer-link
switchport
switchport mode trunk
switchport trunk native vlan <<native_vlan_id>>
switchport trunk allowed vlan <<nfs_vlan_id>>,<<vmotion_vlan_id>>, <<vmtraffic_vlan_id>>,
<<mgmt_vlan_id>>, <<iSCSI_A_vlan_id>>, <<iSCSI_B_vlan_id>>
spanning-tree port type network
vpc peer-link
no shut
exit
copy run start
int eth1/25-26
channel-group 10 mode active
int Po10
description vPC peer-link
switchport
switchport mode trunk
switchport trunk native vlan <<native_vlan_id>>
switchport trunk allowed vlan <<nfs_vlan_id>>,<<vmotion_vlan_id>>, <<vmtraffic_vlan_id>>,
<<mgmt_vlan_id>>, <<iSCSI_A_vlan_id>>, <<iSCSI_B_vlan_id>>
spanning-tree port type network
vpc peer-link
no shut
exit
copy run start
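After both switches are configured, the peer link can be checked from either switch. These are standard NX-OS show commands; the exact output varies by release, so the expected result is described rather than quoted: show vpc brief should report the peer adjacency as formed, and show port-channel summary should show Po10 up with both member ports bundled.

```
show vpc brief
show port-channel summary
```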
Configure Storage Port Channels
The NetApp storage controllers allow an active-active connection to the network using the Link
Aggregation Control Protocol (LACP). The use of LACP is preferred because it adds both negotiation and
logging between the switches. Because the network is set up for vPC, this approach enables you to have
active-active connections from the storage to separate physical switches. Each controller has two links to
each of the switches. However, all four links are part of the same vPC and interface group (IFGRP).
From configuration mode (config t), run the following commands on each of the switches to configure
the individual interfaces and the resulting port channel configuration for the ports connected to the
NetApp AFF controller.
1. Run the following commands on switch A and switch B to configure the port channels for storage
controller A:
int eth1/1
channel-group 11 mode active
int Po11
description vPC to Controller-A
switchport
switchport mode trunk
switchport trunk native vlan <<native_vlan_id>>
switchport trunk allowed vlan <<nfs_vlan_id>>,<<mgmt_vlan_id>>,<<iSCSI_A_vlan_id>>,
<<iSCSI_B_vlan_id>>
spanning-tree port type edge trunk
mtu 9216
vpc 11
no shut
2. Run the following commands on switch A and switch B to configure the port channels for storage
controller B.
int eth1/2
channel-group 12 mode active
int Po12
description vPC to Controller-B
switchport
switchport mode trunk
switchport trunk native vlan <<native_vlan_id>>
switchport trunk allowed vlan <<nfs_vlan_id>>,<<mgmt_vlan_id>>, <<iSCSI_A_vlan_id>>,
<<iSCSI_B_vlan_id>>
spanning-tree port type edge trunk
mtu 9216
vpc 12
no shut
exit
copy run start
3. (Optional) Jumbo frames enable applications and operating systems to transmit larger frames without fragmentation. If you use jumbo frames, both the endpoints and all the interfaces between the endpoints (layer 2 and layer 3) must support and be configured for jumbo frames to prevent the performance problems caused by fragmented frames. Although jumbo frames are configured for this workload, they remain optional because a few applications do not support them.
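As a quick end-to-end check of a jumbo frame path, a ping sized for the full MTU with the don't-fragment bit set either succeeds or reveals that some hop is not configured for 9000 bytes. The sketch below only computes and prints the command; on ESXi the tool is vmkping, and the <<var_nfs_lif_ip>> placeholder is hypothetical (substitute a real NFS LIF address):

```shell
# Max ICMP payload for a 9000-byte MTU: subtract the IP header (20 bytes)
# and the ICMP header (8 bytes)
MTU=9000
PAYLOAD=$((MTU - 20 - 8))

# vmkping -d sets the don't-fragment bit; -s sets the payload size
echo "vmkping -d -s ${PAYLOAD} <<var_nfs_lif_ip>>"
```

If this ping fails while a default-size ping succeeds, an interface along the path is still at the standard 1500-byte MTU.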
Cisco Nexus Switch A: Cisco UCS Server-A and Cisco UCS Server-B Configuration
int eth1/3-4
switchport mode trunk
switchport trunk native vlan <<native_vlan_id>>
switchport trunk allowed vlan
<<iSCSI_A_vlan_id>>,<<nfs_vlan_id>>,<<vmotion_vlan_id>>,<<vmtraffic_vlan_id>>,<<mgmt_vlan_id>>
spanning-tree port type edge trunk
mtu 9216
no shut
exit
copy run start
Cisco Nexus Switch B: Cisco UCS Server-A and Cisco UCS Server-B Configuration
int eth1/3-4
switchport mode trunk
switchport trunk native vlan <<native_vlan_id>>
switchport trunk allowed vlan
<<iSCSI_B_vlan_id>>,<<nfs_vlan_id>>,<<vmotion_vlan_id>>,<<vmtraffic_vlan_id>>,<<mgmt_vlan_id>>
spanning-tree port type edge trunk
mtu 9216
no shut
exit
copy run start
Note: (Optional) Jumbo frames enable applications and operating systems to transmit larger frames without fragmentation. If you use jumbo frames, both the endpoints and all the interfaces between the endpoints (layer 2 and layer 3) must support and be configured for jumbo frames to prevent the performance problems caused by fragmented frames. Although jumbo frames are configured for this workload, they remain optional because a few applications do not support them.
Note: To scale the solution by adding more Cisco UCS servers, run the previous commands for the switch ports on switches A and B into which the newly added servers are plugged.
1. Access the HWU application to view the system configuration guides. Click the Controllers tab to view
the compatibility between different versions of the ONTAP software and the NetApp storage
appliances with your desired specifications.
2. Alternatively, to compare components by storage appliance, click Compare Storage Systems.
Controller AFF2XX Series Prerequisites
To plan the physical location of the storage systems, see the NetApp Hardware Universe. Refer to the
following sections:
• Electrical requirements
• Supported power cords
• Onboard ports and cables
Storage Controllers
Follow the physical installation procedures for the controllers in the AFF A220 Documentation.
Configuration Worksheet
Before running the setup script, complete the configuration worksheet from the product manual. The
configuration worksheet is available in the ONTAP 9.4 Software Setup Guide.
Note: This system is set up in a two-node switchless cluster configuration.
Cluster Detail Cluster Detail Value
NTP server IP (you can enter more than one) <<var_ntp_server_ip>>
Configure Node A
To configure node A, complete the following steps:
1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the
storage system is in a reboot loop, press Ctrl-C to exit the autoboot loop when you see this message:
Starting AUTOBOOT press Ctrl-C to abort…
10. Press Enter for the user name, indicating no user name.
11. Enter y to set the newly installed software as the default to be used for subsequent reboots.
12. Enter y to reboot the node.
Note: When installing new software, the system might perform firmware upgrades to the BIOS and
adapter cards, causing reboots and possible stops at the Loader-A prompt. If these actions occur, the
system might deviate from this procedure.
13. Press Ctrl-C to enter the Boot menu.
14. Select option 4 for Clean Configuration and Initialize All Disks.
15. Enter y to zero disks, reset config, and install a new file system.
16. Enter y to erase all the data on the disks.
Note: The initialization and creation of the root aggregate can take 90 minutes or more to complete,
depending on the number and type of disks attached. When initialization is complete, the storage
system reboots. Note that SSDs take considerably less time to initialize. You can continue with the
node B configuration while the disks for node A are zeroing.
17. While node A is initializing, begin configuring node B.
Configure Node B
To configure node B, complete the following steps:
1. Connect to the storage system console port. You should see a Loader-B prompt. However, if the storage system is in a reboot loop, press Ctrl-C to exit the autoboot loop when you see this message:
Starting AUTOBOOT press Ctrl-C to abort…
10. Press Enter for the user name, indicating no user name.
11. Enter y to set the newly installed software as the default to be used for subsequent reboots.
12. Enter y to reboot the node.
Note: When installing new software, the system might perform firmware upgrades to the BIOS and adapter cards, causing reboots and possible stops at the Loader-B prompt. If these actions occur, the system might deviate from this procedure.
13. Press Ctrl-C to enter the Boot menu.
14. Select option 4 for Clean Configuration and Initialize All Disks.
15. Enter y to zero disks, reset config, and install a new file system.
16. Enter y to erase all the data on the disks.
Note: The initialization and creation of the root aggregate can take 90 minutes or more to complete,
depending on the number and type of disks attached. When initialization is complete, the storage
system reboots. Note that SSDs take considerably less time to initialize.
1. Follow the prompts to set up node A.
Welcome to the cluster setup wizard.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to NetApp Technical
Support. To disable this feature, enter
autosupport modify -support disable
within 24 hours.
Otherwise, press Enter to complete cluster setup using the command line
interface:
5. You can also enter feature licenses for Cluster, NFS, and iSCSI.
6. You see a status message stating that the cluster is being created. The message cycles through several stages, and this process takes several minutes.
7. Configure the network.
a. Deselect the IP Address Range option.
b. Enter <<var_clustermgmt_ip>> in the Cluster Management IP Address field,
<<var_clustermgmt_mask>> in the Netmask field, and <<var_clustermgmt_gateway>>
in the Gateway field. Use the … selector in the Port field to select e0M of node A.
c. The node management IP for node A is already populated. Enter <<var_nodeB_mgmt_ip>> for node B.
d. Enter <<var_domain_name>> in the DNS Domain Name field. Enter
<<var_dns_server_ip>> in the DNS Server IP Address field.
Note: You can enter multiple DNS server IP addresses.
e. Enter <<var_ntp_server_ip>> in the Primary NTP Server field.
Note: You can also enter an alternate NTP server.
9. When you are notified that the cluster configuration is complete, click Manage Your Cluster to configure the storage.
Set On-Board UTA2 Ports Personality
1. Verify the current mode and the current type of the ports by running the ucadmin show command.
AFF A220::> ucadmin show
Current Current Pending Pending Admin
Node Adapter Mode Type Mode Type Status
------------ ------- ------- --------- ------- --------- -----------
AFF A220_A 0c fc target - - online
AFF A220_A 0d fc target - - online
AFF A220_A 0e fc target - - online
AFF A220_A 0f fc target - - online
AFF A220_B 0c fc target - - online
AFF A220_B 0d fc target - - online
AFF A220_B 0e fc target - - online
AFF A220_B 0f fc target - - online
8 entries were displayed.
2. Verify that the current mode of the ports that are in use is cna and that the current type is set to
target. If not, change the port personality by using the following command:
ucadmin modify -node <home node of the port> -adapter <port name> -mode cna -type target
Note: The ports must be offline to run the previous command. To take a port offline, run the
following command:
network fcp adapter modify -node <home node of the port> -adapter <port name> -state down
Note: If you changed the port personality, you must reboot each node for the change to take effect.
system service-processor network modify -node <<var_nodeB>> -address-family IPv4 -enable true -dhcp none -ip-address <<var_nodeB_sp_ip>> -netmask <<var_nodeB_sp_mask>> -gateway <<var_nodeB_sp_gateway>>
Note: The service processor IP addresses should be in the same subnet as the node management IP
addresses.
Note: Both <<var_nodeA>> and <<var_nodeB>> must be able to perform a takeover. Go to step 3 if
the nodes can perform a takeover.
2. Enable failover on one of the two nodes.
storage failover modify -node <<var_nodeA>> -enabled true
4. If high availability is configured, you see the following message when you issue the command; in that case, go to step 6:
High Availability Configured: true
6. Verify that hardware assist is correctly configured and, if needed, modify the partner IP address.
storage failover hwassist show
Note: The message Keep Alive Status : Error: did not receive hwassist keep
alive alerts from partner indicates that hardware assist is not configured. Run the following
commands to configure hardware assist.
storage failover modify -hwassist-partner-ip <<var_nodeB_mgmt_ip>> -node <<var_nodeA>>
storage failover modify -hwassist-partner-ip <<var_nodeA_mgmt_ip>> -node <<var_nodeB>>
broadcast-domain remove-ports -broadcast-domain Default -ports <<var_nodeA>>:e0c,
<<var_nodeA>>:e0d, <<var_nodeA>>:e0e, <<var_nodeA>>:e0f, <<var_nodeB>>:e0c, <<var_nodeB>>:e0d,
<<var_nodeB>>:e0e, <<var_nodeB>>:e0f
ifgrp create -node <<var_nodeB>> -ifgrp a0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node <<var_nodeB>> -ifgrp a0a -port e0c
network port ifgrp add-port -node <<var_nodeB>> -ifgrp a0a -port e0d
AFF A220::> network port modify -node node_B -port a0a -mtu 9000
Create VLANs in ONTAP
To create VLANs in ONTAP, complete the following steps:
1. Create NFS VLAN ports and add them to the data broadcast domain.
network port vlan create -node <<var_nodeA>> -vlan-name a0a-<<var_nfs_vlan_id>>
network port vlan create -node <<var_nodeB>> -vlan-name a0a-<<var_nfs_vlan_id>>
2. Create iSCSI VLAN ports and add them to the data broadcast domain.
network port vlan create -node <<var_nodeA>> -vlan-name a0a-<<var_iscsi_vlan_A_id>>
network port vlan create -node <<var_nodeA>> -vlan-name a0a-<<var_iscsi_vlan_B_id>>
network port vlan create -node <<var_nodeB>> -vlan-name a0a-<<var_iscsi_vlan_A_id>>
network port vlan create -node <<var_nodeB>> -vlan-name a0a-<<var_iscsi_vlan_B_id>>
Note: Retain at least one disk (select the largest disk) in the configuration as a spare. A best practice is
to have at least one spare for each disk type and size.
Note: Start with five disks; you can add disks to an aggregate when additional storage is required.
Note: The aggregate cannot be created until disk zeroing completes. Run the aggr show command to
display the aggregate creation status. Do not proceed until aggr1_nodeA is online.
Note: For example, in the eastern United States, the time zone is America/New_York. After you begin
typing the time zone name, press the Tab key to see available options.
options snmp.enable on
Note: Use the snmp community delete all command with caution. If community strings are used
for other monitoring products, this command removes them.
3. Enter the authoritative entity's engine ID and select md5 as the authentication protocol.
4. Enter an eight-character minimum-length password for the authentication protocol when prompted.
5. Select des as the privacy protocol.
6. Enter an eight-character minimum-length password for the privacy protocol when prompted.
2. Add the data aggregate to the infra-SVM aggregate list for the NetApp VSC.
vserver modify -vserver Infra-SVM -aggr-list aggr1_nodeA,aggr1_nodeB
3. Remove the unused storage protocols from the SVM, leaving NFS and iSCSI.
vserver remove-protocols -vserver Infra-SVM -protocols cifs,ndmp,fcp
5. Turn on the SVM vstorage parameter for the NetApp NFS VAAI plug-in. Then, verify that NFS has
been configured.
vserver nfs modify -vserver Infra-SVM -vstorage enabled
vserver nfs show
Note: Commands are prefaced by vserver in the command line because storage virtual machines were previously called vservers.
Note: The NetApp VSC automatically handles export policies if you choose to install it after vSphere
has been set up. If you do not install it, you must create export policy rules when additional Cisco
UCS C-Series servers are added.
2. Create a job schedule to update the root volume mirror relationships every 15 minutes.
job schedule interval create -name 15min -minutes 15
snapmirror create -source-path Infra-SVM:rootvol -destination-path Infra-SVM:rootvol_m02 -type LS
-schedule 15min
4. Initialize the mirroring relationship and verify that it has been created.
snapmirror initialize-ls-set -source-path Infra-SVM:rootvol
snapmirror show
2. Generally, a self-signed certificate is already in place. Verify the certificate by running the following
command:
security certificate show
3. For each SVM shown, the certificate common name should match the DNS FQDN of the SVM. The
four default certificates should be deleted and replaced by either self-signed certificates or certificates
from a certificate authority.
Note: Deleting expired certificates before creating certificates is a best practice. Run the security
certificate delete command to delete expired certificates. In the following command, use
TAB completion to select and delete each default certificate.
security certificate delete [TAB] …
Example: security certificate delete -vserver Infra-SVM -common-name Infra-SVM -ca Infra-SVM -type server -serial 552429A6
4. To generate and install self-signed certificates, run the following commands as one-time commands.
Generate a server certificate for the infra-SVM and the cluster SVM. Again, use TAB completion to
aid in completing these commands.
security certificate create [TAB] …
Example: security certificate create -common-name infra-svm.netapp.com -type server -size 2048 -country US -state "North Carolina" -locality "RTP" -organization "NetApp" -unit "FlexPod" -email-addr "abc@netapp.com" -expire-days 365 -protocol SSL -hash-function SHA256 -vserver Infra-SVM
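The ONTAP command above follows the same pattern as a generic self-signed certificate workflow, which can be sketched with openssl to show what a common name matching the SVM's DNS FQDN looks like. The names and file paths here are illustrative, not taken from a real SVM:

```shell
# Generate a self-signed 2048-bit server certificate valid for 365 days,
# with a CN that matches the SVM's DNS FQDN (illustrative name)
openssl req -x509 -newkey rsa:2048 -nodes -keyout svm.key -out svm.crt \
  -days 365 -subj "/C=US/ST=North Carolina/O=NetApp/CN=infra-svm.netapp.com"

# Confirm the common name on the issued certificate
openssl x509 -noout -subject -in svm.crt
```

The subject printed by the second command should contain the same FQDN that clients use to reach the SVM; a mismatch produces browser and API certificate warnings.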
5. To obtain the values for the parameters required in the following step, run the security
certificate show command.
6. Enable each certificate that was just created by using the -server-enabled true and -client-enabled false parameters. Again, use TAB completion.
security ssl modify [TAB] …
Example: security ssl modify -vserver Infra-SVM -server-enabled true -client-enabled false -ca
infra-svm.netapp.com -serial 55243646 -common-name infra-svm.netapp.com
7. Configure and enable SSL and HTTPS access and disable HTTP access.
system services web modify -external true -sslv3-enabled true
Warning: Modifying the cluster configuration will cause pending web service requests to be
interrupted as the web servers are restarted.
Do you want to continue {y|n}: y
system services firewall policy delete -policy mgmt -service http -vserver <<var_clustername>>
Note: It is normal for some of these commands to return an error message stating that the entry does
not exist.
8. Revert to the admin privilege level and configure the web services so that the SVMs are available on the web.
set -privilege admin
vserver services web modify -name spi|ontapi|compat -vserver * -enabled true
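To confirm that the web services are enabled for the SVMs, you can run the following command (a sketch; the exact fields shown vary by ONTAP release). The spi, ontapi, and compat services should be listed as enabled:
vserver services web show -vserver *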
31 FlexPod Express with VMware vSphere 6.7 and NetApp AFF A220 © 2018 NetApp, Inc. All Rights Reserved.
Create a NetApp FlexVol Volume in ONTAP
To create a NetApp FlexVol® volume, enter the volume name, size, and the aggregate on which it exists.
Create two VMware datastore volumes and a server boot volume.
volume create -vserver Infra-SVM -volume infra_datastore_1 -aggregate aggr1_nodeA -size 500GB -state online -policy default -junction-path /infra_datastore_1 -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume infra_swap -aggregate aggr1_nodeA -size 100GB -state online -policy default -junction-path /infra_swap -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none
volume create -vserver Infra-SVM -volume esxi_boot -aggregate aggr1_nodeA -size 100GB -state online -policy default -space-guarantee none -percent-snapshot-space 0
Note: When adding an extra Cisco UCS C-Series server, an extra boot LUN must be created.
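To verify that the three volumes were created with the expected sizes and junction paths, you can run the following command (a sketch; adjust the list if you used different volume names):
volume show -vserver Infra-SVM -volume infra_datastore_1,infra_swap,esxi_boot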
network interface create -vserver Infra-SVM -lif iscsi_lif01b -role data -data-protocol iscsi -home-node <<var_nodeA>> -home-port a0a-<<var_iscsi_vlan_B_id>> -address <<var_nodeA_iscsi_lif01b_ip>> -netmask <<var_nodeA_iscsi_lif01b_mask>> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra-SVM -lif iscsi_lif02a -role data -data-protocol iscsi -home-node <<var_nodeB>> -home-port a0a-<<var_iscsi_vlan_A_id>> -address <<var_nodeB_iscsi_lif01a_ip>> -netmask <<var_nodeB_iscsi_lif01a_mask>> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra-SVM -lif iscsi_lif02b -role data -data-protocol iscsi -home-node <<var_nodeB>> -home-port a0a-<<var_iscsi_vlan_B_id>> -address <<var_nodeB_iscsi_lif01b_ip>> -netmask <<var_nodeB_iscsi_lif01b_mask>> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra-SVM -lif nfs_lif02 -role data -data-protocol nfs -home-node <<var_nodeB>> -home-port a0a-<<var_nfs_vlan_id>> -address <<var_nodeB_nfs_lif_02_ip>> -netmask <<var_nodeB_nfs_lif_02_mask>> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
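To verify that the iSCSI and NFS LIFs came up on the intended home nodes and ports, you can run:
network interface show -vserver Infra-SVM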
To add the infrastructure SVM administrator and SVM administration logical interface to the management
network, complete the following steps:
1. Run the following command:
network interface create -vserver Infra-SVM -lif vsmgmt -role data -data-protocol none -home-node <<var_nodeB>> -home-port e0M -address <<var_svm_mgmt_ip>> -netmask <<var_svm_mgmt_mask>> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy mgmt -auto-revert true
Note: The SVM management IP here should be in the same subnet as the storage cluster management
IP.
2. Create a default route to allow the SVM management interface to reach the outside world.
network route create -vserver Infra-SVM -destination 0.0.0.0/0 -gateway <<var_svm_mgmt_gateway>>
network route show
3. Set a password for the SVM vsadmin user and unlock the user.
security login password -username vsadmin -vserver Infra-SVM
Enter a new password: <<var_password>>
Enter it again: <<var_password>>
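Step 3 also calls for unlocking the vsadmin user, but only the password command is shown above. Unlocking is typically done with the following command; you can verify the result with security login show:
security login unlock -username vsadmin -vserver Infra-SVM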
Perform Initial Cisco UCS C-Series Standalone Server Setup for the Cisco Integrated
Management Controller
Complete these steps for the initial setup of the CIMC interface for Cisco UCS C-Series standalone
servers.
Table 15 lists the information needed to configure CIMC for each Cisco UCS C-Series standalone server.
All Servers
1. Attach the Cisco keyboard, video, and mouse (KVM) dongle (provided with the server) to the KVM
port on the front of the server. Plug a VGA monitor and USB keyboard into the appropriate KVM
dongle ports.
2. Power on the server and press F8 when prompted to enter the CIMC configuration.
3. In the CIMC configuration utility, set the following options:
• Network interface card (NIC) mode:
− Dedicated [X]
• IP (Basic):
− IPV4: [X]
− DHCP enabled: [ ]
− CIMC IP: <<cimc_ip>>
− Prefix/Subnet: <<cimc_netmask>>
− Gateway: <<cimc_gateway>>
• VLAN (Advanced): Leave cleared to disable VLAN tagging.
• NIC redundancy:
− None: [X]
5. Press F10 to save the CIMC interface configuration.
6. After the configuration is saved, press Esc to exit.
Detail                             Detail Value
IP address iscsi_lif01a
IP address iscsi_lif02a
IP address iscsi_lif01b
IP address iscsi_lif02b
Infra-SVM IQN
3. Configure the following devices by clicking the device under Add Boot Device, and going to the
Advanced tab.
• Add Virtual Media
− Name: KVM-CD-DVD
− Subtype: KVM MAPPED DVD
− State: Enabled
− Order: 1
• Click Add iSCSI Boot.
− Name: iSCSI-A
− State: Enabled
− Order: 2
− Slot: MLOM
− Port: 0
• Click Add iSCSI Boot.
− Name: iSCSI-B
− State: Enabled
− Order: 3
− Slot: MLOM
− Port: 1
4. Click Add Device.
5. Click Save Changes and then click Close.
Configure Cisco VIC 1387 for iSCSI Boot
The following configuration steps are for the Cisco VIC 1387 for iSCSI boot.
3. Click Add vNIC and then click OK.
4. Repeat the process to add a second vNIC.
a. Name the vNIC iSCSI-vNIC-B.
b. Enter <<var_iscsi_vlan_b>> as the VLAN.
c. Set the uplink port to 1.
5. Select the vNIC iSCSI-vNIC-A on the left.
− Boot LUN: 0
8. Enter the secondary target details.
− Name: IQN number of infra-SVM
− IP address: IP address of iscsi_lif02a
− Boot LUN: 0
Note: You can obtain the storage IQN number by running the vserver iscsi show command.
Note: Be sure to record the IQN names for each vNIC. You need them for a later step.
15. Click Configure iSCSI.
16. Repeat this process to configure iSCSI boot for Cisco UCS server B.
5. Repeat steps 3 and 4 for eth1, verifying that the uplink port is set to 1 for eth1.
Note: This procedure must be repeated for each initial Cisco UCS Server node and each additional
Cisco UCS Server node added to the environment.
Note: This step must be completed when adding additional Cisco UCS C-Series servers.
5.5 VMware vSphere 6.7 Deployment Procedure
This section provides detailed procedures for installing VMware ESXi 6.7 in a FlexPod Express
configuration. The deployment procedures that follow are customized to include the environment
variables described in previous sections.
Multiple methods exist for installing VMware ESXi in such an environment. This procedure uses the virtual
KVM console and virtual media features of the CIMC interface for Cisco UCS C-Series servers to map
remote installation media to each individual server.
Note: This procedure must be completed for Cisco UCS server A and Cisco UCS server B.
Note: This procedure must be completed for any additional nodes added to the cluster.
All Hosts
1. In a web browser, enter the IP address of the CIMC interface for the Cisco UCS C-Series server. This step launches the CIMC GUI application.
2. Log in to the CIMC UI using the admin user name and password.
3. In the main menu, select the Server tab.
4. Click Launch KVM Console.
5. From the virtual KVM console, select the Virtual Media tab.
6. Select Map CD/DVD.
Note: You might first need to click Activate Virtual Devices. Select Accept This Session if prompted.
7. Browse to the VMware ESXi 6.7 installer ISO image file and click Open. Click Map Device.
8. Select the Power menu and choose Power Cycle System (Cold Boot). Click Yes.
All Hosts
1. When the system boots, the machine detects the presence of the VMware ESXi installation media.
2. Select the VMware ESXi installer from the menu that appears.
The installer loads. This takes several minutes.
3. After the installer has finished loading, press Enter to continue with the installation.
4. After reading the end-user license agreement, accept it and continue with the installation by pressing
F11.
5. Select the NetApp LUN that was previously set up as the installation disk for ESXi, and press Enter to
continue with the installation.
All Hosts
1. After the server has finished rebooting, enter the option to customize the system by pressing F2.
2. Log in with root as the login name and the root password previously entered during the installation
process.
3. Select the Configure Management Network option.
4. Select Network Adapters and press Enter.
5. Select the desired ports for vSwitch0. Press Enter.
Note: Select the ports that correspond to eth0 and eth1 in CIMC.
6. Select VLAN (optional) and press Enter.
7. Enter the VLAN ID <<mgmt_vlan_id>>. Press Enter.
8. From the Configure Management Network menu, select IPv4 Configuration to configure the IP
address of the management interface. Press Enter.
9. Use the arrow keys to highlight Set Static IPv4 address and use the space bar to select this option.
10. Enter the IP address for managing the VMware ESXi host <<esxi_host_mgmt_ip>>.
11. Enter the subnet mask for the VMware ESXi host <<esxi_host_mgmt_netmask>>.
12. Enter the default gateway for the VMware ESXi host <<esxi_host_mgmt_gateway>>.
13. Press Enter to accept the changes to the IP configuration.
14. Enter the IPv6 configuration menu.
15. Use the space bar to disable IPv6 by unselecting the Enable IPv6 (restart required) option. Press
Enter.
16. Enter the menu to configure the DNS settings.
17. Because the IP address is assigned manually, the DNS information must also be entered manually.
18. Enter the primary DNS server’s IP address <<nameserver_ip>>.
19. (Optional) Enter the secondary DNS server’s IP address.
20. Enter the FQDN for the VMware ESXi host name: <<esxi_host_fqdn>>.
21. Press Enter to accept the changes to the DNS configuration.
22. Exit the Configure Management Network submenu by pressing Esc.
23. Press Y to confirm the changes and reboot the server.
24. Log out of the VMware Console by pressing Esc.
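For reference, the management-network settings above can also be applied from the ESXi shell with esxcli. The following sketch assumes the default vmk0 management interface and the default Management Network port group; adjust the names if your host differs:
esxcli network vswitch standard portgroup set -p "Management Network" -v <<mgmt_vlan_id>>
esxcli network ip interface ipv4 set -i vmk0 -t static -I <<esxi_host_mgmt_ip>> -N <<esxi_host_mgmt_netmask>>
esxcfg-route -a default <<esxi_host_mgmt_gateway>>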
Configure ESXi Host
You need the following information to configure each ESXi host.
Detail Value
ESXi host name
3. If the driver needs to be updated, complete the procedures in the sections that follow.
Note: The nenic version used in this configuration is 1.0.16.0. Be sure to check the Cisco UCS
hardware and software compatibility tool for information about the latest supported drivers.
All Hosts
To load the updated versions of the eNIC driver for the Cisco VIC, follow these steps for all the hosts from
the vSphere Client:
1. Click datastore1 in the left navigation pane.
2. Click Datastore browser in the right pane.
3. Select Upload in the datastore browser and upload the vSphere Installation Bundle (VIB) file.
4. Using SSH, connect to the management IP address of the ESXi host. Enter root for the user name and enter the root password.
Note: You must start the SSH service by navigating to Manage in the left navigation pane and then
clicking services. Right-click the Tech Support Mode (TSM) and TSM-SSH services and select Start
to start the services. Be sure to stop them when you are finished.
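Step 5, the driver update itself, is not shown in this extract. A typical command for updating from an uploaded offline bundle over the SSH session looks like the following; the datastore path and file name are placeholders for the bundle you uploaded in step 3:
esxcli software vib update -d /vmfs/volumes/datastore1/<driver-bundle>.zip
For a single .vib file, use esxcli software vib install -v <path> instead.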
6. A message appears indicating that the update has completed successfully and a reboot is required. The output also shows the names of the VIB that was removed and the VIB that was installed.
7. Run the following command to reboot the host:
reboot
3. Click iScsiBootvSwitch.
4. Select Edit settings.
5. Change the MTU to 9000 and click Save.
6. Click Networking in the left navigation pane to return to the Virtual Switches tab.
7. Click Add Standard Virtual Switch.
8. Provide the name iScsiBootvSwitch-B for the vSwitch name.
− Set the MTU to 9000.
− Select vmnic3 from the Uplink 1 options.
− Click Add.
Note: vmnic2 and vmnic3 are used for iSCSI boot in this configuration. If you have additional NICs in
your ESXi host, you might have different vmnic numbers. To confirm which NICs are used for iSCSI
boot, match the MAC addresses on the iSCSI vNICs in CIMC to the vmnics in ESXi.
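To match MAC addresses to vmnics, you can list the physical NICs from the ESXi shell; the output includes each vmnic's MAC address and link state:
esxcli network nic list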
9. In the center pane, select the VMkernel NICs tab.
10. Select Add VMkernel NIC.
a. Specify a new port group name of iScsiBootPG-B.
b. Select iScsiBootvSwitch-B for the virtual switch.
c. Enter <<iscsib_vlan_id>> for the VLAN ID.
d. Change the MTU to 9000.
e. Expand IPv4 Settings.
f. Select Static Configuration.
g. Enter <<var_hosta_iscsib_ip>> for Address.
h. Enter <<var_hosta_iscsib_mask>> for Subnet Mask.
i. Click Create.
Configure iSCSI Multipathing
To set up iSCSI multipathing on the ESXi hosts, complete the following steps:
1. Select Storage in the left navigation pane. Click Adapters.
2. Select the iSCSI software adapter and click Configure iSCSI.
Note: You can find the iSCSI LIF IP addresses by running the network interface show command
on the NetApp cluster or by looking at the Network Interfaces tab in OnCommand® System
Manager.
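The discovery targets can also be added from the ESXi shell. This sketch assumes the software iSCSI adapter is vmhba64 and reuses the LIF address variable naming from earlier sections; confirm the adapter name on your host before running it:
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a <<var_nodeA_iscsi_lif01a_ip>>:3260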
3. Right-click VM Network and select Edit. Change the VLAN ID to <<var_vm_traffic_vlan>>.
4. Click Add Port Group.
a. Name the port group MGMT-Network.
b. Enter <<mgmt_vlan>> for the VLAN ID.
c. Make sure that vSwitch0 is selected.
d. Click Add.
5. Click the VMkernel NICs tab.
7. Repeat this process to create the vMotion VMkernel port.
8. Select Add VMkernel NIC.
a. Select New Port Group.
b. Name the port group vMotion.
c. Enter <<vmotion_vlan_id>> for the VLAN ID.
d. Change the MTU to 9000.
e. Expand IPv4 Settings.
f. Select Static Configuration.
g. Enter <<var_hosta_vmotion_ip>> for Address.
h. Enter <<var_hosta_vmotion_mask>> for Subnet Mask.
i. Make sure that the vMotion checkbox (below the IPv4 settings) is selected.
Note: There are many ways to configure ESXi networking, including by using the VMware vSphere
distributed switch if your licensing allows it. Alternative network configurations are supported in
FlexPod Express if they are required to meet business requirements.
2. Select Mount NFS Datastore.
3. Enter the following information on the Provide NFS Mount Details screen:
− Name: infra_datastore_1
− NFS server: <<var_nodea_nfs_lif>>
− Share: /infra_datastore_1
− Make sure that NFS 3 is selected.
4. Click Finish. You can see the task completing in the Recent Tasks pane.
5. Repeat this process to mount the infra_swap datastore:
− Name: infra_swap
− NFS server: <<var_nodea_nfs_lif>>
− Share: /infra_swap
− Make sure that NFS 3 is selected.
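The same NFS v3 mounts can also be created from the ESXi shell; a sketch using the variables defined earlier:
esxcli storage nfs add -H <<var_nodea_nfs_lif>> -s /infra_datastore_1 -v infra_datastore_1
esxcli storage nfs add -H <<var_nodea_nfs_lif>> -s /infra_swap -v infra_swap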
Configure NTP
To configure NTP for an ESXi host, complete the following steps:
1. Click Manage in the left navigation pane. Select System in the right pane and then click Time & Date.
Move the Virtual Machine Swap-File Location
These steps provide details for moving the virtual machine swap-file location.
1. Click Manage in the left navigation pane. Select System in the right pane, then click Swap.
3. Click Save.
5.6 Install VMware vCenter Server 6.7
This section provides detailed procedures for installing VMware vCenter Server 6.7 in a FlexPod Express
configuration.
Note: FlexPod Express uses the VMware vCenter Server Appliance (VCSA).
14. On the Ready to Complete Stage 1 screen, verify that the settings you have entered are correct. Click
Finish.
The VCSA installs now. This process takes several minutes.
15. After stage 1 completes, a message appears stating that it has completed. Click Continue to begin
stage 2 configuration.
16. On the Stage 2 Introduction screen, click Next.
17. Enter <<var_ntp_id>> for the NTP server address. You can enter multiple NTP IP addresses.
Note: If you plan to use vCenter Server high availability (HA), make sure that SSH access is enabled.
18. Configure the SSO domain name, password, and site name. Click Next.
Note: Record these values for your reference, especially if you deviate from the vsphere.local domain
name.
19. Join the VMware Customer Experience Program if desired. Click Next.
20. View the summary of your settings. Click Finish or use the back button to edit settings.
21. A message appears stating that you will not be able to pause or stop the installation from completing
after it has started. Click OK to continue.
22. The appliance setup continues. This takes several minutes.
23. A message appears indicating that the setup was successful.
Note: The links that the installer provides to access vCenter Server are clickable.
3. Log in with the user name administrator@vsphere.local and the SSO password you entered during
the VCSA setup process.
4. Right-click the vCenter name and select New Datacenter.
5. Enter a name for the data center and click OK.
Add ESXi Hosts to Cluster
1. Right-click the cluster and select Add Host.
3. The message Verified the configured netdump server is running appears after you
enter the final command.
Note: This process must be completed for any additional hosts added to FlexPod Express.
6 Conclusion
FlexPod Express provides a simple and effective solution: a validated design built on industry-leading components. Because it scales by adding components, FlexPod Express can be tailored to specific business needs. FlexPod Express was designed with small to midsize businesses, ROBOs, and other organizations that require dedicated solutions in mind.
Acknowledgments
The authors would like to acknowledge the following people for their support and contribution to this
design:
• Paul Onofrietti Onotracy
• Arvind Ramakrishnan
Version History
Version Date Document Version History
Version 1.0 October 2018 Initial release.
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment. The
NetApp IMT defines the product components and versions that can be used to construct configurations
that are supported by NetApp. Specific results depend on each customer’s installation in accordance with
published specifications.
Copyright Information
Copyright © 2018 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document covered
by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical,
including photocopying, recording, taping, or storage in an electronic retrieval system—without prior
written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY
DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein, except as
expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license
under any patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or
pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to
NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable, worldwide,
limited irrevocable license to use the Data only in connection with and in support of the U.S. Government
contract under which the Data was delivered. Except as provided herein, the Data may not be used,
disclosed, reproduced, modified, performed, or displayed without the prior written approval of NetApp,
Inc. United States Government license rights for the Department of Defense are limited to those rights
identified in DFARS clause 252.227-7015(b).
Trademark Information
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of
NetApp, Inc. Other company and product names may be trademarks of their respective owners.