VDCA550 Study Guide
Written by: Paul E. Grevink
Adventures in a Virtual World: http://paulgrevink.wordpress.com/
e: [email protected]
t: @paulgrevink

Contents

Introduction
Section 1 – Implement and Manage Storage
  Objective 1.1 – Implement Complex Storage Solutions
    Determine use cases for and configure VMware DirectPath I/O
    Determine requirements for and configure NPIV
    Understand use cases for Raw Device Mapping
    Configure vCenter Server storage filters
    Understand and apply VMFS re-signaturing
    Understand and apply LUN masking using PSA-related commands
    Configure Software iSCSI port binding
    Configure and manage vSphere Flash Read Cache
    Configure Datastore Clusters
    Upgrade VMware storage infrastructure
  Objective 1.2 – Manage Complex Storage Solutions
    Analyze I/O workloads to determine storage performance requirements
    Identify and tag SSD and local devices
    Administer hardware acceleration for VAAI
    Configure and administer profile-based storage
    Prepare Storage for maintenance
    Apply space utilization data to manage storage resources
    Provision and manage storage resources according to Virtual Machine requirements
    Understand interactions between virtual storage provisioning and physical storage provisioning
    Configure datastore Alarms
    Create and analyze datastore alarms and errors to determine space availability
  Objective 1.3 – Troubleshoot complex storage solutions
    Perform command line configuration of multipathing options
    Change a multipath policy
    Troubleshoot common storage issues
Section 2 – Implement and Manage Networking
  Objective 2.1 – Implement and manage virtual standard switch (vSS) networks
    Create and Manage vSS components
    Create and Manage vmkernel ports on standard switches
    Configure advanced vSS settings
    Network Failover Detection
    Notify Switches
    Failback
  Objective 2.2 – Implement and manage virtual distributed switch (vDS) networks
    Determine use cases for and applying VMware DirectPath I/O
    Migrate a vSS network to a hybrid or full vDS solution
    Configure vSS and vDS settings using command line tools
    Analyze command line output to identify vSS and vDS configuration details
    Configure NetFlow
    Determine appropriate discovery protocol
    Determine use cases for and configure PVLANs
    Use command line tools to troubleshoot and identify VLAN configurations
  Objective 2.3 – Troubleshoot virtual switch solutions
    Understand the NIC Teaming failover types and related physical network settings
    Determine and apply Failover settings
    Configure explicit failover to conform with VMware best practices
    Configure port groups to properly isolate network traffic
    Given a set of network requirements, identify the appropriate distributed switch technology to use
    Configure and administer vSphere Network I/O Control
    Use command line tools to troubleshoot and identify configuration items from an existing vDS

Introduction

The first edition of this study guide was first published as a series of posts on my blog "Adventures in a Virtual World", as found here: http://paulgrevink.wordpress.com/the-vcap5-dca-diaries/. These posts were written in preparation for my VCAP5-DCA exam (version VDCA510). With the release of the VDCA550 exam in spring 2014, I felt I had to write a version for this exam as well. This guide is based on the VDCA550 Blueprint, version 3.2. For more information about the differences between the VDCA510 and the VDCA550, read my post on this item.

The posts had to meet the following goals:
- Based on the official Blueprint, follow the objectives as close as possible.
- Write down the essence of every objective (the Summary part). For that reason, every Objective starts with one or more references to the VMware documentation.
- Refer to the official VMware documentation as much as possible. In case the official documentation is not available or not complete, provide additional explanation, instructions, examples and references to other posts. If necessary, provide an alternative.
- All this without providing too much information.

In the official vSphere 5.5 documentation almost all user actions are performed using the vSphere Web Client. However, in the vSphere 5.0 documentation all user actions are performed using the traditional vSphere Client. In this revision, most pictures have been replaced; in some cases you will see the vSphere Client.

I hope all this will help you in your preparation for your exam. I welcome your comments, feedback and questions.

Section 1 – Implement and Manage Storage

Objective 1.1 – Implement Complex Storage Solutions

Skills and Abilities
- Determine use cases for and configure VMware DirectPath I/O
- Determine requirements for and configure NPIV
- Understand use cases for Raw Device Mapping
- Configure vCenter Server storage filters
- Understand and apply VMFS re-signaturing
- Understand and apply LUN masking using PSA-related commands
- Configure Software iSCSI port binding
- Configure and manage vSphere Flash Read Cache
- Configure Datastore Clusters
- Upgrade VMware storage infrastructure

Tools
- vSphere Installation and Setup Guide v5.5
- vSphere Management Assistant Guide v5.5
- vSphere Storage Guide v5.5
- vSphere Command-Line Interface Concepts and Examples v5.5
- Configuring and Troubleshooting N-Port ID Virtualization
- vSphere Client / Web Client
- vSphere CLI
  o esxtop / resxtop
  o vscsiStats
  o esxcli
  o vifs
  o vmkfstools

Determine use cases for and configure VMware DirectPath I/O

Official Documentation: vSphere Virtual Machine Administration Guide v5.5, Chapter 5, Section "Add a PCI Device in the vSphere Web Client", page 119.

Summary: vSphere DirectPath I/O allows a guest operating system on a virtual machine to directly access physical PCI and PCIe devices connected to a host. Each virtual machine can be connected to up to six PCI devices. PCI devices connected to a host can be marked as available for passthrough from the Hardware Advanced Settings in the Configuration tab for the host. Snapshots are not supported with PCI vSphere DirectPath I/O devices.

Verify that the virtual machine is compatible with ESXi 4.x and later.
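Before the remaining prerequisites and the two-step configuration that follow below, a quick CLI-side aid. This is not part of the official procedure, just a hedged sketch for an ESXi 5.x shell: you can list the PCI devices the host detects to find the address and vendor/device IDs of the card you intend to pass through.

List all PCI devices seen by the host and note the address (for example 0000:0b:00.0) of the NIC or HBA you want to dedicate to a virtual machine:
~ # esxcli hardware pci list | more

The actual enabling of passthrough and adding the device to the virtual machine is done in the vSphere (Web) Client, as described in the prerequisites and steps that follow.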
Verify that the PCI devices are connected to the host and marked as available for pass through. However. VMware does not support USB controller passthrough for ESXi hosts that boot from USB devices or SD cards connected through USB channels. For more information. When finished.2 Page 6 . verify that the host has Intel® Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU) enabled in the BIOS. see http://kb.x and later. add a PCI device to the Virtual Machine Configuration. First add a PCI device on the host level. you should disable the USB controller for passthrough.Prerequisites To use DirectPath I/O.vmware. if your ESXi host is configured to boot from a USB device.com/kb/2068645. Action is supported with vSphere Web Client and vSphere Client Figure 1 Installation is a two-step process. 5.htm (Thank you Sean and Ed) Determine requirements for and configure NPIV Official Documentation: vSphere Virtual Machine Administration Guide v5. Contact the switch vendor for information about enabling NPIV on their devices. NPIV support is subject to the following limitations: NPIV must be enabled on the SAN switch.5. Section “Configure Fibre Channel NPIV Settings in the vSphere Web Client”. “N-Port ID Virtualization. Chapter 4.il/vmware-esxi4vmdirectpath. NPIV is supported only for virtual machines with RDM disks.2 Page 7 . Removing the PCI device did not release the reservation. each with unique identifiers. N-port ID virtualization (NPIV) provides the ability to share a single physical Fibre Channel HBA port among multiple virtual ports. Virtual machines with regular virtual disks continue to use the WWNs of the host’s physical HBAs. page 41” Summary: Control virtual machine access to LUNs on a per-virtual machine basis. Detailed information can be found in vSphere Storage Guide v5.co. page 143.Figure 2 Note: Adding a PCI device creates a Memory reservation for the VM. VDCA Study Guide Version – Section 1 .petri. Chapter 5. Other references: A good step-by-step guide can be found at: http://www. pdf Updated for ESXi 5. Therefore. make sure that the RDM files of the virtual machines are located on the same datastore. Figure 3 Other references: VMware vSphere Blog by Cormac Hogan: http://blogs. If the physical HBAs do not support NPIV. what is the value of NPIV? VMware Technical Note ” Configuring and Troubleshooting N-Port ID Virtualization”: http://www. NPIV-enabled virtual machines are assigned exactly 4 NPIV-related WWNs.com/vsphere/2011/11/npivn-port-id-virtualization. You cannot perform Storage vMotion or vMotion between datastores when NPIV is enabled.uk/blog/2009/07/27/npiv-support-in-vmware-esx4/ VDCA Study Guide Version – Section 1 . virtual machines can utilize up to 4 physical HBAs for NPIV purposes.com/files/pdf/techpaper/vsp_4_vsp4_41_npivconfig.0! Simon Long: http://www. which are used to communicate with physical HBAs through virtual ports.vmware.co.vmware.2 Page 8 . Ensure that access is provided to both the host and the virtual machines. And the Big question.simonlong. The physical HBAs on the ESXi host must have access to a LUN using its WWNs in order for any virtual machines on that host to have access to that LUN using their NPIV WWNs. the virtual machines running on that host will fall back to using the WWNs of the host’s physical HBAs for LUN access. NOTE: To use vMotion for virtual machines with enabled NPIV.html Note: A very good example how to configure NPIV. The physical HBAs on the ESXi host must support NPIV. 
Each virtual machine can have up to 4 virtual ports. Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The RDM contains metadata for managing and redirecting disk access to the physical device. User-Friendly Persistent Names Dynamic Name Resolution Distributed File Locking File Permissions File System Operations Snapshots VDCA Study Guide Version – Section 1 .virtual-to-virtual clusters as well as physical-to-virtual clusters. and naming. it merges VMFS manageability with raw device access. Use cases for raw LUNs with RDMs are: When SAN snapshot or other layered applications run in the virtual machine. The RDM. is referenced in the virtual machine configuration. Two compatibility modes are available for RDMs: Virtual compatibility mode allows an RDM to act exactly like a virtual disk file. The mapping makes LUNs appear as files in a VMFS volume. including the use of snapshots. Physical compatibility mode allows direct access of the SCSI device for those applications that need lower level control. In any MSCS clustering scenario that spans physical hosts . In this case. Summary: An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. RDM offers several benefits (shortlist).5 is dedicated to Raw Device Mappings (starting page 155). cluster data and quorum disks should be configured as RDMs rather than as virtual disks on a shared VMFS. As a result. you can: Use vMotion to migrate virtual machines using raw LUNs. not the raw LUN. permissions. Add raw LUNs to virtual machines using the vSphere Client. The file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. This chapter starts with an introduction about RDMs and discusses the Characteristics and concludes with information how to create RDMs and how to manage paths for a mapped Raw LUN. The RDM better enables scalable backup offloading systems by using features inherent to the SAN. The RDM allows a virtual machine to directly access and use the storage device. Using RDMs.2 Page 9 . The RDM contains a reference to the raw LUN. Use file system features such as distributed file locking.Understand use cases for Raw Device Mapping Official Documentation: Chapter 17 in the vSphere Storage Guide v5. The conclusions are: For random reads and writes. You cannot map to a disk partition. vMotion SAN Management Agents N-Port ID Virtualization (NPIV) Limitations of Raw Device Mapping The RDM is not available for direct-attached block devices or certain RAID devices.5. Based on ESX 3. you cannot use a snapshot with the disk. The RDM uses a SCSI serial number to identify the mapped device. VMFS and RDM yield a similar number of I/O operations per second. VMFS requires 5 percent more CPU cycles per I/O operation compared to RDM.0” comes to the following conclusion: “Ordinary VMFS is recommended for most virtual disk storage.2 Page 10 . performance of VMFS is very close to that of RDM (except on sequential reads with an I/O block size of 4K). For random reads and writes. RDMs require the mapped device to be a whole LUN. For sequential reads and writes. VMFS requires about 8 percent more CPU cycles per I/O operation compared to RDM. snapshot or mirroring operations. but raw disks might be desirable in some cases” VDCA Study Guide Version – Section 1 . Virtual machine snapshots are available for RDMs with virtual compatibility mode. 
Both RDM and VMFS yield a very high throughput in excess of 300 megabytes per second depending on the I/O block size. Another paper “Performance Best Practices for VMware vSphere 5. Physical compatibility mode allows the virtual machine to manage its own. Comparing features available with virtual disks and RDMs: Figure 4 In 2008 VMware presented Performance Study “Performance Characterization of VMFS and RDM Using a SAN”. they cannot be used with RDMs. storagebased. If you are using the RDM in physical compatibility mode. For sequential reads and writes. Because block devices and some direct-attach RAID devices do not export serial numbers. tests were ran to compare the performance of VMFS and RDM. RDM Filter Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. Same Host and Transports Filter Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility.SameHostAndTransportsFilter config. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM.filter.2 Page 11 . Summary: When you perform VMFS datastore management operations. Chapter 16 “Working with Datastores”.Other references: Performance Study “Performance Characterization of VMFS and RDM Using a SAN” Configure vCenter Server storage filters Official Documentation: vSphere Storage Guide v5.vpxd. you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server. You can turn off the filters to view all devices. LUNs that use a storage type different from the one the original VMFS datastore uses. The filters help you to avoid storage corruption by retrieving only the storage devices that can be used for a particular operation. There are 4 types of storage filters: config. page 147.vpxd.vpxd.rdmFilter config. vCenter Server uses default storage protection filters. Section “Storage Filtering”.5.filter.hostRescanFilter VMFS Filter RDM Filter Same Host and Transports Filter Host Rescan Filter VMFS Filter Filters out storage devices. Unsuitable devices are not displayed for selection. or LUNs. Host Rescan Filter Automatically rescans and updates VMFS datastores after you perform datastore management operations.filter. the hosts automatically perform a rescan no matter whether you have the Host Rescan Filter on or off. that are already used by a VMFS datastore on any host managed by vCenter Server. VDCA Study Guide Version – Section 1 . For example.vpxd.filter. Prevents you from adding the following LUNs as extents: LUNs not exposed to all hosts that share the original VMFS datastore.vmfsFilter config. NOTE If you present a new LUN to a host or a cluster. like config. type a key. 2. and click Edit.filter. In the Value text box. Click Advanced Settings. Click Add.vpxd. page 141. Figure 5 Other references: Yellow Bricks on Storage Filters: http://www. section “Managing Duplicate VMFS Datastores”. vCenter Server storage protection filters are part of the vCenter Server and are managed with the vSphere Client. Click OK. type False for the specified key.5.com/2010/08/11/storage-filters/ Understand and apply VMFS re-signaturing Official Documentation: vSphere Storage Guide v5.2 Page 12 . Browse to the vCenter Server in the vSphere Web Client object navigator. VDCA Study Guide Version – Section 1 . The filters are turned On by default. In the Key text box.vmfsFilter 5. 
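Tying back to the Raw Device Mapping compatibility modes described above: besides the vSphere (Web) Client, an RDM mapping file can also be created from the command line with vmkfstools. A minimal sketch, assuming an ESXi 5.x shell; the NAA ID, datastore and file names are purely illustrative.

Create a virtual compatibility mode RDM (-r), which supports snapshots of the mapped LUN:
~ # vmkfstools -r /vmfs/devices/disks/naa.60000000000000000000000000000001 /vmfs/volumes/Datastore01/MyVM/MyVM_rdm.vmdk

Create a physical compatibility mode RDM (-z), which passes SCSI commands through to the device:
~ # vmkfstools -z /vmfs/devices/disks/naa.60000000000000000000000000000001 /vmfs/volumes/Datastore01/MyVM/MyVM_rdmp.vmdk

The resulting mapping file is then added to the virtual machine as an existing disk.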
you can mount the datastore with the existing signature or assign a new signature. 6. and click Settings. Summary: When a storage device contains a VMFS datastore copy. Chapter 13 “Working with Datastores”. To Turn off a Storage Filter 1. Click the Manage tab.yellow-bricks.So. 7. 4. 3. with the original disk. The LUN copy must be writable. and mounts the copy as a datastore distinct from the original. In addition to LUN snapshotting and replication. 4. In the event of a disaster at the primary site. The datastore mounts are persistent and valid across system reboots. or a VMFS datastore copy. ESXi allows both reads and writes to the datastore residing on the LUN copy. Click Add Storage. for example. you mount the datastore copy and power on the virtual machines at the secondary site. When you mount the VMFS datastore. 7. for example. thus resignaturing the datastore. When the storage disk is replicated or snapshotted. You can mount the datastore copy with its original UUID or change the UUID. ESXi can detect the VMFS datastore copy and display it in the vSphere (Web) Client. select Keep Existing Signature. ESXi assigns a new UUID and a new label to the copy. The name present in the VMFS Label column indicates that the LUN is a copy that contains a copy of an existing VMFS datastore. 2. you maintain synchronized copies of virtual machines at a secondary site as part of a disaster recovery plan. the disk copy appears to contain an identical VMFS datastore. Under Mount Options. 5.2 Page 13 . Click the Configuration tab and click Storage in the Hardware panel. with exactly the same UUID X. where snapID is VDCA Study Guide Version – Section 1 . The default format of the new label assigned to the datastore is “snapID-oldLabel”. byte-for-byte. 6. Log in to the vSphere Client and select the server from the inventory panel. the resulting disk copy is identical. select the LUN that has a datastore name displayed in the VMFS Label column and click Next. review the datastore configuration information and click Finish. From the list of LUNs. the following storage device operations might cause ESXi to mark the existing datastore on the device as a copy of the original datastore: LUN ID changes SCSI device type changes. Procedure 1. example: You can keep the signature if. if the original storage disk contains a VMFS datastore with UUID X.Each VMFS datastore created in a storage disk has a unique UUID that is stored in the file system superblock. In the Ready to Complete page. Select the Disk/LUN storage type and click Next. IMPORTANT: You can mount a VMFS datastore copy only if it does not collide with the original VMFS datastore that has the same UUID. As a result. Use datastore resignaturing if you want to retain the data stored on the VMFS datastore copy. To mount the copy. the original VMFS datastore has to be offline. When resignaturing a VMFS copy. from SCSI-2 to SCSI-3 SPC-2 compliancy enablement Mount a VMFS Datastore with an Existing Signature. 3. Procedure as above. The syntax is slightly different while using the esxcli command from the vMA or vCLI. select Assign a New Signature. Procedure for Masking a LUN.html Understand and apply LUN masking using PSA-related commands vSphere Storage Guide v5.2 Page 14 . Use the esxcli commands to mask the paths. Chapter 23 “Understanding Multipathing and Failover”. The resignaturing process is crash and fault tolerant. Figure 6 VDCA Study Guide Version – Section 1 .typepad. When you mask paths. 
consider the following points: Datastore resignaturing is irreversible.com/virtual_geek/2008/08/afew-technic-1. A spanned datastore can be resignatured only if all its extents are online. you can resume it later. If the process is interrupted. You can run the esxcli command directly in the ESXi shell. or use the vMA or the vCLI. such as an ancestor or child in a hierarchy of LUN snapshots.5. you have to add the –-server=server_name option. except: 6 Under Mount Options.an integer and oldLabel is the label of the original datastore. page 211. Other references: Good Reading from Virtual Geek: http://virtualgeek. You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other datastore. you create claim rules that assign the MASK_PATH plug-in to the specified paths. The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy. When you perform datastore resignaturing. in this example a Datastore named “IX2-iSCSI-LUNMASK”. Summary: The purpose of LUN masking is to prevent the host from accessing storage devices or LUNs or from using individual paths to a LUN. ATA_____GB0160CAABV_____________________________5RX7BZHC____________:3 /vmfs/devices/disks/t10.199204. Log into an ESXI host 2.ATA_____GB0160CAABV_____________________________5RX7BZHC___ _________:3 4c13c151-2e6c6f81-ab84-f4ce4698970c 0 ml110g5-local naa.vmhba32:C0:T0:L0 vmhba32 0 0 0 NMP active local usb. VMware KB 1009449 “Masking a LUN from ESX and ESXi using the MASK_PATH plug-in” is more detailed then the Storage Guide.----.5000144f77827768:1 4f9eca2e-3a28f563-c184-001b2181d256 0 IX2-iSCSI-01 naa.1 vmhba32:C0:T0:L0 state:active mpx.vmware:ml110g5 00023d000001.Open the Datastore “Properties” and “Manage Paths”. display name: IX2-iSCSI-LUNMASK is the device we want to MASK.vmhba32 usb. I have followed the steps in the KB.emc:storage.1998-01.iqn.emc:storage.1 vmhba0:C0:T0:L0 state:active t10.5000144f77827768:1 /vmfs/devices/disks/naa.5000144f80206240:1 4fa53d67-eac91517-abd8-001b2181d256 0 IX2-iSCSI-LUNMASK naa.com.0:0 VDCA Study Guide Version – Section 1 . Look at the Multipath Plug-ins currently installed on your ESX with the command: ~ # esxcfg-mpath -G MASK_PATH NMP 3. Another command to show all devices and paths: ~ # esxcfg-mpath -L vmhba35:C0:T1:L0 state:active naa.com.1998-01.t.2 Page 15 .IX2-iSCSI-01.5000144f80206240 vmhba35 0 1 0 NMP active san iqn. 1. Add a rule to hide the LUN with the command.StorCenterIX2.5000144f80206240:1 /vmfs/devices/disks/naa.t. Find the naa device of the datastore you want to unpresent with the command: ~ # esxcfg-scsidevs -m t10.0:0 vmhba35:C0:T0:L0 state:active naa.5000144f77827768 vmhba35 0 0 0 NMP active san iqn.com.vmware:ml110g5 00023d000001.199204.com.vmhba0 sata.--------MP 0 runtime transport MP 1 runtime transport MP 2 runtime transport MP 3 runtime transport MP 4 runtime transport MP 101 runtime vendor MP 101 file vendor MP 65535 runtime vendor Plugin --------NMP NMP NMP NMP NMP MASK_PATH MASK_PATH NMP Matches --------------------------------transport=usb transport=sata transport=ide transport=block transport=unknown vendor=DELL model=Universal Xport vendor=DELL model=Universal Xport vendor=* model=* This is the default output 4.ATA_____GB0160CAABV_____________________________5RX7BZHC____________ vmhba0 0 0 0 NMP active local sata. 
List all the claimrules currently on the ESX with the command: ~ # esxcli storage core claimrule list Rule Class Rule Class Type ---------.IX2-iSCSI-02.iqn.------.StorCenterIX2.5000144f80206240:1. vmware:ml110g5 00023d000001.StorCenterIX2.iqn.1998-01.emc:storage.com.vmhba1 sata.----. Verify that the rule is in effect with the command: ~ # esxcli storage Rule Class Rule ---------.vmware:ml110g5 00023d000001. Check all of the paths that device naa.com.5000144f80206240 vmhba35 0 1 0 NMP active san iqn.t.5000144f80206240 has (vmhba35:C0:T1:L0): ~ # esxcfg-mpath -L | grep naa.199204. Reload your claimrules in the VMkernel with the command: ~ # esxcli storage core claimrule load 7.IX2-iSCSI-01. Run the command: ~ # esxcli storage core claimrule list Rule Class Rule Class Type ---------.--------.199204.IX2-iSCSI-02.com. verify that there is no other device with those parameters ~ # esxcfg-mpath -L | egrep "vmhba35:C0.1 Add a rule for this LUN with the command: ~ # esxcli storage core claimrule add -r 103 -t location -A vmhba35 -C 0 -T 1 -L 0 -P MASK_PATH 5.5000144f80206240 vmhba35 0 1 0 NMP active san iqn.1 vmhba35:C0:T0:L0 state:active naa.emc:storage.1998-01.vmware:ml110g5 00023d000001.199204.com.1 As you apply the rule -A vmhba35 -C 0 -L 0.*L0" vmhba35:C0:T1:L0 state:active naa.1998-01.5000144f77827768 vmhba35 0 0 0 NMP active san iqn.com.2 Plugin --------- Matches --------------------------------- NMP NMP transport=usb transport=sata Page 16 .iqn.vmhba1:C0:T0:L0 state:active mpx.vmhba1:C0:T0:L0 vmhba1 0 0 0 NMP active local sata.0:0 Second.StorCenterIX2.IX2-iSCSI-02.--------- Matches --------------------------------- runtime runtime runtime runtime runtime runtime file file transport transport transport transport transport vendor vendor location NMP NMP NMP NMP NMP MASK_PATH MASK_PATH MASK_PATH transport=usb transport=sata transport=ide transport=block transport=unknown vendor=DELL model=Universal Xport vendor=DELL model=Universal Xport adapter=vmhba35 channel=0 runtime vendor NMP vendor=* model=* 6.5000144f80206240 vmhba35:C0:T1:L0 state:active naa.t.----------MP 0 MP 1 MP 2 MP 3 MP 4 MP 101 MP 101 MP 103 target=1 lun=0 MP 65535 core claimrule list Class Type Plugin ------.emc:storage.iqn.--------------MP 0 runtime transport MP 1 runtime transport VDCA Study Guide Version – Section 1 .com.------.StorCenterIX2.Re-examine your claimrules and verify that you can see both the file and runtime class.t. ~ # esxcli storage core claiming reclaim -d naa.ATA_____GB0160CAABV_____________________________5RX7BZHC____________:3 /vmfs/devices/disks/t10. run the command: ~ # esxcfg-mpath -L | grep naa.ATA_____GB0160CAABV_____________________________5RX7BZHC____________) To verify that a masked LUN is no longer an active device.5000144f77827768 307200MB NMP EMC iSCSI Disk (naa.ATA_____GB0160CAABV_____________________________5RX7BZHC____________ DirectAccess /vmfs/devices/disks/t10.5000144f80206240 ~ # esxcli storage core claimrule run 9.5000144f77827768:1 4f9eca2e-3a28f563-c184-001b2181d256 0 IX2-iSCSI-01 The masked datastore does not appear in the list.vmhba32:C0:T0:L0 3815MB NMP Local USB Direct-Access (mpx.vmhba32:C0:T0:L0) naa.5000144f80206240 ~ # Empty output indicates that the LUN is not active. Verify that the masked device is no longer used by the ESX host. 
~ # esxcfg-scsidevs -c Device UID Device Type Console Device Size Multipath PluginDisplay Name mpx.vmhba32:C0:T0:L0 DirectAccess /vmfs/devices/disks/mpx.2 Page 17 .5000144f77827768:1 /vmfs/devices/disks/naa.vmhba1:C0:T0:L0 CD-ROM /vmfs/devices/cdrom/mpx.ATA_____GB0160CAABV_____________________________5RX7BZHC___ _________ 152627MB NMP Local ATA Disk (t10.5000144f77827768 DirectAccess /vmfs/devices/disks/naa.MP 2 MP 3 MP 4 MP 101 MP 101 MP 103 target=1 lun=0 MP 103 target=1 lun=0 MP 65535 runtime runtime runtime runtime file runtime transport transport transport vendor vendor location NMP NMP NMP MASK_PATH MASK_PATH MASK_PATH transport=ide transport=block transport=unknown vendor=DELL model=Universal Xport vendor=DELL model=Universal Xport adapter=vmhba35 channel=0 file location MASK_PATH adapter=vmhba35 channel=0 runtime vendor NMP vendor=* model=* 8.5000144f77827768) t10.vmhba1:C0:T0:L0) mpx. Unclaim all paths to a device and then run the loaded claimrules on each of the paths to reclaim them. VDCA Study Guide Version – Section 1 . ~ # esxcfg-scsidevs -m t10. To see all the LUNs use "esxcfg-scsidevs -c" command.vmhba1:C0:T0:L0 0MB NMP Local TSSTcorp CD-ROM (mpx.ATA_____GB0160CAABV_____________________________5RX7BZHC___ _________:3 4c13c151-2e6c6f81-ab84-f4ce4698970c 0 ml110g5-local naa. ~ # esxcli storage core claimrule run Your host can now access the previously masked storage device. Delete the MAS_PATH rule. ~ # esxcli storage core claimrule remove -r 103 3. ~ # esxcli storage core claimrule load 5.--------- Matches --------------------------------- runtime runtime runtime runtime runtime runtime file runtime transport transport transport transport transport vendor vendor location NMP NMP NMP NMP NMP MASK_PATH MASK_PATH MASK_PATH transport=usb transport=sata transport=ide transport=block transport=unknown vendor=DELL model=Universal Xport vendor=DELL model=Universal Xport adapter=vmhba35 channel=0 runtime vendor NMP vendor=* model=* 4. List actual claimrules # esxcli storage core claimrule list Rule Class Rule Class Type ---------.--------------MP 0 runtime transport MP 1 runtime transport MP 2 runtime transport MP 3 runtime transport MP 4 runtime transport MP 101 runtime vendor MP 101 file vendor MP 103 runtime location target=1 lun=0 MP 103 file location target=1 lun=0 MP 65535 runtime vendor Plugin --------- Matches --------------------------------- NMP NMP NMP NMP NMP MASK_PATH MASK_PATH MASK_PATH transport=usb transport=sata transport=ide transport=block transport=unknown vendor=DELL model=Universal Xport vendor=DELL model=Universal Xport adapter=vmhba35 channel=0 MASK_PATH adapter=vmhba35 channel=0 NMP vendor=* model=* 2.----. Reload the path claiming rules from the configuration file into the VMkernel.2 Page 18 . VDCA Study Guide Version – Section 1 . Run the esxcli storage core claiming unclaim command for each path to the masked storage device ~ # esxcli storage core claiming unclaim -t location -A vmhba35 -C 0 -T 1 -L 0 6. Run the path claiming rules.----------MP 0 MP 1 MP 2 MP 3 MP 4 MP 101 MP 101 MP 103 target=1 lun=0 MP 65535 core claimrule list Class Type Plugin ------.------.Procedure for Unmasking a Path 1.--------. ~ # esxcli storage Rule Class Rule ---------. Verify that the claimrule was deleted correctly. especially when you also wanted to configure Jumbo frames (If you should configure Jumbo frames is another question…). Summary: Until vSphere 5. 5. 
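For reference, the CLI route referred to above still works in 5.x and can be handy for scripting. A hedged sketch of enabling the software iSCSI adapter and binding a VMkernel port to it with esxcli; the vmhba, vmk and target address are illustrative.

Enable the software iSCSI adapter and check which vmhba was created for it:
~ # esxcli iscsi software set --enabled=true
~ # esxcli iscsi adapter list

Bind an existing VMkernel port (vmk1) to the software iSCSI adapter (vmhba33):
~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

Add a dynamic discovery (Send Targets) address and rescan the adapter:
~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.100:3260
~ # esxcli storage core adapter rescan --adapter=vmhba33

The same steps, including Jumbo Frames on the vSwitch and VMkernel port, can now be performed entirely in the vSphere (Web) Client, as described below.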
Configuring the network involves creating a VMkernel interface for each physical network adapter that you use for iSCSI and associating all interfaces with the software iSCSI adapter.com/kb/1009449 VMware KB 1015252 “Unable to claim the LUN back after unmasking it”: http://kb. VDCA Study Guide Version – Section 1 . Chapter11 in the vSphere Storage Guide nicely describes the whole process. section “Configuring Software iSCSI Adapter” page 81. Configure networking for iSCSI.vmware. The complete workflow includes: 1. 2. enable Jumbo Frames. some portions needed to be done from the CLI. (Optional) Configure CHAP parameters. “Configuring iSCSI Adapters and Storage”.5.2 Page 19 .Other references: VMware KB 1009449 “Masking a LUN from ESX and ESXi using the MASK_PATH plug-in”: http://kb. Chapter 11.com/kb/1015252 Configure Software iSCSI port binding Official Documentation: vSphere Storage Guide v5. You were not able to do the job using the vSphere Client. describes the complete process. Activate the software iSCSI adapter. Configure discovery information. 4. I have also noticed that Storage vendors often publish manuals which describe the whole process on configuring a specific storage device in conjunction with vSphere. 3. configuring the Software iSCSI adapter was a little bit complicated process. But from now on the whole process can be performed using the vSphere Client.vmware. If needed. If your workload is write-intensive. page 201. Summary: Flash Read Cache lets you accelerate virtual machine performance through the use of host resident flash devices as a cache. Some characteristics of the product You can create a Flash Read Cache for any individual virtual disk. “About VMware vSphere Flash Read Cache”. Also notice: VDCA Study Guide Version – Section 1 . configuring iSCSI with VMware vSphere 5 and Dell Equallogic PS Series Storage Configure and manage vSphere Flash Read Cache Official Documentation: vSphere Storage Guide v5. it does not make much sense. and it is discarded when a virtual machine is suspended or powered off. the cache is migrated (if the virtual flash module on the source and destination hosts are compatible). Chapter 21.If you also want write caching. Before configuring Flash Read Cache for your VMs. become familiar with the typical workload. configuring the iSCSI Software Adapter Example. Flash Read Cache supports write-through or read caching. have a look at FVM of Pernixdata The Flash Read Cache is created only when a virtual machine is powered on. by default.5.Figure 7 Other references: Nice video from Eric Sloof. During vMotions.2 Page 20 . Before the introduction of Flash Read Cache. Flash Read Cache is also supported by High Availability (HA).2 Page 21 . In the vSphere Web Client. BTW. You can use the virtual flash resource for: o cache configuration on the host. DRS supports virtual flash. DRS treats powered-on virtual machines with a Flash Read Cache as soft affined to their current host and moves them only for mandatory reasons or if necessary to correct host over-utilization. Click the Manage tab and click Settings. Flash Read Cache needs Enterprise Plus licensing. Setup and configuration For an individual host. take note of the following limits: Figure 8 SSD cannot be shared between Flash Read Cache and services like Virtual SAN. navigate to the host. o Flash Read Cache configuration on virtual disks. The cache is shared by all virtual machines running on the host. VDCA Study Guide Version – Section 1 . 
you could create SSD Datastores for Host Cache Configuration. Although each host supports only one virtual flash resource. the Host Cache is used by ESXi as a write back cache for virtual machine swap files. select Virtual Flash Resource Management and click Add Capacity. Figure 10 Configure Flash Read Cache for a Virtual Machine. Select the Enable virtual flash host swap cache check box and specify the cache size.2 Page 22 .5 or later. To configure Host Swap Cache with Virtual Flash Resource: In the vSphere Web Client. In the vSphere Web Client. navigate to the host. Under Virtual Flash. Right-click the host. select All vCenter Actions > Add Virtual Flash Capacity. Flash Read Cache is only available for virtual machines. compatible with ESXi 5. select Virtual Flash Host Swap Cache Configuration and click Edit. Click the Manage tab and click Settings. VDCA Study Guide Version – Section 1 . Figure 9 You can also setup and manage multiple virtual flash resources (for multiple hosts). Under Virtual Flash. navigate to the host. If a guest operating system writes a single 512 byte disk block. then the minimum cache size is 256MB. Reservation is a reservation size for cache blocks. VDCA Study Guide Version – Section 1 . If the cache block size is 4K.2 Page 23 . then the minimum cache size is 1MB. This block size can be larger than the nominal disk block size of 512 bytes. between 4KB and 1024KB. Open the VM settings and expand a disk section (remember Flash Read Cache is configured per disk!) Figure 11 Under advanced. the surrounding cache block size bytes will be cached. Figure 12 Block size is the minimum number of contiguous bytes that can be stored in the cache. you can enable the feature and configure the Reservation and the Block Size. There is a minimum number of 256 cache blocks. Do not confuse cache block size with disk block size. If the cache block size is 1MB. This happens when the virtual machine is being created or cloned. As with clusters of hosts. In other words. Storage DRS kicks in and will generate recommendations or perform Storage vMotions. or when you add a disk to an existing virtual machine. you use datastore clusters to aggregate storage resources. Anti-affinity rules Option to create anti-affinity rules for Virtual Machine Disks. VDCA Study Guide Version – Section 1 . Chapter 12. what a DRS enabled Cluster is to CPU and Memory resources. Also Chapter 13 “Using Datastore Clusters to Manage Storage resources”. you can use vSphere Storage DRS to manage storage resources. a Storage DRS enabled Datastore Cluster is to storage. For example. introduced in vSphere 5. According to VMware: “A datastore cluster is a collection of datastores with shared resources and a shared management interface.5 (2058983).For more info read the Performance of vSphere Flash Read Cache in VMware vSphere 5. the virtual disks of a certain virtual machine must be kept on different datastores In essential. Summary: Datastore Clusters and Storage DRS are new features. I/O latency load balancing Instead of space use thresholds. the datastore's resources become part of the datastore cluster's resources.2 Page 24 . Creating a Datastore Cluster Use the wizard in the Datastores and Datastore Clusters view. see also KB Virtual Flash feature in vSphere 5.5 white paper. The first step is providing a name for the new Datastore Cluster and to decide if you wish to enable (default) Storage DRS. page 95. When you create a datastore cluster.5 . 
which enables you to support resource allocation policies at the datastore cluster level” The following Resource Management capabilities are available per Datastore cluster: Space utilization load balancing. when a virtual machine disk is being migrated to another datastore cluster. Configure Datastore Clusters Official Documentation: vSphere Resource Management Guide v5. page 89. Other references: For an overview. VMs are automatically placed on a Datastore with Low latency and most free space. When you add a datastore to a datastore cluster. Initial placement.”Creating a Datastore Cluster” . when space use on a datastore exceeds a certain threshold. Datastore clusters are to datastores what clusters are to hosts. I/O latency thresholds can be set. 2 Page 25 . VDCA Study Guide Version – Section 1 .Figure 13 With Storage DRS. you enable these functions: Space load balancing among datastores within a datastore cluster. I/O load balancing among datastores within a datastore cluster. Initial placement for virtual disks based on space and I/O workload. two automation levels are available: Manual or Fully Automated. VDCA Study Guide Version – Section 1 .Figure 14 After enabling SDRS.2 Page 26 . It uses the 90th percentile I/O latency measured over the course of a day to compare against the threshold Under the Advanced option. you disable the following elements of Storage DRS: I/O load balancing among datastores within a datastore cluster. Equallogic storage) or intelligent caching solutions are in use. Storage DRS VDCA Study Guide Version – Section 1 . I/O Latency. It is advised to enable the “I/O Metric for SDRDS recommendations” option. you can configure additional options: Default VM affinity: by default kep VMDKs together.2 Page 27 . But when your Disk Array has Auto tiering enabled (e. When you disable this option. vCenter Server does not consider I/O metrics when making Storage DRS recommendations. Initial placement is based on space only. default threshold is > 15 ms latency. When you disable this option.Figure 15 Next part is setting the Runtime rules.g. Always follow Vendor Best Practices! Storage DRS is triggered based on: Space usage. Initial placement for virtual disks based on I/O workload. default is 5%. default threshold is > 80% utilization. Space utilization difference: This threshold ensures that there is some minimum difference between the space utilization of the source and the destination. the I/O Metric should not be enabled. x and earlier hosts. If datastores in the datastore cluster are connected to ESX/ESXi 4.0 and later. I/O imbalance threshold: A slider without numbers. Check Imbalance very: After this interval (default 8 hours). Storage DRS runs to balance I/O load.2 Page 28 . Storage DRS does not run. but with Aggressive to Conservative settings Figure 16 Select Hosts and Clusters make sure that all hosts attached to the datastores in a datastore cluster must be ESXi 5. will not make migration recommendations from datastore A to datastore B if the difference in free space is less than the threshold value. VDCA Study Guide Version – Section 1 . 2 Page 29 . a few considerations: NFS and VMFS datastores cannot be combined in the same datastore cluster. Datastores in a datastore cluster must be homogeneous to guarantee hardware accelerationsupported behavior. do not include datastores that have hardware acceleration enabled in the same datastore cluster as datastores that do not have hardware acceleration enabled. VDCA Study Guide Version – Section 1 . 
Replicated datastores cannot be combined with non-replicated datastores in the same Storage-DRS enabled datastore cluster.Figure 17 Selecting datastores. Datastores shared across multiple datacenters cannot be included in a datastore cluster As a best practice. Figure 19 Datastore Clusters offer new options for managing storage.Figure 18 Resume. One of the coolest is Storage DRS maintenance |Mode. There are a few prerequisites: VDCA Study Guide Version – Section 1 .2 Page 30 . Datastores can be placed in maintenance mode to take it out of use to service. just like ESXi hosts in a Cluster. button Advanced options. go to SDRS Automation. There are at least two datastores in the datastore cluster Important: Storage DRS affinity or anti-affinity rules might prevent a datastore from entering maintenance mode. Figure 20 After creating a Storage DRS Cluster using the Wizard. With Storage DRS Automation Level for Virtual Machines. A few options are now available: SDRS Scheduling. Storage DRS Automation Level for Virtual Machines / Virtual Machine Settings. Maintenance mode is available to datastores within a Storage DRS-enabled datastore cluster. You can also override default virtual disk affinity rules. Standalone datastores cannot be placed in maintenance mode No CD-ROM image files are stored on the datastore. VDCA Study Guide Version – Section 1 . You can enable the Ignore Affinity Rules for Maintenance option for a datastore cluster. and select IgnoreAffinityRulesForMaintenance and change the value from 0 to 1. you can edit the settings. Anti-Affinity Rules. Edit the Settings for the Datastore Cluster. you can override the datastore clusterwide automation level for individual virtual machines.2 Page 31 . Figure 22 Note: Restoring VMDK affinity will remove conflicting anti-affinity rules! VDCA Study Guide Version – Section 1 .Figure 21 In the traditional vSphere Client.2 Page 32 . this setting is called Virtual Machine Settings. but are not enforced when a migration is initiated by a user. Anti-affinity rules are enforced during initial placement and Storage DRS-recommendation migrations. During non-peak hours. By default. the traditional vSphere Client did a better job configuring SDRS Scheduling than the vSphere Web Client. a virtual machine's virtual disks are kept together on the same datastore. usually a Start and an End task. Creating a scheduled task results in effectively creating two tasks.2 Page 33 . Changing the automation level and aggressiveness level for a datastore cluster to run less aggressively during peak hours. Figure 23 Storage DRS has Anti-Affinity Rules. when performance is a priority. There are 3 types of (Anti) Affinity rules: VDCA Study Guide Version – Section 1 . BTW: In my case. Anti-affinity rules do not apply to CD-ROM ISO image files that are stored on a datastore in a datastore cluster.With SDRS Scheduling you can create scheduled tasks for: Changing Storage DRS settings for a datastore cluster so that migrations for fully automated datastore clusters are more likely to occur during off-peak hours. section “Set Up Off-Hours Scheduling for Storage DRS in the vSphere Web Client” is not correct. Storage DRS can run in a more aggressive mode and be invoked more frequently The vSphere Resource Management Guide (version (EN-001383-00) . nor do they apply to swapfiles that are stored in user-defined locations. You can create Storage DRS anti-affinity rules to control which virtual disks should not be placed on the same datastore within a datastore cluster. 
After finishing a task you can edit or remove individual tasks. Figure 24 VM anti-affinity. or Inter-VM Anti-Affinity rules: which VMs should not reside on the same datastore. VMDK affinity rules are enabled by default for all virtual machines that are in a datastore cluster. VMDK affinity rules are enabled by default for all virtual machines that are in a datastore cluster. VMDK anti-affinity. Figure 25 Other references: VDCA Study Guide Version – Section 1 .2 Page 34 . You can override the default setting for the datastore cluster or for individual virtual machines. As mentioned before. You can override the default setting for the datastore cluster or for individual virtual machines. or Intra-VM Anti-Affinity rules: which virtual disks associated with a particular virtual machine must be kept on different datastores. Creating a vmdk anti-affinity rule will break the default vmdk affinity. You will need an ESX/ESXI 4. Upgrade VMware storage infrastructure Official Documentation: vSphere Storage Guide v5. Summary: A VMFS3 Datastore can directly be upgraded to VMFS5. Storage DRS Interoperability on Yellow Bricks. Other references: More info concerning VMFS5 in these two documents: “VMFS-5 Upgrade Considerations” and “What’s New in VMware vSphere™ 5. The upgrade process is non-disruptive. A datastore upgrade is a one-way process. All hosts accessing a VMFS5 Datastore must support this version Before upgrading to VMFS5. check that the volume has at least 2 MB of free blocks and 1 free file descriptor.5 . Chapter 16 “Working with Datastores”. Remember.x host to perform this step.2 Page 35 . before upgrading to VMFS5. A VMFS2 Datastore should first be upgraded to VMFS3. an Upgraded VMFS5 does not have the same characteristics as a newly created VMFS5.0 – Storage” See also my post: VMFS-5 or an upgraded VMFS-3? VDCA Study Guide Version – Section 1 . Storage DRS Interoperability whitepaper by VMware. page 143 has a section on Upgrading VMFS Datastores. 2 Page 36 .5 vCenter Server and Host Management Guide v5. Oracle DB and SAP and lots of related resources.5 vSphere Client / Web Client vdf/df vSphere CLI o esxcli o vmkfstools Analyze I/O workloads to determine storage performance requirements Official Documentation: VMware website “Solutions” section contains information about virtualizing common business applications like Microsoft Exchange.Objective 1. Sharepoint. Figure 26 VDCA Study Guide Version – Section 1 .2 – Manage Complex Storage Solutions Skills and Abilities Analyze I/O workloads to determine storage performance requirements Identify and tag SSD and local devices Administer hardware acceleration for VAAI Configure and administer profile-based storage Prepare storage for maintenance Apply space utilization data to manage storage resources Provision and manage storage resources according to Virtual Machine requirements Understand interactions between virtual storage provisioning and physical storage provisioning Configure datastore Alarms Create and analyze datastore alarms and errors to determine space availability Tools vSphere Storage Guide v5. SQl.5 vSphere Command-Line Interface Concepts and Examples v5. this is different from monitoring the I/O load in a virtual environment. Summary: Identify SSD devices You can identify the SSD devices in your storage network.2 Page 37 .Summary: This topic suggests how to analyse existing I/O workloads in the storage field on (physical) systems to determine the required storage performance in the virtual environment. 
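As a practical illustration for the "Analyze I/O workloads" objective, vscsiStats (one of the tools listed for this objective) can characterise a running virtual machine's I/O pattern directly from the ESXi shell. A hedged sketch; the world group ID 12345 is illustrative and must be taken from the -l output.

List running virtual machines and their world group IDs:
~ # vscsiStats -l

Start collecting statistics for one virtual machine:
~ # vscsiStats -s -w 12345

After the workload has run for a while, print the I/O length and latency histograms, then stop the collection:
~ # vscsiStats -p ioLength -w 12345
~ # vscsiStats -p latency -w 12345
~ # vscsiStats -x -w 12345

The histograms show block sizes, randomness and latencies, which helps translate an observed workload into storage performance requirements. esxtop/resxtop (the disk adapter, device and VM views) provides IOPS and latency figures for the same purpose.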
Before you identify an SSD device. This new chapter is dedicated to SSD devices and contains topics like. VMware and other parties do have tools and documentation on that subject. Figure 27 Tag SSD devices ESXi enables automatic detection of SSD devices by using an inquiry mechanism based on T10 standards. Zabbix. This is different from the information in the vSphere client in the Drive Type Column. Monitoring Tools. Is SSD: true 2. To name a few: VMware Capacity Planner Tool. Procedure 1. # esxcli storage core device list The command output includes the following information about the listed device. page 163. “Tag Devices as SSD”. The other value is false. Windows Perfmon. List the devices. ensure that the device is tagged as SSD. Imho. Identify and tag SSD and local devices Official Documentation: vSphere Storage Guide v5. VDCA Study Guide Version – Section 1 .5. like Nagios. Chapter 18 “Solid State Disks Enablement”. VMware vCenter Performance Graphs. Verify whether the value of the flag Is SSD is true. “Identify SSD Devices” and so on. esxcli storage core claimrule load esxcli storage core claimrule run 6. Note down the SATP associated with the device.html VDCA Study Guide Version – Section 1 . you can trick ESXi and turn a local disk into a SSD device by performing the procedure as presented by William Lam. In case you do not have a SSD device available. 3. Add a PSA claim rule to mark the device as SSD.virtuallyghetto. Other references: How to trick ESXi 5 in seeing an SSD datastore: http://www.If ESXI does not automatically identifies a device as a SSD. esxcli storage nmp device list 2. Unclaim the device. The command output indicates if a listed device is tagged as SSD. there is a procedure to tag a SSD using PSA SATP claimrules The procedure to tag a SSD device is straight forward and has a lot in common with the MASK_PATH procedure. Is SSD: true If the SSD device that you want to tag is shared among multiple hosts.2 Page 38 . There are 4 different ways. Also here 4 possible ways.com/2011/07/how-to-trick-esxi-5-in-seeing-ssd. Verify if devices are tagged as SSD. 1. Reclaim the device by running the following commands. Identify the device to be tagged and its SATP. example by device name esxcli storage core claiming unclaim --type device --device device_name 5. esxcli storage core device list -d device_name 7. for example by specifying the device name esxcli storage nmp satp rule add -s SATP --device device_name --option=enable_ssd 4. make sure that you tag the device from all the hosts that share the device. Administer hardware acceleration for VAAI Official Documentation: vSphere Storage Guide v5.5, Chapter 24 “Storage Hardware Acceleration”, page 231 is dedicated to VAAI. Summary: When the hardware acceleration functionality is supported, the ESXi host can get hardware assistance and perform several tasks faster and more efficiently. The host can get assistance with the following activities: Migrating virtual machines with Storage vMotion Deploying virtual machines from templates Cloning virtual machines or templates VMFS clustered locking and metadata operations for virtual machine files Writes to thin provisioned and thick virtual disks Creating fault-tolerant virtual machines Creating and cloning thick disks on NFS datastores vSphere Storage APIs – Array Integration (VAAI) were first introduced with vSphere 4.1, enabling offload capabilities support for three primitives: 1. Full copy, enabling the storage array to make full copies of data within the array 2. 
Block zeroing, enabling the array to zero out large numbers of blocks 3. Hardware-assisted locking, providing an alternative mechanism to protect VMFS metadata With vSphere 5.0, support for the VAAI primitives has been enhanced and additional primitives have been introduced: 1. Thin Provisioning, enabling the reclamation of unused space and monitoring of space usage for thin-provisioned LUNs 2. Hardware acceleration for NAS 3. SCSI standardization by T10 compliancy for full copy, block zeroing and hardware-assisted locking Imho, support for NAS devices is one of the biggest improvements. Prior to vSphere 5.0, a virtual disk was created as a thin-provisioned disk, not even enabling the creation of a thick disk. Starting with vSphere 5.0, VAAI NAS extensions enable NAS vendors to reserve space for an entire virtual disk. This enables the creation of thick disks on NFS datastores. NAS VAAI plug-ins are not shipped with vSphere 5.0. They are developed and distributed by storage vendors. Hardware acceleration is On by default, but can be disabled by default. Read my post “Veni, Vidi, VAAI” for more info on how to check the Hardware Acceleration Support Status. It is also possible to add Hardware Acceleration Claim Rules. Remember, you need to add two claim rules, one for the VAAI filter and another for the VAAI plug-in. For the new claim rules to be active, you first define the rules and then load them into your system. VDCA Study Guide Version – Section 1 - 2 Page 39 Procedure 1 Define a new claim rule for the VAAI filter by running: # esxcli --server=server_name storage core claimrule add --claimrule-class=Filter -plugin=VAAI_FILTER 2 Define a new claim rule for the VAAI plug-in by running: # esxcli --server=server_name storage core claimrule add --claimrule-class=VAAI 3 Load both claim rules by running the following commands: # esxcli --server=server_name storage core claimrule load --claimrule-class=Filter # esxcli --server=server_name storage core claimrule load --claimrule-class=VAAI 4 Run the VAAI filter claim rule by running: # esxcli --server=server_name storage core claimrule run --claimrule-class=Filter NOTE Only the Filter-class rules need to be run. When the VAAI filter claims a device, it automatically finds the proper VAAI plug-in to attach. Procedure for installing a NAS plug-in This procedure is different from the previous and presumes the installation of a VIB package. Procedure: 1 Place your host into the maintenance mode. 2 Get and eventually set the host acceptance level: # esxcli software acceptance get # esxcli software acceptance set --level=value This command controls which VIB package is allowed on the host. The value can be one of the following: VMwareCertified, VMwareAccepted, PartnerSupported, CommunitySupported. Default is PartnerSupported 3 Install the VIB package: # esxcli software vib install -v|--viburl=URL The URL specifies the URL to the VIB package to install. http:, https:, ftp:, and file: are supported. 4 Verify that the plug-in is installed: # esxcli software vib list 5 Reboot your host for the installation to take effect When you use the hardware acceleration functionality, certain considerations apply. Several reasons might cause a hardware-accelerated operation to fail. VDCA Study Guide Version – Section 1 - 2 Page 40 For any primitive that the array does not implement, the array returns an error. The error triggers the ESXi host to attempt the operation using its native methods. 
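Related to the considerations above: whether a given device actually supports the offloads can be checked per device from the CLI. A hedged sketch, assuming an ESXi 5.x shell; the NAA ID is illustrative.

Show the overall VAAI status reported for a device:
~ # esxcli storage core device list -d naa.60000000000000000000000000000001 | grep -i "VAAI Status"

Show the detailed status per primitive (ATS, Clone, Zero, Delete) for the same device:
~ # esxcli storage core device vaai status get -d naa.60000000000000000000000000000001

A status of unknown is common for devices against which the host has not yet attempted an offload operation.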
TIP: when playing around with esxcli: VMware has put a lot of effort into making esxcli a great command; it contains a lot of built-in help. Examples: if you don’t know how to proceed, just type:
# esxcli

This command seems out of options...
# esxcli storage core claimrule list
Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ---------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*

But type this:
~ # esxcli storage core claimrule list -h
Error: Invalid option -h

Usage: esxcli storage core claimrule list [cmd options]

Description:
  list    List all the claimrules on the system.

Cmd options:
  -c|--claimrule-class=<str>
          Indicate the claim rule class to use in this operation [MP, Filter, VAAI, all].

So this command will give us more information:
~ # esxcli storage core claimrule list -c all
Rule Class   Rule  Class    Type       Plugin            Matches
----------  -----  -------  ---------  ----------------  ---------------------------------
MP              0  runtime  transport  NMP               transport=usb
MP              1  runtime  transport  NMP               transport=sata
MP              2  runtime  transport  NMP               transport=ide
MP              3  runtime  transport  NMP               transport=block
MP              4  runtime  transport  NMP               transport=unknown
MP            101  runtime  vendor     MASK_PATH         vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH         vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP               vendor=* model=*
Filter      65430  runtime  vendor     VAAI_FILTER       vendor=EMC model=SYMMETRIX
Filter      65430  file     vendor     VAAI_FILTER       vendor=EMC model=SYMMETRIX
Filter      65431  runtime  vendor     VAAI_FILTER       vendor=DGC model=*
Filter      65431  file     vendor     VAAI_FILTER       vendor=DGC model=*
Filter      65432  runtime  vendor     VAAI_FILTER       vendor=EQLOGIC model=*
Filter      65432  file     vendor     VAAI_FILTER       vendor=EQLOGIC model=*
Filter      65433  runtime  vendor     VAAI_FILTER       vendor=NETAPP model=*
Filter      65433  file     vendor     VAAI_FILTER       vendor=NETAPP model=*
Filter      65434  runtime  vendor     VAAI_FILTER       vendor=HITACHI model=*
Filter      65434  file     vendor     VAAI_FILTER       vendor=HITACHI model=*
Filter      65435  runtime  vendor     VAAI_FILTER       vendor=LEFTHAND model=*
Filter      65435  file     vendor     VAAI_FILTER       vendor=LEFTHAND model=*
VAAI        65430  runtime  vendor     VMW_VAAIP_SYMM    vendor=EMC model=SYMMETRIX
VAAI        65430  file     vendor     VMW_VAAIP_SYMM    vendor=EMC model=SYMMETRIX
VAAI        65431  runtime  vendor     VMW_VAAIP_CX      vendor=DGC model=*
VAAI        65431  file     vendor     VMW_VAAIP_CX      vendor=DGC model=*
VAAI        65432  runtime  vendor     VMW_VAAIP_EQL     vendor=EQLOGIC model=*
VAAI        65432  file     vendor     VMW_VAAIP_EQL     vendor=EQLOGIC model=*
VAAI        65433  runtime  vendor     VMW_VAAIP_NETAPP  vendor=NETAPP model=*
VAAI        65433  file     vendor     VMW_VAAIP_NETAPP  vendor=NETAPP model=*
VAAI        65434  runtime  vendor     VMW_VAAIP_HDS     vendor=HITACHI model=*
VAAI        65434  file     vendor     VMW_VAAIP_HDS     vendor=HITACHI model=*
VAAI        65435  runtime  vendor     VMW_VAAIP_LHN     vendor=LEFTHAND model=*
VAAI        65435  file     vendor     VMW_VAAIP_LHN     vendor=LEFTHAND model=*
~ #
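Instead of dumping everything with -c all, the same command accepts a single claim rule class, which is convenient when you only want to verify the Filter or VAAI rules you just added:

~ # esxcli storage core claimrule list -c Filter
~ # esxcli storage core claimrule list -c VAAI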
Other references:
An overview on VAAI enhancements in vSphere 5: “What’s New in VMware vSphere 5.0 Storage”.
A personal post on this topic: “Veni, vidi, vaai”.

Configure and administer profile-based storage

Official Documentation: vSphere Storage Guide v5.5, Chapter 20 “About Virtual Machine Storage Policies”, page 191. Also Chapter 26 “Using Storage Providers”, page 249.

Summary: Important: with vSphere 5.5, “VM Storage Profiles” has been renamed to “VM Storage Policies”, although the traditional vSphere Client still uses the old name.

Figure 28 – [Left] vSphere Web Client - [Right] traditional vSphere Client

In a few words, with Storage Policies you can describe storage capabilities in terms of Capacity, Performance, Fault tolerance, Replication etc. The information comes from Storage vendors (see Chapter 20, also known as “vSphere Storage APIs – Storage Awareness” or VASA) or is custom defined.

Storage Capabilities refer to what a storage system can offer; Storage Policies describe what users require for their VMs. And that is exactly what happens: a VM is associated with a Storage Policy and, depending on its placement, the VM is compliant or not.

A storage policy includes one or several rule sets that describe requirements for virtual machine storage resources. A single rule set contains one or several rules. Each rule describes a specific quality or quantity that needs to be provided by a storage resource. Each rule can either be an underlying storage capability or a user-defined storage tag. One rule set can include rules from only a single storage provider.

Important Note: Storage Policies does not support RDMs.

A few things you should know:
User-defined Storage Capabilities have been replaced by Tags. Tags are relatively new and only available in the vSphere Web Client. Do not mix clients.
If your storage does not support VASA (read this post), then create your User-defined Capabilities by assigning Tags to Datastores.
VM Storage Policies must be enabled. These are applied per host or per Cluster; all hosts in a Cluster or single hosts must be properly licensed. It is just a bit cumbersome imho.
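Before assigning Tags to Datastores, it can be useful to confirm what the underlying device actually reports, since properties like SSD, local and thin provisioning support are obvious candidates for user-defined capabilities. A quick check from the ESXi shell (the device identifier below is just an example):

# esxcli storage core device list -d naa.500253825000a7b2 | grep -E "Is SSD|Is Local|Thin Provisioning Status"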
Procedure using the vSphere Web Client:
1. If your storage does not support VASA, first create your User-defined Capabilities by assigning Tags to Datastores. Browse to a datastore in the vSphere Web Client inventory, click the Manage tab, click Tags and click the New Tag icon. Figure 29
2. Select the vCenter Server, type a name and description and select a Category (or create a new one). Assign the Tag.
3. The next step is to Enable the VM Storage Policies. From the vSphere Web Client Home, click VM Storage Policies.
4. Click the Enable VM Storage Policies icon. Select the vCenter Server instance. Figure 30
5. Select the hosts or cluster and Enable. As a result, the “VM Storage Policy Status” should show “Enabled”. Figure 31
6. The next step is defining a Storage Policy for Virtual Machines. From the vSphere Web Client Home, click VM Storage Policies.
7. Click the Create a New VM Storage Policy icon. Select the vCenter Server. Type a name and a description for the storage policy. Figure 32
8. On the Rule-Set 1 screen, define the first rule set. Select a storage provider from the Rules Based on Vendor Specific Capabilities drop-box, or add user-defined capabilities (example). Figure 33
9. You can add more Rule-Sets. Note: To be eligible, a datastore does not need to satisfy all rule sets within the policy. The datastore must satisfy at least one rule set and all rules within this set.
10. Review the list of datastores that match this policy. Figure 34
11. And click Finish.
12. The next step is to Apply a Storage Policy to a VM. In the vSphere Web Client, browse to the virtual machine, click the Manage tab and click VM Storage Policies. Button Manage VM Storage Policies. Figure 35
13. Apply the storage policy to the virtual machine configuration files by selecting the policy from the Home VM Storage Policy drop-down menu. Button Apply to Disks to apply the policy to all disks. You can edit settings per disk. OK. Figure 36
14. In the final step, you can check whether virtual machines and virtual disks use datastores that are compliant with the policy. From the vSphere Web Client Home, click VM Storage Policies, double-click a storage policy, click the Monitor tab, and click VMs and Virtual Disks.
15. Trigger a VM Storage Policy Compliance Check. Ready.

Procedure using the traditional vSphere Client (Deprecated). In fact, it comes down to performing the following tasks to get Profile Driven Storage in place:
1. Go to “VM Storage Profiles”. Figure 37
2. Add the new Storage Capabilities. If your storage does not support VASA, then create your User-defined Capabilities, and select “Manage Storage Capabilities”. Figure 38
3. Create your VM Storage Profiles (bind to capabilities). Figure 39
4. Result. Figure 40
5. Go to Datastores. Figure 41
6. Assign Storage Capabilities to Datastores (this is necessary when using user-defined capabilities).
7. Now Enable Storage Profiles: select Hosts or Cluster, check licenses and Enable. Figure 42
8. KOE-HADRS01 is now enabled. Figure 43
9. Return; result. Figure 44
10. Finished.
11. Assign VMs to an associated Storage Profile. Figure 45
12. Do not forget Propagate to disks.
13. Result. Figure 46
14. Check Compliance. Figure 47

Other references:
vSphere Storage APIs – Storage Awareness FAQ: http://kb.vmware.com/kb/2004098
A sneak-peek at how some of VMware's Storage Partners are implementing VASA, a VMware blog post with some real life examples.

Prepare Storage for maintenance

Official Documentation: vSphere Storage Guide v5.5, Chapter 16 “Working with Datastores”, page 133 describes various tasks related to the maintenance of datastores, like unmounting and mounting VMFS or NFS Datastores.

Summary: When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted.

Before unmounting VMFS datastores, make sure that the following prerequisites are met:
• No virtual machines reside on the datastore.
• The datastore is not part of a datastore cluster.
• The datastore is not managed by Storage DRS.
• Storage I/O Control is disabled for this datastore.
• The datastore is not used for vSphere HA heartbeating.

Important NOTE: vSphere HA heartbeating does not prevent you from unmounting the datastore. If a datastore is used for heartbeating, unmounting it might cause the host to fail and restart any active virtual machine. If the heartbeating check fails, the vSphere Client displays a warning.

The procedure is simple. In the vSphere Web Client, display the Datastore of choice, right-click and select Unmount: choose “All vCenter Actions” and select “Unmount Datastore…”. If the datastore is shared, you can select which hosts should no longer access the datastore. Before finishing the task, the prerequisites are presented one more time.

Figure 48

Mounting a Datastore is a bit simpler; the procedure is almost the same. There is a slight difference between mounting a shared or unshared VMFS Datastore.

Other references:
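Related to preparing storage for maintenance: unmounting and mounting can also be done from the command line, which is convenient when several hosts share the datastore. A minimal sketch; the volume label is an example, and note that esxcli only unmounts the datastore on the host you run it against, so repeat it per host (the label or UUID can be found with esxcli storage filesystem list, shown in the next section):

# esxcli storage filesystem unmount -l Datastore01
(alternatively specify the volume with -u <VMFS UUID>)
# esxcli storage filesystem mount -l Datastore01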
Apply space utilization data to manage storage resources

Official Documentation: vSphere Monitoring and Performance Guide v5.5, Chapter 4 “Monitoring Storage Resources”, page 107.

Summary: For me it is not 100% clear what to expect from this one. In the vSphere Web Client, under the Monitor tab, you can find Storage reports. Near “Report on”, select a category from the list to display information about that category. Additionally, you can apply a filter and select additional columns. You can export data by clicking the very small button under the table (right corner).
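The same space utilization data can also be pulled from the ESXi shell, which is handy for quick checks or scripting. A small sketch; the output columns include the mount point, volume name, UUID, filesystem type, total size and free space:

~ # esxcli storage filesystem list
~ # df -h
(the busybox df on ESXi gives a similar per-volume overview in human-readable units)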
Use the Reports view to analyze storage space utilization and availability. display relationship tables that provide insight about how an inventory object is associated with storage entities. They also offer summarized storage usage data for the object’s virtual and physical storage resources. Which VMs are located on Datastore.Figure 49 Using the vSphere Client’s “Datastores and Datastore Clusters” view is the place to collect data on: Storage Capacity. This tab offers to options to display storage information: Reports. LSI Logic SAS and VMware Paravirtual). you are selecting your type of Physical Storage and making decisions concerning: Local Storage Networked Storage o Fibre Channel (FC) o Intermet SCSI (iSCSI) o Network-attached Storage (NAS. Chapter 4 goes into detail how to Display. besides capacity. By choosing a Datastore. N. Filter. VMware Storage Policies Profiles can be useful managing your storage. VDCA Study Guide Version – Section 1 . But when it comes to selecting the Datastore that will store your newly created virtual disk. Section “Virtual Disk Configuration”. Maps. aka NFS) o Shared Serial Attached SCSI (SAS) RAID levels Number of physical Disks in the Volume Path Selection Policies When placing a virtual disk on a Datastore. LSI Logic Parallel. you can configure a VM to use vSphere Flash Read Cache.B. Storage topology maps visually represent relationships between the selected object and its associated virtual and physical storage entities The Storage View tab depends on the vCenter Storage Monitoring plug-in.selected object and items related to it. Customize and Export Storage Reports and Maps. Just right-click under an overview.2 Page 54 . page 98. four controller types exist: (Buslogic Parallel.5. be aware of the requested disk performance in terms of R/W speed and IOPS. RDM or Virtual disk (Tick Provision Lazy Zeroed. Listen to end-users and monitor the performance with use of the vSphere Client and/or ESXtop. you are probably making the most important decision. Thick Provision Eager Zeroed or Thin Provision) Additionally. to accelerate virtual machine performance. Type of Disk. Do not search for the Export button. which is enabled by default under normal conditions. Provision and manage storage resources according to Virtual Machine requirements Official Documentation: vSphere Virtual Machine Administration Guide v5. Today for a virtual SCSI controller. Summary: When you are provisioning storage resources to a VM. Chapter 5 “Configuring Virtual Machine Hardware in the vSphere Web Client”. you make several decisions like: Type of Storage Controller. click the Monitor tab.5 . Summary: The vSphere Monitoring and Performance Guide v5.Also read the next section “SCSI and SATA Storage Controller Conditions. Triggered Alarms can be viewed in different ways: In the Alarms sidebar panel Figure 51 To view alarms triggered on a selected inventory object. this objective has a lot in common with the previous one. Chapter 5 “Monitoring Events. VDCA Study Guide Version – Section 1 . click Issues. section “Set an Alarm in the vSphere Web Client”. and Automated Actions”. but more important also how to create your own alarms. and click Triggered Alarms. page 116.5 . Other references: A Configure datastore Alarms Official Documentation: vSphere Monitoring and Performance Guide v5. and Compatibility” Other references: A Understand interactions between virtual storage provisioning and physical storage provisioning Official Documentation: Summary: Imho. Alarms. Limitations. 
provides detailed information about viewing triggered alarms.2 Page 55 . KB 2001034: “Triggered datastore alarm does not get cleared” VDCA Study Guide Version – Section 1 . after installation of a Dell Equallogic Array.2 Page 56 . In all cases. Figure 52 Depending on the type of Storage. new definitions will be available. and Automated Actions”. It is also a good practice to create an Alarm to monitor Virtual Machine snapshots. pre-configured alarms must be edited in the “Actions” section. For instance. section “View Triggered Alarms and Alarm Definitions in the vSphere Web Client”. Note that vCenter server comes with some pre-configured alarms. Alarms. Summary: Out of the box. Other references: A Create and analyze datastore alarms and errors to determine space availability See the previous bullet vSphere Monitoring and Performance Guide v5. and click Alarm Definitions. click the Manage tab. page 115. Alarms are created / edited by: Select the desired object Go to the Manage tab Select Alarm Definitions. vSphere comes with a set of pre-configured Alarm Definitions. actions taken when alarm state changes should be specified. Chapter 5 “Monitoring Events. also Storage providers may add extra alarms. To view a list of available alarm definitions for a selected inventory object.5 . Other references: More reading. Forgotten snapshots can lead to serious problems. Note that an alarm can only be edited on the level where it is defined. extra alarms will be available. specific for this type of Array. The section Managing Storage Paths and Multipathing Plug-Ins starts with a few important considerations. presents an overview of the available commands.2 Page 57 . Summary: The one and only command to manage PSA multipathing plug-ins is the esxcli command.3 – Troubleshoot complex storage solutions Skills and Abilities Perform command line configuration of multipathing options Change a multipath policy Troubleshoot common storage issues Tools vSphere Installation and Setup Guide v5. unless you want to unmask these devices List Multipathing Claim Rules for a ESXI host: ~ # esxcli storage core claimrule list Rule Class Rule Class Type ---------.5.------.5 vSphere Storage Guide v5.Objective 1.----. By default. the PSA claim rule 101 masks Dell array pseudo devices. “Understanding Multipathing and Failover. The default PSP is VMW_PSP_FIXED.5 vSphere Command-Line Interface Concepts and Examples v5. page 211. To highlight a few: If no SATP is assigned to the device by the claim rules.--------MP 0 runtime transport MP 1 runtime transport MP 2 runtime transport MP 3 runtime transport MP 4 runtime transport MP 101 runtime vendor MP 101 file vendor MP 65535 runtime vendor -c=MP Plugin --------NMP NMP NMP NMP NMP MASK_PATH MASK_PATH NMP Matches --------------------------------transport=usb transport=sata transport=ide transport=block transport=unknown vendor=DELL model=Universal Xport vendor=DELL model=Universal Xport vendor=* model=* This example indicates the following: VDCA Study Guide Version – Section 1 . Chapter 23. the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. Do not delete this rule.5 vSphere Client / Web Client vSphere CLI o esxcli o vifs o vmkfstools Perform command line configuration of multipathing options Official Documentation: vSphere Storage Guide v5. 2 Page 58 . The file parameter in the Class column indicates that the rule is defined. Any paths not described in the previous rules are claimed by NMP. one line for the rule with the file parameter and another line with runtime. 
For a user-defined claim rule to be active. two lines with the same rule number should exist.5000144f80206240 Device Display Name: EMC iSCSI Disk (naa. These are system-defined claim rules that you cannot modify. and Block SCSI transportation. The Class column shows which rules are defined and which are loaded. You can use the MASK_PATH module to hide unused devices from your host. the PSA claim Rule 101 masks Dell array pseudo devices with a vendor string of DELL and a model string of Universal Xport.5000144f80206240) VDCA Study Guide Version – Section 1 . Display Multipathing Modules ~ # esxcli storage core plugin list Plugin name Plugin class ----------. The NMP claims all paths connected to storage devices that use the USB.---------------VMW_SATP_MSA VMW_PSP_MRU VMW_SATP_ALUA VMW_PSP_MRU VMW_SATP_DEFAULT_AP VMW_PSP_MRU VMW_SATP_SVC VMW_PSP_FIXED VMW_SATP_EQL VMW_PSP_FIXED VMW_SATP_INV VMW_PSP_FIXED VMW_SATP_EVA VMW_PSP_FIXED VMW_SATP_ALUA_CX VMW_PSP_FIXED_AP VMW_SATP_SYMM VMW_PSP_FIXED VMW_SATP_CX VMW_PSP_MRU VMW_SATP_LSI VMW_PSP_MRU VMW_SATP_DEFAULT_AA VMW_PSP_FIXED VMW_SATP_LOCAL VMW_PSP_FIXED ~ # Description -----------------------------------------Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Placeholder (plugin not loaded) Supports non-specific active/active arrays Supports direct attached devices Display NMP Storage Devices ~ # esxcli storage nmp device list naa. The runtime parameter indicates that the rule has been loaded into your system. Several low numbered rules. or VAAI. Filter. The Rule Class column in the output describes the category of a claim rule. By default. It can be: MP (multipathingplug-in). IDE. have only one line with the Class of runtime.-----------NMP MP Display SATPs for the Host ~ # esxcli storage nmp satp list Name Default PSP ------------------. SATA. Select the storage device whose paths you want to change and click the Properties tab. you do not have to change the default multipathing settings your host uses for a specific storage device. Select Storage Devices. Using the vSphere Web Client. you can use the Manage Paths dialog box to modify a path selection policy and specify the preferred path for the Fixed policy” Multipath settings apply on a per Storage basis. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba35:C0:T0:L0.5000144f77827768) Storage Array Type: VMW_SATP_DEFAULT_AA Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration. Select Storage. Click the button “Edit Multipathing…” Under Path selection policy. “Understanding Multipathing and Failover”. if you want to make any changes. The preferred PSP depends on the storage. section “Setting a Path Selection Policy” .current=vmhba35:C0:T0:L0} Path Selection Policy Device Custom Config: Working Paths: vmhba35:C0:T0:L0 For other commands like masking paths. Select the Manage tab. Select a host. VDCA Study Guide Version – Section 1 .2 Page 59 . follow the manufacturer’s Best practices. describes how to change the Path selection Policy Summary: VMware states: “Generally.current=vmhba35:C0:T1:L0} Path Selection Policy Device Custom Config: Working Paths: vmhba35:C0:T1:L0 naa. Chapter 23. select the desired PSP. 
Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba35:C0:T1:L0.5000144f77827768 Device Display Name: EMC iSCSI Disk (naa.1” Understand and apply LUN masking using PSA‐related commands” Other references: Change a multipath policy vSphere Storage Guide v5. page 221. However.Storage Array Type: VMW_SATP_DEFAULT_AA Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration. see section 1.5. 2 Page 60 . Hardware. Storage and Select the Datastore of your choice. Figure 54 Other references: A VDCA Study Guide Version – Section 1 . go to Configuration. Open the Manage Paths dialog and select the desired policy.Figure 53 Use the vSphere Client and from the “Hosts and Clusters” View. section “Managing Paths”. Chapter 22 ”VMKernel and Storage”. I have incorporated that section.Troubleshoot common storage issues Section 6 in the VDCA510 Blueprint was all about troubleshooting. Summary: Multipathing. VMware presents a nice graphic that goes from a VM to the actual storage device drivers.5.5. Use esxcli to troubleshoot VMkernel storage module configurations Official Documentation: vSphere Storage Guide v5. Use esxcli to troubleshoot multipathing and PSA-related issues Official Documentation: vSphere Command-Line Interface Concepts and Examples v5. Chapter 4 “Managing Storage”. page 44. PSA and the related commands have been discussed in ”Objective 1.1 – Implement Complex Storage Solutions”. Summary: I am not sure what to expect from this one.2 Page 61 . Have a look at this rather theoretical chapter. See also this post for a graphical overview of the ESXCLI command. page 207. So for your convenience. VDCA Study Guide Version – Section 1 . The tool of choice to troubleshoot storage performance and configuration issues is ESXCLI. --------vmkernel true procfs true vmkplexer true vmklinux_9 true vmklinux_9_2_0_0 true random true Is Enabled ---------true true true true true true Use esxcli to troubleshoot iSCSI related issues Official Documentation: vSphere Command-Line Interface Concepts and Examples v5. Get familiar with the esxcli command. Chapter 4 “Managing Storage” as a reference.Graphic by VMware This graphic also indicates that every esxcli (namespace) storage command is part of this overview.2 Page 62 .5. page 57.Figure 55 . practice and use the vSphere Command-Line Interface Concepts and Examples v5.5. VDCA Study Guide Version – Section 1 . To get an overview use this command: # esxcli system module list Name Is Loaded ------------------. Chapter 5 “Managing iSCSI Storage”. The esxcli system module namespace allows you to view load and enable VMKernel modules. Remember while troubleshooting iSCSI issues. so also take in consideration issues like: IP configuration of NICs MTU settings on NIICs and switches Configuration of vSwitches So besides the esxci issci commands.2 Page 63 . you will also need the esxcli network command to troubleshoot network related issues. Note: iSCSI Parameters options can be found on four different levels (Red in Figure 2). iSCSI highly depends on IP technology. Note: CHAP authentication options can be found on three levels (Blue in Figure 2): Adapter level (General) Discovery level Target level Figure 56 VDCA Study Guide Version – Section 1 .Summary: Chapter 5 presents a nice overview of the esxcli commands to Setup iSCSI Storage and for listing and setting iSCSI options and Parameters. this one seems more general. add and remove nfs storage. 
Summary: The previous objectives point to the esxcli command. The esxcli has a name space on nfs: esxcli storage nfs You can list. Other references: VMware KB 2002197 “Troubleshooting disk latency when using Jumbo Frames with iSCSI or NFS datastores”.Other references: VMware KB 1003681 “Troubleshooting iSCSI array connectivity issues”. VMware KB 1003951 “Troubleshooting ESX and ESXi connectivity to iSCSI arrays using hardware initiators”.2 Page 64 . Chapter 4 “Managing Storage”. Use esxtop/resxtop and vscsiStats to identify storage performance issues Official Documentation: Summary: ESXTOP is very useful for troubleshooting storage performance issues. VMware KB 1008083 “Configuring and troubleshooting basic software iSCSI setup” Troubleshoot NFS mounting and permission issues Official Documentation: vSphere Command-Line Interface Concepts and Examples v5.5. VDCA Study Guide Version – Section 1 . section “Managing NFS/NAS Datastores”. Recommended reading on this objective is VMware KB “Troubleshooting connectivity issues to an NFS datastore on ESX/ESXi hosts”. page 50. VMware KB 1003952 “Troubleshooting ESX and ESXi connectivity to iSCSI arrays using software initiators”. However. vscsiStats collects and reports counters on storage activity. usually caused by the disk array. value is 2. esxtop will not provide the full picture of the storage profile. with only latency and throughput statistics. Max.5. Unofficial are a lot of excellent Blog posts. counter CONS/s (SCSI Reservation Conflicts per second). Some info on vscsiStats. The following data are reported in histogram form: IO size Seek distance Outstanding IOs Latency (in microseconds) VDCA Study Guide Version – Section 1 . “Using vscsiStats for Storage Performance Analysis”. Also beware of iSCSI Reservation conflicts. KAVG. Max. Summary: From the Communities: “esxtop is a great tool for performance analysis of all types. Disk latency caused by the VMKernel.Figure 57 Important metrics are: DAVG. GAVG. value is 25. I will mention a few in the “Other references” section. The latency seen at the device driver. Latency analysis of NFS traffic is not possible with esxtop. Max. Since ESX 3. Furthermore. Max. VMware has provided a tool specifically for profiling storage: vscsiStats. from the VMware Communities seems to be the official documentation on this subject. Its data is collected at the virtual SCSI device level in the kernel. This means that results are reported per VMDK (or RDM) irrespective of the underlying storage protocol. value is 25. esxtop only provides latency numbers for Fibre Channel and iSCSI storage. allowed is 20.2 Page 65 . is the sum of DAVG + KAVG. Login on a ESXi host as user with root privileges.com/2010/03/11/new-vscsistats-excel-macro/ To export data: # vscsiStats –w <vmwgid> -p all –c > /root/vscsiStatsexport.wordpress.gabesvirtualworld. More!” The following is a quick step guide to vscsiStats.2 Page 66 . see: http://www. Want to monitor 1 VM? Determine VM worldgroupid with: # vscsiStats –l Figure 58 Start collecting for one VM: # vscsiStats –s –w <vmwgid> Figure 59 – Colllecting data for VC5 Display after 5 minutes: # vscsiStats –w <vmwgid> -p all –c Stops collecting: # vscsiStats –x To create graphs.com/converting-vscsistats-data-into-excel-charts/ and http://dunnsept.csv WinSCP data to desktop VDCA Study Guide Version – Section 1 . Download the macro from here and copy and paste everything between: Sub Process_data() and End Function From the menu: Run Macro. Interpreting the data? 
Go to: “Using vscsiStats for Storage Performance Analysis”. make sure that you meet this requirement: “it will expect your data to be in column A and the histogram BINS to be in column B:” Figure 60 Create new macro.2 Page 67 . section “Using vscsiStats Results”.csv file in Excel. Import . Example: Good Write performance on my storage. Most write commands complete under 5 ms. Figure 61 VDCA Study Guide Version – Section 1 . gabesvirtualworld. Summary: The official documentation is good reading on how to use the vmkfstools command: ~ # vmkfstools No valid command specified OPTIONS FOR FILE SYSTEMS: vmkfstools -C --createfs [vmfs3|vmfs5] -b --blocksize #[mMkK] -S --setfsname fsName -Z --spanfs span-partition -G --growfs grown-partition deviceName -P --queryfs -h --humanreadable -T --upgradevmfs vmfsPath OPTIONS FOR VIRTUAL DISKS: vmkfstools -c --createvirtualdisk #[gGmMkK] -d --diskformat [zeroedthick| thin| eagerzeroedthick] -a --adaptertype [buslogic|lsilogic|ide] -w --writezeros -j --inflatedisk -k --eagerzero -K --punchzero -U --deletevirtualdisk -E --renamevirtualdisk srcDisk -i --clonevirtualdisk srcDisk -d --diskformat [zeroedthick| thin| eagerzeroedthick| rdm:<device>|rdmp:<device>| 2gbsparse] -N --avoidnativeclone -X --extendvirtualdisk #[gGmMkK] [-d --diskformat eagerzeroedthick] -M --migratevirtualdisk -r --createrdm /vmfs/devices/disks/.”Using vmkfstools”. Ready. Chapter 27..5.yellow-bricks.2 Page 68 .. -q --queryrdm -z --createrdmpassthru /vmfs/devices/disks/.. page 255. Other references: Gabe’s Virtual World: http://www.com/using-vscsistats-the-full-how-to/ Duncan Epping: http://www. -v --verbose # VDCA Study Guide Version – Section 1 .com/2009/12/17/vscsistats/ ESXTOP by Duncan Epping Configure and troubleshoot VMFS datastores using vmkfstools Official Documentation: vSphere Storage Guide v5.. File system label (if any): IX2-iSCSI-01 Mode: public Capacity 299.2 Page 69 . The File Systems option allows you to: List attributes of a VMFS file system. Extend an existing VMFS file system. From here we can see the tree main options: For File systems. use option: -E Clone a virtual disk or RDM. -B --breaklock /vmfs/devices/disks/. use option: -k Rename a virtual disk... ~ # vmkfstools -P /vmfs/volumes/IX2-iSCSI-01 -h VMFS-3.. 216.5000144f77827768:1 Is Native Snapshot Capable: NO ~ # Create a VMFS file system. use option: -c Delete virtual disks. The Virtual Disks options are huge. use option: -K Convert a Zeroedthick to an Eagerzeroedthick virtual disk. use option: -j Remove Zeroed Blocks. The vmkfstools command without options presents a comprehensive overview.8 GB.-g -I -x -e vmfsPath --geometry --snapshotdisk srcDisk --fix [check|repair] --chainConsistent OPTIONS FOR DEVICES: -L --lock [reserve|release|lunreset|targetreset|busreset|readkeys|readresv] /vmfs/devices/disks/. For Devices. For Virtual disks.54 file system spanning 1 partitions. use option: -U Initialize a virtual disk. file block size 8 MB UUID: 4f9eca2e-3a28f563-c184-001b2181d256 Partitions spanned (on "lvm"): naa. Upgrading a VMFS datastore.8 GB available. use option: -w Inflate a Thin disk. you can: Create virtual disks. use option: -i And many more Two important Device options are available: VDCA Study Guide Version – Section 1 .. if its original copy is not online.4 on analyzing Log files. and reset a reservation. 
esxcfg-volume <options> -l|--list -m|--mount <VMFS UUID|label> -u|--umount <VMFS UUID|label> -r|--resignature <VMFS UUID|label> -M|--persistent-mount <VMFS UUID|label> -U|--upgrade <VMFS UUID|label> -h|--help /vmfs/volumes # List all volumes which have been detected as snapshots/replicas. Analyze log files to identify storage and multipathing problems Official Documentation: Summary: See also Objective 5. lets you reserve a SCSI LUN for exclusive use by the ESXi host. allows you to forcibly break the device lock on a particular partition Other references: VMware KB 1009829 Manually creating a VMFS volume using vmkfstools -C Troubleshoot snapshot and resignaturing issues Official Documentation: Summary: Resignaturing has been discussed in Objective 1. forcing all reservations from the target to be released. Mount a snapshot/replica volume. release a reservation so that other hosts can access the LUN. Show this message. Umount a snapshot/replica volume. Upgrade a VMFS3 volume to VMFS5. Option –B –breaklock. if its original copy is not online. Mount a snapshot/replica volume persistently. Resignature a snapshot/replica volume. Option –L –lock [reserve|release|lunreset|targetreset|busreset].2 Page 70 . The esxcli storage vmfs snaphot command has the same functionality.1. VDCA Study Guide Version – Section 1 . There is also a CLI utility: esxcfg-volume to support resignaturing operations. The concept of a vSS and concepts like Virtual Machine Port Groups and VMkernel Ports is assumed knowledge. Summary: vSphere Standard Switches (vSS) can be created and managed using the vSphere Web Client. The vSphere Networking Guide v5. the traditional vSphere Client or using CLI tools.1 – Implement and manage virtual standard switch (vSS) networks Skills and Abilities Create and Manage vSS components Create and Manage vmkernel ports on standard switches Configure advanced vSS settings Tools vSphere Installation and Setup Guide v5.5 vSphere Client / Web Client Create and Manage vSS components Official Documentation: vSphere Networking Guide v5. Another way to create and configure vSS is using Host Profiles.2 Page 71 . VDCA Study Guide Version – Section 1 .5 is a great introduction and provides a lot of information about the concepts of vSphere Networking and how to setup virtual switches (vSphere Standard Switches and vSphere Distributed Switches).5 vSphere Command-Line Interface Concepts and Examples v5.Section 2 – Implement and Manage Networking Objective 2.5 chapter 3 “Setting Up Networking with vSphere Standard Switches”.5 vSphere Networking Guide v5. 2 Page 72 .Figure 62 The vSphere Command-Line Interface Concepts and Examples v5.5 chapter 5 “Setting Up VMkernel Networking”. chapter 9 “Managing vSphere Networking” provides examples how to set up a vSS. Other references: Create and Manage vmkernel ports on standard switches Official Documentation: vSphere Networking Guide v5.5. These commands are ideal for a scripting the configuration of an ESXi host. Summary: vSphere Networking Guide v5. The newer esxcli command can also be used to create and edit a vSS. VDCA Study Guide Version – Section 1 .5 chapter 5 “Setting Up VMkernel Networking” provides information about setting up VMkernel adapters. The vicfg-vswitch and vicfg-nics commands have been around some time. section “Adding and Modifying VMkernel Network Interfaces” provides examples. Other references: Configure advanced vSS settings Official Documentation: : vSphere Networking Guide v5. 
Summary: It is not very clear which advanced vSS settings are mentioned. can be configured. if you configure for Jumbo frames: 9000). (Promiscuous Mode. a vSS only shapes outbound network traffic.2 Page 73 .Figure 63 The vSphere Command-Line Interface Concepts and Examples v5. MAC address Changes and Forged Transmits). chapter 9 “Managing vSphere Networking”.5 . VDCA Study Guide Version – Section 1 . Peak Bandwidth and Burst Size. The following options. here you can set the MTU value (default 1500. Traffic Shaping Policies. Security Policies.5. Each Switch and Port Group has the following sections: General. NIC teaming and Failover policies. Average Bandwidth. such as a physical switch port being blocked by spanning tree or that is misconfigured to the wrong VLAN or cable pulls on the other side of a physical switch. if any. requiring its replacement. in addition to link status. No such issue exists with NLB running in multicast mode. This option detects failures. this process is desirable for the lowest latency of failover occurrences and migrations with vMotion.Most people know the load balancing options. Relies solely on the link status that the network adapter provides. The following options are not always clear: Figure 64 Network Failover Detection: Specify the method to use for failover detection.2 Page 74 . If you select Yes (default). Sends out and listens for beacon probes on all NICs in the team and uses this information. but not configuration errors. NOTE Do not use this option when the virtual machines using the port group are using Microsoft Network Load Balancing in unicast mode. whenever a virtual NIC is connected to the virtual switch or whenever that virtual NIC’s traffic would be routed over a different physical NIC in the team because of a failover event. Link Status only (default). If failback is set to No. such as cable pulls and physical switch power failures. to determine link failure. displacing the standby adapter that took over its slot. VDCA Study Guide Version – Section 1 . Beacon Probing. Notify Switches: Select Yes or No to notify switches in the case of failover. In almost all cases. NOTE Do not use beacon probing with IP-hash load balancing. If failback is set to Yes (default). This option determines how a physical adapter is returned to active duty after recovering from a failure. Failback: Select Yes or No to disable or enable failback. This detects many of the failures previously mentioned that are not detected by link status alone. the adapter is returned to active duty immediately upon recovery. a notification is sent out over the network to update the lookup tables on physical switches. a failed adapter is left inactive even after recovery until another currently active adapter fails. 2 Page 75 .Other references: V VDCA Study Guide Version – Section 1 . 2 Page 76 .5 vSphere Command-Line Interface Concepts and Examples v5. This Whitepaper. is intended to help migrating from an environment with vSS to one using vDS.5 vSphere Client / Web Client vSphere CLI o esxcli Determine use cases for and applying VMware DirectPath I/O This subject has also been covered in Objective 1. Chapter 2 and Chapter 3 contain a lot of information on setting up vSphere Standard Switches and vSphere Distributed Switches. Migrate a vSS network to a hybrid or full vDS solution Official Documentation: vSphere Networking Guide v5.5. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters. 
This paper discusses and suggests the most effective methods of deployment for the VMware vNetwork Distributed Switch (vDS) in a variety of vSphere 4 environments. “VMware vSphere Distributed Switch Best Practices”.1. Summary: Recommended reading on this subject are these documents: “VMware vNetwork Distributed Switch: Migration and Configuration”.5 vSphere Networking Guide v5. options and features should be considered during the design of a virtual network infrastructure.2 – Implement and manage virtual distributed switch (vDS) networks Skills and Abilities Determine use cases for and applying VMware DirectPath I/O Migrate a vSS network to a hybrid or full vDS solution Configure vSS and vDS settings using command line tools Analyze command line output to identify vSS and vDS configuration details Configure NetFlow Determine appropriate discovery protocol Determine use cases for and configure PVLANs Use command line tools to troubleshoot and identify VLAN configurations Tools vSphere Installation and Setup Guide v5. For each of these deployments.Objective 2.x era. “VMware vSphere 4: Deployment Methods for the VMware vNetwork Distributed Switch”. It also has a chapter on choosing a method for migration to a vDS. It discusses possible scenarios and provides step-by-step examples how to migrate. different VDS design approaches are explained.x whitepaper describes two example deployments. This vSphere 5. released during the vSphere 4. VDCA Study Guide Version – Section 1 . one using rack servers and the other using blade servers. but no specific information on this objective. right click and choose from the menu. Select a Datacenter.2 Page 77 . VDCA Study Guide Version – Section 1 . Figure 65 Select the Source and Destination Network: Figure 66 And select the VMs on the Source network that you want to migrate.Migrate Virtual Machine Port groups One option is to use the “Migrate VM to Another Network…” Wizard. by choosing the icon: Figure 68 VDCA Study Guide Version – Section 1 . choose Manage and Networking. Select “Virtual Switches” and select the vDS to which you want to migrate VMkernel Adapters.Figure 67 And complete. Migrate VMKernel Ports Select a host.2 Page 78 . Now click “Assign port group”. BTW. see previous section. Choose the destination port group on the vDS. VDCA Study Guide Version – Section 1 . by choosing “Migrate virtual machine networking” . you can also migrate VMs. Figure 69 Select Next. Figure 70 Select a VMkernel adapter you want to migrate to the vDS.From here select “Manage VMkernel Adapters”.2 Page 79 . VDCA Study Guide Version – Section 1 . Other references: See above. Figure 72 If everything is OK.Figure 71 The next window will show the impact of this configuration change.2 Page 80 . Complete the migration. Analyze command line output to identify vSS and vDS configuration details See previous objective.Configure vSS and vDS settings using command line tools Official Documentation: vSphere Command-Line Interface Concepts and Examples v5. Veeam and many others. as an installable package on a Windows or Linux Client or as part of the VMware Management Assistant (vMA) The VMware vSphere PowerCLI. PowerShell is a very powerful Command shell and more and more vendors are adding extensions (Cmdlets) to it.should be replaced by: esxcfg-. commands starting with vicfg. VDCA Study Guide Version – Section 1 . like VMware. VMware offers two completely different CLI’s with options to configure and analyze output of vSS and vDS. 
The concept behind the Microsoft PowerShell is somewhat different.5 Summary: In fact. The VMware vSphere CLI. available on any client that supports Microsoft’s Powershell vSphere CLI commands related to the vSS and vDS are: # esxcli network namespace (now works with FastPass on the vMA) # vicfg-vmknic # vicfg-vswitch # net-dvs (only on a ESXi host) # vicfg-nics # vicfg-route # vmkping (only on a ESXi host) Note: on a ESXi hosts. The complete vSphere Command Line documentation is here. four categories are available: VMHostNetwork VMHostNetworkAdapter VirtualSwitch VirtualPortGroup Other references: Ivo Beerens has put together a nice CLI cheat sheet. If you haven’t done already.2 Page 81 . Concerning Virtual Networking. The complete vSphere PowerCLI documentation is here. available on a ESXi host. it is certainly worth investing some time learning PowerShell. 0 and later. 0 means no sampling! Process Internal Flows only.Configure NetFlow Official Documentation: vSphere Networking Guide v5. with the sampling rate number determining how often NetFlow collects the packets. section “Configure NetFlow with the vSphere Web Client”. A collector with a sampling rate of 5 collects data from every fifth packet. To configure NetFlow. Summary: NetFlow is a network analysis tool that you can use to monitor network monitoring and virtual machine traffic. unrelated switch for each associated host.5. if you want to analyze traffic between 2 or more VMs. With an IP address to the vSphere distributed switch. The official documentation describes the steps to configure NetFlow NetFlow is enabled on the vDS level. rather than interacting with a separate. NetFlow is available on vSphere distributed switch version 5. the Sampling rate. Chapter 9 “Advanced Networking”. A collector with a sampling rate of 2 collects data from every other packet. The sampling rate determines what portion of data NetFlow collects. the NetFlow collector can interact with the vSphere distributed switch as a single switch. Figure 73 VDCA Study Guide Version – Section 1 . page 159. right-click on a vDS. Most important settings: the vDS IP address.0. select “All vCenter Actions” and “Edit NetFlow”.2 Page 82 . The Switch IP address is the Ip address of the vDS.0 and later.Figure 74 “IP address” is the IP address of the NetFlow Collector and the Port number associated. When a Switch Discovery Protocol is enabled for a particular vSphere distributed switch or vSphere standard switch. Other references: Eric Sloof has made a great video on Enabling NetFlow on a vSphere 5 Distributed Switch Determine appropriate discovery protocol Official Documentation: vSphere Networking Guide v5. LLDP is vendor neutral and can be seen as the successor of CDP.5. Summary: Since vSphere 5. Link Layer Discovery Protocol (LLDP) is available for vSphere distributed switches version 5. The BIG question. A Switch discovery protocols allow vSphere administrators to determine which switch port is connected to a given vSphere standard switch or vSphere distributed switch. On both levels an override of the port policies is allowed. and timeout from the vSphere Client Cisco Discovery Protocol (CDP) is available for vSphere standard switches and vSphere distributed switches connected to Cisco physical switches (and other switches which support CDP). you can view properties of the peer physical switch such as device ID. which switch discovery protocol do we use? Imho the answer depends on: VDCA Study Guide Version – Section 1 . 
two switch discovery protocols are now supported. Netflow needs to be enabled on the dvUplinks layer and/or on the dvPortGroup layer. Chapter 9 “Advanced Networking”. section “Switch Discovery Protocol”. page 159. software version.0.2 Page 83 . Summary: Private VLANs are used to solve VLAN ID limitations and waste of IP addresses for certain network setups. A primary VLAN ID can have multiple secondary VLAN IDs associated with it. A graphic will clarify this.2 Page 84 . Jason Boche discussing LLDP. the physical switch connected to the host needs to be private VLAN-capable and configured with the VLAN IDs being used by ESXi for the private VLAN functionality. Rickard Nobel on troubleshooting ESXi with LLDP. Chapter 4 “Setting up Networking with vSphere Distributed Switches”. Section “Private VLANs”. so that ports on a private VLAN can communicate with ports configured as the primary VLAN. Determine use cases for and configure PVLANs Official Documentation: vSphere Networking Guide v5. all corresponding private VLAN IDs must be first entered into the switch's VLAN database. communicating only with promiscuous ports. communicating with both promiscuous ports and other ports on the same secondary VLAN.5. VDCA Study Guide Version – Section 1 . Ports on a secondary VLAN can be either: o Isolated. A private VLAN is identified by its primary VLAN ID. Primary VLANs are Promiscuous. or o Community. Wikipedia on LLDP. For physical switches using dynamic MAC+VLAN ID based learning. To use private VLANs between a host and the rest of the physical network. Which switch discovery protocols are supported by the connected physical switches? Do we want to enable a switch discovery protocol on a vSS or a vDS? Which output do we want? Other references: Wikipedia on CDP. page 54. I recommend that you watch Eric Sloof’s tutorial on this subject. if you are new to this subject.Figure 75 Origin: http://daxm. step-by-step. However.2 Page 85 .net Configuring pVLANs In the VMware documentation. Figure 76 – Configuration of the pVLAN Other references: Excellent Tutorial on this subject is Eric Sloof’s video on Configuring Private VLANs. An old proverb says: “An excellent video tells you more than 1000 written words”. you can find the whole process. VDCA Study Guide Version – Section 1 . For adjusting VLAN settings on a portgroup.2 Page 86 . Look for entries starting with: /net The esx-vswitch –l command gives an overview of all vSS an dVS. these examples assume we are able to logon to an ESXI host: Troubleshooting means in the first place.conf contains a section on network settings. vMA or your desktop). PowerCLI) and location (Local on a ESXi host. For standard switches use: esxcli network vswitch standard portgroup list The esxtop command in network display is always useful to collect network statistics. there are a few options. including VLAN settings The ESXCLI command does the same. VDCA Study Guide Version – Section 1 . Other references: VMware KB 1004074 Sample configuration of virtual switch VLAN tagging (VST Mode) and discusses the configuration of virtual and physical switches. Apart from which CLI (vSphere CLI. gathering information.Use command line tools to troubleshoot and identify VLAN configurations Official Documentation: The vSphere Networking Guide or even the vSphere Troubleshooting Guide do not provide much information on this subject Summary: Using command line tools to troubleshoot VLAN issues. use the esxcfg-vswitch command with the parameter -v. The /esx/vmware/esx. 
The Load Balancing policy is one of the available Networking Policies. On the Manage tab. Failover Detection: Link Status/Beacon Probing Network Adapter Order (Active/Standby) Editing these policies for the vSS and vDS are done in two different locations within the vSphere Web Client. For a vSS. and select Virtual switches. Traffic Shaping Policy and so on. The Failover and Load Balancing policies include three parameters: Load Balancing policy: The Load Balancing policy determines how outgoing traffic is distributed among the network adapters assigned to a standard switch. page 77.5 vSphere Networking Guide v5. Summary: Load Balancing and Failover policies determines how network traffic is distributed between adapters and how to reroute traffic in the event of an adapter failure. click Networking. Select a standard switch from the list. Incoming traffic is controlled by the Load Balancing policy on the physical switch.5 vSphere Client / Web Client vSphere CLI o esxcli Understand the NIC Teaming failover types and related physical network settings Official Documentation: vSphere Networking Guide v5.5 vSphere Troubleshooting Guide v5.5.5 vSphere Command-Line Interface Concepts and Examples v5. such as: VLAN. Section “Load balancing and Failover policies”. Chapter 6 “Networking Policies”. click Edit settings and select Teaming VDCA Study Guide Version – Section 1 .3 – Troubleshoot virtual switch solutions Skills and Abilities Understand the NIC Teaming failover types and related physical network settings Determine and apply Failover settings Configure explicit failover to conform with VMware best practices Configure port groups to properly isolate network traffic Given a set of network requirements. Security.Objective 2.2 Page 87 . navigate to the host. In the vSphere Web Client. identify the appropriate distributed switch technology to use Configure and administer vSphere Network I/O Control Use command line tools to troubleshoot and identify configuration items from an existing vDS Tools vSphere Installation and Setup Guide v5. Override on the Port level. Override on the Portgroup level.and Failover. Select the desired vDS. VDCA Study Guide Version – Section 1 . Figure 77 vSS vDS. Click Edit settings and select Teaming and Failover. click Settings.2 Page 88 . In the vSphere Web Client. Select a vDS and the desire Port group. Also on the dvUplink level. On the Manage tab. navigate to the Networking. via Networking. and select Policies. 2 Page 89 . there are four/five options (vSS and vDS respectively): o Route based on the originating port ID: This setting will select a physical uplink based on the originating virtual port where the traffic first entered the vSS or vDS. This method has a higher CPU overhead but a better distribution of bandwith across the physical uplinks. see: http://en. routers and servers. It allows grouping of several physical Ethernet links to create one logical Ethernet link for the purpose of providing fault-tolerance and high-speed links between switches.Figure 78 vDS The first Policy is Load Balancing. When using IP hash load balancing: 1 The physical uplinks for the vSS or vDS must be in an ether channel on the physical switch (LACP. 802.wikipedia. This method is a simple and fast and no single-NIC VM gets more bandwith than can be provided by a single physical adapter. This is the default Load balancing Policy! o Route based on IP hash: This setting will select a physical uplink based on a hash produced using the source and destination IP address. 
VDCA Study Guide Version – Section 1 . In case you have a stacked switch. This method allows a single-NIC VM might use the bandwith of multiple physical uplinks.org/wiki/EtherChannel EtherChannel is a port link aggregation technology or port-channel architecture used primarily on Cisco switches. you can spread port over > 1 switches.3ad link aggregation support) 1 Ether Channel. o Route based on Physical NIC load (vDS ONLY): This setting determines which adapter traffic is routed to based on the load of the physical NICs listed under Active Adapters. but it uses hashing based on the source MAC address and does not require additional configuration on the physical switch. This method will typically be able to detect physical switch misconfigurations as initiate a failover. o Beacon Probing: This setting will listen for beacon probes on all physical NICs that are part of the team (as well as send out beacon probes). If you choose No then a failed physical adapter that becomes operational will only become active again if/when the standby VDCA Study Guide Version – Section 1 . All port groups using the same physical uplinks should use IP hash load balancing policy Figure 79 . Note: If using Microsoft NLB in unicast mode set this setting to No o Select Yes or No for the Failback policy.Useful info. o Use explicit failover order: This setting uses the physical uplink that is listed first under Active Adapters. Link Status only is not able to detect misconfigurations such as VLAN pruning or spanning tree. This method has low overhead and is compatible with all physical switches. there are two option o Link Status only: Using this will detect the link state of the physical adapter.. Note: Do not use beacon probing when using the IP hash load balancing policy o Select Yes or No for the Notify Switches policy. failure will be detected and failover initiated.2 Page 90 . It will then use the information it receives from the beacon probe to determine the link status. o Route based on source MAC hash: This setting is similar to IP hash in the fact that it uses hasing. This policy requires ZERO physical switch configurations and is true load balancing! The next policy is Network Failover Detection. If the physical switch fails or if someone unplugs the cable from the NIC or the physical switch. Choosing Yes will notify the physical switches to update its lookup tables whenever a failover event occurs or whenever a virtual NIC is connected to the vSS. Choosing Yes will initiate a failback when a failed physical adapter becomes operational.. Summary: The vSphere Networking Guide contains a small section on Networking Best Practices.2 Page 91 . Concerning this objective. The second Portgroup vMotion is configured exactly the other way around. this has three sections o Active Adapters: Physical adapters listed here are active and are being used for inbound/outbound traffic.adapter that was promoted fails The last policy is Failover Order. The Management Network uses vmnic0 as an active uplink and vmnic1 as a Standby adapter. Their utilization is based on the load balancing policy. Configure explicit failover to conform with VMware best practices Official Documentation: vSphere Networking Guide v5. page 185. From my blog post “Configure VMware ESXi 4. provide bandwidth and failover in case of failure.1 Networking” comes this example. 
Configure explicit failover to conform with VMware best practices
Official Documentation: vSphere Networking Guide v5.5, Chapter 11 "Networking Best Practices", page 185.
Summary: The vSphere Networking Guide contains a small section on Networking Best Practices; I do recommend reading this chapter. Concerning this objective, how to configure explicit failover, from my blog post "Configure VMware ESXi 4.1 Networking" comes this example, in which the physical adapters provide bandwidth and failover in case of failure. The Management Network uses vmnic0 as an active uplink and vmnic1 as a Standby adapter. The second Portgroup, vMotion, is configured exactly the other way around.

Figure 80

Management Network
- VLAN 2
- Management Traffic is Enabled
- vmk0: 192.168.2.53
- vmnic0 Active / vmnic1 Standby
- Load balancing: Use explicit failover order
- Failback: No

vMotion
- VLAN 21
- vMotion is Enabled
- vmk1: 192.168.21.53
- vmnic1 Active / vmnic0 Standby
- Load balancing: Use explicit failover order
- Failback: No

Other references: -
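On a host using standard switches, a port group like the vMotion example above could also be created and configured with esxcli. A sketch only; vSwitch0 is an assumed switch name, the other values are modelled on the example:

# Create the vMotion port group and tag it with VLAN 21
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=21

# Override the switch level policy: explicit failover order,
# vmnic1 active, vmnic0 standby, no failback
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion \
    --load-balancing=explicit \
    --active-uplinks=vmnic1 \
    --standby-uplinks=vmnic0 \
    --failback=false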
Configure port groups to properly isolate network traffic
Official Documentation: vSphere Networking Guide v5.5, Chapter 11 "Networking Best Practices", page 185.
Summary: The idea is to separate network services from one another. From the VMware Best Practices:
- Keep the vMotion connection on a separate network devoted to vMotion. When migration with vMotion occurs, the contents of the guest operating system's memory is transmitted over the network. You can do this either by using VLANs to segment a single physical network or by using separate physical networks (the latter is preferable).
- To physically separate network services and to dedicate a particular set of NICs to a specific network service, create a vSphere standard switch or vSphere distributed switch for each service. If this is not possible, separate network services on a single switch by attaching them to port groups with different VLAN IDs. In either case, confirm with your network administrator that the networks or VLANs you choose are isolated in the rest of your environment and that no routers connect them.
In general, network administrators will tell you the same:
- Separate traffic by introducing VLANs.
- Create one portgroup per VLAN.
- Separate vSphere Management Traffic (Management, vMotion, FT Logging) from Virtual Machine traffic and Storage traffic (iSCSI).
- Create separate switches for each category. This way, the physical adapters will also be separated.
- Do not configure Virtual Machines with more than one NIC, unless necessary. Instead use firewalls to route traffic to other VLANs.
Other references:
- Scott Drummond also collected some best practices in this post.

Given a set of network requirements, identify the appropriate distributed switch technology to use
Official Documentation: None
Summary: Besides the well-known VMware virtual Distributed Switch, vSphere also supports 3rd party vDS; the best known example is the Cisco Nexus 1000v. The best reason I can think of to choose a Cisco Nexus 1000v is in large enterprises where the management of firewalls, firewall appliances, core- and access switches is in the domain of the Network administrators. While the management of the VMware virtual Distributed Switch is in the domain of the vSphere Administrators, with a Cisco Nexus 1000v it is possible to completely separate the management of the virtual switches and hand it over to the network administrators, all this without allowing access to the rest of the vSphere platform to the Network administrators.
Other references:
- "VMware vNetwork Distributed Switch: Migration and Configuration". This whitepaper, released during the vSphere 4.x era, is intended to help migrating from an environment with vSS to one using vDS. It discusses possible scenarios and provides step-by-step examples of how to migrate.
- "VMware vSphere 4: Deployment Methods for the VMware vNetwork Distributed Switch". This paper discusses and suggests the most effective methods of deployment for the VMware vNetwork Distributed Switch (vDS) in a variety of vSphere 4 environments. It also has a chapter on choosing a method for migration to a vDS.
- "VMware vSphere Distributed Switch Best Practices". This vSphere 5.x whitepaper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure.

Configure and administer vSphere Network I/O Control
Official Documentation: vSphere Networking Guide v5.5, Chapter 7 "Managing Network Resources", Section "vSphere Network I/O Control", page 121.
Summary: vSphere Network I/O Control (NIOC) was introduced in vSphere 4.x. Network resource pools determine the bandwidth that different network traffic types are given on a vSphere distributed switch. When network I/O control is enabled, distributed switch traffic is divided into the following predefined network resource pools:
1. Fault Tolerance traffic
2. iSCSI traffic
3. vMotion traffic
4. Management traffic
5. vSphere Replication (VR) traffic
6. NFS traffic
7. Virtual machine traffic
In vSphere 5 NIOC a new feature is introduced: user-defined network resource pools. With these you can control the bandwidth each network resource pool is given by setting the physical adapter shares and host limit for each network resource pool. Also new is the QoS priority tag. Assigning a QoS priority tag to a network resource pool applies an 802.1p tag to all outgoing packets associated with that network resource pool.
Requirements for NIOC:
- Enterprise Plus license
- Use of a vDS
Typical steps for NIOC:
- NIOC is enabled by default. Check the settings: select the vDS, then select Manage and Resource Allocation.

Figure 81

- If you need to change the settings, go to Settings, Topology and click the Edit button.

Figure 82

- To create a Network Resource Pool, return to Resource Allocation and add a new Network Resource Pool by clicking the green plus sign.

Figure 83

- Provide a logical name and select the Physical Adapter shares; options are: High, Normal, Low or a Custom value. If your physical network is configured for QoS priority tagging, select the value.
- The final step is to associate a Portgroup with the newly created Network Resource Pool. Select the desired Portgroup, click Edit and select General. Under Network resource pool, select the newly defined resource pool.

Figure 84

- Back in the overview, select the Distributed Port Groups tab and select a User-defined network resource pool.

Figure 85

That's all.
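To illustrate how the physical adapter shares work, a small worked example (the numbers are hypothetical and not from the guide): shares are only enforced when a physical uplink is congested, and each traffic type that is actually active then receives bandwidth in proportion to its shares.

On one saturated 10 Gbit/s uplink with three active pools:
  vMotion = 100 shares, Virtual machine traffic = 50 shares, iSCSI = 50 shares
  Total active shares = 100 + 50 + 50 = 200
  vMotion: 100/200 x 10 Gbit/s = 5 Gbit/s
  Virtual machine traffic: 50/200 x 10 Gbit/s = 2.5 Gbit/s
  iSCSI: 50/200 x 10 Gbit/s = 2.5 Gbit/s
A host limit, when set, additionally caps a pool at an absolute value in Mbit/s, regardless of its shares.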
Other references:
- VMware Whitepaper "VMware Network I/O Control: Architecture, Performance and Best Practices"
- A primer on NIOC: http://frankdenneman.nl/2013/01/17/a-primer-on-network-io-control/
- VMware Networking Blog on NIOC

Use command line tools to troubleshoot and identify configuration items from an existing vDS
See also the objective "Understand the use of command line tools to configure appropriate vDS settings on an ESXi host" in this section. Note: a very useful command is: net-dvs.
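A short sketch of commands that help identify vDS configuration items directly on an ESXi host, run from the ESXi Shell:

# List the distributed switches this host participates in,
# including their uplinks and MTU
esxcli network vswitch dvs vmware list

# Dump the host's local copy of the vDS configuration, down to individual ports
# (net-dvs is an unsupported diagnostic command)
net-dvs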