Dell Wyse Datacenter for Microsoft VDI and vWorkspace
Reference Architecture

12/19/2014  Version 1.2

THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Copyright © 2014 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell, the Dell logo, and the Dell badge are trademarks of Dell Inc. Microsoft, Windows and Hyper-V are registered trademarks of Microsoft Corporation in the United States and/or other countries. Intel is a registered trademark and Core is a trademark of Intel Corporation in the U.S. and other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

Dell - Internal Use - Confidential - Privileged

Contents

1 Introduction
  1.1 Purpose
  1.2 Scope
  1.3 What’s New in This Release
2 Solution Architecture Overview
  2.1 Deployment Options
  2.2 Solution Layers
    2.2.1 Networking
    2.2.2 Compute
    2.2.3 Management
    2.2.4 Storage
  2.3 Physical Architecture
    2.3.1 Local Tier 1
    2.3.2 Shared Tier 1
    2.3.3 Shared Infrastructure (VRTX)
    2.3.4 Graphics Acceleration
    2.3.5 Unified Communications
    2.3.6 Optional Compute Host
3 Hardware Components
  3.1 Network
    3.1.1 Dell Networking S55
  3.2 Servers
    3.2.1 PowerEdge T420
    3.2.2 PowerEdge R420
    3.2.3 PowerEdge R620
    3.2.4 PowerEdge R730
    3.2.5 PowerEdge R720
    3.2.6 PowerEdge VRTX
  3.3 Storage
    3.3.1 EqualLogic PS4100E
    3.3.2 EqualLogic PS6100E
    3.3.3 EqualLogic PS6100XS
    3.3.4 EqualLogic Configuration
    3.3.5 PowerVault MD1220
  3.4 Dell Wyse End Points
    3.4.1 Dell Wyse 3012-T10D
    3.4.2 Dell Wyse 5012-D10D
    3.4.3 Dell Wyse 5290-D90D8
    3.4.4 Dell Wyse 7290-Z90D8
    3.4.5 Dell Wyse 5212 AIO
4 Software Components
  4.1 Broker Technologies
    4.1.1 Microsoft Remote Desktop Services
    4.1.2 Dell vWorkspace
  4.2 Hypervisor Platforms
    4.2.1 Microsoft Hyper-V
  4.3 Operating Systems
    4.3.1 Microsoft Windows Server 2012 R2
    4.3.2 Microsoft Windows 8.1
    4.3.3 Microsoft Storage Spaces
  4.4 Application Virtualization
5 Solution Architecture for Microsoft Remote Desktop Services and Dell vWorkspace
  5.1 Overview
    5.1.1 RDS Deployment Options
    5.1.2 vWorkspace Deployment Options
  5.2 Compute Layer
    5.2.1 Local Tier 1
    5.2.2 Shared Tier 1
    5.2.3 Shared Infrastructure
    5.2.4 Graphics Acceleration
    5.2.5 Unified Communications
  5.3 Management Layer
    5.3.1 SQL Databases
    5.3.2 DNS
    5.3.3 Secure Gateway
  5.4 Storage Layer
    5.4.1 Local Tier 1
    5.4.2 Shared Tier 1
    5.4.3 Shared Tier 2
    5.4.4 Shared Infrastructure
  5.5 Network Layer
    5.5.1 Local Tier 1
    5.5.2 Shared Tier 1
    5.5.3 Shared Infrastructure
    5.5.4 NIC Teaming
  5.6 Scaling Guidance
    5.6.1 Shared Sessions: R720
    5.6.2 Pooled Desktops: R720
    5.6.3 Personal Desktops: R720
    5.6.4 Storage Spaces Option: R720
  5.7 Solution High Availability: R720
    5.7.1 Local Tier 1
    5.7.2 Shared Tier 1: R720
    5.7.3 Compute Layer
    5.7.4 Management Layer
    5.7.5 SQL Server High Availability
    5.7.6 Disaster Recovery and Business Continuity
  5.8 Microsoft RDS Communication Flow
  5.9 Dell vWorkspace Communication Flow
6 Customer Provided Stack Components
  6.1 Customer Provided Storage Requirements
  6.2 Customer Provided Switching Requirements
7 User Profile and Workload Characterization
  7.1 Profile Characterization Overview
    7.1.1 Standard Profile
    7.1.2 Enhanced Profile
    7.1.3 Professional Profile
    7.1.4 Shared Graphics Profile
  7.2 Workload Characterization Testing Details
8 Solution Performance and Testing
  8.1 Load Generation and Monitoring
    8.1.1 Login VSI – Login Consultants
    8.1.2 EqualLogic SAN HQ
    8.1.3 Microsoft Performance Monitor
  8.2 Testing and Validation
    8.2.1 Testing Process
  8.3 Test Results
    8.3.1 Summary of Results
    8.3.2 Provisioning Times
    8.3.3 Pooled Virtual Desktops (12G)
    8.3.4 Shared Session
    8.3.5 Personal Virtual Desktops
    8.3.6 Graphics Acceleration
    8.3.7 Unified Communications
Appendix A – 10-Seat Trial Kit
  Introduction
  Server Configuration
  Management and Compute Infrastructure
  Storage Configuration
Appendix B – Secure Gateway
Acknowledgements
About the Authors

1 Introduction

1.1 Purpose

This document describes:

• The Dell Wyse Datacenter for Microsoft VDI and vWorkspace Reference Architecture, scaling from 10 to 50,000 desktop virtualization users.
• A VDI Experience Proof of Concept (POC) solution, an entry-level configuration supporting up to 100 VDI users.
• A pilot solution for small-scale deployments supporting shared sessions, pooled virtual desktops, or personal virtual desktops.
• Production deployment options encompassing solution models including rack servers, local disks and iSCSI-based or direct-attached shared storage options, as well as a shared infrastructure platform.
This document addresses the architecture design, configuration and implementation considerations for the key components of the architecture required to deliver virtual desktops via Microsoft Windows Server 2012 R2 RDS or Dell vWorkspace 8.5 on Microsoft Hyper-V.

1.2 Scope

Relative to delivering the virtual desktop environment, the objectives of this document are to:

• Define the detailed technical design for the solution.
• Define the hardware and software requirements to support the design.
• Define the constraints which are relevant to the design.
• Define relevant risks, issues, assumptions and concessions, referencing existing ones where possible.
• Provide a breakdown of the design into key elements such that the reader receives an incremental or modular explanation of the design.
• Provide solution scaling and component selection guidance.

1.3 What’s New in This Release

The current Reference Architecture for Dell Wyse Datacenter includes the following new elements:

• Local Tier 1 Entry model for the Dell PowerEdge R730.
• Inclusion of the option for the Dell Wyse 5212.

2 Solution Architecture Overview

2.1 Deployment Options

Dell Wyse Datacenter solutions provide a number of deployment options to meet your desktop virtualization requirements. Our solution is able to provide a compelling desktop experience to a range of employees within your organization, from task workers to knowledge workers to power users.
The deployment options for Dell Wyse Datacenter solutions are:

• Shared Sessions
• Pooled Virtual Desktops (non-persistent)
• Personal Virtual Desktops (persistent)

Additionally, our solution includes options for users who require:

• Graphics Acceleration
• Unified Communications

2.2 Solution Layers

The Dell Wyse Datacenter solution leverages a core set of hardware and software components consisting of four primary layers:

• Networking Layer
• Compute Server Layer
• Management Server Layer
• Storage Layer

These components have been integrated and fully tested to provide the optimal balance of high performance and lowest cost per user. Additionally, the Dell Wyse Datacenter solution includes an approved extended list of optional components in the same categories. These components give IT departments the flexibility to custom tailor the solution for environments with unique VDI feature, scale or performance needs. The Dell Wyse Datacenter solution stack is designed to be a cost-effective starting point for IT departments looking to migrate gradually to a fully virtualized desktop environment. This approach allows you to grow the investment and commitment as needed, or as your IT staff becomes more comfortable with desktop virtualization architectures.

2.2.1 Networking

Only a single high-performance Dell Networking S55 48-port switch is required to get started in the Network layer. This switch hosts all solution traffic, consisting of 1Gb iSCSI and LAN sources for smaller stacks. Additional switches can be added and stacked as required to provide High Availability for the Network layer.

2.2.2 Compute

The Compute layer consists of the server resources responsible for hosting the RDS or vWorkspace user sessions, whether shared via RD Session Host (formerly Terminal Server) or pooled and personal desktop VMs on Hyper-V via RD Virtualization Host (RDVH). The RDVH role requires Hyper-V as well as hardware-assisted virtualization, so it must be installed into the parent partition of the Hyper-V instance. The RDSH role is enabled within dedicated VMs on the same or dedicated hosts in the Compute layer.

[Figure: example Compute host configurations - an R730 RD Virtualization Host with 2 x CPU, 384GB RAM, HyperCache and 16 x 15k HDs hosting 345 pooled Windows 8.1 desktops, or an R730 RD Session Host with 2 x CPU, 256GB RAM and 6 x 15k HDs hosting 4 x RDSH 2012 R2 VMs with 500 RDSH sessions.]

Note: Graphics configurations are provided in more detail below in Section 2.3.4.

2.2.3 Management

Management components are dedicated to their own layer so as to not negatively impact the user sessions running in the Compute layer. The Management layer hosts all the VMs necessary to support the RDS or vWorkspace infrastructure.

2.2.4 Storage

The Storage layer consists of options provided by the EqualLogic PS4100E, PS6100E and PS6100XS iSCSI arrays, as well as the PowerVault MD1220 direct-attached storage, to suit the Tier 1 and Tier 2 capacity and performance requirements of the desktop virtualization solution. For more information on Tier 1 and Tier 2, see section 2.3.

2.3 Physical Architecture

The core Dell Wyse Datacenter architecture consists of three models:

• Local Tier 1
• Shared Tier 1
• Shared Infrastructure

“Tier 1” in the Dell Wyse Datacenter context defines from which disk source the VDI sessions execute. The Local Tier 1 solution model provides for the execution of the Compute layer VMs on locally installed storage, and utilizes Shared Tier 2 storage for user profiles/data and management VM execution. This physical separation of resources provides clean, linear and predictable scaling without the need to reconfigure or move resources within the solution as you grow. For the Compute layer, high availability (HA) is provided in this model as N+1 only.

In the Shared Tier 1 solution model, a high-performance shared storage array is added to handle the execution of the Compute and Management layer VMs. HA includes failover clustering and live migration in this model.

Shared Infrastructure is a new solution model from Dell based upon the new PowerEdge VRTX platform that integrates networking, servers and storage in a singular chassis. In this paradigm, the Compute and Management layers are combined and utilize failover clustering across all hosts. Internal shared storage facilitates both Tier 1 and Tier 2 requirements in this model.
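As a planning aid, the deployment options each solution model supports can be captured in a small lookup structure. The sketch below is illustrative only; the structure and function names are not part of the reference architecture, they simply encode the per-model support described in this section:

```python
# Hypothetical helper encoding which Dell Wyse Datacenter solution models
# support which deployment options (per section 2.3 of this document).

SUPPORT_MATRIX = {
    # deployment option -> set of solution models that support it
    "Shared Sessions":           {"Local Tier 1", "Shared Tier 1", "Shared Infrastructure"},
    "Pooled Virtual Desktops":   {"Local Tier 1", "Shared Tier 1", "Shared Infrastructure"},
    "Personal Virtual Desktops": {"Shared Tier 1"},
    "Graphics Acceleration":     {"Local Tier 1", "Shared Tier 1", "Shared Infrastructure"},
    "Unified Communications":    {"Local Tier 1", "Shared Tier 1", "Shared Infrastructure"},
}

def supported(option: str, model: str) -> bool:
    """Return True if the deployment option is supported by the solution model."""
    return model in SUPPORT_MATRIX.get(option, set())

if __name__ == "__main__":
    print(supported("Personal Virtual Desktops", "Local Tier 1"))        # False
    print(supported("Pooled Virtual Desktops", "Shared Infrastructure"))  # True
```

A validation script built on such a structure could reject a requested configuration (for example, persistent desktops on Local Tier 1) before any hardware is ordered.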
The following table shows the supported deployment options for each solution model:

                            Local Tier 1    Shared Tier 1    Shared Infrastructure
Shared Sessions                  X                X                    X
Pooled Virtual Desktops          X                X                    X
Personal Virtual Desktops                         X
Graphics Acceleration            X                X                    X
Unified Communications           X                X                    X

2.3.1 Local Tier 1

The following solutions are based on Local Tier 1 architecture designs.

2.3.1.1 Entry

To get up and running as quickly as possible with pooled virtual desktops, Dell offers an extremely affordable solution capable of supporting up to 100 concurrent virtual desktop users for a minimal investment. This architecture leverages an inexpensive single-server platform intended to demonstrate the capabilities of VDI for a small environment or a focused POC/trial of Microsoft RDS or Dell vWorkspace. All VDI roles/sessions are hosted on a single server and can leverage existing legacy networking where applicable.
Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace .2. you can grow into a larger distributed architecture seamlessly incorporating the components and software from the initial POC. we offer a combined pilot solution of up to 100 pooled desktops or shared sessions.Confidential . If additional scaling is desired. Tier 2 storage can be included for scale ready deployments (shown below). As an option.Internal Use .1.2 Pilot For small scale deployments or pilot efforts intended to familiarize your organization with the Local Tier 1 solution architecture. This architecture is non-distributed with all VDI and management functions running on a single host.3.Reference Architecture . 6 Dell . R730 HyperCache* 16 x 15k HD 2.1 Or 10x 2 x CPU Hyper-V 15 x Hyper-V Compute Network The Local Tier 1 solution model provides a scalable rack-based configuration for production deployments that hosts pooled desktops or shared sessions on local disks in the Compute layer.000 x Shared Sessions 6 x 15k HD Shared Tier 1 The Shared Tier 1 solution model is used with an EqualLogic PS6100XS facilitating both Tier 1 and Tier 2 storage requirements.3. In the case of solutions using EqualLogic PS6100XS.3 S55 4x 48 ports 1G LAN iSCSI RD Virtualization Host** RD Virtualization Host** 2 x CPU R730 HyperCache* 384GB RAM 5.Internal Use .3.Production 2. This solution provides maximum Compute host user density for each broker technology and allows clean linear upward scaling. The pooled desktop or shared session VMs are assigned to a dedicated Compute host.Reference Architecture . The Compute hosts communicate with the SOFS via Microsoft’s SMB 3. The image below shows scalability for 5. 7 Dell .000 users.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace .0 protocol.2 256GB RAM RD Session Host VMs 5. A Microsoft Scale-out File Server (SOFS) is also introduced for capacity optimization via Microsoft's Data Deduplication technology.000 x Pooled Win8.1. 
the SOFS is attached to the storage array via iSCSI.Confidential . The following solutions are based on Shared Tier 1 architecture designs.2.3. 8 Dell . Refer to section 2.1 Pilot For pilot deployments of the Shared Tier 1 solution architecture.2. 2. a PowerVault MD1220 array with Microsoft Storage Spaces can optionally be used. Personal virtual desktops are the primary deployment option for this model but pooled virtual desktops and shared sessions are also supported.Reference Architecture .Internal Use . For the storage layer. up to 225 users can leverage an environment comprising of a single instance of each solution layer components.Confidential .Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace .3.3 for more information. failover clustering and Cluster Shared Volumes (CSVs) are introduced to provide a level of continual availability for the personal virtual desktops and management VMs.Confidential .000 x Win8. Scalability is achieved by adding additional nodes to each solution layer as required.2.Reference Architecture .Production 2.1 384GB RAM 2 x 15k HD Dell .3.2 9 S55 5x 48 ports 1G LAN iSCSI Hyper-V Cluster RD Virtualization Host** 15 x 2 x CPU Hyper-V Compute Network For production deployments.Internal Use . The image below shows scalability for 5000 users. R730 HyperCache* 5.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace . Reference Architecture .2.2.Internal Use . 10 Dell . The Compute hosts communicate with the Storage Spaces server via Microsoft’s SMB 3.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace .0 protocol. Microsoft Storage Spaces can be utilized to further reduce the overall storage costs.3. The Storage Spaces server is attached to the Dell PowerVault MD1220 storage array via SAS.3 Microsoft Storage Spaces Option As an additional shared tier 1 option.Confidential . 
2.2.Reference Architecture .3 Shared Infrastructure (VRTX) The Shared Infrastructure model provides integrated network switching and internal Direct Attached Storage (DAS) along with up to four blades. The following solutions are based on Shared Infrastructure architecture designs. [STAYS THE SAME BELOW.Internal Use .] 11 Dell .3. Virtual desktops or shared session VMs will execute on a 10 disk tier.1 Pilot Two blades and 15 total disks provide a pilot solution with combined Compute and Management layers for up to 250 pooled virtual desktops or 250 shared sessions.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace . The management VMs are segmented on a five disk tier.3. configured for performance.3.Confidential . up to four blades can be deployed for a total of 500 pooled virtual desktops or 500 shared sessions.Confidential .Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace .Internal Use .2 Production For production deployments.3. The internal storage will be tiered with five disks for management VMs and 10 disks per two blades installed. 12 Dell .Reference Architecture .2.3. up to 85 virtual desktop users can be supported by this shared graphics configuration.3.Internal Use . 13 Dell . Based upon solution model and configuration.2.5 Unified Communications A unified communications option can be added to the solution via Microsoft Lync by installing the Microsoft Lync client on the virtual desktop image and installing the Microsoft Lync VDI plugin on the client machine.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace .3.Confidential .Reference Architecture .4 Graphics Acceleration A graphics acceleration option can be added to the solution by enabling Microsoft RemoteFX vGPU support and adding up to three physical graphics cards to the Compute host.  Local Tier 1  Shared Tier 1  Shared Infrastructure 2. 14 Dell . 
2.3.6 Optional Compute Host

The PowerEdge R420 server can optionally be used as the dedicated Compute host in the Local and Shared Tier 1 configurations. This configuration yields up to 100 personal/pooled virtual desktops or 100 shared sessions. It is a great option to receive all the benefits of the Local/Shared Tier 1 solution when lower user density is needed, and it allows the flexibility to scale by adding additional hosts. The R420 cannot be used for graphics acceleration.

3 Hardware Components

3.1 Network

The following sections contain the core network components for the Dell Wyse Datacenter solutions. General uplink cabling guidance to consider in all cases: TwinAx is very cost effective for short 10Gb runs; for longer runs, use fiber with SFPs.

3.1.1 Dell Networking S55

The Dell Networking S-Series S55 1/10GbE Top of Rack (ToR) switch is optimized for lowering operational costs while increasing scalability and improving manageability at the network edge. The high-density S55 design provides 48 GbE access ports with up to four modular 10GbE uplinks in just 1U to conserve rack space.

Model: Dell Networking S55
Features: 44 x BaseT (10/100/1000) + 4 x SFP; redundant PSUs; 12Gb or 24Gb stacking (up to eight switches); 2 x modular slots for 10Gb uplinks or stacking modules
Options: 4 x 1Gb SFP ports that support copper or fiber
Uses: ToR switch for LAN and iSCSI in the Local Tier 1 solution

Guidance:
 10Gb uplinks to a core or distribution switch are the preferred design choice using the rear 10Gb uplink modules. If 10Gb to a core or distribution switch is unavailable, the front 4 x 1Gb SFP ports can be used.
 The front four SFP ports can support copper cabling and can be upgraded to optical if a longer run is needed.
The S55 incorporates multiple architectural features that optimize data center network efficiency and reliability, including IO-panel-to-PSU or PSU-to-IO-panel airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans. A “scale-as-you-grow” ToR solution that is simple to deploy and manage, up to eight S55 switches can be stacked to create a single logical switch by utilizing Dell Networking's stacking technology and high-speed stacking modules. For additional information on the S55 switch and Dell Networking, please visit: LINK

3.1.1.1 Dell Networking S55 Stacking

The ToR switches in the Network layer can optionally be stacked with additional switches if greater port count or redundancy is desired. Each switch needs a stacking module plugged into a rear bay and connected with a stacking cable. The best practice for switch stacks greater than two is to cable in a ring configuration, with the last switch in the stack cabled back to the first. Uplinks need to be configured on all switches in the stack back to the core to provide redundancy and failure protection. Please reference the following Dell Networking whitepaper for specifics on stacking best practices and configuration: LINK

3.2 Servers

3.2.1 PowerEdge T420

The Dell PowerEdge T420 is the tested and validated server platform of choice for the Entry or POC solution, providing high performance at a relatively low price of entry. Supporting the Intel Xeon E5-2400 and E5-2400 v2 processor families and up to 384GB RAM, the T420 provides a solid server platform to get started with VDI. For additional information about the Dell PowerEdge T420, please visit: LINK
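The ring-cabling best practice for S55 stacks in section 3.1.1.1 can be expressed as a small validity check. The cable representation below (directed pairs of member indices) is a hypothetical model for illustration, not a Dell Networking API.

```python
# Toy check of the stacking guidance: stacks larger than two switches
# should be cabled in a ring, with the last switch cabled back to the
# first, and the S55 supports at most eight stack members.
MAX_STACK_MEMBERS = 8

def is_valid_ring(cables, members):
    if members < 2 or members > MAX_STACK_MEMBERS:
        return False
    # A ring connects each switch to the next, and the last back to the first.
    expected = {(i, (i + 1) % members) for i in range(members)}
    return set(cables) == expected

# Three switches: 0 -> 1, 1 -> 2, and 2 cabled back to 0 closes the ring.
three_switch_ring = [(0, 1), (1, 2), (2, 0)]
```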
3.2.2 PowerEdge R420

The Dell PowerEdge R420 is a dual socket server offering powerful performance and scalability in a dense 1U rack form factor. The R420 features the Intel Xeon E5-2400 and E5-2400 v2 processor families, a substantial memory footprint of up to 384GB RAM, and support for up to 8 x 2.5” SAS disks. The R420 server is well-suited for virtualized environments. For additional information about the Dell PowerEdge R420, please visit: LINK

3.2.3 PowerEdge R620

The Dell PowerEdge R620 is a hyper-dense, dual socket, 1U rack server that has a large memory footprint and impressive I/O options, making it an outstanding general-purpose platform. The R620 is powered by the Intel Xeon E5-2600 and E5-2600 v2 product families, supports up to 768GB RAM, and can utilize up to 8 x 2.5” SAS disks. For additional information about the Dell PowerEdge R620, please visit: LINK

3.2.4 PowerEdge R730

Our best-in-class (13G) Dell PowerEdge R730 provides the highest desktop virtualization density and performance for a rack Compute host. With the Intel® Xeon® processor E5-2600 v3 product family and up to 24 DIMMs of DDR4 RAM, the R730 has the processing cycles and threads and the large memory footprint necessary to deliver more, larger and higher-performing virtual machines for data centers and cloud platforms. For additional information about the Dell PowerEdge R730, please visit: LINK

[Figure: R730 drive options — 16, 12, or 2 x 300GB 15K SAS drives; dual 750W PSUs; iDRAC; 8 x 1Gb network ports.]

3.2.5 PowerEdge R720

The Dell PowerEdge R720 (12G) provides high desktop density and performance for a rack Compute host, offering uncompromising performance and scalability in a 2U form factor. This dual socket CPU platform runs the fastest Intel Xeon E5-2600 and E5-2600 v2 families of processors, can host up to 768GB RAM, and supports up to 16 x 2.5” SAS disks. For additional information about the Dell PowerEdge R720, please visit: LINK

3.2.6 PowerEdge VRTX

The shared infrastructure platform for the Dell Wyse Datacenter solution is the all-in-one Dell PowerEdge VRTX. Configurable with up to four PowerEdge blade servers and 25 x 2.5” SAS disks in a consolidated 5U form factor, the PowerEdge VRTX provides flexible performance and capacity for small and midsize businesses as well as remote/branch offices of larger enterprises. For additional information about the Dell PowerEdge VRTX, please visit: LINK

3.2.6.1 PowerEdge M620

The PowerEdge M620 is a feature-rich, dual-processor, half-height blade server which offers a blend of density, performance, efficiency and scalability. The M620 offers remarkable computational density, scaling up to 20 cores with two socket Intel Xeon processors and 24 DIMMs (768GB RAM) of DDR3 memory in an extremely compact half-height blade form factor. These features make the M620 an ideal server for PowerEdge VRTX deployments. For additional information about the PowerEdge M620, please visit: LINK

3.3 Storage

3.3.1 EqualLogic PS4100E

The EqualLogic PS4100E array can be used for Tier 2 storage of management VMs and user data for up to 500 users. For additional information on the PS4100E array, please visit: LINK

Model: EqualLogic PS4100E
Features: 12 drive bays (NL-SAS/7200 RPM), dual HA controllers, Snaps/Clones, Async replication, SAN HQ, 1Gb
Options: 12TB – 12 x 1TB HDs; 24TB – 12 x 2TB HDs; 36TB – 12 x 3TB HDs
Uses: Tier 2 array for 500 users or less in the Local Tier 1 solution model (1Gb)

[Figure: PS4100E — 12 x NL-SAS hard drives; dual control modules, each with management, Ethernet 0/1 and serial ports; 1Gb Ethernet and management ports.]

3.3.2 EqualLogic PS6100E

The EqualLogic PS6100E array can be used for Tier 2 storage of management VMs and user data. For additional information on the PS6100E array, please visit: LINK

Model: EqualLogic PS6100E
Features: 24 drive bays (NL-SAS/7200 RPM), dual HA controllers, Snaps/Clones, Async replication, SAN HQ, 1Gb, 4U chassis
Options: 24TB – 24 x 1TB HDs; 48TB – 24 x 2TB HDs; 72TB – 24 x 3TB HDs; 96TB – 24 x 4TB HDs
Uses: Tier 2 array for shared Tier 2 data in the Local Tier 1 solution model (1Gb)

3.3.3 EqualLogic PS6100XS

For Shared Tier 1, the PS6100XS array is a Dell Fluid Data™ solution with a virtualized scale-out architecture that delivers enhanced storage performance and reliability that is easy to manage and scale for future needs. Both high-speed, low-latency solid-state disk (SSD) technology and high-capacity HDDs are utilized from a single chassis. For additional information on the PS6100XS array, please visit: LINK

Model: EqualLogic PS6100XS
Features: 24 drive hybrid array (SSD + 10K SAS), dual HA controllers, Snaps/Clones, Async replication, SAN HQ, 1Gb
Options: 13TB – 7 x 400GB SSD + 17 x 600GB 10K SAS; 26TB – 7 x 800GB SSD + 17 x 1.2TB 10K SAS
Uses: Tier 1 and Tier 2 array for the Shared Tier 1 solution model (1Gb – iSCSI); the 26TB option suits deployments requiring greater per user capacity

3.3.4 EqualLogic Configuration

Each tier of EqualLogic storage is to be managed as a separate pool or group to isolate specific workloads. Manage shared Tier 1 arrays used for hosting VDI sessions together, while managing shared Tier 2 arrays used for hosting management server role VMs and user data together.

3.3.5 PowerVault MD1220

For the Shared Tier 1 with Microsoft Storage Spaces option, the PowerVault MD1220 direct-attached storage array is used to satisfy the Tier 1 and Tier 2 storage requirements. For more information, see the LINK.

Model: PowerVault MD1220
Features: 24 drive bays with dual controllers; (4) 400GB Toshiba SSD drives; (20) 7.2k 1TB SAS drives
Uses: Tier 1 and Tier 2 array for the Shared Tier 1 solution model using Microsoft Storage Spaces
NOTE: This solution requires multi-path connectivity to the array.
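The pool-separation guidance in section 3.3.4 amounts to keeping Tier 1 arrays (VDI sessions) and Tier 2 arrays (management VMs and user data) in separately managed pools. The sketch below illustrates that grouping; array names and pool labels are placeholders, not real EqualLogic group configuration.

```python
# Sketch of the EqualLogic tier-separation rule: VDI-session arrays and
# management/user-data arrays go into separately managed pools.
def assign_pools(arrays):
    """Map {array_name: tier} to two separately managed pools."""
    pools = {"tier1-vdi": [], "tier2-mgmt": []}
    for name, tier in sorted(arrays.items()):
        pools["tier1-vdi" if tier == 1 else "tier2-mgmt"].append(name)
    return pools

# Hypothetical farm: two hybrid Tier 1 arrays, one Tier 2 array.
example = {"PS6100XS-01": 1, "PS6100XS-02": 1, "PS4100E-01": 2}
```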
3.4 Dell Wyse End Points

3.4.1 Dell Wyse 3012-T10D

The Dell Wyse 3012-T10D handles everyday tasks with ease and also provides multimedia acceleration for task workers who need video. Users will enjoy integrated graphics processing and additional WMV9 & H264 video decoding capabilities from the Marvell ARMADA™ PXA2128 1.2 GHz Dual Core ARM System-on-Chip (SoC) processor. In addition, the Dell Wyse 3012-T10D is one of the only affordable thin clients to support dual monitors with monitor rotation, enabling increased productivity by providing an extensive view of task work. Averaging 9 watts, with reduced power usage and emissions, its small size enables discrete mounting options: under desks, to walls, and behind monitors. It also supports legacy peripherals via an optional USB adapter. It's also great for kiosks and multi-touch displays in a wide variety of environments, including manufacturing, hospitality, retail, and healthcare. Kiosk displays will look great on a thin client that is Microsoft RemoteFX®, Citrix® HDX, VMware PCoIP, and HD video-enabled.

3.4.2 Dell Wyse 5012-D10D

Ultra-high performance in a compact package: power users and knowledge workers will love the exceptionally fast speed and power from the new dual-core driven Dell Wyse 5012-D10D. With a 1.4 GHz AMD G series APU with integrated graphics engine, the Dell Wyse 5012-D10D handles everything from demanding multimedia applications to business content creation and consumption with ease. The Dell Wyse 5012-D10D even supports power users' most demanding workloads: high quality audio and video, unified communications, CAD/CAM, HD Flash and multimedia, 3D simulation and modelling, and dual digital high resolution displays with rotation. Operating with less than 9 watts of energy, each and every Dell Wyse 5012-D10D contributes – quietly and coolly – to lowering your organization's carbon footprint. The Dell Wyse 5012-D10D is Citrix® HDX, Microsoft® RemoteFX, and VMware® Horizon View certified.

3.4.3 Dell Wyse 5290-D90D8

A strong, reliable thin client, the Dell Wyse 5290-D90D8 packs dual-core processing power into a compact form factor for knowledge workers who need performance for demanding virtual Windows® desktops and cloud applications. It features dual-core processing power and an integrated graphics engine for a fulfilling Windows® 8 user experience, with Gigabit Ethernet and integrated dual band WiFi options. Knowledge workers will enjoy rich content creation and consumption as well as everyday multimedia, and smooth roaming with super-fast 802.11 a/b/g/n wireless at 2.4 and 5 GHz with dual antennas. Using less than 7 watts of electricity, the Dell Wyse 5290-D90D8 offers cool, quiet operations, potentially lowering your overall carbon footprint.

3.4.4 Dell Wyse 7290-Z90D8

The versatile Dell Wyse 7290-Z90D8 gives people the freedom to mix and match a broad range of legacy and cutting edge peripheral devices. With four USB 2.0 ports, plus ports for parallel, serial, and USB 3.0 devices offering fast, flexible connectivity, users can link to their peripherals and quickly connect to the network while working with processing-intensive, graphics-rich applications. It even supports a second attached display for those who need a dual monitor configuration. Designing smooth playback of high bit-rate HD video and graphics in such a small box hasn't been at the expense of energy consumption and heat emissions either: its energy efficient processor – which out-performs other more power-hungry alternatives – and silent fan-less design all contribute to help lower an organization's carbon footprint through power requirements that are a fraction of traditional desktop PCs. Like all Dell Wyse cloud clients, the new Dell Wyse 7290-Z90D8 is one cool operator, creating cool workspaces in every respect.

3.4.5 Dell Wyse 5212 AIO

The Wyse 5212 AIO all-in-one (AIO) offers versatile connectivity options for use in a wide range of industries. Built-in speakers, a camera and a microphone make video conferencing and desktop communication simple and easy. A simple one-cord design and out-of-box automatic setup makes deployment effortless, while remote management from a simple file server, Wyse Device Manager (WDM), or Wyse Cloud Client Manager can help lower your total cost of ownership as you grow from just a few thin clients to tens of thousands.

4 Software Components

4.1 Broker Technologies

4.1.1 Microsoft Remote Desktop Services

Microsoft Remote Desktop Services (RDS) accelerates and extends desktop and application deployments to any device, improves remote worker efficiency, and helps secure critical intellectual property while simplifying regulatory compliance. Remote Desktop Services enables virtual desktop infrastructure (VDI), session-based desktops, and applications, allowing users to work anywhere. In Windows Server 2012 R2, Remote Desktop Services offers enhanced support for session shadowing, online storage deduplication, dynamic display handling, quick reconnect for remote desktop clients, improved compression and bandwidth usage, improved RemoteApp behavior, and RemoteFX virtualized GPU support for DX11.

The core components of RDS are:

 Remote Desktop Virtualization Host
Remote Desktop Virtualization Host (RD Virtualization Host) integrates with Hyper-V to deploy pooled or personal virtual desktop collections within your organization.

 Remote Desktop Session Host
Remote Desktop Session Host (RD Session Host) enables a server to host RemoteApp programs or session-based (shared) desktops. Users can connect to RD Session Host servers in a session collection to run programs, save files, and use resources on those servers.

 Remote Desktop Connection Broker
Remote Desktop Connection Broker (RD Connection Broker) allows users to connect to their existing virtual desktops, RemoteApp programs, and session-based desktops. RD Connection Broker also enables you to evenly distribute the load among RD Session Host servers in a session collection or pooled virtual desktops in a pooled virtual desktop collection.

 Remote Desktop Gateway
Remote Desktop Gateway (RD Gateway) enables authorized users to connect to virtual desktops, RemoteApp programs, and session-based desktops on an internal corporate network from any Internet-connected device.

 Remote Desktop Web Access
Remote Desktop Web Access (RD Web Access) enables users to access RemoteApp and Desktop Connection through the Start menu on a computer that is running Windows 8 or Windows 7, or through a web browser. RemoteApp and Desktop Connection provides a customized view of RemoteApp programs and session-based desktops in a session collection, and RemoteApp programs and virtual desktops in a virtual desktop collection.

 Remote Desktop Licensing
Remote Desktop Licensing (RD Licensing) manages the licenses required to connect to a Remote Desktop Session Host server or a virtual desktop. You can use RD Licensing to install, issue, and track the availability of licenses.
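The RD Connection Broker's job described above — finding an available desktop in a pooled collection and directing the user to it — can be illustrated with a toy allocator. This is a conceptual sketch of the brokering idea, not the RDS implementation or API.

```python
# Conceptual sketch of pooled-desktop brokering: find a free VM in the
# collection, assign it to the requesting user, and return its name.
def broker_connect(user, pool):
    """Assign the first available desktop in the pool to the user."""
    for desktop, assigned_to in pool.items():
        if assigned_to is None:
            pool[desktop] = user  # mark the desktop as in use
            return desktop
    raise RuntimeError("no available desktops in the collection")

# Hypothetical pooled collection: vm-01 already in use, two VMs free.
pool = {"vm-01": "alice", "vm-02": None, "vm-03": None}
```

A real broker additionally balances load across hosts and reconnects users to existing sessions; this sketch shows only the assign-and-redirect step.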
For additional information about the enhancements in RDS in Microsoft Windows Server 2012 R2, please visit: LINK

4.1.2 Dell vWorkspace

Dell vWorkspace™ is an enterprise class desktop virtualization management solution which enables blended deployment and support of virtual desktops, shared sessions and virtualized applications, providing several user centric features. The core components of vWorkspace are:

 Connection Broker
The vWorkspace Connection Broker helps users connect to their virtual desktops, applications, and other hosted resource sessions. The user's endpoint sends a request to the connection broker to access their virtual environment. The connection broker processes the request by searching for available desktops, and then redirects the user to the available managed desktop or application.

 Data Collector Service
The vWorkspace Data Collector service is a Windows service on RDSH servers, virtual desktops, Connection Broker servers, Web Access servers, Secure Gateway servers, and Hyper-V hosts in a vWorkspace farm that sends a heartbeat signal and other information to the connection broker.

 Hyper-V Catalyst Components
vWorkspace Hyper-V Catalyst Components increase the scalability and performance of virtual computers on Hyper-V hosts. The Hyper-V Catalyst Components consist of two components: HyperCache and HyperDeploy. HyperCache provides read IOPS savings and improves virtual desktop performance through selective RAM caching of parent VHDs. HyperDeploy manages parent VHD deployment to relevant Hyper-V hosts and enables instant cloning of Hyper-V virtual computers.

 Management Database
The vWorkspace Management Database is required to perform administrative functions. The management database stores all the information relevant to a vWorkspace farm, such as configuration data, administrative tasks and results, and information regarding client connections to virtual desktops and RDSH environments.

 Management Console
The vWorkspace Management Console is an integrated graphical interface that helps you perform various management and administrative functions and can be installed on any workstation or server.

 Diagnostics and Monitoring
Built on Dell Software's Foglight platform, vWorkspace Diagnostics and Monitoring provides real-time and historical data for user experience, hypervisor performance, RDSH servers/applications, profile servers, EOP Print servers, and farm databases.

 User Profile Management
vWorkspace User Profile Management uses virtual user profiles as an alternative to roaming profiles in a Microsoft Windows environment, including virtual desktops and RD Session Hosts. The virtual user profiles eliminate potential profile corruption and accelerate logon and logoff times by combining the use of a mandatory profile with a custom persistence layer designed to preserve user profile settings between sessions.

 Web Access
vWorkspace Web Access is a web application that acts as a web-based portal to a vWorkspace farm. It helps users to retrieve the list of available applications and desktops by using their web browser. After successful authentication, their published desktops and applications are displayed in the web browser.

 Secure Gateway
vWorkspace Secure Gateway is an SSL gateway that simplifies the deployment of applications over the Internet and can provide proxy connections to vWorkspace components such as RDP sessions, the Web Access client, and connection brokers.

 EOP Print Server
vWorkspace EOP Print is a single-driver printing solution that satisfies both client-side and network printing needs in a vWorkspace environment by providing bandwidth usage control, intelligent font embedding, native printer feature support, and clientless support for LAN connected print servers and remote site print servers.

vWorkspace 8.5 includes support for Microsoft Windows Server 2012 R2, Lync 2013, and App-V 5.0, as well as several enhancements to Diagnostics and Monitoring, Hyper-V Catalyst Components, Dell EOP, security and more. For additional information about the enhancements in Dell vWorkspace 8.5, please visit: LINK

4.2 Hypervisor Platforms

4.2.1 Microsoft Hyper-V

Microsoft Hyper-V™ is a scalable and feature-rich virtualization platform that helps organizations of all sizes realize considerable cost savings and operational efficiencies. Hyper-V in Windows Server 2012 R2 now supports an industry leading 320 logical processors, 4TB of physical memory, 64TB virtual disks, and 1,024 active virtual machines per host, as well as 64-node clusters and 8,000 virtual machines per cluster. For additional information about the enhancements to Hyper-V in Microsoft Windows Server 2012 R2, please visit: LINK

4.3 Operating Systems

4.3.1 Microsoft Windows Server 2012 R2

Microsoft Windows Server 2012 R2 is the latest iteration of the Windows Server operating system environment. This release introduces a host of new features and enhancements, including virtualization, storage, networking, management, security and applications. With this release also comes the introduction of the Microsoft Cloud OS and an update of products and services to further enable customers' shift to cloud enablement. For additional information about the enhancements in Microsoft Windows Server 2012 R2, please visit: LINK

4.3.2 Microsoft Windows 8.1

Windows 8.1 is an update to the latest Windows desktop operating system. With updates to the user interface, applications, online services, security and more, Windows 8.1 helps keep a consistent user experience across virtual and physical instances. For additional information about the enhancements in Microsoft Windows 8.1, please visit: LINK

4.3.3 Microsoft Storage Spaces

Storage Spaces is a virtualization capability introduced in Windows Server 2012 which enables users to dramatically reduce the cost of highly available storage for virtualized or physical deployments. Using Storage Spaces, administrators can deploy a resilient and highly available storage system utilizing cost-efficient hardware – commodity SAS JBOD (“Just a Bunch Of Disks”) enclosures and commodity SAS adapters – while also providing high resiliency and operational simplicity. This solution requires the new Storage Spaces feature set provided by Windows Server 2012 R2. For additional information about Microsoft Storage Spaces, please visit: LINK

For information on deploying Microsoft Windows Server 2012 R2 Storage Spaces using the Dell PowerVault MD1220, review the administration guide: LINK

4.4 Application Virtualization

Microsoft Application Virtualization (App-V) provides multiple methods to deliver virtualized applications to RDS environments. Once an application has been packaged using the Microsoft Application Virtualization Sequencer, it can be saved to removable media, streamed to desktop clients, or presented to session-based users on an RDSH host. Since virtualized applications are never installed on an end point, App-V can help reduce the costs and time associated with managing gold master VM and PC images with integrated applications. App-V also removes potential conflicts, such as legacy application compatibility. App-V provides a scalable framework that can be managed by System Center Configuration Manager for a complete management solution across physical desktops, virtual desktops, and connected as well as disconnected clients. To learn more about application virtualization and how it integrates into an RDS environment, please visit: LINK

For more information about vWorkspace and App-V integration, review the administration guide: LINK
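The HyperCache behavior described in section 4.1.2 — serving repeated reads of shared parent-VHD blocks from RAM instead of disk — can be modeled with a toy cache. This is a conceptual sketch of the idea, not the vWorkspace implementation.

```python
# Toy model of parent-VHD read caching: the first read of a block goes to
# disk and populates the RAM cache; subsequent reads are cache hits.
def simulate_reads(blocks):
    """Return (disk_reads, cache_hits) for a sequence of block reads."""
    cache = set()
    disk_reads = cache_hits = 0
    for block in blocks:
        if block in cache:
            cache_hits += 1
        else:
            disk_reads += 1
            cache.add(block)  # cache the parent-VHD block in RAM
    return disk_reads, cache_hits

# Many clones booting from one parent VHD re-read the same blocks:
boot_storm = [1, 2, 3] * 10  # 30 reads over 3 unique blocks
```

In this toy boot storm only the first three reads hit disk; the remaining 27 are served from cache, which is the IOPS saving HyperCache aims at.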
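For the Storage Spaces option in section 4.3.3, usable capacity depends on the resiliency setting chosen for each virtual disk. The helper below applies the generic textbook approximations (a two-way mirror halves raw capacity; single parity keeps roughly (n−1)/n of it); it is a back-of-the-envelope sketch, not Microsoft or Dell sizing guidance.

```python
# Rough usable-capacity estimates for common Storage Spaces resiliency
# settings; the formulas are generic approximations.
def usable_tb(disks: int, disk_tb: float, resiliency: str) -> float:
    raw = disks * disk_tb
    if resiliency == "simple":          # no redundancy
        return raw
    if resiliency == "two-way-mirror":  # every extent stored twice
        return raw / 2
    if resiliency == "parity":          # roughly one disk's worth of parity
        return raw * (disks - 1) / disks
    raise ValueError("unknown resiliency: " + resiliency)
```

Applied to the MD1220's 20 x 1TB SAS tier (section 3.3.5), a two-way mirror would yield about 10TB usable and single parity about 19TB, before file-system overhead.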
hosted by the RDSH role (formerly Terminal Services).  Computer Groups Types – Computer Groups can be for virtual or physical computers running Windows XP Pro to Windows 8 Enterprise or Server 2003 R2 to 2012 R2.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace . Additionally there is limited support for Linux computer groups. Each user VM is assigned a dedicated slice of the host server’s resources to guarantee the performance of each desktop. simple.Confidential . simple. but Linux is outside of the scope of this reference architecture. (An RDS CAL is required for each user or device accessing this type of environment. This is the most cost effective option and a great place to start with Microsoft RDS. 2008 (32 or 64 bit). RD Session Host Sessions are well-suited for task based workers using office productivity and line of business applications. (An RDS CAL is required for each user or device accessing this type of environment.) 5. Applications can be built into gold images or published via RemoteApp. The Compute layer is where VDI desktop VMs execute.1.Internal Use . without needs for supporting complex peripheral devices or applications with extreme memory or CPU requirements.Solution Architecture for Microsoft Remote Desktop Services and Dell vWorkspace 5 5.Reference Architecture . 2012. Each RDP-based session shares the total available server resources with all other sessions logged in concurrently on the server.1 Overview This solution architecture follows a distributed model where solution components exist in layers. vWorkspace RD Session Hosts can deliver full desktops or seamless application sessions from Windows Server Virtual Machines running Windows Server 2003 R2 (32 or 64 Bit).)  Pooled VMs are the non-persistent user desktop VMs traditionally associated with VDI. the Management layer being dedicated to the broker management role VMs.  Sessions. 
All changes made by Personal VM users will persist through logoffs and reboots making this a truly personalized computing experience. all within a single. while inextricably linked. scale independently.)  Personal VMs are persistent 1-to-1 desktop VMs assigned to a specific entitled user. Desktop Clouds are elastic in nature and automatically expand as additional Hyper-V Compute Hosts are added to vWorkspace. 5.2 vWorkspace Deployment Options Dell vWorkspace provides a number of delivery options to meet your needs. The desktop VM is dedicated to a single user while in use then returned to the pool at logoff or reboot and reset to a pristine gold image state for the next user. all within a single. 2008 R2. wizard-driven environment that is easy to set up and manage.1 RDS Deployment Options Microsoft RDS provides a number of VDI options to meet your needs.1. New Compute Dell . o 32 Desktop Cloud – provides users with access to a single virtual machine from a pool of available virtual machines on one or more non-clustered Hyper-V Servers with local storage. wizard-driven environment that is easy to set up and manage. and 2012 R2. (An RDS CAL is required for each user or device accessing this type of environment.  RD Session Host Sessions – Provide easy access to a densely shared session environment. Each desktop VM is assigned a dedicated portion of the host server’s resources to guarantee the performance of each desktop. or rebooted and reset to a pristine gold image state for the next user. o Temporary Virtual Desktop – are the non-persistent user desktop VMs traditionally associated with VDI.Microsoft Software Assurance covered device accessing this type of environment. Physical Computers can be persistently or temporarily assigned to users or devices. from which they provision new virtual machines locally. 33 Dell . All changes made by Personal VM users will persist through logoffs and reboots making this a truly personalized computing experience. 
and at logoff are re-provisioned from the parent VHDX (instant copy of the virtual machine template). o Physical Computers – Like Virtual Desktop Computer Groups.Reference Architecture . o Persistent Virtual Desktop Groups – 1-to-1 desktop VMs assigned to a specific entitled user or device.Hosts automatically receive instant copies of the virtual machine templates. A Microsoft VDA license is required for each non-Microsoft Software Assurance covered device accessing this type of environment. Applications can be built into gold images or published via RemoteApp.Confidential .Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace . Desktop Cloud virtual machines are well suited for task based workers using office productivity and line of business applications. The desktop VM is dedicated to a single user or device while in use then returned to the computer group at logoff. Common use cases for connections to physical computers are remote software development and remote access to one’s office PC. Please contact Dell or Microsoft for more information on licensing requirements for VDI.Internal Use . A Microsoft VDA license is required for each non. Desktop Cloud virtual machines are temporarily assigned to a user or device at logon. Confidential . this model supports rack servers only. Due to the local disk requirement in the Compute layer.0Ghz) 2 x Intel Xeon E5-2690v2 Processor (3. 8GB SD 2 x 750W PSUs 2 x 750W PSUs Dell . The physical memory configuration varies slightly as to whether it will be hosting pooled desktops or RDSH VMs as seen below.6Ghz) 2 x Intel Xeon E5-2697v2 Processor (2. Hot Plug.2 Compute Layer 5. 8GB SD iDRAC8.5. vWorkspace Hyper-V Catalyst Components or the RDVH role for RDS is installed in the Hyper-V parent partition. Hot Plug.Privileged Dell Wyse Datacenter for Microsoft VDI and vWorkspace . 
Local Tier 1 Compute Host (Pooled) – PowerEdge R730
2 x Intel Xeon E5-2697v2 Processor (2.7GHz)
384GB Memory (24 x 16GB DIMMs @ 1866MHz)
Microsoft Windows Server 2012 R2 Hyper-V
16 x 300GB 15K SAS 6Gbps disks
PERC H730 RAID Controller, 1GB NV Cache – RAID10
Broadcom 5720 1Gb QP Network Daughter Card (LAN)
Broadcom 5719 1Gb QP Network Interface Card (LAN)
iDRAC8 Enterprise with vFlash, 8GB SD
2 x 750W PSUs (dual, hot-plug, redundant)

Local Tier 1 Compute Host (Pooled) – PowerEdge R720
2 x Intel Xeon E5-2690v2 Processor (3.0GHz)
256GB Memory (16 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
12 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID10
Broadcom 5720 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 750W PSUs

Local Tier 1 Compute Host (RDSH) – PowerEdge R730
2 x Intel Xeon E5-2697v2 Processor (2.7GHz)
256GB Memory (16 x 16GB DIMMs @ 1866MHz)
Microsoft Windows Server 2012 R2 Hyper-V
6 x 300GB 15K SAS 6Gbps disks
PERC H730 RAID Controller, 1GB NV Cache – RAID10
Broadcom 5720 1Gb QP Network Daughter Card (LAN)
Broadcom 5719 1Gb QP Network Interface Card (LAN)
iDRAC8 Enterprise with vFlash, 8GB SD
2 x 750W PSUs (dual, hot-plug, redundant)

Local Tier 1 Compute Host (RDSH) – PowerEdge R720
2 x Intel Xeon E5-2690v2 Processor (3.0GHz)
128GB Memory (8 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
12 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID10
Broadcom 5720 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 750W PSUs
When using the optional PowerEdge R420 as the Compute host, the hardware configuration is the same regardless of whether the host is used for virtual desktops or shared sessions.

Local Tier 1 Compute Host – PowerEdge R420
2 x Intel Xeon E5-2450v2 Processor (2.5GHz)
128GB Memory (8 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
8 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID10
Broadcom 5719 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC Low Profile (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 550W PSUs

For the Entry solution, all VDI server roles and desktop sessions are hosted on a single server, so there is no need for external storage. Higher scale and HA options are not offered with this bundle.

100 User Compute Host – PowerEdge T420
2 x Intel Xeon E5-2450v2 Processor (2.5GHz)
128GB Memory (8 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
8 x 300GB 15K SAS 6Gbps disks – RAID 10 (OS + VDI)
PERC H710P Integrated RAID Controller
2 x Broadcom 5719 1Gb QP NIC (LAN)
2 x 750W PSUs

5.2.2 Shared Tier 1

In the Shared Tier 1 model, virtual desktop sessions execute on shared storage, so there is no need for additional local disks on each server. Dedicated connectivity to the shared storage is facilitated via Microsoft Scale-Out File Servers. All other Compute host configurations are the same as seen in the Local Tier 1 model.

Shared Tier 1 Compute Host – PowerEdge R730
2 x Intel Xeon E5-2697v2 Processor (2.7GHz)
256GB Memory (16 x 16GB DIMMs @ 1866MHz)
Microsoft Windows Server 2012 R2 Hyper-V
2 x 300GB 15K SAS 6Gbps disks
PERC H730 RAID Controller, 1GB NV Cache – RAID1
Broadcom 5720 1Gb QP Network Daughter Card (LAN)
Broadcom 5719 1Gb QP Network Interface Card (LAN)
iDRAC8 Enterprise with vFlash, 8GB SD
2 x 750W PSUs (dual, hot-plug, redundant)
Shared Tier 1 Compute Host – PowerEdge R720
2 x Intel Xeon E5-2690v2 Processor (3.0GHz)
256GB Memory (16 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 5720 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 750W PSUs

PowerEdge R420 Compute host option:

Shared Tier 1 Compute Host – PowerEdge R420
2 x Intel Xeon E5-2450v2 Processor (2.5GHz)
128GB Memory (8 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 5719 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC Low Profile (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 550W PSUs

5.2.3 Shared Infrastructure

The Shared Infrastructure model is similar to the Shared Tier 1 model in that virtual desktop sessions execute on shared storage. The PowerEdge M620 blade servers are configured in a similar fashion to the R720s used for Shared Tier 1.

Shared Infrastructure Compute/Mgmt Host – PowerEdge M620
2 x Intel Xeon E5-2690v2 Processor (3.0GHz)
256GB Memory (16 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 57810-k 1Gb/10Gb DP KR NDC
PCIe mezz cards for fabric B and C (included)
iDRAC7 Enterprise w/ vFlash, 8GB SD
5.2.4 Graphics Acceleration

Graphics acceleration is offered as a deployment option and alternate configuration for Compute hosts to support shared graphics virtual desktops. For LT1 and ST1, the Compute host configuration depends on the architecture model and graphics card model; for VRTX, the Compute host configuration is the same for both graphics and non-graphics options.

Local Tier 1 Compute Host (Graphics) – PowerEdge R720
2 x Intel Xeon E5-2680v2 Processor (2.8GHz)
256GB Memory (16 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
12 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID10
Broadcom 5720 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 1100W PSUs

Shared Tier 1 Compute Host (Graphics) – PowerEdge R720
2 x Intel Xeon E5-2680v2 Processor (2.8GHz)
256GB Memory (16 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 5720 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 1100W PSUs

AMD graphics cards are supported as shown below:

|                          | AMD FirePro S7000 | AMD FirePro S9000 | AMD FirePro W7000 |
| Number of GPUs           | 1                 | 1                 | 1                 |
| Number of Cores          | 1280              | 1792              | 1280              |
| Total Memory/card        | 4GB               | 6GB               | 6GB               |
| Architecture Model       | LT1, ST1          | LT1, ST1          | VRTX              |
| Max # of Cards/host      | 3                 | 2                 | 3                 |
| Number of Users per Host | 75                | 85                | 75                |

5.2.4.1 GPU memory utilization for RemoteFX vGPU enabled VMs

In Windows 8.1, the maximum VRAM allocation is dynamic and depends on the minimum amount of system memory that the virtual machine starts with. The dedicated VRAM is either 128MB or 256MB, depending on the resolution and number of monitors assigned to the virtual machine. The shared VRAM allocated varies between 64MB and 1GB, depending on the minimum amount of system memory assigned to the virtual machine when it starts. The formula used to determine the amount of shared memory is:

TotalSystemMemoryAvailableForGraphics = MAX(((TotalSystemMemory - 512) / 2), 64MB)

With a minimum system memory of 512MB, virtual machines get 64MB of shared memory and the following total VRAM:

| Maximum Resolution | 1 monitor | 2 monitors | 4 monitors | 8 monitors |
| 1024 x 768         | 192MB     | 320MB      | 320MB      | 320MB      |
| 1280 x 1024        | 192MB     | 320MB      | 320MB      | 320MB      |
| 1600 x 1200        | 192MB     | 320MB      | 320MB      | –          |
| 1920 x 1200        | 192MB     | 320MB      | 320MB      | –          |
| 2560 x 1600        | 320MB     | 320MB      | –          | –          |

For additional information about Microsoft RemoteFX,
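As a quick cross-check, the shared-memory formula and the 512MB row of the table above can be expressed in a few lines of Python. This is a sketch for illustration only; the 128MB/256MB dedicated-VRAM values come from the text above, not from any Hyper-V API.

```python
# Sketch of the RemoteFX shared-VRAM formula described above (values in MB):
# shared = MAX((TotalSystemMemory - 512) / 2, 64)

def shared_vram_mb(total_system_memory_mb: int) -> int:
    """Shared VRAM granted to a vGPU-enabled VM, per the formula above."""
    return max((total_system_memory_mb - 512) // 2, 64)

def total_vram_mb(total_system_memory_mb: int, dedicated_mb: int) -> int:
    """Total VRAM = dedicated portion (128MB or 256MB, set by monitor count
    and resolution) + shared portion derived from system memory."""
    return dedicated_mb + shared_vram_mb(total_system_memory_mb)

# A VM starting with the 512MB minimum gets the 64MB shared floor:
# 128MB dedicated -> 192MB total; 256MB dedicated -> 320MB total,
# matching the 192MB and 320MB cells in the table above.
```

A VM started with more memory earns a larger shared portion, up to the 1GB ceiling noted in the text; for example, `shared_vram_mb(2560)` evaluates to 1024.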
please visit: LINK

5.2.5 Unified Communications

Unified communications is offered as a deployment option to support instant messaging and user conferencing enabled by Microsoft Lync and the Lync VDI plugin. The Compute host configuration does not change for this deployment option. Pooled and Personal virtual desktops are supported. For additional information about the Microsoft Lync VDI 2013 plugin, please visit: LINK

5.3 Management Layer

The Management host configuration consists of VMs running in Hyper-V child partitions with the pertinent RDS or vWorkspace roles enabled; no RDS or vWorkspace roles need to be enabled in the parent partition of Management hosts. The VMs execute on shared storage through iSCSI connectivity, so the Management hosts have reduced RAM and CPU, do not require local disk to host the management VMs, and are identical for both the Local and Shared Tier 1 models.

Local/Shared Tier 1 Management Host – PowerEdge R420
2 x Intel Xeon E5-2450v2 Processor (2.5GHz)
64GB Memory (4 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2 Hyper-V
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 5719 1Gb QP NDC (LAN)
Broadcom 5719 1Gb QP NIC Low Profile (LAN)
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 550W PSUs

The management role requirements for the base solution are summarized below. Use data disks for role-specific application files/data, logs, IIS web files, etc. in the management volume. Present Tier 2 volumes with a special purpose (called out above) in the format specified below based upon broker technology:
RDS

| Role | vCPU | Startup RAM (GB) | Dynamic Memory (Min/Max) | Buffer | Weight | NIC | OS vDisk (GB) | Tier 2 Volume (GB) |
| RD Connection Broker & RD Licensing | 2 | 4 | 512MB/8GB | 20% | Med | 1 | 40 | – |
| RD Web Access & RD Gateway | 2 | 4 | 512MB/8GB | 20% | Med | 1 | 40 | – |
| File Server | 1 | 4 | 512MB/8GB | 20% | Med | 1 | 40 | 2048 |
| TOTALS | 5 | 12 | 1.5GB/24GB | | | 3 | 120 | 2048 |

vWorkspace – Dedicated Management Layer

| Role | vCPU | Startup RAM (GB) | Dynamic Memory (Min/Max) | Buffer | Weight | NIC | OS vDisk (GB) | Tier 2 Volume (GB) |
| Connection Broker & RD Licensing | 4 | 4 | 512MB/8GB | 20% | Med | 1 | 40 | – |
| vWorkspace Diagnostics and Monitoring | 2 | 4 | 512MB/6GB | 20% | Med | 1 | 60 | – |
| Web Access | 2 | 4 | 512MB/6GB | 20% | Med | 1 | 40 | – |
| Secure Gateway | 2 | 4 | 512MB/4GB | 20% | Med | 1 | 40 | – |
| Profiles Storage Server & Universal Print Server | 2 | 4 | 512MB/6GB | 20% | Med | 1 | 60 | – |
| File Server | 1 | 4 | 512MB/6GB | 20% | Med | 1 | 40 | 2048 |
| SQL Server | 4 | 8 | 512MB/10GB | 20% | Med | 1 | 40 | 210 |
| TOTALS | 17 | 32 | 4GB/46GB | | | 7 | 320 | 2258 |

vWorkspace – Combined Compute and Management Layer

| Role | vCPU | Startup RAM (GB) | Dynamic Memory (Min/Max) | Buffer | Weight | NIC | OS vDisk (GB) | Tier 2 Volume (GB) |
| Broker + RD Licensing + Web Access + SSL Gateway | 2 | 4 | 512MB/8GB | 20% | Med | 1 | 40 | – |
| vWorkspace Diagnostics and Monitoring | 2 | 4 | 512MB/6GB | 20% | Med | 1 | 60 | – |
| File + Profiles + Print Server | 2 | 4 | 512MB/6GB | 20% | Med | 1 | 60 | 2048 |
| SQL Server | 2 | 8 | 512MB/10GB | 20% | Med | 1 | 40 | 210 |
| TOTALS | 8 | 20 | 2GB/30GB | | | 4 | 200 | 2258 |
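The TOTALS rows above are straightforward column sums. As a sketch, the dedicated vWorkspace management-layer table can be tallied mechanically to confirm the aggregate resource footprint:

```python
# Rows from the dedicated vWorkspace management-layer table above:
# (vCPU, startup RAM GB, dynamic max GB, OS vDisk GB)
roles = {
    "Connection Broker & RD Licensing": (4, 4,  8, 40),
    "Diagnostics and Monitoring":       (2, 4,  6, 60),
    "Web Access":                       (2, 4,  6, 40),
    "Secure Gateway":                   (2, 4,  4, 40),
    "Profiles + Universal Print":       (2, 4,  6, 60),
    "File Server":                      (1, 4,  6, 40),
    "SQL Server":                       (4, 8, 10, 40),
}

# Sum each column across all roles.
totals = [sum(col) for col in zip(*roles.values())]
# totals -> [17, 32, 46, 320], matching the TOTALS row of the table.
```

The same tally applied to the combined-layer table yields 8 vCPUs, 20GB startup RAM, and 200GB of OS vDisk, as listed.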
5.3.1 SQL Databases

The vWorkspace databases will be hosted by a single dedicated SQL Server 2012 VM in the Management layer. This architecture provides configuration guidance using a dedicated SQL Server VM to serve the environment; in environments with fewer than 2500 seats, SQL Server Express can be used to minimize licensing costs. Use caution during database setup to ensure that SQL data, logs, and TempDB are properly separated onto their respective volumes, and that auto-growth is enabled for each database. Initial placement of all databases into a single SQL instance is fine unless performance becomes an issue, in which case the databases need to be separated into named instances. Pay consideration to eventual scaling, and adhere to best practices defined by Dell and Microsoft to ensure optimal database performance.

5.3.2 DNS

DNS plays a crucial role in the environment, not only as the basis for Active Directory but also to control access to the various Dell and Microsoft software components. All hosts, VMs, and consumable software components need to have a presence in DNS; this includes forward and reverse lookups, preferably via a dynamic, AD-integrated namespace. Microsoft best practices and organizational requirements are to be adhered to.

To allow access to components that may live on one or more servers during the initial deployment, as well as to protect for future scaling (HA), alias these connections in the form of DNS CNAMEs instead of connecting to server names directly. CNAMEs and the round-robin DNS mechanism should be employed to provide a front-end "mask" to the back-end server actually hosting the service or data source.

5.3.2.1 DNS for SQL

To access the SQL data sources, either directly or via ODBC, a connection to the server name\instance name must be used. To simplify this process, as well as protect for future scaling (HA), the preferred approach is to connect to <CNAME>\<instance name>. For example, the CNAME "VDISQL" is created to point to SQLServer1, so instead of connecting to SQLServer1\<instance name>, every device that needs access to SQL connects to VDISQL\<instance name>. If a failure scenario were to occur and SQLServer2 needed to start serving data, we would simply change the CNAME in DNS to point to SQLServer2; no infrastructure SQL client connections would need to be touched.

5.3.3 Secure Gateway

Please review section 5.6 for scaling guidance. See Appendix B.

5.3.3.1 Troubleshooting

If you need to troubleshoot the secure gateway, logging can be turned on. If logging is enabled, please note there will be an increase in CPU and disk IOPS; in some cases this has been observed to be up to 2 times the CPU and up to 3 times the disk IOPS. For this reason, it is advised that logging, if enabled, be of short duration (a few minutes).
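The CNAME-masking approach above can be illustrated with a toy resolver. This is a sketch only: the SQLServer1/SQLServer2 names and the VDISQL alias come from the example in the text, the "VDI" instance name is a hypothetical stand-in for <instance name>, and a real deployment would make this change in AD-integrated DNS rather than in application code.

```python
# Toy illustration of fronting SQL with a DNS CNAME, per the example above.
cnames = {"VDISQL": "SQLServer1"}   # alias -> host actually serving data

def connection_target(alias: str, instance: str) -> str:
    """Clients always build <CNAME>\\<instance>; the alias resolves to a host."""
    return f"{cnames[alias]}\\{instance}"

before = connection_target("VDISQL", "VDI")   # resolves to SQLServer1\VDI

# Failure scenario: repoint the CNAME. No client connection string changes.
cnames["VDISQL"] = "SQLServer2"
after = connection_target("VDISQL", "VDI")    # now resolves to SQLServer2\VDI
```

The point of the design is visible in the last two lines: the client-side string `VDISQL\<instance name>` is identical before and after failover.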
5.4 Storage Layer

5.4.1 Local Tier 1

Choosing the Local Tier 1 storage option means that the Compute hosts use 12 locally installed 300GB 15K drives for the host OS (parent partition) and the pooled desktop or shared session VMs. To achieve the required performance level, RAID 10 must be used across all local disks. A single volume per Local Tier 1 Compute host is sufficient to host the provisioned VMs along with their respective temporary data.

| Volumes | Size (GB) | Storage | Purpose | File System |
| OS | 135 | Tier 1 | Host Operating System | NTFS |
| VDI | 1600 | Tier 1 | Pooled + Shared VDI | NTFS |

5.4.2 Shared Tier 1

Choosing the Shared Tier 1 storage option means that the Compute hosts use two locally installed 300GB 15K drives for the host OS (parent partition). The personal virtual desktops leverage shared storage via Microsoft Scale-Out File Servers connected to a high-performance Dell storage array, and all Tier 2 data is facilitated through the Shared Tier 1 array. For solutions using iSCSI, a single volume per EqualLogic PS6100XS array is capable of facilitating up to 5000 IOPS for VM hosting purposes.
| Volumes | Size (GB) | Storage | Purpose | File System |
| VDI | 6144 | Tier 1 | Desktop VMs | NTFS |
| Management | 500 | Tier 2 | Management VMs | NTFS |
| User Data | 2048 | Tier 2 | File Server | NTFS |

The following is the hardware configuration for the Microsoft Scale-Out File Server:

Microsoft Scale-Out File Server – PowerEdge R620
2 x Intel Xeon E5-2650v2 Processor (2.6GHz)
64GB Memory (4 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 5720 1Gb QP NDC
Broadcom 5719 1Gb QP Low Profile NIC
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 750W PSUs
For solutions using Microsoft Storage Spaces, a single volume per MD1220 array is capable of facilitating up to 5000 IOPS for VM hosting purposes. If two Storage Spaces servers are used, it is recommended to create at least two volumes for VDI, to ensure that both file servers are load balanced.

| Volumes | Size (GB) | Storage | Purpose | File System |
| VDI | 6144 | Tier 1 | Desktop VMs | NTFS |
| Management | 500 | Tier 2 | Management VMs | NTFS |
| User Data | 2048 | Tier 2 | File Server | NTFS |

The following is the hardware configuration for the Microsoft Storage Spaces server:

Microsoft Storage Spaces – PowerEdge R620
2 x Intel Xeon E5-2650v2 Processor (2.6GHz)
64GB Memory (4 x 16GB DIMMs @ 1600MHz)
Microsoft Windows Server 2012 R2
2 x 300GB 15K SAS 6Gbps disks
PERC H710P Integrated RAID Controller – RAID1
Broadcom 5720 1Gb QP NDC
Broadcom 5719 1Gb QP Low Profile NIC
2 x SAS 6Gbps HBA Low Profile Controller
iDRAC7 Enterprise w/ vFlash, 8GB SD
2 x 750W PSUs

5.4.3 Shared Tier 2

Tier 2 is shared iSCSI storage used to host the management server VMs and user data. The EqualLogic PS4100E array can be used for smaller-scale deployments up to 500 users, or the PS6100E array for larger-scale deployments. Larger disk sizes can be chosen to meet the capacity needs of the customer, and intent to scale should be considered when making the initial investment. RAID 50 can be used in smaller deployments but is not recommended for critical environments; the recommendation for larger-scale and mission-critical deployments with higher performance requirements is RAID 10 or RAID 6, to maximize performance and recoverability.

The two following tables depict the component volumes required to support a 500-user environment, based upon broker technology. Additional management volumes can be created as needed, along with size adjustments as applicable for user data and profiles. The solution as designed presents all SQL disks using the VHDX format. The user data can be presented either via a VHDX or an NTFS pass-through disk.

RDS Volumes

| Volumes | Size (GB) | Storage | Purpose | File System |
| Management | 500 | Tier 2 | RDS VMs, File Server | NTFS |
| User Data | 2048 | Tier 2 | File Server | NTFS |
| User Profiles | 20 | Tier 2 | User profiles | NTFS |
| SQL Data | 100 | Tier 2 | SQL | NTFS |
| SQL Logs | 100 | Tier 2 | SQL | NTFS |
| TempDB Data | 5 | Tier 2 | SQL | NTFS |
| TempDB Logs | 5 | Tier 2 | SQL | NTFS |
| SQL Witness | 1 | Tier 2 | SQL (optional) | NTFS |
| Templates/ISO | 200 | Tier 2 | ISO/gold image storage (optional) | NTFS |

vWorkspace Volumes

| Volumes | Size (GB) | Storage | Purpose | File System |
| Management | 500 | Tier 2 | vWorkspace VMs, File Server | NTFS |
| User Data | 2048 | Tier 2 | File Server | NTFS |
| User Profiles | 20 | Tier 2 | User profiles | NTFS |
| SQL Data | 100 | Tier 2 | SQL | NTFS |
| SQL Logs | 100 | Tier 2 | SQL | NTFS |
| TempDB Data | 5 | Tier 2 | SQL | NTFS |
| TempDB Logs | 5 | Tier 2 | SQL | NTFS |
| Templates/ISO | 200 | Tier 2 | ISO/gold image storage (optional) | NTFS |

5.4.4 Shared Infrastructure

The VRTX chassis contains up to 25 available 2.5" SAS disks to be shared with each server blade in the cluster.

| Solution Model | Features | Tier 1 Storage (VDI disks) | Tier 2 Storage (mgmt + user data) |
| 2 Blade | Up to 250 desktops | 10 x 300GB 2.5" 15K SAS | 5 x 1.2TB 2.5" 10K SAS |
| 4 Blade | Up to 500 desktops | 20 x 300GB 2.5" 15K SAS | 5 x 1.2TB 2.5" 10K SAS |

VRTX solution volume configuration:

| Volumes | Size (GB) | RAID | Disk Pool | Purpose | File System |
| VDI | 1024 | 10 | Tier 1 | VDI Desktops | NTFS |
| Management | 200 | 6 | Tier 2 | Mgmt VMs, File Server | NTFS |
| User Data | 2048 | 6 | Tier 2 | File Server | NTFS |
| User Profiles | 20 | 6 | Tier 2 | User profiles | NTFS |
| SQL Data | 100 | 6 | Tier 2 | SQL | NTFS |
| SQL Logs | 100 | 6 | Tier 2 | SQL | NTFS |
| TempDB Data | 5 | 6 | Tier 2 | SQL | NTFS |
| TempDB Logs | 5 | 6 | Tier 2 | SQL | NTFS |
| Templates/ISO | 200 | 6 | Tier 2 | ISO/gold image storage (optional) | NTFS |

5.5 Network Layer

5.5.1 Local Tier 1

In the Local Tier 1 architecture, a single Dell Networking S55 switch can be shared among all network connections for both Management and Compute layer components, up to the upper limit of the stack. All ToR traffic is designed to be layer 2/switched locally, with all layer 3/routable VLANs trunked from a core or distribution switch. Only the management servers connect to iSCSI shared storage in this model. The following diagram illustrates the logical data flow in relation to the core switch.

(Figure: DRAC, Mgmt, and VDI VLANs trunked from the core switch to the ToR switches; iSCSI SAN; Compute hosts and Mgmt hosts.)
5.5.1.1 Cabling Diagram

(Figure: Compute and Management host cabling to the Force10 S55 ToR switch and the EqualLogic PS4100E Tier 2 array.)

5.5.1.2 Hyper-V Networking

The network configuration in this model varies slightly between the Compute and Management hosts. The Compute hosts do not need access to iSCSI storage, since they host the VDI sessions on local disk. The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:

 Compute hosts (Local Tier 1)
o Management VLAN: Configured for hypervisor infrastructure traffic – L3 routed via core switch
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
 Management hosts (Local Tier 1)
o Management VLAN: Configured for hypervisor management traffic – L3 routed via core switch
o iSCSI VLAN: Configured for iSCSI traffic – L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via core switch
 An optional iDRAC VLAN can be configured for all hardware management traffic – L3 routed via core switch
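The VLAN plan above can be captured as data and sanity-checked before configuration. This is an illustrative sketch only; the document does not prescribe VLAN IDs, and the routing labels are shorthand for the L3-routed-via-core versus L2-ToR-only distinction in the list above.

```python
# Local Tier 1 VLAN plan from the list above (routing labels only; no IDs).
vlan_plan = {
    "compute": [
        {"name": "Management",     "routing": "L3-core"},
        {"name": "VDI",            "routing": "L3-core"},
    ],
    "management": [
        {"name": "Management",     "routing": "L3-core"},
        {"name": "iSCSI",          "routing": "L2-tor"},   # never routed
        {"name": "VDI Management", "routing": "L3-core"},
    ],
}

# Two properties worth checking: Compute hosts carry no iSCSI VLAN, and
# the iSCSI VLAN on Management hosts stays L2/ToR-switched.
compute_names = {v["name"] for v in vlan_plan["compute"]}
iscsi = [v for v in vlan_plan["management"] if v["name"] == "iSCSI"]
```

Encoding the plan this way makes it easy to diff intended versus deployed switch configuration during validation.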
In this solution architecture, LAN and iSCSI traffic are segmented into dedicated VLANs but combined within a single switch to minimize the initial network investment; following best practices, solutions that require larger scale should separate this traffic into discrete switches. Each Local Tier 1 Compute host will have a 1Gb quad port NDC as well as an add-on 1Gb quad port NIC. The Compute hosts require two NIC teams: one for LAN and one for management of the Hyper-V parent OS. The LAN team should be connected to a Hyper-V switch, while the parent OS can utilize the Mgmt team directly. The LAN traffic from the server to the ToR switch should be configured as a LAG to maximize bandwidth.

(Figure: Compute host – desktop VM and management role vNICs on the Hyper-V switch; LAN and Mgmt NIC teams of two pNICs each; LAN team uplinked via LAG through the Force10 S55 to the core.)

The Management hosts have a slightly different configuration, since they additionally access iSCSI storage. The add-on NIC for the Management hosts is also a 1Gb quad port NIC. Three ports of both the NDC and the add-on NIC are used to form two NIC teams and one MPIO pair; iSCSI should be isolated onto the NIC pair used for MPIO, and connections from all three NIC pairs should pass through both the NDC and the add-on NIC. VLAN IDs should be specified in the Hyper-V switch and on the vNICs designated for management OS functions. The LAN traffic from the server to the ToR switch should be configured as a LAG.

(Figure: Management host – management role and file server vNICs on the Hyper-V switch; MPIO pair carrying iSCSI to the EqualLogic array; LAN and Mgmt NIC teams with Cluster/CSV and Live Migration vNICs; LAG uplink through the Force10 S55 to the core.)

5.5.2 Shared Tier 1

In the Shared Tier 1 architecture for rack servers, both Management and Compute servers connect to shared storage, in this model through the SOFS. All ToR traffic is designed to be layer 2/switched locally.
All layer 3/routable VLANs are routed through a core or distribution switch. The following diagrams illustrate the server NIC to ToR switch connections, virtual switch assignments, and the logical VLAN flow in relation to the core switch. The figure below illustrates Tier 1 with EqualLogic storage.

(Figure: VDI, Live Migration, Mgmt, and DRAC VLANs trunked from the core switch; iSCSI SAN behind the ToR switches; Compute hosts, Mgmt hosts, and Scale-Out File Server.)
5.5.2.1 Cabling Diagram

(Figure: Compute and Management host cabling to the Force10 S55 ToR switch; EqualLogic PS6100XS connected to the Scale-Out File Server; SMB, LAN, and iSCSI paths.)

In the case of Tier 1 with Microsoft Storage Spaces and the MD1220 array, the array is directly connected to the SOFS using SAS cables, with no additional networking requirement.

(Figure: Compute and Management host cabling to the Force10 S55 ToR switch; MD1220 connected to the Storage Spaces server via SAS; LAN and SMB paths.)
The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:

 Compute hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor management traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o SMB VLAN: Configured for SMB traffic – L2 switched only via ToR switch
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
 Management hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor management traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic – L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via core switch
 An optional iDRAC VLAN can be configured for all hardware management traffic – L3 routed via core switch

Following best practices, iSCSI and LAN traffic are physically separated into discrete fabrics.

5.5.2.2 Hyper-V Networking

The network configuration in this model varies slightly between the Compute and Management hosts. Each Shared Tier 1 Compute and Management host will have a 1Gb quad port NDC as well as a 1Gb quad port add-on NIC. Two NIC teams should be configured: one for the Hyper-V switch and one for the management OS. Isolate iSCSI or SMB onto its own virtual switch with redundant ports, and pass connections from all three virtual switches through both the NDC and the add-on NIC, per the diagram below.

The Compute hosts access shared storage via SMB through the Scale-Out File Server and are configured with an additional NIC team for that purpose. The desktop VMs connect to the Hyper-V switch, while the management OS connects directly to the NIC team dedicated to the management network. The Management hosts access shared storage via iSCSI and are configured with Dell MPIO. Configure the LAN traffic from the server to the ToR switch as a LAG.
(Figure: Shared Tier 1 Compute host – desktop VM vNICs on the Hyper-V switch; SMB NIC team to the SOFS; LAN and Mgmt NIC teams with Cluster/CSV and Live Migration vNICs; LAG uplink through the Force10 S55 to the core. Management host – MPIO pair carrying iSCSI to the EqualLogic array; LAN and Mgmt NIC teams; LAG uplink.)

Management hosts are configured in the same manner as in the Local Tier 1 model.

5.5.3 Shared Infrastructure

All ToR traffic connecting to the VRTX integrated switch should be layer 2/switched locally, with all layer 3/routable VLANs trunked from a core or distribution switch. The following diagram illustrates the logical relationship of the VRTX chassis to the integrated switch connections.

(Figure: iDRAC, Mgmt, and VDI VLANs trunked from the core switch to the ToR switch, the VRTX integrated switch, internal storage, and the Compute + Mgmt hosts.)

The VRTX chassis can be configured with either switched (default) or pass-through modules. The switched method, shown below and the Dell Wyse Datacenter solution default, supports up to eight external ports for uplinks. This solution configuration only makes use of the single A fabric in default form. External uplinks should be cabled and configured in a LAG to support the desired amount of upstream bandwidth.

5.5.3.1 Cabling Diagram

(* Diagram shows the Network HA configuration.)
The following outlines the recommended VLANs for use in the solution:

 Compute + Management hosts
o Management VLAN: Configured for hypervisor and broker management traffic – L3 routed via core switch
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o An optional iDRAC VLAN should be configured for the VRTX iDRAC traffic – L3 routed via core switch

5.5.3.2 Network HA

For configurations requiring networking HA, this can be achieved by adding Broadcom 5720 1Gb NICs to the PCIe slots in the VRTX chassis, which connect to the pre-populated PCIe mezzanine cards in each blade server. This provides an alternative physical network path out of the VRTX chassis for greater bandwidth and redundancy using additional fabrics. As shown in the graphic below, each M620 in the VRTX chassis uses the 10Gb NDC (throttled to 1Gb) in the A fabric to connect to ports on the internal 1Gb switch. The PCIe mezz cards included in the B fabric are used to connect to ports provided by the external 1Gb NICs in the PCIe slots of the VRTX chassis (one per blade). A PCIe NIC must be added for each blade in the chassis, as these connections are mapped 1:1.

(Logical representation figure.)

5.5.3.3 Hyper-V Networking

The Hyper-V configuration utilizes NIC teaming to load balance and provide resiliency for network connections. One consolidated virtual switch should be configured for use by both the desktop and management VMs.

Per-host Hyper-V virtual switch configuration:

(Figure: management role and desktop VM vNICs, plus VDI Mgmt, Live Migration, and Cluster/CSV vNetworks on VLANs 10 and 20, on a single Hyper-V switch backed by two pNICs uplinked to the 1Gb integrated switch.)
Additional vNICs for other management OS functions should be attached to the Mgmt Hyper-V switch. iSCSI should be configured to use MPIO drivers and does not need to be teamed or presented to any VMs.

5.5.4 NIC Teaming

Native Windows Server 2012 R2 NIC Teaming is utilized to load balance and provide resiliency for network connections. All NICs and switch ports should be set to auto-negotiate. If connecting a NIC team to a Hyper-V switch, ensure that the "Hyper-V port" load balancing mode is selected.

5.6 Scaling Guidance

Each component of the solution architecture scales independently according to the desired number of supported users. Brokers scale differently from other management VMs, as does the hypervisor in use. The Dell PowerEdge R720 was used as the compute host for scaling guidance.

 The components can be scaled either horizontally (by adding additional physical and virtual servers to the server pools) or vertically (by adding virtual resources to the infrastructure)
 Eliminate bandwidth and performance bottlenecks as much as possible
 Allow future horizontal and vertical scaling with the objective of reducing the future cost of ownership of the infrastructure

| Component | Metric | Horizontal scalability | Vertical scalability |
|---|---|---|---|
| Virtual Desktop Host/Compute Servers | VMs per physical host | Additional hosts and clusters added as necessary | Additional RAM or CPU compute power |
| Broker Servers | Desktops per instance (dependent on SQL performance as well) | Additional servers added to the farm | Additional virtual machine resources (RAM and CPU) |
| RDSH Servers | Desktops per instance | Additional virtual RDSH servers added to the farm | Additional physical servers to host virtual RDSH servers |
| Database Services | Concurrent connections, responsiveness of reads/writes | Migrate databases to a dedicated SQL server and increase the number of management nodes | Additional RAM and CPU for the management nodes |
| File Services | Concurrent connections, responsiveness of reads/writes | Split user profiles and home directories between multiple file servers in the cluster. File services can also be migrated to the optional NAS device to provide high availability. | Additional RAM and CPU for the management nodes |
| Monitoring Services | Managed agents/units (dependent on SQL performance as well) | Add additional monitoring servers and migrate databases to a dedicated SQL server | Additional RAM and CPU for the management nodes |
| Secure Gateway | Concurrent connections | Additional servers added to the farm | Additional virtual machine resources (RAM and CPU) |

The scalability tables below indicate the solution model and operating system grouped by deployment option.

5.6.1 Shared Sessions: R720

5.6.2 Pooled Desktops: R720

5.6.3 Personal Desktops: R720

5.6.4 Storage Spaces Option: R720

Shared Tier 1 – Storage Spaces Option

| Standard Tier 1 Physical: User Count | Storage Arrays | File Servers | Enhanced Tier 1 Physical: User Count | Storage Arrays | File Servers | Professional Tier 1 Physical: User Count | Storage Arrays | File Servers |
|---|---|---|---|---|---|---|---|---|
| 225 | 1 | 1 | 150 | 1 | 1 | 100 | 1 | 1 |
| 1000 | 1 | 1 | 750 | 2 | 1 | 500 | 1 | 1 |
| 2000 | 2 | 1 | 1350 | 2 | 1 | 900 | 2 | 1 |
| 3000 | 3 | 2 | 2100 | 3 | 2 | 1400 | 2 | 1 |
| 4000 | 4 | 2 | 2700 | 4 | 2 | 1800 | 3 | 2 |
| 5000 | 4 | 2 | 3450 | 5 | 3 | 2300 | 4 | 2 |
| 6000 | 5 | 3 | 4050 | 6 | 3 | 2700 | 4 | 2 |
| 7000 | 6 | 3 | 4800 | 7 | 4 | 3200 | 5 | 3 |
| 8000 | 7 | 4 | 5400 | 8 | 4 | 3600 | 6 | 3 |
| 9000 | 8 | 4 | 6000 | 9 | 5 | 4000 | 6 | 3 |
| 10000 | 8 | 4 | 6750 | 10 | 5 | 4500 | 7 | 4 |
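As a sizing aid, the Storage Spaces table above can be treated as a lookup: round a user count up to the next published configuration to find the number of storage arrays and file servers required. The sketch below is illustrative Python only, with values transcribed from the Standard Tier 1 column of the table.

```python
# Sizing helper derived from the Storage Spaces scaling table above
# (illustrative; user counts between rows round up to the next
# published configuration).
import bisect

STANDARD = [(225, 1, 1), (1000, 1, 1), (2000, 2, 1), (3000, 3, 2),
            (4000, 4, 2), (5000, 4, 2), (6000, 5, 3), (7000, 6, 3),
            (8000, 7, 4), (9000, 8, 4), (10000, 8, 4)]

def storage_spaces_config(users, table=STANDARD):
    """Return (storage arrays, file servers) for a given user count."""
    counts = [row[0] for row in table]
    i = bisect.bisect_left(counts, users)
    if i == len(table):
        raise ValueError("beyond published scaling guidance")
    _, arrays, file_servers = table[i]
    return arrays, file_servers

# e.g. 2,500 Standard users round up to the 3,000-user configuration
config = storage_spaces_config(2500)  # -> (3, 2)
```

The Enhanced and Professional columns would simply be additional tables of the same shape.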
5.7 Solution High Availability

High availability (HA) is offered to protect each layer of the solution architecture, individually if desired. Following the N+1 model, additional ToR switches for LAN and iSCSI are added to the Network layer and stacked to provide redundancy as required, additional Compute and Management hosts are added to their respective layers, Hyper-V clustering is introduced in the Management layer, and SQL is mirrored or clustered. The HA options provide redundancy for all critical components in the stack while improving the performance and efficiency of the solution as a whole:

 An additional switch is added at the Network layer, which will be configured with the original as a stack, equally spreading each host's network connections across both.
 At the Compute layer with Local Tier 1 storage, an additional Hyper-V host is added to provide N+1 protection for pooled desktops and shared sessions. Failover clusters are utilized with Shared Tier 1 storage for personal desktop HA. To protect the Compute layer, connection brokers will be configured to provision reserve capacity.
 A number of enhancements occur at the Management layer, the first of which is the addition of another host. The Management hosts are configured in a failover cluster to allow live migration of management VMs. All applicable management server roles can be duplicated in the cluster and utilize native load balancing functionality if available. SQL will also receive greater protection through the addition and configuration of a SQL mirror with a witness.

5.7.1 Local Tier 1

[HA topology diagram for the Local Tier 1 model]

5.7.2 Shared Tier 1

[HA topology diagram for the Shared Tier 1 model]
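The N+1 model above reduces to simple arithmetic: size the layer for the expected load, then add one host of reserve capacity. A minimal sketch in Python, where the 150-users-per-host density is an assumption taken from the R720 Enhanced workload results later in this document:

```python
# Illustrative N+1 sizing: hosts needed for a target user count at a
# given per-host density (150 Enhanced users per host is an assumed
# density from the validation results in this document).
import math

def hosts_required(users, users_per_host, ha=True):
    base = math.ceil(users / users_per_host)
    return base + 1 if ha else base  # N+1: one host of reserve capacity

# 500 Enhanced users at 150 per host -> 4 hosts, or 5 with N+1 protection
needed = hosts_required(500, 150)  # -> 5
```

The same arithmetic applies per layer; only the density metric changes (VMs per compute host, desktops per broker, and so on).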
5.7.3 Compute Layer

5.7.3.1 Local Tier 1

The Compute layer in the Local Tier 1 model does not leverage shared storage, so hypervisor HA does not provide a benefit here. Instead, the optional HA bundle adds an additional host in the Compute and Management layers to provide redundancy and additional processing power to spread out the load. Care must be taken to ensure that VM provisioning does not exceed the capacity provided by the additional hosts.

[Diagram: Local Tier 1 – Compute HA; connection broker distributing pooled desktop and RDSH VMs across N+1 hosts]

5.7.3.2 Shared Tier 1

For personal desktops in the Shared Tier 1 model, the Compute layer hosts will be configured in a failover cluster to ensure availability in the event of a host failure. An additional host is added to the pool and can be configured to absorb additional capacity or as a standby node should another fail.

[Diagram: Shared Tier 1 – Compute HA; Failover Cluster Manager with personal desktop VMs]

For pooled desktops and RDSH VMs in this model, the Compute layer hosts are provided in typical N+1 fashion but with access to shared storage. Care must be taken to ensure that VM provisioning does not exceed the capacity provided by the additional hosts.

[Diagram: Shared Tier 1 – Compute HA; connection broker distributing pooled desktop and RDSH VMs across N+1 hosts]

5.7.4 Management Layer

To implement HA for the Management layer for all models, we will also add an additional host, but will add a few more layers of redundancy. The following will protect each of the critical infrastructure components in the solution:

 The Management hosts will be configured in a failover cluster (Node and Disk Majority).
 The storage volume that hosts the Management VMs will be upgraded to a CSV.
 An additional connection broker is added for active/active load balancing.
 SQL Server mirroring is configured with a witness to further protect SQL. The management roles that store configuration data in SQL will be protected via the SQL mirror.
 The Microsoft license server is protected from host hardware failure by the licensing grace period by default. If desired, it can be optionally protected further in the form of a cold stand-by VM residing on an opposing Management host.

The following storage volumes are applicable in a two-node Management layer HA scenario:

RDS Volumes

| Volume | Host | Size (GB) | RAID | Storage Array | Purpose | File System | CSV |
|---|---|---|---|---|---|---|---|
| Management | 1 | 500 | 50 | Tier 2 | RDS VMs, File Server | NTFS | Yes |
| Management | 2 | 500 | 50 | Tier 2 | RDS VMs, File Server | NTFS | Yes |
| SQL Data | 2 | 100 | 50 | Tier 2 | SQL Data Disk | NTFS | Yes |
| SQL Logs | 2 | 100 | 50 | Tier 2 | SQL Logs Disk | NTFS | Yes |
| SQL TempDB Data | 2 | 5 | 50 | Tier 2 | SQL TempDB Data Disk | NTFS | Yes |
| SQL TempDB Logs | 2 | 5 | 50 | Tier 2 | SQL TempDB Logs Disk | NTFS | Yes |
| SQL Witness | 1 | 1 | 50 | Tier 2 | SQL Witness Disk | NTFS | Yes |
| Quorum | 1 | 500MB | 50 | Tier 2 | Hyper-V Cluster Quorum | NTFS | Yes |
| User Data | - | 2048 | 50 | Tier 2 | File Server | NTFS | No |
| User Profiles | - | 20 | 50 | Tier 2 | User profiles | NTFS | No |
| Templates/ISO | - | 200 | 50 | Tier 2 | ISO/gold image storage (optional) | NTFS | Yes |

vWorkspace Volumes

| Volume | Host | Size (GB) | RAID | Storage Array | Purpose | File System | CSV |
|---|---|---|---|---|---|---|---|
| Management | 1 | 500 | 50 | Tier 2 | vWorkspace Infrastructure | NTFS | Yes |
| Management | 2 | 500 | 50 | Tier 2 | vWorkspace Infrastructure | NTFS | Yes |
| SQL Data | 2 | 100 | 50 | Tier 2 | SQL Data Disk | NTFS | Yes |
| SQL Logs | 2 | 100 | 50 | Tier 2 | SQL Logs Disk | NTFS | Yes |
| SQL TempDB Data | 2 | 5 | 50 | Tier 2 | SQL TempDB Data Disk | NTFS | Yes |
| SQL TempDB Logs | 2 | 5 | 50 | Tier 2 | SQL TempDB Logs Disk | NTFS | Yes |
| SQL Witness | 1 | 1 | 50 | Tier 2 | SQL Witness Disk | NTFS | Yes |
| Quorum | 1 | 500MB | 50 | Tier 2 | Hyper-V Cluster Quorum | NTFS | Yes |
| User Data | - | 2048 | 50 | Tier 2 | File Server | NTFS | No |
| User Profiles | - | 20 | 50 | Tier 2 | User profiles | NTFS | No |
| Templates/ISO | - | 200 | 50 | Tier 2 | ISO/gold image storage (optional) | NTFS | Yes |

5.7.5 SQL Server High Availability

HA for SQL will be provided via a three-server synchronous mirror configuration that includes a witness (high safety with automatic failover). This configuration will protect all critical data stored within the database from physical as well as virtual server problems. Mirror all critical databases to provide HA protection. Place the principal VM that will host the primary copy of the data on the first Management host, and place the mirror and witness VMs on the second or later Management hosts. DNS will be used to control access to the active SQL server.

The following article details the step-by-step mirror configuration: LINK
Additional resources can be found in TechNet: LINK1 and LINK2
Please refer to the following Dell Software support article for more information about vWorkspace SQL Server mirror support: LINK

5.7.6 Disaster Recovery and Business Continuity

DR and BC can be achieved natively via Hyper-V Replicas. This technology can be used to replicate VMs from a primary site to a DR or BC site over the WAN asynchronously. Hyper-V Replicas are unbiased as to the underlying hardware platform and can be replicated to any server, network, or storage provider; please refer to section 5.1 for more details. Once the initial replica is delivered from the primary site to the replica site, incremental VM write changes are replicated using log file updates. Multiple recovery points can be stored and maintained, using snapshots, to restore a VM to a specific point in time.
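Conceptually, the recovery points described above behave like a bounded history of snapshots: new points are added as log updates are applied, and the oldest points age out. The sketch below models only that retention idea; it is illustrative Python, Hyper-V Replica manages retention natively, and the class name is hypothetical.

```python
# Conceptual model of recovery-point retention (illustrative only;
# Hyper-V Replica itself manages this natively). Keep the newest N
# snapshots so a VM can be restored to any of the last N points in time.
from collections import deque

class RecoveryPoints:
    def __init__(self, max_points):
        self.points = deque(maxlen=max_points)  # oldest points fall off

    def add(self, timestamp):
        self.points.append(timestamp)

    def restorable(self):
        return list(self.points)

rp = RecoveryPoints(max_points=4)
for hour in ["09:00", "10:00", "11:00", "12:00", "13:00"]:
    rp.add(hour)
# Only the four newest points remain available for restore.
```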
5.8 Microsoft RDS Communication Flow

[Diagram: external RDP clients reach the environment through the RD Gateway (RDP over HTTPS) and RD Web Access (HTTPS); the RD Connection Broker, with the RD License Server, directs internal RDP and web clients (RPC) to the RD Virtualization Hosts and RD Session Hosts; pooled, personal, and session collections are built from the gold master; desktop files reside on the Scale-Out File Server (SMB) backed by the SAN (iSCSI), with Active Directory (LDAP) and the file server (SMB) supporting all roles]

5.9 vWorkspace Communication Flow

[Diagram: equivalent communication flow for the vWorkspace deployment]

6 Customer Provided Stack Components

6.1 Customer Provided Storage Requirements

In the event that a customer wishes to provide his or her own storage array solution for a Dell Wyse Datacenter solution, the following minimum hardware requirements must be met.

| Feature | Minimum Requirement | Notes |
|---|---|---|
| Total Tier 2 Storage Space | User count and workload dependent | |
| Tier 1 IOPS Requirement | (Total Users) x 10 IOPS | |
| Tier 2 IOPS Requirement | (Total Users) x 1 IOPS | |
| Tier 2 Drive Support | 7200rpm NL SAS | |
| RAID Support | 10, 6 | RAID 10 is leveraged for local Compute host storage and high performance shared arrays. Protect T2 storage via RAID 6. |
| Data Networking | 1GbE Ethernet | Up to six 1Gb ports per host are required. |
| Shared Array Controllers | 1 with >4GB cache | 4GB of cache minimum per controller is recommended for optimal performance and data protection. |

6.2 Customer Provided Switching Requirements

In the event that a customer wishes to provide his or her own rack network switching solution for a Dell Wyse Datacenter solution, the following minimum hardware requirements must be met.

| Feature | Minimum Requirement | Notes |
|---|---|---|
| Switching Capacity | Line rate switch | Dell Wyse Datacenter leverages 1Gbps network connectivity for all network traffic in all solution models. |
| 1Gbps Ports | 7x per Management server; 5x per Compute server; 5x per Storage Array; 5x per Scale-Out File Server | |
| VLAN Support | IEEE 802.1Q tagging and port-based VLAN support | |
| Stacking Capability | Yes | The ability to stack switches into a consolidated management framework is preferred to minimize disruption and planning when uplinking to core networks. |

7 User Profile and Workload Characterization

It's important to understand the user workloads when designing a desktop virtualization solution. The Dell Desktop Virtualization Solution methodology includes a Blueprint process to assess and categorize a customer's environment according to the profiles defined in this section. There are three levels, each of which is bound by specific metrics and capabilities; in the Dell Desktop Virtualization Solution these map directly to the SLA levels offered in the Integrated Stack.

7.1 Profile Characterization Overview

7.1.1 Standard Profile

The Standard user profile consists of simple task worker workloads: typically a repetitive application use profile with a non-personalized virtual desktop image. Sample use cases may be kiosk or call-center use cases, which do not require a personalized desktop environment and where the application stack is static. In a virtual desktop environment the image is dynamically created from a template for each user and returned to the desktop pool for reuse by other users. The workload requirements for a Standard user are the lowest in terms of CPU, memory, network and disk I/O, and will allow the greatest density and scalability of the infrastructure.

| User Profile | VM vCPU | VM Memory Startup | VM Memory Min/Max | Approx. IOPS | VDI Session Disk Space | OS Image | Notes |
|---|---|---|---|---|---|---|---|
| Standard | 1 | 1GB | 512MB/2GB | 2-3 | 2.1GB | Shared | This user workload leverages a shared desktop image and emulates a task worker. Only two apps are open simultaneously and session idle time is approximately one hour and forty-five minutes. |
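The Tier 1 and Tier 2 IOPS minimums in the storage requirements table are linear in user count, so a quick sizing check is straightforward. Illustrative Python only; the multipliers come directly from the table above.

```python
# The Tier 1/Tier 2 IOPS minimums reduce to simple per-user multipliers
# (10 IOPS and 1 IOPS respectively, per the requirements table above).

def required_iops(total_users, tier1_per_user=10, tier2_per_user=1):
    return {"tier1": total_users * tier1_per_user,
            "tier2": total_users * tier2_per_user}

# 500 users -> 5,000 Tier 1 IOPS and 500 Tier 2 IOPS minimum
sizing = required_iops(500)
```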
7.1.2 Enhanced Profile

The Enhanced user profile consists of email, typical office productivity applications, and web browsing for research/training. There is minimal image personalization required in an Enhanced user profile. The workload requirement for an Enhanced user is moderate and most closely matches the majority of office worker profiles in terms of CPU, memory, network and disk I/O. This will allow moderate density and scalability of the infrastructure.

| User Profile | VM vCPU | VM Memory Startup | VM Memory Min/Max | Approx. IOPS | VDI Session Disk Space | OS Image | Notes |
|---|---|---|---|---|---|---|---|
| Enhanced | 2 | 1GB | 512MB/3GB | 7-8 | 3.75GB | Shared | This user workload leverages a shared desktop image and emulates a medium knowledge worker. Up to five applications are open simultaneously and session idle time is approximately 2 minutes. |

7.1.3 Professional Profile

The Professional user profile is an advanced knowledge worker. All office applications are configured and utilized. The user has moderate-to-large file size requirements (access, save, transfer). Web browsing use is typically research/training driven. There is some graphics creation or editing done for presentations or content creation tasks. The Professional user requires extensive image personalization, for shortcuts, macros, menu layouts, etc. The workload requirements for a Professional user are heavier than typical office workers in terms of CPU, memory, network and disk I/O. This will limit density and scalability of the infrastructure.

| User Profile | VM vCPU | VM Memory Startup | VM Memory Min/Max | Approx. IOPS | VDI Session Disk Space | OS Image | Notes |
|---|---|---|---|---|---|---|---|
| Professional | 2 | 1GB | 1GB/4GB | 10-12 | 6GB | Shared | This user workload leverages a shared desktop image and emulates a high-level knowledge worker. Additional applications are opened simultaneously and session idle time is two minutes. |

7.1.4 Shared Graphics Profile

The Shared Graphics user profile is identical to the Enhanced user profile except for the addition of an HTML5 graphical application and the RemoteFX vGPU.

| User Profile | VM vCPU | VM Memory Startup | VM Memory Min/Max | Approx. IOPS | VDI Session Disk Space | OS Image | Notes |
|---|---|---|---|---|---|---|---|
| Shared Graphics | 2 | 1GB | 512MB/3GB | 8-10 | 3.75GB | Shared | This user workload leverages a shared desktop image and emulates a medium knowledge worker, similar to Enhanced users, with a graphical HTML5 application added. |
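Because every profile uses Hyper-V Dynamic Memory, a host's RAM demand for a given VM count falls between the startup and maximum allocations in the tables above. A rough bounds check in illustrative Python, with values transcribed from the profile tables:

```python
# Rough host-memory bounds from the profile tables above (illustrative;
# with Dynamic Memory, actual demand sits between startup and max).
PROFILES = {
    "standard":     {"startup_gb": 1, "max_gb": 2},
    "enhanced":     {"startup_gb": 1, "max_gb": 3},
    "professional": {"startup_gb": 1, "max_gb": 4},
}

def memory_range_gb(profile, vm_count):
    """Return (startup total, worst-case total) host RAM in GB."""
    p = PROFILES[profile]
    return p["startup_gb"] * vm_count, p["max_gb"] * vm_count

# 150 Enhanced VMs: 150 GB at startup, up to 450 GB if every VM hit max
bounds = memory_range_gb("enhanced", 150)
```

In practice steady-state demand lands well inside these bounds, which is why densities are validated against measured utilization rather than the worst case.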
7.2 Workload Characterization Testing Details

| User Profile | VM Memory | OS Image | Login VSI Workload |
|---|---|---|---|
| Standard | 2GB | Shared | Light |
| Enhanced | 3GB | Shared | Medium |
| Professional | 4GB | Shared plus Profile Virt., or Private | Heavy |
| Shared Graphics | 3GB | Shared plus Profile Virt., or Private | Custom Medium (Graphics) |

Light workload (Standard): This workload emulates a task worker.
 The light workload is very light in comparison to medium.
 Only apps used are Internet Explorer and Excel.
 Only two apps are open simultaneously.
 Idle time total is about 1 hour and 45 minutes.

Medium workload (Enhanced): This workload emulates a medium knowledge worker using Office, IE, PDF, and Java/FreeMind.
 Once a session has been started, the medium workload will repeat every 48 minutes.
 The loop is divided in four segments; each consecutive Login VSI user logon will start a different segment. This ensures that all elements in the workload are equally used throughout the test.
 During each loop the response time is measured every 3-4 minutes.
 The medium workload opens up to five applications simultaneously.
 The keyboard type rate is 160 ms for each character.
 Approximately 2 minutes of idle time is included to simulate real-world users.

Each loop will open and use:
 Outlook 2010: browse 10 messages.
 Internet Explorer: one instance is left open, one is used to browse different webpages, and a YouTube-style video (480p movie trailer) is opened three times in every loop.
 Word 2010: one instance to measure response time, one instance to review and edit a document.
 Doro PDF Printer & Acrobat Reader: the Word document is printed and reviewed to PDF.
 Excel 2010: a very large randomized sheet is opened.
 PowerPoint 2010: a presentation is reviewed and edited.
 FreeMind: a Java-based mind mapping application.

Heavy workload (Professional): The heavy workload is based on the medium workload, except that the heavy workload:
 Begins by opening four instances of Internet Explorer. These instances stay open throughout the workload loop.
 Begins by opening two instances of Adobe Reader. These instances stay open throughout the workload loop.
 Includes more PDF printer actions.
 Watches a 720p and a 1080p video instead of 480p videos.
 Reduces the idle time to 2 minutes.

Custom Medium (Graphics) workload (Shared Graphics): The custom medium workload is based on the medium workload, except that it includes:
 Microsoft Fishbowl HTML5 application (30+ FPS)
 Increased time playing a flash game
8 Solution Performance and Testing

8.1 Load Generation and Monitoring

8.1.1 Login VSI – Login Consultants

Login VSI is the de-facto industry standard tool for testing VDI environments and server-based computing / terminal services environments. It installs a standard collection of desktop application software (e.g. Microsoft Office, Adobe Acrobat Reader, etc.) on each VDI desktop; it then uses launcher systems to connect a specified number of users to available desktops within the environment. Each launcher system can launch connections to a number of 'target' machines (i.e. VDI desktops), with the launchers being managed by a centralized management console, which is used to configure and manage the Login VSI environment. Once the user is connected, the workload is started via a logon script, which starts the test script once the user environment is configured by the login script.

8.1.2 EqualLogic SAN HQ

EqualLogic SAN HQ was used for monitoring the Dell EqualLogic storage units in each bundle. SAN HQ has been used to provide IOPS data at the SAN level; this has allowed the team to understand the IOPS required by each layer of the Shared Tier 1 solution model.

8.1.3 Microsoft Performance Monitor

Microsoft Performance Monitor was utilized to collect resource utilization data for tests performed on the desktop VMs.

8.2 Testing and Validation

8.2.1 Testing Process

The purpose of the single server testing is to validate the architectural assumptions made around the server stack. Each user load is tested against four runs: first, a pilot run to validate that the infrastructure is functioning and valid data can be captured, and then three subsequent runs allowing correlation of data. For all workloads, the performance analysis scenario is to launch a user session every 30 seconds. Once all users have logged in, all will run workload activities at steady state for 48-60 minutes, and then logoffs will commence.

At different stages of the testing, the testing team will complete some manual "User Experience" testing while the environment is under load. This involves a team member logging into a session during the run and completing tasks similar to the user workload description. While this experience is subjective, it helps provide a better understanding of the end user experience of the desktop sessions, particularly under high load, and helps ensure that the data gathered is reliable.
8.3 Test Results

This validation was designed to evaluate the capabilities and performance of the new components of the solution:

o Microsoft Windows Server 2012 R2
o Microsoft Windows 8.1
o Dell vWorkspace 8.5
o Microsoft Scale-Out File Server, SMB 3.0, and Data Deduplication
o Intel Ivy Bridge processors (E5-2690 v2 and E5-2680 v2)
o Microsoft Lync 2013 VDI Plug-in
o Dell EqualLogic PS6100XS
o Dell PowerVault MD1220
o Dell PowerEdge T420

For pooled and shared session desktop tests, compute host validation was performed on Dell R720 and T420 servers using local Tier 1 storage, and on Dell M620 servers on the Dell PowerEdge VRTX shared infrastructure platform. For personal desktop tests, compute host validation was performed on Dell R720 servers, while the desktop VHDX files were stored on a data-deduplicated SMB 3.0 file share hosted on a highly available Microsoft Scale-Out File Server utilizing Shared Tier 1 storage on a Dell EqualLogic PS6100XS storage array. The same configuration was used for the Microsoft Storage Spaces tests, except a Dell PowerVault MD1220 storage array was used instead of the EqualLogic PS6100XS array. For the Local Tier 1 Entry tests, a single Dell T420 server configured with dual Intel E5-2450 v2, 2.5 GHz, 8-core processors and 128GB of RAM was used for the compute and management roles combined.

With the exception of the graphics acceleration tests and the Local Tier 1 Entry tests, all compute hosts used dual Intel E5-2690 v2, 3.0 GHz, 10-core processors. For the graphics acceleration tests, the compute hosts used dual Intel E5-2680 v2, 2.8 GHz, 10-core processors due to heat and power restraints for a GPU-configured server. Using Intel Ivy Bridge processors, the M620 can support the same processors as the R720, which was not true for the previous generation Intel Sandy Bridge processors. For the pooled and personal desktop tests, the compute hosts had 256GB of RAM, while the compute hosts for the shared session desktop tests used only 128GB of RAM.

Hyper-V on Microsoft Windows Server 2012 R2 was the hypervisor used. Microsoft Windows Server 2012 R2 RDS and Dell vWorkspace 8.5 were both evaluated as the broker/virtualization components. Validation was performed using the Dell Wyse Datacenter standard testing methodology, using the Login VSI 4 load generation tool for VDI benchmarking, which simulates production user workloads.

The Windows 8.1 VMs were configured for memory and CPU as follows:

| User Workload | vCPUs | Hyper-V Startup Memory | Hyper-V Minimum Memory | Hyper-V Max Memory | OS Bit Level |
|---|---|---|---|---|---|
| Standard User | 1 | 1GB | 512MB | 2GB | x32 |
| Enhanced User / Shared Graphics | 2 | 1GB | 512MB | 3GB | x32 |
| Professional User | 2 | 1GB | 1GB | 4GB | x64 |
8.3.1 Summary of Results

The tables below summarize the user densities that were obtained while remaining within acceptable resource limits (typically below 85% of available resources).

8.3.1.1 Entry Solution

| Desktop Type | Standard Users/Host | Steady State IOPS/User | Enhanced Users/Host | Steady State IOPS/User | Professional Users/Host | Steady State IOPS/User |
|---|---|---|---|---|---|---|
| Pooled | 100 (1) | 3-4 | 75 | 7-8 | 50 (1) | 10-11 |

1. Densities for Standard and Professional workloads are calculated based on Enhanced workload results and empirical data.

8.3.1.2 Enterprise Solution: R730

| Desktop Type | Standard Users/Host | Steady State IOPS/User | Enhanced Users/Host | Steady State IOPS/User | Professional Users/Host | Steady State IOPS/User |
|---|---|---|---|---|---|---|
| Pooled | 345 | 3-4 | 205 | 7-8 | 145 | 10-11 |
| Personal (2) | 345 | 3-4 | 205 | 7-8 | 145 | 10-11 |
| Shared Session (3) | 500 | 1-2 | 300 | 2-3 | 280 | 2-4 |

2. Personal test results were achieved using a Shared Tier 1 solution model.
3. Memory requirement for the shared session (RDSH) configuration is 256GB vs. 384GB for the pooled/personal VDI configuration. Additionally, the IOPS per user is up to 80% less for shared sessions.

8.3.1.3 Enterprise Solution: R720

| Desktop Type | Standard Users/Host | Steady State IOPS/User | Enhanced Users/Host | Steady State IOPS/User | Professional Users/Host | Steady State IOPS/User | Shared Graphics Users/Host | Steady State IOPS/User |
|---|---|---|---|---|---|---|---|---|
| Pooled | 225 | 3-4 | 150 | 7-8 | 100 | 10-11 | 75-85 (4) | 8-10 |
| Personal (5) | 225 | 3-4 | 150 | 7-8 | 100 | 10-11 | 75-85 (4) | 8-10 |
| Shared Session (6) | 225 | 1-2 | 150 | 2-3 | 100 | 2-4 | | |

4. Maximum users with 3 x AMD S7000 GPU cards is 75, while the maximum with 2 x AMD S9000 GPU cards is 85.
5. Personal test results were achieved using a Shared Tier 1 solution model.
6. Memory requirement for the shared session (RDSH) configuration is 50% less than the pooled/personal VDI configuration. Additionally, the IOPS per user is up to 80% less for shared sessions.
8.3.1.4 Shared Infrastructure (VRTX) Solution

| Desktop Type | Standard Users | Steady State IOPS/User | Enhanced Users | Steady State IOPS/User | Professional Users | Steady State IOPS/User |
|---|---|---|---|---|---|---|
| Pooled (2 Blades) | 250 | 3-4 | 150 | 7-8 | 100 | 10-11 |
| Pooled (4 Blades) | 500 | 3-4 | 300 | 7-8 | 200 | 10-11 |
| Shared Session (2 Blades) | 250 | 1-2 | 150 | 2-3 | 100 | 2-4 |
| Shared Session (4 Blades) | 500 | 1-2 | 300 | 2-3 | 200 | 2-4 |

8.3.2 Provisioning Times

The provisioning time (the time to export the desktop template, create the clone desktop VMs, and prepare the VMs) was recorded during tests. Below is a sample of times observed during testing:

| Broker | Number of Desktops | Provisioning Concurrency | Desktop Provisioning Time |
|---|---|---|---|
| RDS | 150 | 10 | 53 minutes |
| vWorkspace | 150 | 10 | 26 minutes |

In conclusion, vWorkspace demonstrated ~50% faster provisioning times than RDS due to its HyperDeploy technology. Both RDS and vWorkspace allow the number of desktops being provisioned in parallel (concurrent desktop provisioning) to be adjusted; the default concurrency for RDS is 1, while the default for vWorkspace is 10. Although adjusting these to higher values will speed up deployment time, we recommend setting the value no higher than 10 for a production environment to avoid overutilization of resources.
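The ~50% figure above follows directly from the sample times; illustrative arithmetic in Python:

```python
# Quick check of the provisioning comparison above: 150 desktops at
# concurrency 10 took 53 minutes on RDS and 26 minutes on vWorkspace.

def improvement_pct(baseline_min, improved_min):
    """Percentage reduction in provisioning time versus the baseline."""
    return round((baseline_min - improved_min) / baseline_min * 100)

rds_min, vworkspace_min = 53, 26
savings = improvement_pct(rds_min, vworkspace_min)  # ~51% faster
```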
8.3.3 Pooled Virtual Desktops (12G)

8.3.3.1 Local Tier 1 Entry – Enhanced User Workload (75 Users)

The results below were obtained from the single PowerEdge T420 host with a vWorkspace deployment, as configured in the Entry level architecture. IOPS shown are totals for the management VMs and desktops.

IOPS spiked to 1152 during the logon period from 10:34AM to 11:13AM, while users were logging in every 30 seconds. During steady state, from 11:14AM to 12:20PM, all users were executing the test workload and IOPS averaged 496, yielding about 7 IOPS/user. Users began logging off at 12:21PM and completed at 12:32PM, during which IOPS peaked at 1521. It should be noted that the vWorkspace HyperCache significantly reduced the read IOPS, thereby lowering the total IOPS. For comparison, the graph below shows the total IOPS from a 75-user test using an RDS deployment.

[Graph: total IOPS during RDS deployment testing]

CPU utilization spiked to about 80% during the steady state period of testing. Although this is below the CPU threshold, memory utilization is at the 85% threshold. Therefore, it is not recommended to run additional Enhanced desktops, due both to memory utilization and to the fact that the management VMs are running on the same host. Network utilization remained relatively low and well within limits for a 1Gbps switching infrastructure.
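The per-user figures quoted in these results are simply the steady-state average divided by the session count; for the 75-user Entry test above (illustrative Python):

```python
# Steady-state IOPS per user for the Entry test above:
# 496 average total IOPS across 75 Enhanced sessions.

def iops_per_user(steady_state_iops, users):
    return steady_state_iops / users

entry = iops_per_user(496, 75)  # ~6.6, reported above as "about 7"
```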
CPU utilization spiked to 80%, which is below the threshold and suggests the system itself could support more desktops.

8.3.3.3 Shared Tier 1 Microsoft Storage Spaces - Enhanced User Workload (150 Users)

The results below show the resource utilization on the compute hosts, file server hosts, and Dell PowerVault storage array. Two file servers were configured in a highly available failover cluster and hosted the scale-out file server (SOFS) role. Since the disk files for the desktops reside on the storage array (shared Tier 1), local IOPS for the compute host are not displayed here, as they have no bearing on the results.

Compute Host: The CPU and memory utilization for the single compute host are similar to the results seen in the pooled desktop test. The CPU spiked to 85% and stayed under the threshold for the remainder of the test. Memory consumption averaged about 150GB during the steady state. Network utilization for the compute host is much higher due to the fact that the VHDX files for the desktops reside on the SMB file share hosted on the scale-out file server. This graph represents a combination of the network traffic between the compute host and file server for VHDX communication as well as traffic generated for the test workload; the network traffic solely for VHDX communication can be seen in the next section under File Server.

File Server: Since the file servers are configured in a failover cluster hosting the scale-out file server role, the graphs below represent active resource utilization. As you can see in the following graphs, resource utilization on the file server hosts was very low during the test. The systems under test contained 128GB of RAM but, as can be seen in the Memory Utilization graph, 32GB will be more than sufficient. The Network graph represents the amount of VHDX communication seen on the front-end between the compute host and the file server.

The graph below shows the IOPS of the physical disks in the MD1220 array. During steady state, the average IOPS for the SAS drives is 717 and 2711 for the SSD drives; the total IOPS is around 3428.

8.3.3.4 Shared Infrastructure (VRTX) – Enhanced User Workload (150 Users)

The results below were obtained from the compute host as configured in the Shared Infrastructure Pilot architecture with vWorkspace deployment. In this configuration, two M620 blade servers belong to a Microsoft Failover Cluster; however, all virtual desktops and management VMs were running on a single host for the duration of the test to simulate the effects of a failover scenario.

IOPS for the virtual desktops spiked to 1409 during the logon period from 1:25PM to 2:46PM, while users were logging in every 30 seconds. During steady state from 2:47PM to 3:44PM, all users were executing the test workload and IOPS averaged 720, yielding about 5 IOPS/user. Again, we see the benefits of the HyperCache feature of vWorkspace in IOPS savings. Users began logging off at 3:45PM and completed at 3:56PM, during which IOPS peaked at 1586. CPU utilization spiked to about 85% but remained below the threshold for the duration of the test. Memory utilization is around the 85% threshold; therefore, it is not recommended to run additional enhanced desktops. Network utilization remained relatively low and well within limits for a 1Gbps switching infrastructure.
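As a quick check of the MD1220 tiering figures in section 8.3.3.3 above, the SSD tier absorbs the large majority of the steady-state I/O (illustrative arithmetic only, using the reported averages):

```python
# Steady-state IOPS split across the MD1220 array's tiers (section 8.3.3.3):
# SAS drives averaged ~717 IOPS and SSDs ~2711 IOPS.
sas_iops = 717
ssd_iops = 2711
total_iops = sas_iops + ssd_iops

print(total_iops)                              # 3428, matching the reported total
print(round(100 * ssd_iops / total_iops, 1))   # 79.1 -> SSDs serve ~79% of the I/O
print(round(100 * sas_iops / total_iops, 1))   # 20.9
```

This is the expected behavior of a tiered Storage Spaces layout: the hot working set lands on the SSD tier while the SAS spindles absorb the remainder.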
8.3.4 Shared Session

8.3.4.1 Local Tier 1 Production - Enhanced User Workload (150 Users)

For the shared session tests, the RDSH environment was configured in the following manner on the compute host:

                 Hyper-V Compute Host             RDSH VM1                  RDSH VM2                  RDSH VM3                  RDSH VM4
CPU Resources    40 logical cores (20 physical)   10 vCPUs                  10 vCPUs                  10 vCPUs                  10 vCPUs
Memory (GB)      128                              Dynamic 16GB (max 31GB)   Dynamic 16GB (max 31GB)   Dynamic 16GB (max 31GB)   Dynamic 16GB (max 31GB)

The results below were taken from the compute server hosting the four RDSH VMs as configured in the Local Tier 1 Production architecture. IOPS reached 295 during the logon period from 10:35AM to 11:50AM, while users were logging in every 30 seconds. During steady state from 11:51AM to 12:50PM, all users were executing the test workload; IOPS peaked at about 327 and then averaged 217, yielding about 1.6 IOPS/user. Users began logging off at 12:51PM and completed at 1:10PM. During this time, IOPS averaged about 169 but spiked to 638 at the beginning of the period due to a large number of sessions logging off at the same time.

CPU utilization spiked to about 78%, which is below the threshold and suggests the system could support more sessions. However, memory utilization reached the threshold, so additional sessions are not recommended. Additionally, during steady state between 11:51AM and 12:51PM, while users were executing the test, average memory consumed was approximately 115GB, or about 784MB/user. Network utilization remained relatively low and well within limits for a 1Gbps switching infrastructure.

The first set of results below show the utilization of 150 desktops hosted on a single compute server, while the second set of results show the impact of expanding the desktops to 300 and adding an additional compute host. The results below show the resource utilization on the compute hosts, file server hosts, and Dell EqualLogic storage array.
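Before moving to the personal desktop results, the RDSH shared-session configuration above can be sanity-checked with some quick arithmetic (an illustrative back-of-the-envelope calculation, not part of the original test methodology):

```python
# Quick arithmetic check of the shared-session host configuration (4 RDSH VMs
# on one Hyper-V host); figures come from the configuration table above.
logical_cores = 40            # 20 physical cores, Hyper-Threading enabled
rdsh_vms = 4
vcpus_per_vm = 10
host_ram_gb = 128
max_dynamic_ram_gb = 31       # per-VM dynamic memory ceiling

# vCPU-to-logical-core subscription ratio (1.0 means no oversubscription).
print(rdsh_vms * vcpus_per_vm / logical_cores)   # 1.0

# Worst-case dynamic memory demand still fits inside the host's 128GB of RAM.
print(rdsh_vms * max_dynamic_ram_gb)             # 124

# Observed steady-state consumption: ~115GB spread across 150 user sessions.
print(round(115 * 1024 / 150))                   # 785 -> roughly the 784MB/user reported
```

The 31GB per-VM ceiling is what keeps the four VMs' worst-case demand (124GB) under the physical 128GB, which is consistent with memory, not CPU, being the limiting factor in this test.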
8.3.5 Personal Virtual Desktops

8.3.5.1 Shared Tier 1 Production - Enhanced User Workload (150 – 300 Users)

Two file servers were configured in a highly available failover cluster and hosted the scale-out file server (SOFS) role. Since the disk files for the desktops reside on the storage array (shared Tier 1), local IOPS for the compute host are not displayed here, as they have no bearing on the results. Additionally, the share for the desktop VHDX files is a CSV that has data deduplication enabled. This results in much lower read IOPS, as most of the read blocks are cached as metadata and only writes are primarily passed through to the storage. As a result, the read/write IOPS ratio is typically about 5% / 95%.

Single Compute Host: The CPU and memory utilization for the single compute host are similar to the results seen in the pooled desktop test. The CPU spiked to 85.3% during the logon period but stayed under the threshold for the remainder of the test. Memory consumption averaged about 170GB during the steady state workload phase between 11:00AM and 12:27PM. Network utilization for the compute host is much higher due to the fact that the VHDX files for the desktops reside on the SMB file share hosted on the scale-out file server. The Network graph represents a combination of the network traffic between the compute host and file server for VHDX communication as well as traffic generated for the test workload; the network traffic solely for VHDX communication can be seen in the next section under file server. There was a spike to about 760Mbps during the steady state test execution phase, and during the steady state test phase from 11:00AM to 12:27PM utilization averaged about 402Mbps, yielding about 2.7Mbps per desktop.

File Server: Since the file servers are configured in a failover cluster hosting the scale-out file server role, the graphs below represent active resource utilization. As you can see in the following graphs, resource utilization on the file server hosts was very low during the test. The systems under test contained 128GB of RAM but, as can be seen in the Memory Utilization graph, 48GB will be more than sufficient.

The Network graph represents the amount of VHDX communication seen on the front-end between the compute host and the file servers, while the iSCSI graph represents the iSCSI network traffic on the back-end between the file servers and storage array. On the front-end, network utilization reached a peak of about 551Mbps during the login period from 9:45AM to 10:59AM. During the logoff period between 12:28PM and 12:44PM, a brief spike to 496Mbps was seen, but the utilization averaged about 175Mbps, or 1.2Mbps per desktop. On the back-end, iSCSI traffic reached a peak of about 500Mbps during the login period. During the steady state test phase, iSCSI traffic averaged about 197Mbps, yielding about 1.3Mbps per desktop, and utilization peaked at 339Mbps.

The picture below shows the I/O graphs for the entire test period, with the IOPS high point (1662) for all volumes indicated during the steady state test phase.

IOPS high point (~1625) for the volume containing the desktop VHDX files:

IOPS profile during the steady state test phase from 11:00AM to 12:27PM:

IOPS averaged about 1094 during the steady state period, or approximately 7.3 IOPS/user.

Two Compute Hosts – 300 Desktops: The utilization was similar for both compute hosts involved in this test.
The CPU spiked to approximately 90% on one host and about 84% on the other, but both stayed under the threshold for the remainder of the test. Memory consumption for both hosts averaged about 170GB during the steady state workload phase between 2:12PM and 3:20PM.

Compute Host 1:

Compute Host 2:

The Network graphs for both hosts represent a combination of the network traffic between the compute hosts and file servers for VHDX communication as well as traffic generated for the test workload; the network traffic solely for VHDX communication can be seen in the next section under file server. There was a spike to about 501Mbps on the first host and about 574Mbps on the second compute host. During the steady state test execution phase from 2:12PM to 3:20PM, the network utilization averaged about 333Mbps on one host and 354Mbps on the other, yielding about 2.3Mbps per desktop.

File Server: Since the file servers are configured in a failover cluster hosting the scale-out file server role, the graphs below represent active resource utilization. As you can see in the following graphs, resource utilization on the file server hosts was very low during the test. The systems under test contained 128GB of RAM but, as can be seen in the Memory Utilization graph, 48GB will be more than sufficient.

The Network graph represents the amount of VHDX communication seen on the front-end between the compute hosts and the file servers, while the iSCSI graph represents the iSCSI network traffic on the back-end between the file servers and storage array. On the front-end, network utilization reached a peak of about 589Mbps during the login period from 11:42AM to 2:11PM. During the logoff period between 3:21PM and 3:41PM, a brief spike to 633Mbps was seen, but the utilization averaged about 358Mbps, or 1.2Mbps per desktop. On the back-end, iSCSI traffic reached a peak of about 661Mbps during the login period. During the steady state test phase from 2:12PM to 3:20PM, iSCSI traffic averaged about 363Mbps, yielding about 1.2Mbps per desktop, and utilization peaked at 486Mbps. During the logoff period, iSCSI traffic spiked over 605Mbps.

The picture below shows the I/O graphs for the entire test period, with the IOPS high point (2489) for all volumes indicated during the steady state test phase.

IOPS high point (~2422) for the volume containing the desktop VHDX files:

IOPS profile during the steady state test phase from 2:12PM to 3:20PM:

IOPS averaged about 1847 during the steady state period, or approximately 6.2 IOPS/user. Although the IOPS per user is lower during this 300-desktop test, this does not indicate a trend when adding more desktops and compute hosts, as subsequent tests with additional desktops indicate that IOPS/user typically stays in the 7-8 IOPS range.

Data Deduplication: The picture below shows the deduplication savings (93%) on the volume containing the SMB file share for the personal desktops. With deduplication enabled, only 596GB of space is being used on the storage array.
This volume contains the VHDX files for 600 personal desktops that are consuming over 9TB of space. More details: all personal desktop tests were performed while the volume hosting the VHDX files was at least 90% optimized.

8.3.5.2 Shared Tier 1 Microsoft Storage Spaces – Enhanced User Workload (150 Users)

The results below show the resource utilization on the compute hosts, file server hosts, and Dell PowerVault storage array. Two file servers were configured in a highly available failover cluster and hosted the scale-out file server (SOFS) role. Since the disk files for the desktops reside on the storage array (shared Tier 1), local IOPS for the compute host are not displayed here, as they have no bearing on the results.

Compute Host: The CPU and memory utilization for the single compute host are similar to the results seen in the pooled desktop test. The CPU spiked to 85% and stayed under the threshold for the remainder of the test. Memory consumption averaged about 150GB during the steady state. Network utilization for the compute host is much higher due to the fact that the VHDX files for the desktops reside on the SMB file share hosted on the scale-out file server. This graph represents a combination of the network traffic between the compute host and file server for VHDX communication as well as traffic generated for the test workload; the network traffic solely for VHDX communication can be seen in the next section under File Server.

File Server: Since the file servers are configured in a failover cluster hosting the scale-out file server role, the graphs below represent active resource utilization. As you can see in the following graphs, resource utilization on the file server hosts was very low during the test. The systems under test contained 128GB of RAM but, as can be seen in the Memory Utilization graph, 32GB will be more than sufficient.
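The deduplication savings reported for the 600-desktop personal volume in section 8.3.5.1 above can be reproduced with simple arithmetic (illustrative only):

```python
# Deduplication savings on the personal-desktop volume: "over 9TB" of VHDX
# data occupying only 596GB of physical space on the storage array.
logical_gb = 9 * 1024    # ~9TB expressed in GB
physical_gb = 596
savings_pct = 100 * (1 - physical_gb / logical_gb)

print(round(savings_pct, 1))   # 93.5 -> consistent with the reported ~93% savings
```

Savings this high are typical when many desktops are derived from a common template image, since most VHDX blocks are identical across VMs.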
The Network graph represents the amount of VHDX communication seen on the front-end between the compute host and the file server. Network utilization hit a spike of 524Mbps during the steady state test phase but averaged about 197Mbps during this period and remained relatively low throughout the test.

The graph below shows the IOPS of the physical disks in the MD1220 array. During steady state, the average IOPS for the SAS drives is 214 and 2882 for the SSD drives; the total IOPS is around 3096.

Data Deduplication: The deduplication savings was approximately 93% on the volume containing the SMB file share for the personal desktops. This volume contains the VHDX files for 150 personal desktops that are consuming over 1TB of space.

8.3.6 Graphics Acceleration

8.3.6.1 Local Tier 1 Production - Shared Graphics User Workload (85 Users)

The results below were obtained from the compute host as configured in the Local Tier 1 Graphics Acceleration architecture. The server was configured with two AMD S9000 GPU cards, and the maximum density reached was 85 desktops. IOPS spiked to 1697 during the logon period from 11:42AM to 12:09PM, while users were logging in every 20 seconds. During steady state from 12:10PM to 1:09PM, all users were executing the test workload and IOPS averaged 564, yielding about 7 IOPS/user. Users began logging off at 1:10PM and completed at 1:20PM, during which a brief IOPS spike to about 2500 occurred. CPU utilization reached about 70% but stayed well below the threshold for the entire test. Memory utilization is well below the threshold; consumption averaged about 151GB (1.8GB per desktop) during the steady state workload phase between 12:10PM and 1:09PM.

GPU Performance Analysis: Although CPU, memory, disk,
and network utilization results were well within limits, density was determined by the GPU utilization. Below is the GPU utilization for the two AMD S9000 cards, captured soon after the steady state test phase started:

Process Explorer graph showing GPU utilization during the same period:

The graph below shows the RemoteFX Output FPS for all sessions:

8.3.7 Unified Communications

8.3.7.1 Lync 2013 VDI Plug-in

In a virtual desktop scenario, the Lync 2013 VDI plug-in offloads encoding and decoding of media from the server hosting the desktops to the local client connecting to the desktop. Tests were performed to show the impact of making audio and video calls with and without the plug-in.

Audio Call Sample: During a call playing the same audio clip, processor utilization within the virtual desktop not paired with the Lync plug-in spiked to about 14%, while only spiking to about 7% with the plug-in enabled. Average processor utilization was about 4% with the plug-in and about 5.3% without. On average, this was about a 25% reduction in processor utilization, while reaching a 50% reduction on spikes.

Video Call Sample: During a video call, processor utilization within the virtual desktop not paired with the Lync plug-in spiked to about 36%, while only spiking to about 3% with the plug-in enabled. Average processor utilization was about 2% with the plug-in and about 25% without. This was about a 92% reduction in processor utilization on average and during spikes. No discernible difference was noticed in memory utilization, although additional testing may show an impact in this area as well.
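The reduction percentages quoted above follow directly from the measured utilization figures; a quick check (illustrative arithmetic only):

```python
# Reproduce the Lync 2013 VDI plug-in CPU-reduction figures quoted above.
def reduction_pct(without_plugin, with_plugin):
    return 100 * (without_plugin - with_plugin) / without_plugin

print(round(reduction_pct(14, 7)))    # 50 -> audio-call spikes
print(round(reduction_pct(5.3, 4)))   # 25 -> audio-call average
print(round(reduction_pct(36, 3)))    # 92 -> video-call spikes
print(round(reduction_pct(25, 2)))    # 92 -> video-call average
```

The much larger saving on video calls reflects that video encoding and decoding is the heaviest media workload being offloaded to the client.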
Appendix A – 10-Seat Trial Kit

Introduction

To get up and running as quickly as possible with pooled virtual desktops, Dell offers an extremely affordable solution capable of supporting 10 concurrent virtual desktop users for a minimal investment. This architecture leverages an inexpensive single server platform intended to demonstrate the capabilities of VDI for a small environment or a focused POC/trial of Microsoft RDS or Dell vWorkspace. All VDI roles/sessions are hosted on a single server and can leverage existing legacy networking where applicable.

Server Configuration

The PowerEdge T110 II is the server platform of choice for this offering, providing high performance at an extremely low price of entry. Supporting the Intel Xeon E3-1200 series of CPUs and up to 32GB RAM, the T110 provides a solid server platform to get started with VDI.

10 User Compute Host: PowerEdge T110 II
- 1 x Intel Xeon E3-1220 V2 (3.1GHz)
- 32GB Memory (4 x 8GB DIMMs @ 1600MHz)
- Microsoft Windows Server 2012 Hyper-V (VDI)
- 4 x 500GB SATA 7.2k Disks, RAID 10 (OS + VDI)
- PERC H200 Integrated RAID Controller
- Broadcom 5722 1Gb NIC (LAN)
- 305W PSU

Based on the server hardware configuration, 10 users will experience excellent performance with additional resource headroom available in reserve. The consumption numbers below are based on average performance:

Task Worker Users   CPU (%)   RAM (GB Consumed)   Disk (IOPS)   Network (Kbps)
10                  72        20                  77            150

Management and Compute Infrastructure

The solution architecture for the 10 user trial kit combines the Compute, Management, and Storage layers onto a single server-based platform. All VDI server roles and desktop sessions are hosted on a single server in this model, so there is no need for external storage. Higher scale and HA options are not offered with this bundle.

RDS

Since the RDCB and Licensing roles will be enabled within the Hyper-V parent partition, only the file server VM requires that specific physical resources be assigned. To maximize server resources, the connection broker and license server roles are enabled within the Hyper-V parent partition, while the File server and VDI sessions exist as VMs within child partitions.

Role             vCPU   Startup RAM   Dynamic Memory Min|Max   Buffer   Weight   NIC   OS + Data vDisk (GB)   Tier 2 Volume (GB)
File Server      1      1GB           512MB|2GB                20%      Med      1     40 + 10                50
Pooled VDI VMs   1      512MB         512MB|2GB                20%      Med      1     20                     -

vWorkspace

As is the case in the larger distributed architecture, no roles will be enabled within the Hyper-V parent partition. All VDI management roles and VDI sessions will be enabled via VMs running in child partitions. A single Windows Server VM is sufficient to run the VDI management roles.

Role             vCPU   Startup RAM   Dynamic Memory Min|Max   Buffer   Weight   NIC   OS + Data vDisk (GB)   Tier 2 Volume (GB)
VDI Mgmt VM      1      4GB           512MB|8GB                20%      Med      1     40                     50
Pooled VDI VMs   1      512MB         512MB|2GB                20%      Med      1     20                     -

Storage Configuration

The 10 User POC solution includes 4 total hard drives configured in RAID 10 to host the Windows Server OS as well as the VDI sessions. This configuration will maximize available performance and data protection.

Volumes    Size   RAID   Storage Tier   Purpose                             File System
OS + VDI   1TB    10     Tier 1         Host OS/Mgmt roles + VDI Sessions   NTFS

Appendix B – Secure Gateway

EOPx: For our testing purposes, we tested RDP sessions with EOPx on.

Certificates: We created a self-signed certificate for use, though you can use an organization certificate if you so wish.

User Density: We tested the following user connection counts: 125, 250 and 500.
For the number of connections and user density that the secure gateway virtual machine can handle, see the scaling guide. Please refer to the vWorkspace documentation for details on creating the certificate.

Acknowledgements

Thanks to Peter Fine, Sr. Principal Architect for Cloud Client Computing at Dell, for his expertise and contributions to the Dell Wyse Datacenter solutions and reference architectures.

Thanks to Derrick Isoka, Program Manager at Microsoft, for his knowledge and information regarding Microsoft RemoteFX vGPU configuration, persistent virtual desktops, and Remote Desktop Services in general. Thanks to the Microsoft Remote Desktop Services/VDI team for their efforts and support of Dell Wyse Datacenter solutions.

Thanks to Michael Marchenko for his insight and help with Microsoft Storage Spaces.

About the Authors

Steven Hunt is the Principal Engineering Architect for Microsoft based solutions in the Cloud Client Computing Group at Dell. Steven has over a decade of experience with Dell (vWorkspace), Microsoft (RDS), Citrix (XenDesktop/XenApp) and VMware (View) Cloud Client Computing.

Senthil Baladhandayutham is the Solutions Development Manager in the Cloud Client Computing Group at Dell, managing the development and delivery of Enterprise class Cloud Client Computing based on Dell datacenter components and core virtualization platforms.

Jerry Van Blaricom is a Systems Principal Engineer in the Cloud Client Computing Group at Dell. Jerry has extensive experience with the design and implementation of a broad range of enterprise systems and is focused on making Dell’s virtualization offerings consistently best in class.

Bala Chandrasekaran is a Principal Engineer in the Cloud Client Computing Group at Dell. Bala has over a decade of experience designing virtualization infrastructure solutions.

Farzad Khosrowpour is a Systems Sr. Principal Engineering Architect in the Cloud Client Computing Group at Dell. Farzad is recognized for enterprise level technology development and has led various R&D teams across a broad set of solutions and products including storage, servers, data management, and system architectures.

Reed Martin is a Sr. Systems Engineer in the Cloud Client Computing Group at Dell with extensive experience validating VDI solutions by Microsoft (RDS), VMware (View), and Citrix (XenDesktop).

John Waldron is a Sr. Systems Engineer in the Cloud Client Computing Group at Dell. John has years of deep operational experience in IT and holds a Bachelor’s degree in Computer Engineering from the University of Limerick.

Shruthin Reddy is a Sr. Technology Marketing Manager in the Cloud Client Computing Group at Dell.