NPIV VIO Presentation




NPIV and the IBM Virtual I/O Server (VIOS)
October 2008
© 2006 IBM Corporation

NPIV Overview
► N_Port ID Virtualization (NPIV) is a Fibre Channel industry-standard method for virtualizing a physical Fibre Channel port.
► NPIV allows one F_Port to be associated with multiple N_Port IDs, so a physical Fibre Channel HBA can be shared across multiple guest operating systems in a virtual environment.
► On POWER, NPIV allows logical partitions (LPARs) to have dedicated N_Port IDs, giving the OS a unique identity to the SAN, just as if it had dedicated physical HBAs.

NPIV specifics
► PowerVM VIOS 2.1 – GA November 14; NPIV support now has a planned GA of December 19
► Required software levels:
 – VIOS Fix Pack 20.1
 – AIX 5.3 TL9 SP2
 – AIX 6.1 TL2 SP2
 – HMC 7.3.4
 – FW Ex340_036
 – Linux and IBM i planned for 2009
► Required hardware:
 – POWER6 520, 550, 560, 570 only at this time; Blade planned for 2009
 – #5735 PCIe 8Gb Fibre Channel Adapter
► Unique WWPN generation (WWPNs are allocated in pairs)
► Each virtual FC HBA has a unique and persistent identity
► Compatible with Live Partition Mobility (LPM)
► VIOS can support NPIV and vSCSI simultaneously
► Each physical NPIV-capable FC HBA will support 64 virtual ports
► HMC-managed and IVM-managed servers

Storage Virtualisation
[Diagram: with vSCSI, the VIOS is a storage virtualiser and the VIOS admin is in charge; the VIO client sees a generic SCSI disk over virtual SCSI adapters (SCSI/SAS). With NPIV, the VIOS operates in pass-through mode and the SAN admin is back in charge; the VIO client sees the actual LUNs (e.g. EMC 5000, IBM 4700, IBM 2105) over virtual FC adapters. Note the difference in path code and devices.]

NPIV – What you need?
► New PCIe 8Gbit Fibre Channel adapters (can also run at 2 or 4 Gbit)
► Entry SAN switch must be NPIV capable; the SAN fabric can be 2, 4 or 8 Gbit (not 1 Gbit)
► The disk subsystem does not need to be NPIV capable
► Supports SCSI-2 reserve/release and SCSI-3 persistent reserve
► AIX 5.3 TL09 SP2 or AIX 6.1 TL02 SP2; SLES 10 SP2, RHEL 4.7 and RHEL 5.3
► New EL340 firmware (disruptive)
► VIOS 2.1, HMC 7.3.4
► POWER6 only
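As a sketch of how these prerequisites can be checked from the VIOS command line, the fragment below parses stand-in output for `ioslevel` and `lsnports`. The adapter name, location code and port counts are invented for illustration; on a live VIOS you would pipe in the real command output instead.

```shell
# Hypothetical pre-flight check before creating NPIV mappings.
# Stand-in strings for `ioslevel` and `lsnports` output; the lsnports
# columns (name, physloc, fabric, tports, aports, swwpns, awwpns)
# follow the VIOS 2.1 layout.
ioslevel_output='2.1.0.0'
lsnports_output='name  physloc                     fabric tports aports swwpns awwpns
fcs0  U789C.001.DQD0564-P1-C1-T1       1     64     64   2048   2047'

# NPIV requires VIOS 2.1 or later
major=${ioslevel_output%%.*}
minor=${ioslevel_output#*.}; minor=${minor%%.*}
if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 1 ]; }; then
    echo "VIOS level OK: $ioslevel_output"
fi

# In lsnports output, fabric=1 means the attached switch port supports NPIV
fabric=$(printf '%s\n' "$lsnports_output" | awk 'NR == 2 { print $3 }')
if [ "$fabric" = "1" ]; then
    echo "fcs0 is attached to an NPIV-capable fabric"
fi
```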
NPIV – What you do?
1. HMC 7.3.4: configure a Virtual FC Adapter, just like virtual SCSI, on both the client and the Virtual I/O Server. Once created: LPAR Config → Manage Profiles → Edit → click the FC Adapter → Properties, and the WWPN is available.
2. VIOS 2.1: connect the virtual FC adapter to the physical FC adapter with vfcmap. lsmap -all -npiv shows the resulting mappings; lsnports shows the physical ports supporting NPIV.

$ ioslevel
2.1.0.0
$ lsdev | grep FC
fcs0      Available   FC Adapter
fscsi0    Available   FC SCSI I/O Controller Protocol Device
vfchost0  Available   Virtual FC Server Adapter
$ vfcmap -vadapter vfchost0 -fcp fcs0
vfchost0 changed

3. SAN zoning: allow the LPAR access to the LUN via the new WWPN. Zone both WWPNs of the pair, and do so on any Partition Mobility target as well.

NPIV benefits
► NPIV allows storage administrators to use existing tools and techniques for storage management
► Solutions such as SAN managers, multipathing, monitoring, Oracle, backup/restore, Copy Services, etc. should work right out of the box
► Storage provisioning / ease of use
► Zoning / LUN masking
► Physical <-> virtual device compatibility
► Tape libraries
► SCSI-2 reserve/release and SCSI-3 persistent reserve – clustered/distributed solutions
► Load balancing (active/active)
► Solutions enablement (HA, apps, …)

NPIV implementation
► Install the correct levels of VIOS, HMC, firmware, 8G HBAs, and an NPIV-capable/enabled SAN and storage
► Virtual Fibre Channel adapters are created via the HMC
► The VIOS owns the server VFC; the client LPAR owns the client VFC
► The hypervisor does not reuse the WWPNs that are assigned to the virtual Fibre Channel client adapter on the client logical partition
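The vfcmap step above can be sanity-checked by parsing the lsmap output. The here-string below is a hypothetical sample of an `lsmap -all -npiv` listing (the partition name and location codes are made up); on a live VIOS you would pipe in the real command instead.

```shell
# Hypothetical check that a virtual FC server adapter has logged in to the
# fabric after vfcmap. Sample stand-in for `lsmap -all -npiv` output.
lsmap_output='Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      U8204.E8A.10FE411-V1-C31                2 lpar2          AIX

Status:LOGGED_IN
FC name:fcs0                    FC loc code:U789C.001.DQD0564-P1-C1-T1
Ports logged in:3
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0            VFC client DRC:U8204.E8A.10FE411-V2-C31'

# Pull out the login status and the backing physical port
status=$(printf '%s\n' "$lsmap_output"  | awk -F: '/^Status/  { print $2; exit }')
backing=$(printf '%s\n' "$lsmap_output" | awk -F: '/^FC name/ { print $2; exit }' | awk '{ print $1 }')
echo "vfchost0 -> $backing ($status)"
```

A status of NOT_LOGGED_IN at this point usually means the SAN zoning (step 3) has not yet been done for the new WWPN.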
► Server and client VFCs are mapped one-to-one with the vfcmap command in the VIOS
► The POWER hypervisor generates WWPNs based on the range of names available for use with the prefix in the vital product data on the managed system

Things to consider
► A WWPN pair is generated EACH time you create a VFC; it is NEVER re-created or re-used, just like a real HBA. If you create a new VFC, you get a NEW pair of WWPNs.
► Save the partition profile with the VFCs in it; make a copy, and don't delete a profile with a VFC in it.
► Make sure the partition profile is backed up for local and disaster recovery! Otherwise you'll have to create new VFCs and map to them during a recovery.
► The target storage subsystem must be zoned and visible from both source and destination systems for LPM to work.
► Active/passive storage controllers must BOTH be in the SAN zone for LPM to work.
► Do NOT include the VIOS physical 8G adapter WWPNs in the zone.
► You should NOT see any NPIV LUNs in the VIOS.
► Load multipathing code in the client LPAR, NOT in the VIOS.
► Monitor VIOS CPU and memory – the NPIV impact is unclear to me at this time.
► No 'passthru' tunables in the VIOS.

NPIV useful commands
vfcmap -vadapter vfchostN -fcp fcsX
► maps the virtual FC adapter to the physical FC port
vfcmap -vadapter vfchostN -fcp
► un-maps the virtual FC adapter from the physical FC port
lsmap -all -npiv
► shows the mapping of virtual and physical adapters and their current status
lsmap -npiv -vadapter vfchostN
► shows the same for a single VFC
lsdev -dev vfchost*
► lists all available virtual Fibre Channel server adapters
lsdev -dev fcs*
► lists all available physical Fibre Channel adapters
lsdev -dev fcs* -vpd
► shows all physical FC adapter properties
lsnports
► shows the NPIV readiness of the Fibre Channel adapter and the SAN switch
lscfg -vl fcsX
► in an AIX client LPAR, shows the virtual Fibre Channel adapter properties
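When handing the client-side WWPN to the SAN team for zoning, it can be scraped from the `lscfg -vl fcsX` output on the AIX client. The sample text below is illustrative only (the C0507600… value is a made-up virtual WWPN, though real hypervisor-generated WWPNs do use a C05076 prefix); on a live client you would pipe in the real command.

```shell
# Hypothetical extraction of the client WWPN from `lscfg -vl fcs0` output.
lscfg_output='  fcs0             U8204.E8A.10FE411-V2-C31-T1  Virtual Fibre Channel Client Adapter

        Network Address.............C05076000AFE0000
        ROS Level and ID............
        Device Specific.(Z0)........'

# The "Network Address" field holds the WWPN; strip the label and dots
wwpn=$(printf '%s\n' "$lscfg_output" | sed -n 's/.*Network Address\.*//p')
echo "Zone this WWPN: $wwpn"
```

Remember that the second WWPN of the pair (used during Partition Mobility) is visible in the partition profile on the HMC, not on the running client.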
NPIV resources
► Redbooks:
 – SG24-7590-01 PowerVM Virtualization on IBM Power Systems (Volume 2): Managing and Monitoring
 – SG24-7460-01 IBM PowerVM Live Partition Mobility
► VIOS latest info: http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

Questions

BACKUP VIOS SLIDES

#5735 PCIe 8Gb Fibre Channel Adapter
► Supported on 520, 550, 560, 570, 575
► Dual-port adapter – each port provides a single initiator
► Automatically adjusts to the SAN fabric: 8 Gbps, 4 Gbps, 2 Gbps
► LED on the card indicates link speed
► Ports have LC-type connectors
► Cables are the responsibility of the customer. Use multimode fibre optic cables with short-wave lasers:
 – OM3 – multimode 50/125 micron fibre, 2000 MHz*km bandwidth: 2Gb (0.5–500m), 4Gb (0.5–380m), 8Gb (0.5–150m)
 – OM2 – multimode 50/125 micron fibre, 500 MHz*km bandwidth: 2Gb (0.5–300m), 4Gb (0.5–150m), 8Gb (0.5–50m)
 – OM1 – multimode 62.5/125 micron fibre, 200 MHz*km bandwidth: 2Gb (0.5–150m), 4Gb (0.5–70m), 8Gb (0.5–21m)

Virtual SCSI
► The client LPAR (i.e. virtual machine) is the SCSI initiator; the VIOS is the SCSI target
► The server LPAR owns the physical I/O resources
► The client LPAR sees standard SCSI devices and accesses LUNs via a virtual SCSI adapter
► The VIOS is a standard storage subsystem
► The transport layer is the inter-partition communication channel provided by PHYP (reliable message transport): SRP (SCSI Remote DMA Protocol), LRDMA (logical redirected DMA)

Virtual SCSI (continued)
► SCSI peripheral device types supported:
 – Disk (backed by logical volume, physical volume, or file)
 – Optical (backed by physical optical, or file)
► Adapter and device sharing
► Multiple I/O Servers per system, typically deployed in pairs
► VSCSI client support:
 – AIX 5.3 or later
 – Linux (SLES9+, RHEL3 U3+, RHEL4) or later
 – IBM i
► Boot from VSCSI devices
► Multi-pathing for VSCSI devices

Basic vSCSI Client and Server Architecture Overview
[Diagram: I/O clients with virtual client adapters connect through PHYP to a virtual server adapter in the I/O Server, which owns the physical HBA and storage.]
vSCSI vs NPIV
[Diagram: with vSCSI the VIO client sees a generic SCSI disk served by the VIOS; with NPIV the VIO client sees the actual LUNs (e.g. EMC, IBM 2105) over FCP through the VIOS's FC HBAs and the SAN.]
► The vSCSI model for sharing storage resources is a storage virtualizer. Heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage and then allocated to client LPARs in the form of generic SCSI LUNs. The VIOS performs SCSI emulation and acts as the SCSI target.
► With NPIV, the VIOS's role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, the VIOS serving NPIV is a pass-through, providing an FCP connection from the client to the SAN.

vSCSI stack
[Diagram: LVM, multipathing and the disk driver run in both the AIX client and each VIOS; the VSCSI HBA in the client talks through PHYP to the VSCSI target in each VIOS, which drives the fibre channel HBAs into the SAN.]

NPIV stack
[Diagram: LVM, multipathing and the disk driver run only in the AIX client, over virtual FC (VFC) HBAs; each VIOS contains just a passthru module between the client's VFC HBAs and its physical fibre channel HBAs.]

NPIV – provisioning, managing, monitoring
[Diagram: VIO clients, each with a vFC adapter pair and its own WWPNs, connect through NPIV-enabled VIOS ports to an NPIV-enabled SAN serving DS4000, DS6000, DS8000, SVC, HDS, EMC and NetApp storage and a tape library.]

Live Partition Mobility (LPM) and NPIV
[Diagram: WWPNs are allocated in pairs, so a VIO client's virtual FC identity can follow it from the source system's VIOS to the destination system's VIOS during LPM.]

Heterogeneous multipathing
[Diagram: on IBM System p, an AIX client can multipath across an NPIV virtual FC path through VIOS#1's passthru module and a dedicated physical fibre HBA, reaching the storage controller ports via redundant SAN switches.]
VIOS block diagram (vSCSI and NPIV)
[Diagram: within the POWER Server, the VIOS presents vSCSI devices (SCSI LUNs) to LPARs via block virtualization (filesystems, LVM, multi-pathing) over physical adapters (FC/NPIV, SCSI, iSCSI, SAS, USB, SATA). Virtual devices can be backed by a file, a logical volume, a pathing device, or a physical peripheral device (disk, optical, virtual tape); NPIV ports pass physical storage through a passthru module.]

vSCSI basics
► File-backed disk storage pool (/var/vios/storagepools/pool_name), e.g. /var/vios/storagepools/pool1/foo1
► Virtual optical media repository (/var/vios/VMLibrary), e.g. /var/vios/VMLibrary/foo2.iso
► Logical volume storage pool (/dev/VG_name), e.g. /dev/storagepool_VG/lv_client12
► Physical-device-backed devices (/dev): /dev/hdisk10, /dev/lv_client20, /dev/powerpath0, /dev/cd0, /dev/sas0
► NPIV (/dev): /dev/fscsi0 <-> WWPN
► The vSCSI target in the VIOS performs SCSI emulation over these p2v mapping devices for client LPARs (AIX, Linux, or i5/OS); physical storage attaches via Fibre Channel, SCSI, iSCSI, SAS, USB or SATA.

Data flow using LRDMA for vSCSI devices
[Diagram: the vscsi initiator in the client and the vscsi target in the I/O server exchange control traffic through PHYP, while data moves by Logical Redirected DMA (LRDMA) directly between the client's data buffer and the physical adapter.]

VSCSI redundancy using multipathing at the client
[Diagram: an AIX client runs MPIO over two vscsi initiators, each connected through PHYP to a vscsi target in a different I/O Server, both reaching the same SAN.]

Direct attach fibre channel block diagram
[Diagram: with direct attach, the AIX generic disk driver and the fibre channel HBA device driver run in the same partition, which owns the FC HBA (SCSI initiator).]

NPIV block diagram
[Diagram: with NPIV, the AIX generic disk driver and FC HBA device driver run in the client over a VFC client adapter; the VIOS passthru module connects the VFC client through PHYP to the physical FC HBA (SCSI initiator).]
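As an illustration of the backing-device locations listed above, a small helper can classify a backing device by its path. The function and its labels are our own sketch for this deck, not VIOS commands, and the paths follow the examples above.

```shell
# Sketch: classify a vSCSI/NPIV backing device by its path on the VIOS,
# following the pool and device locations listed above (illustrative only).
classify_backing_device() {
    case "$1" in
        /var/vios/storagepools/*) echo "file-backed (disk storage pool)" ;;
        /var/vios/VMLibrary/*)    echo "virtual optical media" ;;
        /dev/fscsi*)              echo "NPIV physical FC port" ;;
        /dev/*)                   echo "physical or logical volume device" ;;
        *)                        echo "unknown backing device" ;;
    esac
}

classify_backing_device /var/vios/storagepools/pool1/foo1
classify_backing_device /dev/hdisk10
classify_backing_device /dev/fscsi0
```

Note that the order of the case branches matters: the more specific /dev/fscsi* pattern must be tested before the catch-all /dev/* pattern.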
Testing
[Diagram: a POWER5 System p/i server with logical partitions running Linux, AIX and the VIOS; client LPARs A1–A8 reach external storage (e.g. DS8K) over Virtual SCSI through the POWER Hypervisor and the VIOS's physical fibre channel HBAs. Available via the optional Advanced POWER Virtualization feature (POWER Hypervisor and VIOS).]

Questions

© 2008 IBM Corporation
Special notices
This document was developed for IBM offerings in the United States as of the date of publication; IBM may not make these offerings available in other countries, and the information is subject to change without notice. The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees, either expressed or implied. Any performance data was determined in a controlled environment; actual results may vary significantly. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Revised September 26, 2006.

IBM, AIX, POWER, POWER6, PowerVM, System p and related marks are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds. UNIX is a registered trademark of The Open Group. Other company, product and service names may be trademarks or service marks of others. Revised April 24, 2008.

© 2008 IBM Corporation