Power6 vSCSI to Power7 vFC (NPIV) migration
Rajesh Shah, UNIX Systems Administrator III
Classified - Internal use

NPIV Overview

Backup prior to migration
Important: Ensure all backups of the source client LPAR are in place before migration.

Basic steps when migrating from vSCSI to NPIV
On source:
- Remove the vSCSI adapters / vhosts on the source (can be done later)
On target:
- Create a virtual Fibre Channel server adapter at the VIO server
- Create a virtual FC client adapter at the LPAR
- Map the virtual adapter to a physical FC port with vfcmap
- Locate the WWPNs for the destination client Fibre Channel adapter from the HMC, and remap the SAN storage that was originally mapped to the source partition to the WWPNs of the destination partition

Steps done before and after migration on the client LPAR
During the outage window, on the OLD P6 (source) LPAR:
a) TL updates
b) Install the SAN multipath code specified by the SAN vendor (EMC ODM)
c) Drop the EtherChannel
d) Reconfigure the NIC on a single en interface
e) Disable NFS/NAS mounts
f) Reboot
g) Shutdown
After migration, once the P7 (NEW target) LPAR is booted up:
a) Enable NFS/NAS mounts
b) Change/verify LUN attributes
c) Reboot
d) System verification
e) Application verification (the big OK)
f) Cleanup
NOTE: The source P6 client LPAR's LUN serial numbers and the target P7 LPAR's virtual WWPNs are given to the Storage team for LUN masking and for rezoning the target LUNs to the virtual WWPNs on the P7 LPAR.

[Diagram: P5/P6 LPAR, P7 LPAR, VIO, SVC, IBM SAN, EMC SAN]
In the background, the SAN team uses the SVC hot array-level migration method to migrate all real-time data ONLINE from the IBM SAN to the EMC SAN, right up until the actual outage window.

[Diagram: P6 LPAR, P7 LPAR, VIO, SVC, IBM SAN, EMC SAN, migrated LUN]
During the REAL outage window, the SAN team rezones the EMC SAN storage to the destination P7 LPAR's virtual WWPNs.
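The backup step above can be sketched as a small runbook script. This is a sketch only: the NFS export path and image file name are assumptions (the NIM server name us6nim02 appears later in the deck), and DRY_RUN=1 simply prints each command so the script can be reviewed on any system before being run on the source AIX LPAR.

```shell
#!/bin/sh
# Sketch of the pre-migration backup step. DRY_RUN=1 (the default) only
# prints each command; set DRY_RUN=0 on the real source LPAR to execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Mount an NFS target (export path is an assumed example) and take a
# bootable mksysb image of rootvg, regenerating /image.data first (-i).
run mount us6nim02:/backup /mnt
run mksysb -i /mnt/source_lpar.mksysb
```

The same DRY_RUN pattern is reused in the later sketches, so each command list can be eyeballed before the change window.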
Prerequisites for multipath NPIV configuration
- IBM POWER6 or above processor-based hardware
- 8Gb PCIe dual-port FC adapter
- NPIV-enabled SAN switches
- VIOS version 2.1 Technology Level 02 or above
- AIX 5.3 Technology Level 09 or above, or AIX 6.1 or above

Setting up NPIV containers on the P7 (target) prior to migration
1) Create a virtual fibre adapter in the VIO server
2) Create a virtual fibre adapter in your VIO client
3) Map the virtual adapter in the VIO server to a physical fibre adapter using the vfcmap command
4) Give the virtual worldwide port names (WWPNs) to your SAN team

VIOS configuration
Create dual VIOS instances on the Power server to get maximum redundancy for the paths. Each VIOS should own at least one 8Gb FC adapter (FC#5735) and one virtual FC server adapter to map to the client LPAR.
NOTE: Before doing any NPIV-related configuration, the VIOS and AIX LPAR profiles have to be completed with all the other details required for running the VIOS and the AIX LPAR: proper virtual SCSI and/or virtual Ethernet adapter mappings, physical SCSI/RAID adapter assignments to the VIOS and/or AIX LPAR, and so on. This is because, to map a virtual FC server adapter to a client adapter, both the VIOS and the AIX LPAR must already exist on the system. This is NOT like virtual SCSI adapter mapping, where you provide only the slot numbers and the HMC does not check for the existence of the client.
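The version prerequisites in the list above can be checked mechanically before starting. A minimal sketch: the version_ge helper is ours, not an AIX command, and the example values stand in for what ioslevel (on the VIOS) and oslevel -s (on AIX) would report; mapping a reported level string onto a dotted version is an assumption to verify against the real output format.

```shell
#!/bin/sh
# version_ge A B: succeeds if dotted version A >= B (a helper of our own,
# used here to encode the slide's minimum levels).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | tail -1)" = "$1" ]
}

# On the VIOS, the output of 'ioslevel' would replace the example value:
version_ge 2.1.2 2.1.2 && echo "VIOS 2.1 TL02 or above: OK"
# On AIX, the level derived from 'oslevel -s' would replace 6.1:
version_ge 6.1 5.3 && echo "AIX level: OK"
```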
Creating Virtual FC Server Adapters
Open the first VIOS's profile for editing, navigate to the Virtual Adapters tab, and click Actions > Create > Fibre Channel Adapter. Input the virtual FC server adapter slot number, select from the dropdown the client LPAR that will be connected to this server adapter, and input its planned virtual FC client adapter slot number.
In the same way, edit the profile of the second VIOS and add its virtual FC server adapter details.

Map virtual FC server adapters to physical HBA ports
Once the virtual FC server adapters are created, they have to be mapped to physical HBA ports to connect them to the SAN fabric. This connects the virtual FC client adapters to the SAN fabric through the virtual FC server adapter and the physical HBA port.
For this mapping, the HBA port must already be connected to a SAN switch with NPIV support enabled. Run 'lsnports' at the VIOS '$' prompt to find out whether the connected SAN switch is NPIV-enabled: if the 'fabric' parameter shows '1', the HBA port is connected to a SAN switch that supports NPIV; if the switch does not support NPIV, the value will be '0'; and if there is no SAN connectivity at all, 'lsnports' produces no output.
If 'fabric' is '1', we can go ahead and map the virtual FC adapters to those HBA ports. In the VIOS, the virtual FC adapters show up as 'vfchost' devices; run 'lsdev -vpd | grep vfchost' to find out which device represents the virtual FC adapter in a specific slot.
vfcmap -vadapter vfchost10 -fcp fcs0
vfcmap -vadapter vfchost11 -fcp fcs1
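The two vfcmap commands above generalize to a loop over server-adapter/port pairs. A dry-run sketch (the vfchost/fcs names are the slide's examples; DRY_RUN=1 prints the commands instead of executing them, since vfcmap exists only in the VIOS restricted shell):

```shell
#!/bin/sh
# Dry-run sketch of the vfcmap step on one VIOS. DRY_RUN=1 (default)
# prints the commands; they are run for real only at the VIOS '$' prompt.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Map each virtual FC server adapter (vfchost) to a physical FC port (fcs),
# alternating physical ports so the two client paths stay independent.
for pair in "vfchost10 fcs0" "vfchost11 fcs1"; do
  set -- $pair                      # split into $1=vfchost, $2=fcs port
  run vfcmap -vadapter "$1" -fcp "$2"
done
```

After the real mappings are made, 'lsmap -all -npiv' on the VIOS shows each vfchost bound to its fcs port and the client login state.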
Repeat the above steps in the second VIOS LPAR as well. If you have more client LPARs, repeat the steps for all of those LPARs in both VIOS LPARs.

AIX LPAR configuration
Now we need to create the virtual FC client adapters in the AIX LPAR.

Creating Virtual FC Client Adapters
Open the AIX LPAR's profile for editing and add the virtual FC client adapters: navigate to the Virtual Adapters tab and select Actions > Create > Fibre Channel Adapter.
Input the first virtual FC client adapter's slot numbers. Make sure the slot numbers entered exactly match the slot numbers entered while creating the virtual FC server adapter in the first VIOS LPAR. This adapter will be the first path for SAN access in the AIX LPAR.
Do the same mapping for the second VIOS: create the second virtual FC client adapter, and make sure its slot numbers match the slot numbers entered while creating the virtual FC server adapter in the second VIOS LPAR.

Activate the target P7 LPAR using the HMC, into the SMS menu.

From the HMC, push all active and inactive WWPNs to the switch (use the correct CEC and LPAR names):
us6thmc01:~> chnportlogin -m CCRUTCEC01 -o login -p lparname -n default
Ensure the above command completes with exit status 0.
us6thmc01:~> lsnportlogin -m CCRUTCEC01 --filter "profile_names=default" | grep lpar
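The lsnportlogin check above can be turned into a pass/fail test. A sketch under stated assumptions: the CSV sample below imitates the HMC's comma-separated output with an assumed field order (lpar name, lpar id, profile, slot, WWPN, logged-in flag); on the real HMC, confirm the field positions in the actual lsnportlogin output before reusing the awk filter.

```shell
#!/bin/sh
# Sketch: verify that every WWPN is logged in to the fabric. The sample
# output and its field order are assumptions standing in for:
#   lsnportlogin -m CCRUTCEC01 --filter "profile_names=default"
sample='lpar1,2,default,34,c05076012345678a,1
lpar1,2,default,35,c05076012345678b,1'

# Assumed positions: field 5 = WWPN, field 6 = logged-in flag.
not_logged_in=$(printf '%s\n' "$sample" | awk -F, '$6 != 1 {print $5}')
if [ -z "$not_logged_in" ]; then
  echo "all WWPNs logged in"
else
  echo "NOT logged in: $not_logged_in"
fi
```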
Steps during the actual outage window, on the source client LPAR

AIX TL update before migration
mount us6nim02:/usr/sys/inst.images /usr/sys/inst.images
cd /usr/sys/inst.images/6100-07-03-1207
Run smitty update_all and update to the latest 6.1 TL7 SP3 level.
Run 'oslevel -s' and ensure the system is at that level.

EMC ODM filesets install on the source client LPAR
Install the EMC Symmetrix ODM filesets from us6nim02:
cd /usr/sys/inst.images/EMCODM
smitty installp -> select the EMC Symmetrix FCP MPIO fileset from the list, and set the option to install requisite filesets to Yes.

NIC reconfiguration on the source client LPAR (TO BE DONE FROM A TERMINAL CONSOLE)
From the HMC, open the client LPAR console and log in to the source LPAR.
ifconfig en2 down
ifconfig en2 detach
Then remove the EtherChannel: smitty etherchannel -> Remove an EtherChannel, and remove the ent1 virtual adapter.
Configure the IP on the en0 interface via smitty tcpip.
shutdown -Fr
After the client LPAR reboots, ensure 'oslevel -s' shows the correct level and make sure you can log in over the network.
Take an OS backup so we have the latest sysback image with the EMC ODM filesets, the new NIC configuration, and the TL updates.

Disable the NFS/NAS mount points, then shut down the source LPAR:
# shutdown -F now
THIS is the start of the real outage window.

SAN team will rezone VMAX LUNs to the target virtual WWPNs
During the actual outage window, the SAN team rezones the VMAX LUNs and presents them to the P7 target LPAR's virtual WWPNs.

P7 target LPAR steps
Activate the P7 target LPAR and let it boot on its own. If the LPAR cannot find the boot disk on its own, shut it down and activate it into the SMS menu:
Select Boot Options -> Configure Boot Device Order -> Select the 1st Boot Device -> Hard Drive -> SAN -> List all devices.
Select the first path disk in the list to boot from, then exit the SMS menu and let it boot.
Once the P7 LPAR is booted:
- Change/verify the LUN attributes (you can change hcheck_interval=60 for all LUNs)
- Enable the NFS/NAS mounts
- System verification
- Reboot
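The console-side NIC reconfiguration above can be sketched as a command sequence (dry-run by default). mktcpip is the command-line equivalent of the smitty tcpip path; the en2/ent1/en0 names are the slide's, while ent2, the hostname, and the IP/netmask/gateway values are placeholders to replace with the real ones.

```shell
#!/bin/sh
# Dry-run sketch of the NIC reconfiguration on the source LPAR console.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Take the EtherChannel interface down and detach it.
run ifconfig en2 down
run ifconfig en2 detach
# Remove the EtherChannel pseudo-device (ent2 is an assumed name for it)
# and the ent1 virtual adapter named on the slide.
run rmdev -dl ent2
run rmdev -dl ent1
# Re-address the single physical interface (placeholder IP values):
run mktcpip -h sourcelpar -a 192.0.2.10 -m 255.255.255.0 -i en0 -g 192.0.2.1
```

This must be done from the HMC console session, never over the network interface being reconfigured.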
NFS mounts - IMPORTANT
Change the NFS and NAS mounts in /etc/filesystems back to mount=true.

Change all the VMAX LUN attributes as below for all VMAX disks:
chdev -l hdiskX -a reserve_policy=no_reserve -P
chdev -l hdiskX -a algorithm=round_robin -P
chdev -l hdiskX -a queue_depth=80 -P
Verify that the VMAX LUNs are seen as 'EMC Symmetrix FCP MPIO VRAID' in the output of:
lsdev -Cc disk
Make sure you are able to connect to the migrated LPAR from the outside network.
Once system verification is done:
shutdown -Fr
After the system comes back up, ensure all filesystems and PVs are fine.

Application verification
Ask the application team to verify the application/DB from their end.
Once application verification is done: MIGRATION COMPLETE ... and here it is, the big OK.

Cleanup on the target client LPAR
Remove the old definitions of the hdisk and vscsi devices; 'lsdev -Cc disk' and 'lsdev -Cc adapter' will list them all.
Remove the old adapters left in Defined state, e.g.:
rmdev -dl vscsi0   (and the same for vscsi1)
# cfgmgr -> ensure they are gone
NOTE: hdisk numbering will change after migration for rootvg and the other VGs.

End of Presentation
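The per-disk chdev commands above are usually applied in a loop over all the VMAX hdisks. A dry-run sketch: the hdisk names are placeholders, and on the live system the disk list would come from lsdev -Cc disk filtered for Symmetrix devices. The -P flag defers each change until the next reboot, which the procedure already includes.

```shell
#!/bin/sh
# Dry-run sketch of setting the VMAX LUN attributes on every EMC disk.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Placeholder list; live version: lsdev -Cc disk | awk '/Symmetrix/ {print $1}'
disks="hdisk2 hdisk3"
for d in $disks; do
  run chdev -l "$d" -a reserve_policy=no_reserve -P
  run chdev -l "$d" -a algorithm=round_robin -P
  run chdev -l "$d" -a queue_depth=80 -P
  run chdev -l "$d" -a hcheck_interval=60 -P   # the slide's health-check value
done
```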