Deep dive into PowerVC Standard 1.2.1.0 using Storage Volume Controller and Brocade 8510-4 FC switches in a multifabric environment

Before reading this post I highly encourage you to read my first post about PowerVC, because this one is focused on the specifics of the standard edition. I had the chance to work on PowerVC express with IVM and local storage, and now with PowerVC standard with an IBM Storage Volume Controller & Brocade fibre channel switches. A few things are different between these two versions (particularly the storage management). Virtual machines created by PowerVC standard will use NPIV (virtual fibre channel adapters) instead of vSCSI adapters. Using local storage or using an SVC in a multi fabric environment are two different things, and the way PowerVC captures, deploys and manages virtual machines is totally different. The PowerVC configuration is more complex and you have to manage the fibre channel ports configuration, the storage connectivity groups and the storage templates. Last but not least, the PowerVC standard edition is Live Partition Mobility aware. Let’s have a look at all the standard version specifics. But before you start reading this post I have to warn you that this one is very long (it’s always hard for me to write short posts :-)). Last thing, this post is the result of one month of work on PowerVC mostly on my own, but I have to thank the IBM guys for helping with a few problems (Paul, Eddy, Jay, Phil, …). Cheers guys!

Prerequisites

PowerVC standard needs to connect to the Hardware Management Console, to the Storage Provider, and to the Fibre Channel Switches. Be sure ports are open between PowerVC, the HMC, the Storage Array, and the Fibre Channel Switches (a quick way to check this from the PowerVC host is shown after the list) :

  • Port TCP 12443 between PowerVC and the HMC (PowerVC is using the HMC K2 Rest API to communicate with the HMC)
  • Port TCP 22 (ssh) between PowerVC and the Storage Array.
  • Port TCP 22 (ssh) between PowerVC and the Fibre Channel Switches.

pvcch
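
A quick way to verify these flows from the PowerVC host before registering anything is a simple port check (a minimal sketch, assuming the nc utility is installed on the PowerVC host; the hostnames are placeholders for your own HMC, storage array and switches):

  nc -zv myhmc 12443          # HMC K2 REST API
  nc -zv mysvc 22             # ssh to the storage provider
  nc -zv switch_fabric_a 22   # ssh to the fabric A switch
  nc -zv switch_fabric_b 22   # ssh to the fabric B switch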

Check that your storage array is compatible with PowerVC standard (for the moment only IBM Storwize storage and the IBM Storage Volume Controller are supported). All Brocade switches with a version 7 firmware are supported. Be careful, the PowerVC Redbook is not up to date about this: all Brocade switches are supported (an APAR and a PMR are open about this mistake).

This post was written with this PowerVC configuration :

  • PowerVC 1.2.0.0 x86 version & PowerVC 1.2.1.0 PPC version.
  • Storage Volume Controller with EMC VNX2 Storage array.
  • Brocade DCX 8510-4.
  • Two Power 770+ with the latest AM780 firmware.

PowerVC standard storage specifics and configuration

PowerVC needs to control the storage to create or delete luns and to create hosts, and it also needs to control the fibre channel switches to create and delete zones for the virtual machines. If you are working with multiple fibre channel adapters with many ports, you also have to configure the storage connectivity groups and the fibre channel ports to tell PowerVC which port to use and in which case (you may want to create development virtual machines on two virtual fibre channel adapters only, and production ones on four). Let’s see how to do this :

Adding storage and fabric

  • Add the storage provider (in my case a Storage Volume Controller but it can be any IBM Storwize family storage array) :
  • blog_add_storage

  • PowerVC will ask you a few questions while adding the storage provider (for instance which pool will be the default pool for the deployment of the virtual machines). You can then check in this view the total size and remaining size of the pool in use :
  • blog_storage_added

  • Add each fibre channel switch (in my case two switches, one for fabric A and the second one for fabric B). Be very careful with the fabric designation (A or B): it will be used later when creating storage templates and storage connectivity groups :
  • blog_add_frabric

  • Each fabric can be viewed and modified afterwards :
  • blog_fabric_added

Fibre Channel Port Configuration

If you are working in a multi fabric environment you have to configure the fibre channel ports. For each port the first step is to tell PowerVC on which fabric the port is connected. In my case here is the configuration (you can refer to the colours on the image below, and to the explanations below) :

pb connectivty_zenburn

  • Each Virtual I/O Server has 2 fibre channel adapters with four ports.
  • For the first adapter : first port is connected to Fabric A, and last port is connected to Fabric B.
  • For the second adapter : first port is connected to Fabric B, and last port is connected to Fabric A.
  • Two ports (ports 1 and 2) remain free for future use (future growth).
  • For each port I have to tell PowerVC where it is connected (with PowerVC 1.2.0.0 you have to do this manually and check on the fibre channel switches where the ports are connected; with PowerVC 1.2.1.0 it is automatically detected by PowerVC :-)) :
  • 17_choose_fabric_for_each_port

    • Connected on Fabric A? (check the image below; the switch nodefind command confirms whether the port is logged in on this fibre channel switch)
    • blog_connected_fabric_A

      switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
      No device found
      switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
      Local:
       Type Pid    COS     PortName                NodeName                 SCR
       N    01fb40;    2,3;10:00:00:90:fa:3e:c6:ce;20:00:01:20:fa:3e:c6:ce; 0x00000003
          Fabric Port Name: 20:12:00:27:f8:79:ce:01
          Permanent Port Name: 10:00:00:90:fa:3e:c6:ce
          Device type: Physical Unknown(initiator/target)
          Port Index: 18
          Share Area: Yes
          Device Shared in Other AD: No
          Redirect: No
          Partial: No
          Aliases: XXXXX59_3ec6ce
      
    • Connected on Fabric B? (check the image below; the switch nodefind command confirms whether the port is logged in on this fibre channel switch)
    • blog_connected_fabric_B

      switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
      Local:
       Type Pid    COS     PortName                NodeName                 SCR
       N    02fb40;    2,3;10:00:00:90:fa:3e:c6:d1;20:00:01:20:fa:3e:c6:d1; 0x00000003
          Fabric Port Name: 20:12:00:27:f8:79:d0:01
          Permanent Port Name: 10:00:00:90:fa:3e:c6:d1
          Device type: Physical Unknown(initiator/target)
          Port Index: 18
          Share Area: Yes
          Device Shared in Other AD: No
          Redirect: No
          Partial: No
          Aliases: XXXXX59_3ec6d1
      switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
      No device found
      
    • Free, not connected? (check the image below)
    • blog_not_connected

  • At the end each fibre channel port has to be configured with one of these three choices (connected on Fabric A, connected on Fabric B, Free/not connected).
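
If you need to find the WWPN of a Virtual I/O Server port before running nodefind on the switches, the port VPD gives it directly (a quick sketch; fcs0 is just an example, repeat for each port, and the value shown here is illustrative):

  padmin@XXXXX59:/home/padmin$ lsdev -dev fcs0 -vpd | grep "Network Address"
        Network Address.............10000090FA3EC6CE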

Port Tagging and Storage Connectivity Group

Fibre channel ports are now configured, but we have to be sure that when deploying a new virtual machine :

  • Each virtual machine will be deployed with four fibre channel adapters (I am in a CHARM configuration).
  • Each virtual machine is connected on the first Virtual I/O Server to the Fabric A and Fabric B on different adapters (each adapter on a different CEC).
  • Each virtual machine is connected to the second Virtual I/O Server to Fabric A and Fabric B on different adapters.
  • I can choose to deploy the virtual machine using fcs0 (Fabric A) and fcs7 (Fabric B) on each Virtual I/O Server, or using fcs3 (Fabric B) and fcs4 (Fabric A). Ideally half of the machines will be created with the first configuration and the second half with the second configuration.

To do this you have to tag each port with a name of your choice, and then create a storage connectivity group. A storage connectivity group is a constraint that is used for the deployment of virtual machines :

pb_port_tag_zenburn

  • Two tags are created and set on each port, fcs0(A)_fcs7(B) and fcs3(B)_fcs4(A) :
  • blog_port_tag

  • Two connectivity groups are created to force the usage of tagged fibre channel ports when deploying a virtual machine.
    • When creating a connectivity group you have to choose the Virtual I/O Server(s) used when deploying a virtual machine with this connectivity group. It can be useful to tell PowerVC to deploy development machines on a single Virtual I/O Server, and production ones on dual Virtual I/O Servers :
    • blog_vios_connectivity_group

    • In my case connectivity groups are created to restrict the usage of fibre channel adapters. I want to deploy on fibre channel ports fcs0/fcs7 or fibre channel ports fcs3/fcs4. Here are my connectivity groups :
    • blog_connectivity_1
      blog_connectivity_2

    • You can check a summary of your connectivity group. I wanted to add this image because I think the two images (provided by PowerVC) explain better than text what a connectivity group is :-) :
    • 22_create_connectivity_group_3

Storage Template

If you are using different pools or different storage arrays (for example, in my case I can have different storage arrays behind my Storage Volume Controller) you may want to tell PowerVC to deploy virtual machines on a specific pool or with a specific type (I want, for instance, my machines to be created on compressed luns, on thin provisioned luns, or on thick provisioned luns). In my case I’ve created two different templates to create machines on thin or compressed luns (a rough sketch of the corresponding SVC commands is shown after the list below). Easy !

  • When creating a storage template you first have to choose the storage pool :
  • blog_storage_template_select_storage_pool

  • Then choose the type of lun for this storage template :
  • blog_storage_template_create

  • Here is an example with my two storage templates :
  • blog_storage_list
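
To give a rough idea of what these two templates translate to on the SVC side, here is a hedged sketch of the kind of mkvdisk calls involved: the first one for a thin provisioned lun, the second one for a compressed lun (names, pool and size are examples taken from my environment; the real arguments are generated by PowerVC, and the compressed variant requires Real-time Compression to be available on your SVC):

  # mkvdisk -name volume-thin-example -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 64424509440 -unit b -rsize 2% -autoexpand -grainsize 256
  # mkvdisk -name volume-compressed-example -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 64424509440 -unit b -rsize 2% -autoexpand -compressed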

A deeper look at VM capture

If you read my last article about the PowerVC express version you know that capturing an image can take some time when using local storage: “dding” a whole disk is long, copying a file to the PowerVC host is long. But don’t worry, PowerVC standard solves this problem easily by using all the potential of the IBM storage (in my case a Storage Volume Controller) … the solution: FlashCopies, more specifically what we call a FlashCopy-Copy (to be clear: a FlashCopy-Copy is a full copy of a lun; once it is finished there is no relationship left between the source lun and the FlashCopy lun (the FlashCopy is created with the autodelete argument)). Let me explain to you how PowerVC standard manages the virtual machine capture :

  • The activation engine has been run, and the virtual machine to be captured is stopped.
  • The user launches the capture from PowerVC.
  • A FlashCopy-Copy is created on the storage side; we can check it from the GUI :
  • blog_flash_copy_pixelate_1

  • Checking with the SVC command line (use the catauditlog command to see this) we can see that :
    • A new volume called volume-Image-[name_of_the_image] is created (all captured images will be called volume-Image-[name]), taking into account the storage template settings (diskgroup/pool, grainsize, rsize, …) :
    • # mkvdisk -name volume-Image_7100-03-03-1415 -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
      
    • A FlashCopy-Copy is created with the id of the boot volume of the virtual machine to capture as source, and the id of the image’s lun as target :
    • # mkfcmap -source 865 -target 880 -autodelete
      
    • We can check that vdisk 865 is the boot volume of the captured machine and that it has a FlashCopy running:
    • # lsvdisk -delim :
      id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:RC_name:vdisk_UID:fc_map_count:copy_count:fast_write_state:se_copy_count:RC_change:compressed_copy_count
      865:_BOOT:0:io_grp0:online:0:VNX_00086_SAS_POOL_1:60.00GB:striped:0:fcmap0:::600507680184879C2800000000000431:1:1:empty:1:no:0
      
    • The FlashCopy-Copy is prepared and started (at this step we can already use our captured image, the copy is running in the background) :
    • # prestartfcmap 0
      # startfcmap 0
      
    • While the FlashCopy is running we can check its progress (we can also check it from the GUI; a small polling sketch is shown at the end of this list) :
    • IBM_2145:SVC:powervcadmin>lsfcmap
      id name   source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name            group_id group_name status  progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time   rc_controlled
      0  fcmap0 865             XXXXXXXXX7_BOOT 880             volume-Image_7100-03-03-1415                     copying 54       50        100            off                                       no        140620002138 no
      
      IBM_2145:SVC:powervcadmin>lsfcmapprogress fcmap0
      id progress
      0  54
      
    • After the FlashCopy-Copy is finished, there is no relationship left between the source volume and the finished FlashCopy. The captured image is a vdisk :
    • IBM_2145:SVC:powervcadmin>lsvdisk 880
      id 880
      name volume-Image_7100-03-03-1415
      IO_group_id 0
      IO_group_name io_grp0
      status online
      mdisk_grp_id 0
      mdisk_grp_name VNX_XXXXX_SAS_POOL_1
      capacity 60.00GB
      type striped
      [..]
      vdisk_UID 600507680184879C280000000000044C
      [..]
      fc_map_count 0
      [..]
      
    • There is no more fcmap for the source volume :
    • IBM_2145:SVC:powervcadmin>lsvdisk 865
      [..]
      fc_map_count 0
      [..]
      
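As mentioned above, here is a small polling sketch to follow the copy progress without staying logged in the SVC GUI (assuming ssh key authentication to the SVC; the hostname and the fcmap name are examples):

  while true; do
    ssh powervcadmin@svc lsfcmapprogress fcmap0
    sleep 30
  done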

Deployment mechanism

blog_deploy3_pixelate

Deploying a virtual machine with the standard version is very similar to deploying a machine with the express version. The only difference is the possibility to choose the storage template (within the constraints of the storage connectivity group).

View from the Hardware Management Console

PowerVC uses the Hardware Management Console’s new K2 REST API to create the virtual machine. If you want to go further and check the commands run on the HMC, you can do it with the lssvcevents command (an example invocation is shown after the log below) :

time=06/21/2014 17:49:12,text=HSCE2123 User name powervc: chsysstate -m XXXX58-9117-MMD-658B2AD -r lpar -o on -n deckard-e9879213-00000018 command was executed successfully.
time=06/21/2014 17:47:29,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
time=06/21/2014 17:46:51,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 29 --id 1 -a remote_slot_num=6,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:46:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""6/CLIENT/1//29//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:46:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:46:17,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 28 --id 1 -a remote_slot_num=5,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:46:06,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""5/CLIENT/1//28//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:45:57,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:45:46,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 30 --id 2 -a remote_slot_num=4,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:45:36,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 29 --id 1 command was executed successfully.
time=06/21/2014 17:45:27,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""4/CLIENT/2//30//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:45:18,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:45:08,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 28 --id 1 command was executed successfully.
time=06/21/2014 17:45:07,text=User powervc has logged off from session id 42151 for the reason:  The user ran the Disconnect task.
time=06/21/2014 17:45:07,text=User powervc has disconnected from session id 42151 for the reason:  The user ran the Disconnect task.
time=06/21/2014 17:44:50,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o a -m 9117-MMD*658B2AD -s 23 --id 1 -a adapter_type=server,remote_lpar_id=8,remote_slot_num=3
command was executed successfully."
time=06/21/2014 17:44:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,virtual_scsi_adapters+=3/CLIENT/1//23/0,name=last*valid*c
onfiguration -o apply --override command was executed successfully."
time=06/21/2014 17:44:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:44:22,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 25 --id 2 -a remote_slot_num=2,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:44:11,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""2/CLIENT/2//25//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:44:02,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:43:50,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o r -m 9117-MMD*658B2AD -s 23 --id 1 command was executed successfully.
time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 2 -o off command was executed successfully.
time=06/21/2014 17:42:57,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_eth_adapters+=""""32/0/1665//0/0/zvdc4/fabbb99d
e420/all/"""""",name=last*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:42:49,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:41:53,text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r lpar -p deckard-e9879213-00000018 -n default_profile -o apply command was executed successfully.
time=06/21/2014 17:41:42,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:41:36,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=deckard-e9879213-00000018,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,max_mem=8192,p
rofile_name=default_profile,max_virtual_slots=64,lpar_proc_compat_mode=default,proc_mode=shared,min_procs=4,desired_procs=4,max_procs=4,min_proc_units=2,desired_proc_units=2,max_proc_units=2,s
haring_mode=uncap,uncap_weight=128,lpar_avail_priority=127,sync_curr_profile=1 command was executed successfully."
time=06/21/2014 17:41:01,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=FAKE_1403368861661,profile_name=default,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,
max_mem=8192,max_virtual_slots=4,virtual_eth_adapters=5/0/1//0/1/,virtual_scsi_adapters=2/client/1//2/0,""virtual_serial_adapters=0/server/1/0//0/0,1/server/1/0//0/0"",""virtual_fc_adapters=3/
client/1//2//0,4/client/1//2//0"" -o query command was executed successfully."
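
For reference, this is roughly the kind of invocation I used to pull these events on the HMC (-d is the number of days to look back; the grep simply narrows the output to the commands run by the powervc user):

  hscroot@hmc1:~> lssvcevents -t console -d 1 | grep "User name powervc"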

blog_deploy3_hmc1

As you can see on the picture below, four virtual fibre channel adapters are created, respecting the constraints of the storage connectivity group created earlier (looking on the Virtual I/O Server, the vfcmaps are ok …) :

blog_deploy3_hmc2_pixelate

padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost14 -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost14     U9117.MMD.658B2AD-V1-C28                8 deckard-e98792 AIX

Status:LOGGED_IN
FC name:fcs3                    FC loc code:U2C4E.001.DBJN916-P2-C1-T4
Ports logged in:2
Flags:a
VFC client name:fcs2            VFC client DRC:U9117.MMD.658B2AD-V8-C5

padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost15 -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost15     U9117.MMD.658B2AD-V1-C29                8 deckard-e98792 AIX

Status:LOGGED_IN
FC name:fcs4                    FC loc code:U2C4E.001.DBJO029-P2-C1-T1
Ports logged in:2
Flags:a
VFC client name:fcs3            VFC client DRC:U9117.MMD.658B2AD-V8-C6
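
To double-check that the physical ports chosen by the connectivity group still have NPIV capacity left, lsnports on the Virtual I/O Server shows the number of available WWPNs per port (output shortened, values are illustrative):

  padmin@XXXXX60:/home/padmin$ lsnports
  name             physloc                        fabric tports aports swwpns  awwpns
  fcs3             U2C4E.001.DBJN916-P2-C1-T4          1     64     62   2048    2040
  fcs4             U2C4E.001.DBJO029-P2-C1-T1          1     64     62   2048    2040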

View from the Storage Volume Controller

The SVC side is pretty simple, two steps: a FlashCopy-Copy of the volume-Image (the one created at the capture step; the source of the FlashCopy is the volume-Image-[name] lun) and a host creation for the new virtual machine :

  • Creation of a FlashCopy-Copy with the volume used for the capture as source :
  • blog_deploy3_flashcopy1

    # mkvdisk -name volume-boot-9117MMD_658B2AD-deckard-e9879213-00000018 -iogrp 0 -mdiskgrp VNX_00086_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
    # mkfcmap -source 880 -target 881 -autodelete
    # prestartfcmap 0
    # startfcmap 0
    
  • The host is created using the eight wwpns of the newly created virtual machine (I repaste here the lssyscfg command to check that the wwpns are the same :-)). A quick way to double-check the host definition from the SVC CLI is sketched after the screenshots below.
  • hscroot@hmc1:~> lssyscfg -r prof -m XXXXX58-9117-MMD-658B2AD --filter "lpar_names=deckard-e9879213-00000018"
    name=default_profile,lpar_name=deckard-e9879213-00000018,lpar_id=8,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/1/XXXXX60/29/0,virtual_eth_adapters=32/0/1665//0/0/zvdc4/fabbb99de420/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/2/XXXXX59/30/c050760727c5004a,c050760727c5004b/0"",""4/client/2/XXXXX59/25/c050760727c5004c,c050760727c5004d/0"",""5/client/1/XXXXX60/28/c050760727c5004e,c050760727c5004f/0"",""6/client/1/XXXXX60/23/c050760727c50050,c050760727c50051/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
    
    # mkhost -name deckard-e9879213-00000018-06976900 -hbawwpn C050760727C5004A -force
    # addhostport -hbawwpn C050760727C5004B -force 11
    # addhostport -hbawwpn C050760727C5004C -force 11
    # addhostport -hbawwpn C050760727C5004D -force 11
    # addhostport -hbawwpn C050760727C5004E -force 11
    # addhostport -hbawwpn C050760727C5004F -force 11
    # addhostport -hbawwpn C050760727C50050 -force 11
    # addhostport -hbawwpn C050760727C50051 -force 11
    # mkvdiskhostmap -host deckard-e9879213-00000018-06976900 -scsi 0 881
    

    blog_deploy3_svc_host1
    blog_deploy3_svc_host2
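
As mentioned above, you can also double-check the host definition from the SVC CLI: lshost lists the eight wwpns and their login state, and lshostvdiskmap shows the luns mapped to the host (a sketch, using the host name created by PowerVC above):

  # lshost deckard-e9879213-00000018-06976900
  # lshostvdiskmap deckard-e9879213-00000018-06976900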

View from fibre channel switches

On each of the two fibre channel switches four zones are created (do not forget the zones used for Live Partition Mobility). These zones can be easily identified by their names: all PowerVC zones are prefixed by “powervc” (unfortunately names are truncated). A way to check that they are part of the active zoning configuration is shown after the zone listings below :

  • Four zones are created on the fibre channel switch of the fabric A :
  • switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
                    c0:50:76:07:27:c5:00:51; 50:05:07:68:01:10:f3:2c
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
                    c0:50:76:07:27:c5:00:4c; 50:05:07:68:01:10:f3:85
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
                    c0:50:76:07:27:c5:00:4d; 50:05:07:68:01:10:f3:85
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
                    c0:50:76:07:27:c5:00:50; 50:05:07:68:01:10:f3:2c
    
  • Four zones are created on the fibre channel switch of the fabric B :
  • switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
                    c0:50:76:07:27:c5:00:4e; 50:05:07:68:01:20:f3:85
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
                    c0:50:76:07:27:c5:00:4a; 50:05:07:68:01:20:f3:2c
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
                    c0:50:76:07:27:c5:00:4b; 50:05:07:68:01:20:f3:2c
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
                    c0:50:76:07:27:c5:00:4f; 50:05:07:68:01:20:f3:85
    
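As mentioned above, you can confirm that these zones were added to the active zoning configuration (and not left in a pending transaction) by listing the effective configuration on each switch; cfgactvshow prints it and grep narrows the output (a sketch, the output should list the four powervc_* zones shown above):

  switch_fabric_a:FID1:powervcadmin> cfgactvshow | grep powervc_eckard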

Activation Engine and Virtual Optical Device

All my deployed virtual machines are connected to one of the Virtual I/O Servers by a vSCSI adapter. This vSCSI adapter is used to connect the virtual machine to a virtual optical device (a virtual cdrom) needed by the activation engine to reconfigure the virtual machine. Looking at the Virtual I/O Server, the virtual media repository is filled with customized iso files needed to activate the virtual machines :

  • Here is the output of the lsrep command on one of the Virtual I/O Servers used by PowerVC :
  • padmin@XXX60:/home/padmin$ lsrep 
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free 
        1017     1014 rootvg                   279552           110592 
    
    Name                                                  File Size Optical         Access 
    vopt_1c967c7b27a94464bebb6d043e6c7a6e                         1 None            ro 
    vopt_b21849cc4a32410f914a0f6372a8f679                         1 None            ro 
    vopt_e9879213dc90484bb3c5a50161456e35                         1 None            ro
    
  • At the time of writing this post the vSCSI adapter is not deleted after the virtual machine’s activation, but it is only used at the first boot of the machine (if you want to release the virtual optical device by hand, see the sketch at the end of this list) :
  • blog_adapter_for_ae_pixelate

  • Even better, you can mount this iso and check how it is used by the activation engine. The network configuration to be applied at boot is written in an xml file. For those -like me- who have ever played with VMControl, it may remind you of the deploy command used in VMControl :
  • root@XXXX60:# cd /var/vio/VMLibrary
    root@XXXX60:/var/vio/VMLibrary# loopmount -i vopt_1c967c7b27a94464bebb6d043e6c7a6e -o "-V cdrfs -o ro" -m /mnt
    root@XXXX60:/var/vio/VMLibrary# cd /mnt
    root@XXXX60:/mnt# ls
    ec2          openstack    ovf-env.xml
    root@XXXX60:/mnt# cat ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.244.17.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="deckard"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/>;<Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value="10.10.20.10 10.10.20.11"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.244.17.35"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    
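As mentioned in the list above, if you want to release the virtual optical device by hand once the machine is activated, you can locate the vtopt device backing the iso and unload it (a sketch; vhost2 and vtopt0 are example device names, check lsmap on your own Virtual I/O Server):

  padmin@XXXX60:/home/padmin$ lsmap -vadapter vhost2
  padmin@XXXX60:/home/padmin$ unloadopt -vtd vtopt0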

Shared Ethernet Adapters auto management

This part is not specific to the standard version of PowerVC but I wanted to talk about it here. You probably already know that PowerVC is built on top of OpenStack, and OpenStack is clever. The product doesn’t keep unnecessary objects in your configuration. I was very impressed by the management of the networks and of the vlans: PowerVC manages and takes care of your Shared Ethernet Adapters for you. You don’t have to remove unused vlans, or to add new vlans by hand (just add the network in PowerVC). Here are a few examples :

  • If you are adding a vlan in PowerVC you have the choice to select the Shared Ethernet Adapter for this vlan. For instance you can choose not to deploy this vlan on a particular host :
  • blog_network_do_not_use_pixelate

  • If you deploy a virtual machine on this vlan, the vlan will be automatically added to the Shared Ethernet Adapter if this is the first machine using it :
  • # chhwres -r virtualio --rsubtype vnetwork -o a -m 9117-MMD*658B2AD --vnetwork 1503-zvdc4 -a vlan_id=1503,vswitch=zvdc4,is_tagged=1
    
  • If you are moving a machine from one host to another and this machine is the last one to use this vlan, the vlan will be automatically cleaned up and removed from the Shared Ethernet Adapter (see the quick check after this list).
  • I have in my configuration two Shared Ethernet Adapters, each one on a different virtual switch. Good news: PowerVC is vswitch aware :-)
  • This link explains this in detail (not the redbook): Click here
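
As mentioned in the list above, you can verify from the Virtual I/O Server side that a vlan was really added to (or cleaned from) the Shared Ethernet Adapter: entstat on the SEA lists the bridged vlan tags (ent8 and the vlan ids are examples from my environment):

  padmin@XXXXX60:/home/padmin$ entstat -all ent8 | grep "VLAN Tag IDs"
  VLAN Tag IDs:  1503 1665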

Mobility

PowerVC standard is able to manage the mobility of your virtual machines. Machines can be relocated on any host of the PowerVC pool. You no longer have to remember the long and complicated migrlpar command: PowerVC takes care of this for you, just by clicking the migrate button :

blog_migrate_1_pixelate

  • Looking at the Hardware Management Console lssvcevents, you can check that the migrlpar command respects the storage connectivity group created earlier, and is going to map the lpar on adapters fcs3 and fcs4 (a validation-only run of migrlpar is sketched after this list) :
  • # migrlpar -m XXX58-9117-MMD-658B2AD -t XXX55-9117-MMD-65ED82C --id 8 -i ""virtual_fc_mappings=2//1//fcs3,4//1//fcs4,5//2//fcs3,6//2//fcs4""
    
  • On the Storage Volume Controller, the Live Partition Mobility wwpns of the host are correctly activated while the machine is moving to the other host :
  • blog_migrate_svc_lpm_wwpns_greened
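
As mentioned in the list above, you can also ask the HMC for a validation-only run before clicking the migrate button; nothing is moved, the HMC simply checks that the migration would succeed (a sketch reusing the source system, target system and lpar id from the migrlpar command above):

  hscroot@hmc1:~> migrlpar -o v -m XXX58-9117-MMD-658B2AD -t XXX55-9117-MMD-65ED82C --id 8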

About supported fibre channel switches : all FOS >= 6.4 are ok !

At the time of writing this post things are not very clear about this. Checking the Redbook, the only supported models of fibre channel switches are the IBM SAN24B-5 and IBM SAN48B-5. I’m using Brocade 8510-4 fibre channel switches and they are working well with PowerVC. After a couple of calls and mails with the PowerVC development team, it seems that all Fabric OS versions greater than or equal to 6.4 are ok. Don’t worry if the PowerVC validator is failing, it may happen; just open a call to get the validator working with your switch model (I had problems with version 1.2.0.1 but no more problems with the latest 1.2.1.0 :-))
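
If you want to check the Fabric OS level of your switches before registering them, the version (or firmwareshow) command gives it directly (output shortened, the values are just an example):

  switch_fabric_a:FID1:powervcadmin> version
  Kernel:     2.6.14.2
  Fabric OS:  v7.1.0c
  [..]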

Conclusion

PowerVC is impressive. In my opinion PowerVC is already production ready. Building a machine with four virtual NPIV fibre channel adapters in five minutes is something every AIX system administrator has dreamed of. Tell your boss this is the right way to build machines, and invest in the future by deploying PowerVC: it’s a must have :-) :-) :-) :-)! Need advice about it, need someone to deploy it ? Hire me !

sitckers_resized
