Automating systems deployment & other new features : HMC8, IBM Provisioning Toolkit for PowerVM and LPM Automation Tool

I am involved in a project where we are going to deploy dozens of Power Systems (still Power7 for the moment, and Power8 in the near future). All the systems will be the same : same models, same slot placement and the same Virtual I/O Server configuration. To be sure that all my machines are identical, and to allow other people (who are not aware of the design or not skilled enough to do it by themselves) to deploy them, I had to find a solution to automate the deployment of the new machines. For the virtual machines the solution is now to use PowerVC, but what about the Virtual I/O Servers, and what about the configuration of the Shared Ethernet Adapters ? In other words, what about the infrastructure deployment ? I spent a week with an IBM US STG Lab Services consultant (Bonnie Lebarron) for a PowerCare (you now get a PowerCare included with every high end machine you buy) about the IBM Provisioning Toolkit for PowerVM (a very powerful tool that allows you to deploy your Virtual I/O Servers and your virtual machines automatically) and the Live Partition Mobility Automation Tool. With the new Hardware Management Console (8R8.2.0) you now have the possibility to create templates, not just for new virtual machine creation, but also to deploy, create and configure your Virtual I/O Servers. The goal of this post is to show that there are different ways to do that, but also to show you the new features embedded in the new Hardware Management Console and to spread the word about those two wonderful STG Lab Services tools, which are well known in the US but not so much in Europe. So it’s a HUGE post, just take what is useful for you in it. Here we go :

Hardware Management Console 8 : System templates

The goal of the system templates is to deploy a new server in minutes without having to log on to different servers to do some tasks; you now just have to connect to the HMC to do all the work. The system templates will deploy the Virtual I/O Server image by using your NIM server or by using the images stored in the Hardware Management Console media repository. Please note a few points :

  • You CAN’T deploy a “gold” mksysb of your Virtual I/O Server using the Hardware Management Console repository. I’ve tried this myself and it is for the moment impossible (if someone has a solution …). I’ve tried two different ways : creating a backupios image without the mksysb flag (it produces a tar file that is impossible to upload to the image repository, but usable by the installios command), and creating a backupios image with the mksysb flag and using the mkcd/mkdvd command to create iso images. Both methods failed at the installation process.
  • The current Virtual I/O Server images provided by Electronic Software Delivery (2.2.3.4 at the moment) are provided in .udf format and not .iso format. This is not a huge problem, just rename both files to .iso before uploading them to the Hardware Management Console (a rename sketch is shown at the end of this list).
  • If you want to deploy your own mksysb you can still choose to use your NIM server, but you will have to create the NIM objects and configure a bosinst installation manually (in my humble opinion what we are trying to do is to reduce manual intervention, but you can still do that for the moment; that’s what I do because I don’t have the choice). You’ll have to give the IP address of the NIM server, and the HMC will boot the Virtual I/O Servers with the network settings already configured.
  • The Hardware Management Console installation with the media repository is based on the old well known installios command. You still need to have the NIM port open between your HMC and the Virtual I/O Server management network (the one you choose to install both Virtual I/O Servers on), because installios is based on NIMOL. You may experience some problems if you have already installed your Virtual I/O Servers this way, and you may have to reset a few things. My advice is to always run these three commands before deploying a system template :
  • # installios -F -e -R default1
    # installios -u 
    # installios -q
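  • One more tip : the .udf rename mentioned above can be done with a simple mv. A minimal sketch (the file names are examples; use the ones delivered by Electronic Software Delivery) :
  • # rename the downloaded VIOS images from .udf to .iso before uploading them
    mv dvdimage.v1.udf dvdimage.v1.iso
    mv dvdimage.v2.udf dvdimage.v2.iso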
    

Uploading an iso file on the Hardware Management Console

Upload the images to the Hardware Management Console; I’ll not explain this in detail … :

hmc_add_virtual_io_server_to_repo
hmc_add_virtual_io_server_to_repo2

Creating a system template

To create a system template you first have to copy an existing predefined template provided by the Hardware Management Console (1) and then edit this template to fit your own needs (2) :

create_template_1

  • You can’t edit the physical I/O part when editing a new template; you first have to deploy a system with this template, choose the physical I/O for each Virtual I/O Server, and then capture the deployed system as an HMC template. Change the properties of your Virtual I/O Servers :
  • create_template_2

  • Create your Shared Ethernet Adapters : let’s say we want to create one Shared Ethernet Adapter in sharing mode with four virtual adapters :
  • Adapter 1 : PVID10, vlans=1024;1025
  • Adapter 2 : PVID11, vlans=1028;1029
  • Adapter 3 : PVID12, vlans=1032;1033
  • Adapter 4 : PVID13, vlans=1036;1037
  • In the new HMC8 the terms are changing : Virtual Network Bridge = Shared Ethernet Adapter; Load (Balance) Group = a pair of virtual adapters with the same PVID on both Virtual I/O Servers.
  • Create the Shared Ethernet Adapter with the first adapter (PVID 10), the second adapter (PVID 11) and the first vlan (vlan 1024 has to be added on the adapter with PVID 10) :
  • create_sea1
    create_sea2
    create_sea3

  • Add the second vlan (vlan 1028) to our Shared Ethernet Adapter (Virtual Network Bridge) and choose to put it on the adapter with PVID 11 (Load Balance Group 11) :
  • create_sea4
    create_sea5
    create_sea6

  • Repeat this operation for the next vlan (1032), but this time we have to create new virtual adapters with PVID 12 (Load Balance Group 12) :
  • create_sea7

  • Repeat this operation for the next vlan (1036), but this time we have to create new virtual adapters with PVID 13 (Load Balance Group 13).
  • You can check on this picture our 4 virtual adapters with two vlans each :
  • create_sea8
    create_sea9

  • I’ll not detail the other parts, which are very simple to understand. At the end you can check that our template is created with 2 Virtual I/O Servers and 8 virtual networks.

The Shared Ethernet Adapter problem : are you deploying a Power8/Power7 server with a 780 firmware, or an older Power6/Power7 server ?

When creating a system template you have probably noticed that when you are defining your Shared Ethernet Adapters … sorry, your Virtual Network Bridges … there is no possibility to create any control channel adapter or to assign a vlan id for this control channel. If you create the system template by hand with the HMC, the template will be usable by all Power8 systems and all Power7 systems with a firmware that allows you to create a Shared Ethernet Adapter without any control channel (780 firmwares). I’ve tried this myself and we will check that later. If you are deploying a system template on an older Power7 system, the deployment will fail for this reason. You have two solutions to this problem : create your first system “by hand”, create your Shared Ethernet Adapters with control channel on your own and then capture the system to redeploy it on other machines, or edit the XML of your current template to add the control channel adapter in it … no comment.

failed_sea_ctl_chan

If you choose to edit the template to add the control channel on your own, export your template as an xml file, edit it by hand (here is an example on the picture below), and then re-import the modified xml file :

sea_control_channel_template
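
For reference, here is a hedged sketch of both variants as you would create them by hand on a Virtual I/O Server (the adapter names ent0 and ent4 to ent8 and the PVIDs are assumptions; the same mkvdev syntax appears later in the toolkit logs) :

  # pre-780 firmware : an explicit control channel adapter (here ent8) is mandatory
  $ mkvdev -sea ent0 -vadapter ent4,ent5,ent6,ent7 -default ent4 -defaultid 10 -attr ctl_chan=ent8 ha_mode=auto
  # 780 firmware and later : no control channel needed, the discovery uses the reserved vlan 4095
  $ mkvdev -sea ent0 -vadapter ent4,ent5,ent6,ent7 -default ent4 -defaultid 10 -attr ha_mode=auto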

Capture an already deployed system

As you can see, creating a system template from scratch can be hard and cannot match all your needs, especially with this Shared Ethernet Adapter problem. My advice is to deploy your first system by hand or by using the toolkit and then capture the system to create a Hardware Management Console template based on this one. By doing this all the Shared Ethernet Adapters will be captured as configured, the ones with control channels and the ones without. This can match all the cases without having to edit the xml file by hand.

  • Click “Capture configuration as template with physical I/O” :
  • capture_template_with_physical_io

  • The whole system will be captured, and if you put your physical I/O in the same slots (as we do in my team) you will not have to choose which physical I/O belongs to which Virtual I/O Server each time you deploy a new server :
  • capture_template_with_physical_io_capturing

  • In the system template library you can check that the physical I/O has been captured and that we do not have to define our Shared Ethernet Adapters (the screenshot below shows you 49 vlans ready to be deployed) :
  • capture_template_library_with_physical_io_and_vlan

  • To do this don’t forget to edit the template and check the box “Use captured I/O information” :
  • use_captured_io_informations

    Deploying a system template

    BE VERY CAREFUL BEFORE DEPLOYING A SYSTEM TEMPLATE : ALL THE ALREADY EXISTING VIRTUAL I/O SERVERS AND PARTITIONS WILL BE REMOVED BY DOING THIS. THE HMC WILL PROMPT YOU WITH A WARNING MESSAGE. Go to the template library and right click on the template you want to deploy, then click deploy :

    reset_before_deploy1
    reset_before_deploy2

    • If you are deploying a “non captured template” choose the physical I/O for each Virtual I/O Server :
    • choose_io1

    • If you are deploying a “captured template” the physical I/O will be automatically chosen for each Virtual I/O Server :
    • choose_io2

    • The Virtual I/O Server profiles are carved here :
    • craving_virtual_io_servers

    • You next have the choice to use a NIM server or the HMC image repository to deploy the Virtual I/O Servers; in both cases you have to choose the adapter used to deploy the image :
    • NIM way :
    • nim_way

    • HMC way (check the tip at the beginning of the post about installios if you are choosing this method) :
    • hmc_way

    • Click start when you are ready. The start button will invoke the lpar_netboot command with the settings you put in the previous screen :
    • start_dep

    • You can monitor the installation process by clicking monitoring vterm (on the images below you can check that the ping is successful, the bootp is ok, the tftp is downloading, and the mksysb is being restored) :
    • monitor1
      monitor2
      monitor3

    • The RMC connection has to be up on both Virtual I/O Servers to build the Shared Ethernet Adapters, and the Virtual I/O Server license must be accepted. Check that both are ok (a hedged sketch of this check from the command line follows after this list).
    • RMCok
      licenseok

    • Choose where the Shared Ethernet Adapters will be created and create the link aggregation devices here (choose on which network adapters and network ports your Shared Ethernet Adapters will be created) :
    • choose_adapter

    • Click start on the next screen to create the Shared Ethernet Adapter automatically :
    • sea_creation_ok

    • After a successful deployment of a system template a summary will be displayed on the screen :
    • template_ok
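
    A hedged sketch of the RMC and license checks mentioned above, from the command line (the managed system name is a placeholder) :

    # on the HMC : check the partition state and the RMC state of both Virtual I/O Servers
    lssyscfg -r lpar -m <managed-system> -F name,state,rmc_state,rmc_ipaddr
    # on each Virtual I/O Server : accept the license if this is not already done
    license -accept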

    IBM Provisioning Toolkit for PowerVM : A tool created by the Admins for the Admins

    As you now know, the HMC templates are ok, but there are some drawbacks to using this method. In my humble opinion the HMC templates are good for a beginner : the user is guided step by step and it is much simpler for someone who doesn’t know anything about PowerVM to build a server from scratch, without knowing and understanding all the features of PowerVM (Virtual I/O Server, Shared Ethernet Adapter). But the deployment is not fully automated : the HMC will not mirror your rootvg, will not set any attributes on your fibre channel adapters, and will never run a custom script after the installation to fit your needs. Last point, I’m sure that as a system administrator you probably prefer using command line tools to a “crappy” GUI, and a template can neither be created nor deployed from the command line (change this please). There is another way to build your servers and it’s called the IBM Provisioning Toolkit for PowerVM. This tool is developed by STG Lab Services US and is not well known in Europe, but I can assure you that a lot of US customers are using it (raise your voice in the comments, US guys). This tool can help you in many ways :

    • Carving Virtual I/O Server profiles.
    • Building and deploying Virtual I/O Servers with a NIM Server without having to create anything by hand.
    • Creating your SEA with or without control channel, failover/sharing, tagged/non-tagged.
    • Setting attributes on your fibre channel adapters.
    • Building and deploying Virtual I/O clients in NPIV and vscsi.
    • Mirroring your rootvg (a hedged sketch of the manual equivalent of these steps follows after this list).
    • Capturing a whole frame and redeploying it on another server.
    • A lot of other things.
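
    To give you an idea of what the toolkit automates, here is a hedged sketch of the fibre channel tuning and rootvg mirroring done by hand as root on a Virtual I/O Server (device names are assumptions; the attribute values are the ones you will see in the toolkit logs later in this post) :

    # fibre channel adapter attributes (applied at the next boot because of -P)
    chdev -l fcs0 -a max_xfer_size=0x100000 -a num_cmd_elems=2048 -P
    chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P
    # rootvg mirroring
    extendvg -f rootvg hdisk1
    mirrorvg rootvg hdisk1
    bosboot -ad hdisk0
    bosboot -ad hdisk1
    bootlist -m normal hdisk0 hdisk1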

    Just to let you understand the approach of the tool, let’s begin with an example. I want to deploy a new machine with two Virtual I/O Servers :

    • 1 (white) – I’m writing a profile file : in this one I’m putting all the information that is the same across all machines (virtual switches, shared processor pools, Virtual I/O Server profiles, Shared Ethernet Adapter definitions, image chosen to deploy the Virtual I/O Servers, physical I/O adapters for each Virtual I/O Server)
    • 2 (white) – I’m writing a config file : in this one I’m putting all the information that is unique to each machine (name, ip, HMC used to deploy, CEC serial number, and so on)
    • 3 (yellow) – I’m launching the provisioning toolkit to build my machine, the NIM objects are created (networks, standalone machines) and the bosinst operation is launched from the NIM server
    • 4 (red) – The Virtual I/O Server profiles are created and the lpar_netboot command is launched (an ssh key has to be shared between the NIM server and the Hardware Management Console)
    • 5 (blue) – Shared Ethernet Adapters are created and the post configuration is launched on the Virtual I/O Servers (mirror creation, vfc attributes …)

    toolkit

    Let me show you a detailed example of a new machine deployment :

    • On the NIM server, the toolkit is located in /export/nim/provision. You can see the main script, called buildframe.ksh.v3.24.2, and two directories : one for the profiles (build_profiles) and one for the configuration files (config_files). The work_area directory is the log directory :
    • # cd /export/nim/provision
      # ls
      build_profiles          buildframe.ksh.v3.24.2  config_files       lost+found              work_area
      
    • Let’s check a profile file for a new Power720 deployment :
    • # vi build_profiles/p720.conf
      
    • Some variables will be set in the configuration file; put an NA value for these ones :
    • VARIABLES      (SERVERNAME)=NA
      VARIABLES      (BUILDHMC)=NA
      [..]
      VARIABLES      (BUILDUSER)=hscroot
      VARIABLES      (VIO1_LPARNAME)=NA
      VARIABLES      (vio1_hostname)=(VIO1_LPARNAME)
      VARIABLES      (VIO1_PROFILE)=default_profile
      
      VARIABLES      (VIO2_LPARNAME)=NA
      VARIABLES      (vio2_hostname)=(VIO2_LPARNAME)
      VARIABLES      (VIO2_PROFILE)=default_profile
      
      VARIABLES      (VIO1_IP)=NA
      VARIABLES      (VIO2_IP)=NA
      
    • Choose the ports that will be used to restore the Virtual I/O Server mksysb :
    • VARIABLES      (NIMPORT_VIO1)=(CEC1)-P1-C6-T1
      VARIABLES      (NIMPORT_VIO2)=(CEC1)-P1-C7-T1
      
    • In this example I’m building the Virtual I/O Servers with 3 Shared Ethernet Adapters, and I’m not creating any LACP aggregation :
    • # SEA1
      VARIABLES      (SEA1VLAN1)=401
      VARIABLES      (SEA1VLAN2)=402
      VARIABLES      (SEA1VLAN3)=403
      VARIABLES      (SEA1VLAN4)=404
      VARIABLES      (SEA1VLANS)=(SEA1VLAN1),(SEA1VLAN2),(SEA1VLAN3),(SEA1VLAN4)
      # SEA2
      VARIABLES      (SEA2VLAN1)=100,101,102
      VARIABLES      (SEA2VLAN2)=103,104,105
      VARIABLES      (SEA2VLAN3)=106,107,108
      VARIABLES      (SEA2VLAN4)=109,110
      VARIABLES      (SEA2VLANS)=(SEA2VLAN1),(SEA2VLAN2),(SEA2VLAN3),(SEA2VLAN4)
      # SEA3
      VARIABLES      (SEA3VLAN1)=200,201,202,203,204,309
      VARIABLES      (SEA3VLAN2)=205,206,207,208,209,310
      VARIABLES      (SEA3VLAN3)=210,300,301,302,303
      VARIABLES      (SEA3VLAN4)=304,305,306,307,308
      VARIABLES      (SEA3VLANS)=(SEA3VLAN1),(SEA3VLAN2),(SEA3VLAN3),(SEA3VLAN4)
      # SEA DEF (I'm putting adapter ID and PVID here)
      SEADEF         seadefid=SEA1,networkpriority=S,vswitch=vdct,seavirtid=10,10,(SEA1VLAN1):11,11,(SEA1VLAN2):12,12,(SEA1VLAN3):13,13,(SEA1VLAN4),seactlchnlid=14,99,vlans=(SEA1VLANS),netmask=(SEA1NETMASK),gateway=(SEA1GATEWAY),etherchannel=NO,lacp8023ad=NO,vlan8021q=YES,seaattrid=nojumbo
      SEADEF         seadefid=SEA2,networkpriority=S,vswitch=vdca,seavirtid=15,15,(SEA2VLAN1):16,16,(SEA2VLAN2):17,17,(SEA2VLAN3):18,18,(SEA2VLAN4),seactlchnlid=19,98,vlans=(SEA2VLANS),netmask=(SEA2NETMASK),gateway=(SEA2GATEWAY),etherchannel=NO,lacp8023ad=NO,vlan8021q=YES,seaattrid=nojumbo
      SEADEF         seadefid=SEA3,networkpriority=S,vswitch=vdcb,seavirtid=20,20,(SEA3VLAN1):21,21,(SEA3VLAN2):22,22,(SEA3VLAN3):23,23,(SEA3VLAN4),seactlchnlid=24,97,vlans=(SEA3VLANS),netmask=(SEA3NETMASK),gateway=(SEA3GATEWAY),etherchannel=NO,lacp8023ad=NO,vlan8021q=YES,seaattrid=nojumbo
      # SEA PHYSICAL PORTS 
      VARIABLES      (SEA1AGGPORTS_VIO1)=(CEC1)-P1-C6-T2
      VARIABLES      (SEA1AGGPORTS_VIO2)=(CEC1)-P1-C7-T2
      VARIABLES      (SEA2AGGPORTS_VIO1)=(CEC1)-P1-C1-C3-T1
      VARIABLES      (SEA2AGGPORTS_VIO2)=(CEC1)-P1-C1-C4-T1
      VARIABLES      (SEA3AGGPORTS_VIO1)=(CEC1)-P1-C4-T1
      VARIABLES      (SEA3AGGPORTS_VIO2)=(CEC1)-P1-C5-T1
      # SEA ATTR 
      SEAATTR        seaattrid=nojumbo,ha_mode=sharing,largesend=1,large_receive=yes
      
    • I’m defining each physical I/O adapter for each Virtual I/O Server :
    • VARIABLES      (HBASLOTS_VIO1)=(CEC1)-P1-C1-C1,(CEC1)-P1-C2
      VARIABLES      (HBASLOTS_VIO2)=(CEC1)-P1-C1-C2,(CEC1)-P1-C3
      VARIABLES      (ETHSLOTS_VIO1)=(CEC1)-P1-C6,(CEC1)-P1-C1-C3,(CEC1)-P1-C4
      VARIABLES      (ETHSLOTS_VIO2)=(CEC1)-P1-C7,(CEC1)-P1-C1-C4,(CEC1)-P1-C5
      VARIABLES      (SASSLOTS_VIO1)=(CEC1)-P1-T9
      VARIABLES      (SASSLOTS_VIO2)=(CEC1)-P1-C19-T1
      VARIABLES      (NPIVFCPORTS_VIO1)=(CEC1)-P1-C1-C1-T1,(CEC1)-P1-C1-C1-T2,(CEC1)-P1-C1-C1-T3,(CEC1)-P1-C1-C1-T4,(CEC1)-P1-C2-T1,(CEC1)-P1-C2-T2,(CEC1)-P1-C2-T3,(CEC1)-P1-C2-T4
      VARIABLES      (NPIVFCPORTS_VIO2)=(CEC1)-P1-C1-C2-T1,(CEC1)-P1-C1-C2-T2,(CEC1)-P1-C1-C2-T3,(CEC1)-P1-C1-C2-T4,(CEC1)-P1-C3-T1,(CEC1)-P1-C3-T2,(CEC1)-P1-C3-T3,(CEC1)-P1-C3-T4
      
    • I’m defining the mksysb image to use and the Virtual I/O Server profiles :
    • BOSINST        bosinstid=viogold,source=mksysb,mksysb=golden-vios-2234-29122014-mksysb,spot=golden-vios-2234-29122014-spot,bosinst_data=no_prompt_hdisk0-bosinst_data,accept_licenses=yes,boot_client=no
      
      PARTITIONDEF   partitiondefid=vioPartition,bosinstid=viogold,lpar_env=vioserver,proc_mode=shared,min_proc_units=0.4,desired_proc_units=1,max_proc_units=16,min_procs=1,desired_procs=4,max_procs=16,sharing_mode=uncap,uncap_weight=255,min_mem=1024,desired_mem=8192,max_mem=12288,mem_mode=ded,max_virtual_slots=500,all_resources=0,msp=1,allow_perf_collection=1
      PARTITION      name=(VIO1_LPARNAME),profile_name=(VIO1_PROFILE),partitiondefid=vioPartition,lpar_netboot=(NIM_IP),(vio1_hostname),(VIO1_IP),(NIMNETMASK),(NIMGATEWAY),(NIMPORT_VIO1),(NIM_SPEED),(NIM_DUPLEX),NA,YES,NO,NA,NA
      PARTITION      name=(VIO2_LPARNAME),profile_name=(VIO2_PROFILE),partitiondefid=vioPartition,lpar_netboot=(NIM_IP),(vio2_hostname),(VIO2_IP),(NIMNETMASK),(NIMGATEWAY),(NIMPORT_VIO2),(NIM_SPEED),(NIM_DUPLEX),NA,YES,NO,NA,NA
      
    • Let’s now check a configuration file for a specific machine (as you can see I’m putting the Virtual I/O Server names here, the ip addresses and everything that is specific to the new machine (CEC serial number and so on)) :
    • # cat P720-8202-E4D-1.conf
      (BUILDHMC)=myhmc
      (SERVERNAME)=P720-8202-E4D-1
      (CEC1)=WZSKM8U
      (VIO1_LPARNAME)=labvios1
      (VIO2_LPARNAME)=labvios2
      (VIO1_IP)=10.14.14.1
      (VIO2_IP)=10.14.14.2
      (NIMGATEWAY)=10.14.14.254
      (VIODNS)=10.10.10.1,10.10.10.2
      (VIOSEARCH)=lab.chmod66.org,prod.chmod666.org
      (VIODOMAIN)=chmod666.org
      
    • We are now ready to build the new machine. The first thing to do is to create the vswitches on the machine (you have to confirm all operations) :
    • ./buildframe.ksh.v3.24.2 -p p720 -c P720-8202-E4D-1.conf -f vswitch
      150121162625 Start of buildframe DATE: (150121162625) VERSION: v3.24.2
      150121162625        profile: p720.conf
      150121162625      operation: FRAMEvswitch
      150121162625 partition list:
      150121162625   program name: buildframe.ksh.v3.24.2
      150121162625    install dir: /export/nim/provision
      150121162625    post script:
      150121162625          DEBUG: 0
      150121162625         run ID: 150121162625
      150121162625       log file: work_area/150121162625_p720.conf.log
      150121162625 loading configuration file: config_files/P720-8202-E4D-1.conf
      [..]
      Do you want to continue?
      Please enter Y or N Y
      150121162917 buildframe is done with return code 0
      
    • Let’s now build the Virtual I/O Servers, create the Shared Ethernet Adapters, and have a coffee ;-) :
    • # ./buildframe.ksh.v3.24.2 -p p720 -c P720-8202-E4D-1.conf -f build
      [..]
      150121172320 Creating partitions
      150121172320                 --> labvios1
      150121172322                 --> labvios2
      150121172325 Updating partition profiles
      150121172325   updating VETH adapters in partition: labvios1 profile: default_profile
      150121172329   updating VETH adapters in partition: labvios1 profile: default_profile
      150121172331   updating VETH adapters in partition: labvios1 profile: default_profile
      150121172342   updating VETH adapters in partition: labvios2 profile: default_profile
      150121172343   updating VETH adapters in partition: labvios2 profile: default_profile
      150121172344   updating VETH adapters in partition: labvios2 profile: default_profile
      150121172345   updating IOSLOTS in partition: labvios1 profile: default_profile
      150121172347   updating IOSLOTS in partition: labvios2 profile: default_profile
      150121172403 Configuring NIM for partitions
      150121172459 Executing--> lpar_netboot   -K 255.255.255.0 -f -t ent -l U78AA.001.WZSKM8U-P1-C6-T1 -T off -D -s auto -d auto -S 10.20.20.1 -G 10.14.14.254 -C 10.14.14.1 labvios1 default_profile s00ka9936774-8202-E4D-845B2CV
      150121173247 Executing--> lpar_netboot   -K 255.255.255.0 -f -t ent -l U78AA.001.WZSKM8U-P1-C7-T1 -T off -D -s auto -d auto -S 10.20.20.1 -G 10.14.14.254 -C 10.14.14.2 labvios2 default_profile s00ka9936774-8202-E4D-845B2CV
      150121174028 buildframe is done with return code 0
      
    • After the mksysb is deployed you can tail the logs on each Virtual I/O Server to check what is going on :
    • [..]
      150121180520 creating SEA for virtID: ent4,ent5,ent6,ent7
      ent21 Available
      en21
      et21
      150121180521 Success: running /usr/ios/cli/ioscli mkvdev -sea ent1 -vadapter ent4,ent5,ent6,ent7 -default ent4 -defaultid 10 -attr ctl_chan=ent8  ha_mode=sharing largesend=1 large_receive=yes, rc=0
      150121180521 found SEA ent device: ent21
      150121180521 creating SEA for virtID: ent9,ent10,ent11,ent12
      [..]
      ent22 Available
      en22
      et22
      150121180523 Success: running /usr/ios/cli/ioscli mkvdev -sea ent20 -vadapter ent9,ent10,ent11,ent12 -default ent9 -defaultid 15 -attr ctl_chan=ent13  ha_mode=sharing largesend=1 large_receive=yes, rc=0
      150121180523 found SEA ent device: ent22
      150121180523 creating SEA for virtID: ent14,ent15,ent16,ent17
      [..]
      ent23 Available
      en23
      et23
      [..]
      150121180540 Success: /usr/ios/cli/ioscli cfgnamesrv -add -ipaddr 10.10.10.1, rc=0
      150121180540 adding DNS: 10.10.10.1
      150121180540 Success: /usr/ios/cli/ioscli cfgnamesrv -add -ipaddr 10.10.10.2, rc=0
      150121180540 adding DNS: 159.50.203.10
      150121180540 adding DOMAIN: lab.chmod666.org
      150121180541 Success: /usr/ios/cli/ioscli cfgnamesrv -add -dname fr.net.intra, rc=0
      150121180541 adding SEARCH: lab.chmod666.org prod.chmod666.org
      150121180541 Success: /usr/ios/cli/ioscli cfgnamesrv -add -slist lab.chmod666.org prod.chmod666.org, rc=0
      [..]
      150121180542 Success: found fcs device for physical location WZSKM8U-P1-C2-T4: fcs3
      150121180542 Processed the following FCS attributes: fcsdevice=fcs4,fcs5,fcs6,fcs7,fcs0,fcs1,fcs2,fcs3,fcsattrid=fcsAttributes,port=WZSKM8U-P1-C1-C1-T1,WZSKM8U-P1-C1-C1-T2,WZSKM8U-P1-C1-C1-T3,WZSKM8U-P1-C1-C1-T4,WZSKM8U-P1-C2-T1,WZSKM8U-P1-C2-T2,WZSKM8U-P1-C2-T3,WZSKM8U-P1-C2-T4,max_xfer_size=0x100000,num_cmd_elems=2048
      150121180544 Processed the following FSCSI attributes: fcsdevice=fcs4,fcs5,fcs6,fcs7,fcs0,fcs1,fcs2,fcs3,fscsiattrid=fscsiAttributes,port=WZSKM8U-P1-C1-C1-T1,WZSKM8U-P1-C1-C1-T2,WZSKM8U-P1-C1-C1-T3,WZSKM8U-P1-C1-C1-T4,WZSKM8U-P1-C2-T1,WZSKM8U-P1-C2-T2,WZSKM8U-P1-C2-T3,WZSKM8U-P1-C2-T4,fc_err_recov=fast_fail,dyntrk=yes
      [..]
      150121180546 Success: found device U78AA.001.WZSKM8U-P2-D4: hdisk0
      150121180546 Success: found device U78AA.001.WZSKM8U-P2-D5: hdisk1
      150121180546 Mirror hdisk0 -->  hdisk1
      150121180547 Success: extendvg -f rootvg hdisk1, rc=0
      150121181638 Success: mirrorvg rootvg hdisk1, rc=0
      150121181655 Success: bosboot -ad hdisk0, rc=0
      150121181709 Success: bosboot -ad hdisk1, rc=0
      150121181709 Success: bootlist -m normal hdisk0 hdisk1, rc=0
      150121181709 VIOmirror <- rc=0
      150121181709 VIObuild <- rc=0
      150121181709 Preparing to reboot in 10 seconds, press control-C to abort
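
    While buildframe is running you can also follow each Virtual I/O Server installation from the HMC with a console session. A hedged sketch (the managed system name is a placeholder) :

      # on the HMC : open a virtual terminal on the partition being installed
      mkvterm -m <managed-system> -p labvios1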
      

    The new server was deployed with one command, and by using the toolkit you avoid any manual mistake. The example above is just one of the many ways to use the toolkit. This is a very powerful and simple tool and I really want to see other European customers using it, so ask your IBM pre-sales, ask for a PowerCare, and take control of your deployments by using the toolkit. The toolkit is also used to capture and redeploy a whole frame for disaster recovery plans.

    Live Partition Mobility Automation Tool

    Because understanding the provisioning toolkit didn't take me a full week, we still had plenty of time with Bonnie from STG Lab Services and we decided to give a try to another tool called the Live Partition Mobility Automation Tool. I'll not talk about it in detail, but this tool allows you to automate your Live Partition Mobility moves. It's a web interface coming with a tomcat server that you can run on a Linux server or directly on your laptop. This web application takes control of your Hardware Management Console and allows you to do a lot of LPM-related things :

    • You can run a validation on every partition of a system.
    • You can move your partitions by spreading or packing them on the destination server.
    • You can "record" a move to replay it later (very very very useful for my previous customer, for instance : we were making our moves client by client, and all clients were hosted on two big P795s).
    • You can run a dynamic platform optimizer after the moves.
    • You have an option to move the partitions back to their original location, and this is (in my humble opinion) what makes this tool so powerful.

    lpm_toolkit

    Since I have this tool I now run a validation of all my partitions on a weekly basis to check for any errors. I'm also using it to move and move back the partitions when I have to. So I really recommend the Live Partition Mobility Automation Tool.

    Hardware Management Console 8 : Other new features

    Adding a VLAN to an already existing Shared Ethernet Adapter

    With the new Hardware Management Console you can easily add a new vlan to an already existing Shared Ethernet Adapter (failover and sharing, with and without control channel : no restriction) without having to perform a dlpar operation on each Virtual I/O Server and then modify your profiles (if you do not have profile synchronization enabled). Even better, by using this method to add your new vlans you will avoid any misconfiguration, for instance forgetting to add the vlan on one of the Virtual I/O Servers or not choosing the same adapter on both sides.
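
    For comparison, here is a hedged sketch of the manual dlpar operation this GUI workflow replaces (one command per Virtual I/O Server; the slot number of the trunk adapter and the managed system name are assumptions) :

      # on the HMC : dynamically add vlan 3331 to the virtual trunk adapter in slot 10 of vios1
      chhwres -r virtualio --rsubtype eth -o s -m <managed-system> -p vios1 -s 10 -a "addl_vlan_ids+=3331"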

    • Open the Virtual Network page in the HMC and click "Add a Virtual Network". You have to remember that a Virtual Network Bridge is a Shared Ethernet Adapter, and a Load Balance Group is a pair of virtual adapters on both Virtual I/O Servers with the same PVID :
    • add_vlan5

    • Choose the name of your vlan (in my case VLAN3331), then choose bridged network (bridged network is the new name for Shared Ethernet Adapters ...), choose "yes" for vlan tagging, and put the vlan id (in my case 3331). By choosing the virtual switch, the HMC will only let you choose a Shared Ethernet Adapter configured in that virtual switch (no mistake possible). DO NOT forget to check the box "Add new virtual network to all Virtual I/O servers" to add the vlan on both sides :
    • add_vlan

    • On the next page you have to choose the Shared Ethernet Adapter on which the vlan will be added (in my case this is super easy : I ALWAYS create one Shared Ethernet Adapter per virtual switch to avoid misconfigurations and the network loops created by adding the same vlan id on two different Shared Ethernet Adapters) :
    • add_vlan2

    • At last, choose or create a new "Load Sharing Group". A load sharing group is one of the virtual adapters of your Shared Ethernet Adapter. In my case my Shared Ethernet Adapter was created with two virtual adapters with ids 10 and 11. On this screenshot I'm telling the HMC to add the new vlan on the adapter with id 10 on both Virtual I/O Servers. You can also create a new virtual adapter to be included in the Shared Ethernet Adapter by choosing "Create a new load sharing group" :
    • add_vlan3

    • Before applying the configuration a summary is displayed so the user can check the changes :
    • add_vlan4

    Partition Templates

    You can also use templates to capture and create partitions, not just systems. I'll not give you all the details because the HMC is well documented for this part and there is nothing tricky to do, just follow the GUI. One more time, the HMC8 is for the noobs \o/. Here are a few screenshots of partition templates (capture and deploy) :

    create_part2
    create_part6

    A nice new look and feel for the new Hardware Management Console

    Everybody agrees that the HMC GUI is not very nice, but it works great. One of the major new things in HMC 8R8.2.0 is the new GUI. In my opinion the new GUI is awesome : the design is nice and I love it. Look at the pictures below :

    hmc8
    virtual_network_diagram

    Conclusion

    The Hardware Management Console 8 is still young but offers a lot of cool new features like system and partition templates, the performance dashboard and a new GUI. In my opinion the new GUI is slow and there are still a lot of bugs for the moment; my advice is to use it when you have the time, not in a rush. Learn the new HMC on your own by trying to do all the common tasks with the new GUI (there are still things that are impossible to do ;-)). I can assure you that you will need more than a few hours to be familiar with all those new features. And don't forget to call your pre-sales to have a demonstration of the STG Lab Services toolkits : both the provisioning and LPM tools are awesome. Use them !

    What is going on in this world

    This blog is not and will never be the place for political things, but after the darkest days we had in France two weeks ago with these insane and inhuman terrorist attacks I had to say a few words about it (because even if my whole life is about AIX for the moment, I'm also a human being .... if you had any doubt about it). Since the tragic death of 17 men and women in France everybody is raising their voice to tell us (me ?) what is right and what is wrong without thinking seriously about it. Things like this terrorist attack should never happen again. I just wanted to say that I'm for liberty, not only the "liberty of expression", but liberty itself. By defending this liberty we have to be very careful, because in the name of this defense the things done by our government may take away what we call liberty forever. Are phones and the internet going to be tapped and logged in the name of liberty ? Is this liberty ? Think about it and resist.

    Deep dive into PowerVC Standard 1.2.1.0 using Storage Volume Controller and Brocade 8510-4 FC switches in a multifabric environment

    Before reading this post I highly encourage you to read my first post about PowerVC, because this one will be focused on the standard edition specifics. I had the chance to work on PowerVC express with IVM and local storage, and now on PowerVC standard with an IBM Storage Volume Controller & Brocade fibre channel switches. A few things are different between these two versions (particularly the storage management). Virtual machines created by PowerVC standard will use NPIV (virtual fibre channel adapters) instead of virtual scsi adapters. Using local storage and using an SVC in a multi-fabric environment are two different things, and the PowerVC ways to capture/deploy and manage virtual machines are totally different. The PowerVC configuration is more complex and you have to manage the fibre channel port configuration, the storage connectivity groups and the storage templates. Last but not least, the PowerVC standard edition is Live Partition Mobility aware. Let’s have a look at all the standard version specifics. But before you start reading this post I have to warn you that this one is very long (it’s always hard for me to write short posts :-)). Last thing, this post is the result of one month of work on PowerVC, mostly on my own, but I have to thank the IBM guys for helping with a few problems (Paul, Eddy, Jay, Phil, …). Cheers guys !

    Prerequisites

    PowerVC standard needs to connect to the Hardware Management Console, to the storage provider, and to the fibre channel switches. Be sure these ports are open between PowerVC, the HMC, the storage array, and the fibre channel switches :

    • Port TCP 12443 between PowerVC and the HMC (PowerVC is using the HMC K2 Rest API to communicate with the HMC)
    • Port TCP 22 (ssh) between PowerVC and the Storage Array.
    • Port TCP 22 (ssh) between PowerVC and the Fibre Channel Switches.
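
    A quick hedged way to verify this from the PowerVC host before starting the configuration (the hostnames are examples) :

      # check that the required ports answer
      nc -zv hmc.lab.chmod666.org 12443
      nc -zv svc.lab.chmod666.org 22
      nc -zv switch-fabric-a.lab.chmod666.org 22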

    pvcch

    Check that your storage array is compatible with PowerVC standard (for the moment only IBM Storwize storage and the IBM Storage Volume Controller are supported). All Brocade switches with a version 7 firmware are supported. Be careful, the PowerVC Redbook is not up to date about this : all Brocade switches are supported (an APAR and a PMR are open about this mistake).

    This post was written with this PowerVC configuration :

    • PowerVC 1.2.0.0 x86 version & PowerVC 1.2.1.0 PCC version.
    • Storage Volume Controller with EMC VNX2 Storage array.
    • Brocade DCX 8510-4.
    • Two Power770+ with the latest AM780 firmware.

    PowerVC standard storage specifics and configuration

    PowerVC needs to control the storage to create or delete luns and to create hosts, and it also needs to control the fibre channel switches to create and delete zones for the virtual machines. If you are working with multiple fibre channel adapters with many ports you also have to configure the storage connectivity groups and the fibre channel ports, to tell PowerVC which port to use and in which case (you may want to create development virtual machines on two virtual fibre channel adapters only, and production ones on four). Let’s see how to do this :

    Adding storage and fabric

    • Add the storage provider (in my case a Storage Volume Controller but it can be any IBM Storwize family storage array) :
    • blog_add_storage

    • PowerVC will ask you a few questions while adding the storage provider (for instance which pool will be the default pool for the deployment of virtual machines). You can then check in this view the actual size and remaining size of the used pool :
    • blog_storage_added

    • Add each fibre channel switch (in my case two switches, one for fabric A and the second one for fabric B). Be very careful with the fabric designation (A or B) : it will be used later when creating storage templates and storage connectivity groups :
    • blog_add_frabric

    • Each fabric can be viewed and modified afterwards :
    • blog_fabric_added

    Fibre Channel Port Configuration

    If you are working in a multi-fabric environment you have to configure the fibre channel ports. For each port the first step is to tell PowerVC on which fabric the port is connected. In my case here is the configuration (you can refer to the colours on the image below, and to the explanations below) :

    pb connectivty_zenburn

    • Each Virtual I/O Server has 2 fibre channel adapters with four ports.
    • For the first adapter : first port is connected to Fabric A, and last port is connected to Fabric B.
    • For the second adapter : first port is connected to Fabric B, and last port is connected to Fabric A.
    • Two ports (ports 1 and 2) remain free for future use (future growth).
    • For each port I have to tell PowerVC where the port is connected (with PowerVC 1.2.0.0 you have to do this manually and check on the fibre channel switches where the ports are connected; with PowerVC 1.2.1.0 it is automatically detected by PowerVC :-)) :
    • 17_choose_fabric_for_each_port

      • Connected on Fabric A ? (check the image below) (use the nodefind command on the switch to check whether the port is logged in on this fibre channel switch)
      • blog_connected_fabric_A

        switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
        No device found
        switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
        Local:
         Type Pid    COS     PortName                NodeName                 SCR
         N    01fb40;    2,3;10:00:00:90:fa:3e:c6:ce;20:00:01:20:fa:3e:c6:ce; 0x00000003
            Fabric Port Name: 20:12:00:27:f8:79:ce:01
            Permanent Port Name: 10:00:00:90:fa:3e:c6:ce
            Device type: Physical Unknown(initiator/target)
            Port Index: 18
            Share Area: Yes
            Device Shared in Other AD: No
            Redirect: No
            Partial: No
            Aliases: XXXXX59_3ec6ce
        
      • Connected on Fabric B ? (check the image below) (use the nodefind command on the switch to check whether the port is logged in on this fibre channel switch)
      • blog_connected_fabric_B

        switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
        Local:
         Type Pid    COS     PortName                NodeName                 SCR
         N    02fb40;    2,3;10:00:00:90:fa:3e:c6:d1;20:00:01:20:fa:3e:c6:d1; 0x00000003
            Fabric Port Name: 20:12:00:27:f8:79:d0:01
            Permanent Port Name: 10:00:00:90:fa:3e:c6:d1
            Device type: Physical Unknown(initiator/target)
            Port Index: 18
            Share Area: Yes
            Device Shared in Other AD: No
            Redirect: No
            Partial: No
            Aliases: XXXXX59_3ec6d1
        switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
        No device found
        
      • Free, not connected ? (check the image below)
      • blog_not_connected

    • At the end each fibre channel port has to be configured with one of these three choices (connected on Fabric A, connected on Fabric B, Free/not connected).

    Port Tagging and Storage Connectivity Group

    Fibre channel ports are now configured, but we have to be sure that when deploying a new virtual machine :

    • Each virtual machine will be deployed with four fibre channel adapters (I am in a CHARM configuration).
    • Each virtual machine is connected on the first Virtual I/O Server to the Fabric A and Fabric B on different adapters (each adapter on a different CEC).
    • Each virtual machine is connected to the second Virtual I/O Server to Fabric A and Fabric B on different adapters.
    • I can choose to deploy the virtual machine using fcs0 (Fabric A) and fcs7 (Fabric B) on each Virtual I/O Server, or using fcs3 (Fabric B) and fcs4 (Fabric A). Ideally half of the machines will be created with the first configuration and the other half with the second configuration.

    To do this you have to tag each port with a name of your choice, and then create a storage connectivity group. A storage connectivity group is a constraint that is used for the deployment of virtual machines :

    pb_port_tag_zenburn

    • Two tags are created and set on each port, fcs0(A)_fcs7(B) and fcs3(B)_fcs4(A) :
    • blog_port_tag

    • Two connectivity groups are created to force the usage of tagged fibre channel ports when deploying a virtual machine.
      • When creating a connectivity group you have to choose the Virtual I/O Server(s) used when deploying a virtual machine with this connectivity group. It can be useful to tell PowerVC to deploy development machines on a single Virtual I/O Server, and production ones on dual Virtual I/O Servers :
      • blog_vios_connectivity_group

      • In my case connectivity groups are created to restrict the usage of fibre channel adapters. I want to deploy on fibre channel ports fcs0/fcs7 or fibre channel ports fcs3/fcs4. Here are my connectivity groups :
      • blog_connectivity_1
        blog_connectivity_2

      • You can check a summary of your connectivity group. I wanted to add this image because I think the two images (provided by PowerVC) are better than text at explaining what a connectivity group is :-) :
      • 22_create_connectivity_group_3

    Storage Template

    If you are using different pools or different storage arrays (in my case, for example, I can have different storage arrays behind my Storage Volume Controller) you may want to tell PowerVC to deploy virtual machines on a specific pool or with a specific type (I want, for instance, my machines to be created on compressed luns, on thin provisioned luns, or on thick provisioned luns). In my case I’ve created two different templates to create machines on thin or compressed luns. Easy !
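
    As an illustration, a hedged sketch of the SVC commands each template type translates to (the names and pool are examples; the thin provisioning flags are the ones visible in the capture logs later in this post) :

      # thin provisioned lun
      mkvdisk -name volume-thin-example -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 60 -unit gb -rsize 2% -autoexpand -grainsize 256
      # compressed lun
      mkvdisk -name volume-compressed-example -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 60 -unit gb -rsize 2% -autoexpand -compressed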

    • When creating a storage template you first have to choose the storage pool :
    • blog_storage_template_select_storage_pool

    • Then choose the type of lun for this storage template :
    • blog_storage_template_create

    • Here is an example with my two storage templates :
    • blog_storage_list

    A deeper look at VM capture

    If you read my last article about PowerVC express you know that capturing an image can take some time when using local storage : “dd-ing” a whole disk is long, and copying the file to the PowerVC host is long. But don’t worry, PowerVC standard solves this problem easily by using all the potential of the IBM storage (in my case a Storage Volume Controller) … the solution : FlashCopies, more specifically what we call a FlashCopy-Copy (to be clear : a FlashCopy-Copy is a full copy of a lun; there is no remaining relationship between the source lun being copied and the FlashCopy lun, because the FlashCopy is created with the autodelete argument). Let me explain how PowerVC standard manages the virtual machine capture :

    • The activation engine has been run, and the virtual machine to be captured is stopped.
    • The user launches the capture from PowerVC.
    • A FlashCopy-Copy is created from the storage side, we can check it from the GUI interface :
    • blog_flash_copy_pixelate_1

    • Checking with the SVC command line we can see that (use the catauditlog command to check this) :
      • A new volume called volume-Image-[name_of_the_image] is created (all captured images will be called volume-Image-[name]), taking care of the storage template (diskgroup/pool, grainsize, rsize …)
      • # mkvdisk -name volume-Image_7100-03-03-1415 -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
        
      • A FlashCopy-Copy with the id of the boot volume of the virtual machine to capture as source, and the id of the image’s lun as target, is created :
      • # mkfcmap -source 865 -target 880 -autodelete
        
      • We can check the vdisk 865 is the boot volume of the captured machine and has a FlashCopy running:
      • # lsvdisk -delim :
        id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:RC_name:vdisk_UID:fc_map_count:copy_count:fast_write_state:se_copy_count:RC_change:compressed_copy_count
        865:_BOOT:0:io_grp0:online:0:VNX_00086_SAS_POOL_1:60.00GB:striped:0:fcmap0:::600507680184879C2800000000000431:1:1:empty:1:no:0
        
      • The FlashCopy-Copy is prepared and started (at this step we can already use our captured image, the copy is running in the background) :
      • # prestartfcmap 0
        # startfcmap 0
        
      • While the FlashCopy is running we can check its progress (we can also check it by logging on the GUI) :
      • IBM_2145:SVC:powervcadmin>lsfcmap
        id name   source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name            group_id group_name status  progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time   rc_controlled
        0  fcmap0 865             XXXXXXXXX7_BOOT 880             volume-Image_7100-03-03-1415                     copying 54       50        100            off                                       no        140620002138 no
        
        IBM_2145:SVC:powervcadmin>lsfcmapprogress fcmap0
        id progress
        0  54
        
      • After the FlashCopy-Copy is finished, there is no remaining relationship between the source volume and the finished FlashCopy. The captured image is a plain vdisk :
      • IBM_2145:SVC:powervcadmin>lsvdisk 880
        id 880
        name volume-Image_7100-03-03-1415
        IO_group_id 0
        IO_group_name io_grp0
        status online
        mdisk_grp_id 0
        mdisk_grp_name VNX_XXXXX_SAS_POOL_1
        capacity 60.00GB
        type striped
        [..]
        vdisk_UID 600507680184879C280000000000044C
        [..]
        fc_map_count 0
        [..]
        
      • There is no more fcmap for the source volume :
      • IBM_2145:SVC:powervcadmin>lsvdisk 865
        [..]
        fc_map_count 0
        [..]
        

    Deployment mechanism

    blog_deploy3_pixelate

    Deploying a virtual machine with the standard version is very similar to deploying a machine with the express version. The only difference is the possibility to choose the storage template (within the constraints of the storage connectivity group).

    View from the Hardware Management Console

    PowerVC is using the new Hardware Management Console K2 REST API to create the virtual machines. If you want to go further and check the commands used on the HMC, you can do so with the lssvcevents command :
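
    A hedged sketch, from an ssh session on the HMC (-t console selects the console events, -d limits the history to the given number of days) :

    # list the console events of the last day and keep the PowerVC ones
    lssvcevents -t console -d 1 | grep powervc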

    time=06/21/2014 17:49:12,text=HSCE2123 User name powervc: chsysstate -m XXXX58-9117-MMD-658B2AD -r lpar -o on -n deckard-e9879213-00000018 command was executed successfully.
    time=06/21/2014 17:47:29,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
    time=06/21/2014 17:46:51,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 29 --id 1 -a remote_slot_num=6,remote_lpar_id=8,adapter_type=server command was executed successfully."
    time=06/21/2014 17:46:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""6/CLIENT/1//29//0"""""",name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:46:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:46:17,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 28 --id 1 -a remote_slot_num=5,remote_lpar_id=8,adapter_type=server command was executed successfully."
    time=06/21/2014 17:46:06,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""5/CLIENT/1//28//0"""""",name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:45:57,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:45:46,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 30 --id 2 -a remote_slot_num=4,remote_lpar_id=8,adapter_type=server command was executed successfully."
    time=06/21/2014 17:45:36,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 29 --id 1 command was executed successfully.
    time=06/21/2014 17:45:27,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""4/CLIENT/2//30//0"""""",name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:45:18,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:45:08,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 28 --id 1 command was executed successfully.
    time=06/21/2014 17:45:07,text=User powervc has logged off from session id 42151 for the reason:  The user ran the Disconnect task.
    time=06/21/2014 17:45:07,text=User powervc has disconnected from session id 42151 for the reason:  The user ran the Disconnect task.
    time=06/21/2014 17:44:50,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o a -m 9117-MMD*658B2AD -s 23 --id 1 -a adapter_type=server,remote_lpar_id=8,remote_slot_num=3 command was executed successfully."
    time=06/21/2014 17:44:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,virtual_scsi_adapters+=3/CLIENT/1//23/0,name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:44:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:44:22,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 25 --id 2 -a remote_slot_num=2,remote_lpar_id=8,adapter_type=server command was executed successfully."
    time=06/21/2014 17:44:11,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""2/CLIENT/2//25//0"""""",name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:44:02,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:43:50,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o r -m 9117-MMD*658B2AD -s 23 --id 1 command was executed successfully.
    time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
    time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 2 -o off command was executed successfully.
    time=06/21/2014 17:42:57,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_eth_adapters+=""""32/0/1665//0/0/zvdc4/fabbb99de420/all/"""""",name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:42:49,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:41:53,text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r lpar -p deckard-e9879213-00000018 -n default_profile -o apply command was executed successfully.
    time=06/21/2014 17:41:42,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:41:36,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=deckard-e9879213-00000018,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,max_mem=8192,profile_name=default_profile,max_virtual_slots=64,lpar_proc_compat_mode=default,proc_mode=shared,min_procs=4,desired_procs=4,max_procs=4,min_proc_units=2,desired_proc_units=2,max_proc_units=2,sharing_mode=uncap,uncap_weight=128,lpar_avail_priority=127,sync_curr_profile=1 command was executed successfully."
    time=06/21/2014 17:41:01,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=FAKE_1403368861661,profile_name=default,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,max_mem=8192,max_virtual_slots=4,virtual_eth_adapters=5/0/1//0/1/,virtual_scsi_adapters=2/client/1//2/0,""virtual_serial_adapters=0/server/1/0//0/0,1/server/1/0//0/0"",""virtual_fc_adapters=3/client/1//2//0,4/client/1//2//0"" -o query command was executed successfully."
    

    blog_deploy3_hmc1

    As you can see on the picture below, four virtual fibre channel adapters are created, taking care of the constraints of the storage connectivity group created earlier (looking at the Virtual I/O Server, the vfcmaps are ok …) :

    blog_deploy3_hmc2_pixelate

    padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost14 -npiv
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vfchost14     U9117.MMD.658B2AD-V1-C28                8 deckard-e98792 AIX
    
    Status:LOGGED_IN
    FC name:fcs3                    FC loc code:U2C4E.001.DBJN916-P2-C1-T4
    Ports logged in:2
    Flags:a
    VFC client name:fcs2            VFC client DRC:U9117.MMD.658B2AD-V8-C5
    
    padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost15 -npiv
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vfchost15     U9117.MMD.658B2AD-V1-C29                8 deckard-e98792 AIX
    
    Status:LOGGED_IN
    FC name:fcs4                    FC loc code:U2C4E.001.DBJO029-P2-C1-T1
    Ports logged in:2
    Flags:a
    VFC client name:fcs3            VFC client DRC:U9117.MMD.658B2AD-V8-C6
    

    View from the Storage Volume Controller

    The SVC side is pretty simple, two steps : a FlashCopy-Copy with the volume-Image lun (the one created at the capture step) as source, and a host creation for the new virtual machine :

    • Creation of a FlashCopy-Copy with the volume used for the capture as source :
    • blog_deploy3_flashcopy1

      # mkvdisk -name volume-boot-9117MMD_658B2AD-deckard-e9879213-00000018 -iogrp 0 -mdiskgrp VNX_00086_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
      # mkfcmap -source 880 -target 881 -autodelete
      # prestartfcmap 0
      # startfcmap 0
      
    • The host is created using the eight wwpns of the newly created virtual machine (I paste here the lssyscfg command to check that the wwpns are the same :-)) :
    • hscroot@hmc1:~> lssyscfg -r prof -m XXXXX58-9117-MMD-658B2AD --filter "lpar_names=deckard-e9879213-00000018"
      name=default_profile,lpar_name=deckard-e9879213-00000018,lpar_id=8,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/1/XXXXX60/29/0,virtual_eth_adapters=32/0/1665//0/0/zvdc4/fabbb99de420/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/2/XXXXX59/30/c050760727c5004a,c050760727c5004b/0"",""4/client/2/XXXXX59/25/c050760727c5004c,c050760727c5004d/0"",""5/client/1/XXXXX60/28/c050760727c5004e,c050760727c5004f/0"",""6/client/1/XXXXX60/23/c050760727c50050,c050760727c50051/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
      
      # mkhost -name deckard-e9879213-00000018-06976900 -hbawwpn C050760727C5004A -force
      # addhostport -hbawwpn C050760727C5004B -force 11
      # addhostport -hbawwpn C050760727C5004C -force 11
      # addhostport -hbawwpn C050760727C5004D -force 11
      # addhostport -hbawwpn C050760727C5004E -force 11
      # addhostport -hbawwpn C050760727C5004F -force 11
      # addhostport -hbawwpn C050760727C50050 -force 11
      # addhostport -hbawwpn C050760727C50051 -force 11
      # mkvdiskhostmap -host deckard-e9879213-00000018-06976900 -scsi 0 881
      

      blog_deploy3_svc_host1
      blog_deploy3_svc_host2
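
    Instead of typing seven addhostport commands one by one you can loop over the remaining wwpns (a sketch, assuming the SVC commands are run over ssh from an admin workstation; host id 11 and the wwpns are the ones from the example above) :

    # for wwpn in C050760727C5004B C050760727C5004C C050760727C5004D C050760727C5004E C050760727C5004F C050760727C50050 C050760727C50051; do ssh admin@svc "addhostport -hbawwpn $wwpn -force 11"; done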

    View from fibre channel switches

    On the two fibre channel switches four zones are created (do not forget the zones used for the Live Partition Mobility). These zones can easily be identified by their names: all PowerVC zones are prefixed with “powervc” (unfortunately the names are truncated; a sketch showing how to decode them follows the lists below) :

    • Four zones are created on the fibre channel switch of the fabric A :
    • switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
       zone:  powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
                      c0:50:76:07:27:c5:00:51; 50:05:07:68:01:10:f3:2c
      
      switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
       zone:  powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
                      c0:50:76:07:27:c5:00:4c; 50:05:07:68:01:10:f3:85
      
      switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
       zone:  powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
                      c0:50:76:07:27:c5:00:4d; 50:05:07:68:01:10:f3:85
      
      switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
       zone:  powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
                      c0:50:76:07:27:c5:00:50; 50:05:07:68:01:10:f3:2c
      
    • Four zones are created on the fibre channel switch of the fabric B :
    • switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
       zone:  powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
                      c0:50:76:07:27:c5:00:4e; 50:05:07:68:01:20:f3:85
      
      switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
       zone:  powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
                      c0:50:76:07:27:c5:00:4a; 50:05:07:68:01:20:f3:2c
      
      switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
       zone:  powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
                      c0:50:76:07:27:c5:00:4b; 50:05:07:68:01:20:f3:2c
      
      switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
       zone:  powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
                      c0:50:76:07:27:c5:00:4f; 50:05:07:68:01:20:f3:85
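
    Since a zone name is just the concatenation of the prefix, the truncated lpar name and the two wwpns, it can be decoded easily (a minimal sketch in bash, run from any admin host; it assumes the two 16-digit wwpns always sit at the end of the name) :

    # zone=powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
    # echo "client wwpn: ${zone:${#zone}-32:16}"
    client wwpn: c050760727c50051
    # echo "target wwpn: ${zone:${#zone}-16:16}"
    target wwpn: 500507680110f32c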
      

    Activation Engine and Virtual Optical Device

    All my deployed virtual machines are connected to one of the Virtual I/O Servers by a vSCSI adapter. This vSCSI adapter connects the virtual machine to a virtual optical device (a virtual cdrom) needed by the activation engine to reconfigure the virtual machine. Looking at the Virtual I/O Server, the virtual media repository is filled with customized iso files needed to activate the virtual machines :

    • Here is the output of the lsrep command on one of my Virtual I/O Servers used by PowerVC :
    • padmin@XXX60:/home/padmin$ lsrep 
      Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free 
          1017     1014 rootvg                   279552           110592 
      
      Name                                                  File Size Optical         Access 
      vopt_1c967c7b27a94464bebb6d043e6c7a6e                         1 None            ro 
      vopt_b21849cc4a32410f914a0f6372a8f679                         1 None            ro 
      vopt_e9879213dc90484bb3c5a50161456e35                         1 None            ro
      
    • At the time of writing this post the vSCSI adapter is not deleted after the virtual machine's activation, but it is only used at the first boot of the machine :
    • blog_adapter_for_ae_pixelate

    • Even better, you can mount this iso and check that it is used by the activation engine. The network configuration to be applied at reboot is written in an xml file. For those -like me- who have ever played with VMcontrol, it may remind you of the deploy command used in VMcontrol :
    • root@XXXX60:# cd /var/vio/VMLibrary
      root@XXXX60:/var/vio/VMLibrary# loopmount -i vopt_1c967c7b27a94464bebb6d043e6c7a6e -o "-V cdrfs -o ro" -m /mnt
      root@XXXX60:/var/vio/VMLibrary# cd /mnt
      root@XXXX60:/mnt# ls
      ec2          openstack    ovf-env.xml
      root@XXXX60:/mnt# cat ovf-env.xml
      <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
          <Locale>en</Locale>
        </PlatformSection>
        <PropertySection>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.244.17.1"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="deckard"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value="10.10.20.10 10.10.20.11"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.244.17.35"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/>
          <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/>
        </PropertySection>
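
    If you just want the list of activation parameters without reading the raw XML, a one-liner does the job (a sketch using only tr and awk, both available on the Virtual I/O Server; don't forget to unmount when you are done) :

    root@XXXX60:/mnt# tr '<' '\n' < ovf-env.xml | awk -F'"' '/^Property ovfenv:key/ {print $2" = "$4}'
    com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway = 10.244.17.1
    com.ibm.ovf.vmcontrol.system.networking.hostname = deckard
    [..]
    root@XXXX60:/mnt# cd / ; umount /mnt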
      

    Shared Ethernet Adapters auto management

    This part is not specific to the standard version of PowerVC but I wanted to talk about it here. You probably already know that PowerVC is built on top of OpenStack, and OpenStack is clever : the product doesn't keep unnecessary objects in your configuration. I was very impressed by the management of the networks and of the vlans; PowerVC manages and takes care of your Shared Ethernet Adapters for you. You don't have to remove unused vlans or add new vlans by hand (just add the network in PowerVC), here are a few examples :

    • If you are adding a vlan in PowerVC you have the choice to select the Shared Ethernet Adapter for this vlan. For instance you can choose not to deploy this vlan on a particular host :
    • blog_network_do_not_use_pixelate

    • If you deploy a virtual machine on this vlan, the vlan will automatically be added to the Shared Ethernet Adapter if this is the first machine using it :
    • # chhwres -r virtualio --rsubtype vnetwork -o a -m 9117-MMD*658B2AD --vnetwork 1503-zvdc4 -a vlan_id=1503,vswitch=zvdc4,is_tagged=1
      
    • If you are moving a machine from one host to another and this machine is the last one using this vlan, the vlan is automatically cleaned up and removed from the Shared Ethernet Adapter (a guess at the matching command is sketched after this list).
    • I have in my configuration two Shared Ethernet Adapters each one on a different virtual switch. Good news : PowerVC is vswitch aware :-)
    • This link explains it in detail (not the redbook) : Click here
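
    For the vlan removal I did not capture the exact command, but the HMC CLI has a matching remove operation for virtual networks; something like this sketch (the options are assumed by analogy with the add command above, check them on your HMC) :

    # chhwres -r virtualio --rsubtype vnetwork -o r -m 9117-MMD*658B2AD --vnetwork 1503-zvdc4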

    Mobility

    PowerVC standard is able to manage the mobility of your virtual machines. Machines can be relocated on any host of the PowerVC pool. You no longer have to remember the long and complicated migrlpar command, PowerVC takes care of it for you, just click the migrate button :

    blog_migrate_1_pixelate

    • Looking at the Hardware Management Console lssvcevents (a sketch showing how to grab them follows this list), you can check that the migrlpar command takes care of the storage connectivity group created earlier, and maps the lpar on adapters fcs3 and fcs4 :
    • # migrlpar -m XXX58-9117-MMD-658B2AD -t XXX55-9117-MMD-65ED82C --id 8 -i ""virtual_fc_mappings=2//1//fcs3,4//1//fcs4,5//2//fcs3,6//2//fcs4""
      
    • On the Storage Volume Controller, the Live Partition Mobility wwpns of the host are correctly activated while the machine moves to the other host :
    • blog_migrate_svc_lpm_wwpns_greened
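
    If you want to catch these commands yourself, the HMC logs every command run by PowerVC in its console events; a simple grep does the trick (a sketch; -t console lists the console events and -d is the number of days to look back) :

    hscroot@hmc1:~> lssvcevents -t console -d 1 | grep migrlpar
    [..]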

    About supported fibre channel switches : all FOS >= 6.4 are ok !

    At the time of writing this post things are not very clear about this. According to the Redbook the only supported models of fibre channel switches are the IBM SAN24B-5 and IBM SAN48B-5. I'm using Brocade 8510-4 fibre channel switches and they are working well with PowerVC. After a couple of calls and mails with the PowerVC development team it seems that all Fabric OS versions greater than or equal to 6.4 are ok. Don't worry if the PowerVC validator fails, it may happen, just open a call to get the validator working with your switch model (I had problems with version 1.2.0.1 but no more problems with the latest 1.2.1.0 :-))

    Conclusion

    PowerVC is impressive. In my opinion PowerVC is already production ready. Building a machine with four virtual NPIV fibre channel adapters in five minutes is something every AIX system administrator has dreamed of. Tell your boss this is the right way to build machines, and invest in the future by deploying PowerVC : it's a must have :-) :-) :-) :-)! Need advice about it, need someone to deploy it ? Hire me !

    sitckers_resized

    PowerVM Shared Ethernet Adapter simplification : Get rid of Control Channel Adapter

    Since I started working on Virtual I/O Servers and PowerVM I’ve created many Shared Ethernet Adapters in all modes (standard, failover, or sharing). I’ve learned one important lesson : “be careful when creating a Shared Ethernet Adapter“. A single mistake can cause a network outage and I’m sure that you’ve already seen someone in your team creating an ARP storm by mismatching control channel adapters or by adding a vlan that is already present on another Virtual Ethernet Adapter. Because of these kinds of errors I know some customers who try to avoid configuring Shared Ethernet Adapters in failover or sharing mode at all, just to avoid any network outage. With the new versions of the Virtual I/O Server (starting from 2.2.2.2) network loops and ARP storms are -in most cases- detected and stopped at the Virtual I/O Server level or at the firmware level. I always check my configuration two or three times before creating a Shared Ethernet Adapter. These errors come -most of the time- from a lack of rigor and are -in almost all cases- due to the system administrator. With the new version of PowerVM you can now create all Shared Ethernet Adapters without specifying any control channel adapter (the Hardware Management Console and the Virtual I/O Server will do it for you). A new discovery protocol implemented on the Virtual I/O Server matches Shared Ethernet Adapters between them and takes care of creating the Control Channel vlan for you (this one will not be visible on the Virtual I/O Server). Much simpler = fewer errors. Here is a practical how-to :

    How does it work ?

    A new discovery protocol called SEA HA matches partners by using a dedicated vlan (not configurable by the user). Here are a few things to know :

    • Multiple Shared Ethernet Adapters can share the vlan 4095 for their Control Channel link.
    • The vlan 4095 is created per Virtual Switch for this Control Channel link.
    • As always only two Shared Ethernet Adapters can be partners, and the Hardware Management Console ensures that priorities 1 and 2 are used (I’ve seen some customers using priorities 3 and 4, don’t do this).
    • Both failover and sharing mode can be used.
    • Shared Ethernet Adapters with a dedicated Control Channel Adapter can be migrated to this configuration, but this requires a network outage : put the SEA in defined state first (see the last part of this how-to) :

    Here is an example of this configuration on a Shared Ethernet Adapter in Sharing Mode :

    sea_no_ctl_chan_fig1

    On the image below you can follow the steps of this new discovery protocol :

    • 1/ No dedicated Control Channel Adapter is specified at Shared Ethernet Adapter creation. The discovery protocol is used if you create a SEA in failover or sharing mode without specifying the ctl_chan attribute.
    • 2/ Partners are identified by their PVID, both partners must have the same PVID.
    • 3/ This PVID has to be unique per SEA pair.
    • 4/ Additional vlan IDs are compared : partners with non-matching additional vlan IDs are still considered partners if their PVIDs match.
    • 5/ Shared Ethernet Adapters with matching additional vlan IDs but non-matching PVIDs are not considered partners.
    • 6/ If partners do not match on their additional vlan IDs they are still considered partners, but an error is logged in the errlog.

    sea_no_ctl_chan_fig2

    Prerequisites

    Shared Ethernet Adapters without the need of a Control Channel Adapter can’t be created on all systems. At the time of writing this post only a few models of POWER7 machines (maybe POWER8) have a firmware implementing the feature. You have to check that the firmware of your machine is at least an XX780_XXX release. Be careful to check the release notes of the firmware, some 780-level firmware releases do not permit the creation of a SEA without a Control Channel Adapter (especially on the 9117-MMB) (here is an example on this page : link here, the release note says : “Support was added to the Management Console command line to allow configuring a shared control channel for multiple pairs of Shared Ethernet Adapters (SEAs). This simplifies the control channel configuration to reduce network errors when the SEAs are in fail-over mode. This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.”). Because the Hardware Management Console uses vlan 4095 to create the Control Channel link between Shared Ethernet Adapters, it has to be aware of this feature and must ensure that vlan 4095 is not usable or configurable by the administrator. The HMC V7R7.8.0 is aware of this, that’s why the HMC must be updated to at least this level.

    • Check your machine firmware, in my case I’m working on a 9117-MMD (P7+ 770) with the latest firmware available (at the time of writing this post) :
    # lsattr -El sys0 -a modelname
    modelname IBM,9117-MMD Machine name False
    # lsmcode -A
    sys0!system:AM780_056 (t) AM780_056 (p) AM780_056 (t)
    
    • These prerequisites can be checked directly from the Hardware Management Console :
    hscroot@myhmc:~> lslic -t sys -m 9117-MMD-65XXXX
    lic_type=Managed System,management_status=Enabled,disabled_reason=,activated_level=56,activated_spname=FW780.10,installed_level=56,installed_spname=FW780.10,accepted_level=56,accepted_spname=FW780.10,ecnumber=01AM780,mtms=9117-MMD*658B2AD,deferred_level=None,deferred_spname=FW780.10,platform_ipl_level=56,platform_ipl_spname=FW780.10,curr_level_primary=56,curr_spname_primary=FW780.10,curr_ecnumber_primary=01AM780,curr_power_on_side_primary=temp,pend_power_on_side_primary=temp,temp_level_primary=56,temp_spname_primary=FW780.10,temp_ecnumber_primary=01AM780,perm_level_primary=56,perm_spname_primary=FW780.10,perm_ecnumber_primary=01AM780,update_control_primary=HMC,curr_level_secondary=56,curr_spname_secondary=FW780.10,curr_ecnumber_secondary=01AM780,curr_power_on_side_secondary=temp,pend_power_on_side_secondary=temp,temp_level_secondary=56,temp_spname_secondary=FW780.10,temp_ecnumber_secondary=01AM780,perm_level_secondary=56,perm_spname_secondary=FW780.10,perm_ecnumber_secondary=01AM780,update_control_secondary=HMC
    
    • Check your Hardware Management Console release is at least V7R7.8.0 (in my case my HMC is at the latest level available at the time of writing this post) :
    hscroot@myhmc:~> lshmc -V
    "version= Version: 7
     Release: 7.9.0
     Service Pack: 0
    HMC Build level 20140409.1
    MH01406: Required fix for HMC V7R7.9.0 (04-16-2014)
    ","base_version=V7R7.9.0
    "
    

    Shared Ethernet Adapter creation in sharing mode without control channel

    The creation is simple, just identify your Real Adapter and your Virtual Adapter(s). Check on both Virtual I/O Servers that the PVIDs used on the Virtual Adapters are the same and that the priorities are ok (use priority 1 on the PRIMARY Virtual I/O Server and priority 2 on the BACKUP Virtual I/O Server). I’m creating here a Shared Ethernet Adapter in sharing mode; the steps are the same for a Shared Ethernet Adapter in auto mode.

    • Identify the Real Adapter (in my case an LACP 802.3ad adapter) :
    • padmin@vios1$ lsdev -dev ent17
      name             status      description
      ent17            Available   EtherChannel / IEEE 802.3ad Link Aggregation
      padmin@vios2$ lsdev -dev ent17
      name             status      description
      ent17            Available   EtherChannel / IEEE 802.3ad Link Aggregation
      
    • Identify the Virtual Adapters : priority 1 on PRIMARY Virtual I/O Server and priority 2 on BACKUP Virtual I/O Server (my advice is to check that additional vlan IDs are ok too) :
    • padmin@vios1$ entstat -all ent13 | grep -iE "Priority|Port VLAN ID"
        Priority: 1  Active: False
      Port VLAN ID:    15
      padmin@vios1$ entstat -all ent14 | grep -iE "Priority|Port VLAN ID"
        Priority: 1  Active: False
      Port VLAN ID:    16
      padmin@vios2$ entstat -all ent13 | grep -iE "Priority|Port VLAN ID"
        Priority: 2  Active: True
      Port VLAN ID:    15
      padmin@vios2$ entstat -all ent14 | grep -iE "Priority|Port VLAN ID"
        Priority: 2  Active: True
      Port VLAN ID:    16
      
    • Create the Shared Ethernet Adapter without specifying the ctl_chan attribute :
    • padmin@vios1$ mkvdev -sea ent17 -vadapter ent13 ent14 -default ent13 -defaultid 15 -attr ha_mode=sharing largesend=1 large_receive=yes
      ent18 Available
      padmin@vios2$ mkvdev -sea ent17 -vadapter ent13 ent14 -default ent13 -defaultid 15 -attr ha_mode=sharing largesend=1 large_receive=yes
      ent18 Available
      
    • The Shared Ethernet Adapters are created! You can check that the ctl_chan attribute is empty when looking at the device :
    • padmin@vios1$ lsdev -dev ent18 -attr
      attribute     value       description                                                        user_settable
      
      accounting    disabled    Enable per-client accounting of network statistics                 True
      adapter_reset yes         Reset real adapter on HA takeover                                  True
      ctl_chan                  Control Channel adapter for SEA failover                           True
      gvrp          no          Enable GARP VLAN Registration Protocol (GVRP)                      True
      ha_mode       sharing     High Availability Mode                                             True
      [..]
      pvid          15          PVID to use for the SEA device                                     True
      pvid_adapter  ent13       Default virtual adapter to use for non-VLAN-tagged packets         True
      qos_mode      disabled    N/A                                                                True
      queue_size    8192        Queue size for a SEA thread                                        True
      real_adapter  ent17       Physical adapter associated with the SEA                           True
      send_RARP     yes         Transmit Reverse ARP after HA takeover                             True
      thread        1           Thread mode enabled (1) or disabled (0)                            True
      virt_adapters ent13,ent14 List of virtual adapters associated with the SEA (comma separated) True
      
    • By using the entstat command you can check that the Control Channel exists and is using the PVID 4095 (same result on second Virtual I/O Server) :
    • padmin@vios1$ entstat -all ent18 | grep -i "Control Channel PVID"
          Control Channel PVID: 4095
      
    • Looking at the entstat output the SEAs are partners (one PRIMARY_SH and one BACKUP_SH) :
    padmin@vios1$ entstat -all ent18 | grep -i state
        State: PRIMARY_SH
    padmin@vios2$  entstat -all ent18 | grep -i state
        State: BACKUP_SH
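
    • You can also exercise the failover without touching any cable by temporarily flipping one side to standby (a quick sketch; remember to set ha_mode back to sharing afterwards) :
    padmin@vios1$ chdev -dev ent18 -attr ha_mode=standby
    ent18 changed
    padmin@vios1$ chdev -dev ent18 -attr ha_mode=sharing
    ent18 changed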
    

    Verbose and intelligent errlog

    While configuring a Shared Ethernet Adapter in this mode the errlog can give you a lot of information about your configuration. For instance, if additional vlan IDs do not match between the Virtual Adapters of a Shared Ethernet Adapter pair you’ll be warned by an error in the errlog. Here are a few examples :

    • Additional vlan IDs do not match between Virtual Adapters :
    padmin@vios1$ errlog | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A759776F   0506205214 I H ent18          SEA HA PARTNERS VLANS MISMATCH
    
    • Looking at the detailed output you can get the missing vlan id :
    padmin@vios1$ errlog -ls | more
    ---------------------------------------------------------------------------
    LABEL:          VIOS_SEAHA_DSCV_VLA
    IDENTIFIER:     A759776F
    Date/Time:       Tue May  6 20:52:59 2014
    Sequence Number: 704
    Machine Id:      00XXXXXXXX00
    Node Id:         vios1
    Class:           H
    Type:            INFO
    WPAR:            Global
    Resource Name:   ent18
    Resource Class:  adapter
    Resource Type:   sea
    Location:
    
    Description
    SEA HA PARTNERS VLANS MISMATCH
    
    Probable Causes
    VLAN MISCONFIGURATION
    
    Failure Causes
    VLAN MISCONFIGURATION
    
            Recommended Actions
            NONE
    
    Detail Data
    ERNUM
    0000 001A
    ABSTRACT
    Discovered HA partner with unmatched VLANs
    AREA
    VLAN misconfiguration
    BUILD INFO
    BLD: 1309 30-10:08:58 y2013_40A0
    LOCATION
    Filename:sea_ha.c Function:seaha_process_dscv_init Line:6156
    DATA
    VLAN = 0x03E9
    
    • The last line is the value of the missing vlan in hexadecimal (0x03E9, which is 1001 in decimal). We can manually check that this vlan is missing on vios1 :
    # echo "ibase=16; 03E9" | bc
    1001
    padmin@vios1$ entstat -all ent18 | grep -i "VLAN Tag IDs:"
    VLAN Tag IDs:  1659
    VLAN Tag IDs:  1682
    VLAN Tag IDs:  1682
    padmin@vios2$ entstat -all ent18 | grep -i "VLAN Tag IDs:"
    VLAN Tag IDs:  1659
    VLAN Tag IDs:  1001  1682
    VLAN Tag IDs:  1001  1682
    
    • A loss of communication between the SEAs will also be logged in the errlog :
    padmin@vios1$ errlog | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    B8C78C08   0502231214 I H ent18          SEA HA PARTNER LOST
    padmin@vios1$ errlog -ls | more
    Location:
    
    Description
    SEA HA PARTNER LOST
    
    Probable Causes
    SEA HA PARTNER DOWN
    
    Failure Causes
    SEA HA PARTNER DOWN
    
            Recommended Actions
            INITIATE PARTNER DISCOVERY
    
    Detail Data
    ERNUM
    0000 0019
    ABSTRACT
    Initiating partner discovery due to lost partner
    AREA
    SEA HA discovery partner lost
    BUILD INFO
    BLD: 1309 30-10:08:58 y2013_40A0
    LOCATION
    Filename:sea_ha.c Function:seaha_dscv_ka_rcv_timeout Line:2977
    DATA
    Partner MAC: 0x1A:0xC4:0xFD:0x72:0x9B:0x0F
    
    • Be careful looking at the errlog, a SEA in sharing mode will log “become primary” even if it is the “backup” SEA (you have to look at the errlog -ls output for the details) :
    padmin@vios1$ errlog | grep BECOME
    E48A73A4   0506205214 I H ent18          BECOME PRIMARY
    padmin@vios2$ errlog | grep BECOME
    1FE2DD91   0506205314 I H ent18          BECOME PRIMARY
    
    padmin@vios1$ errlog -ls | more
    LABEL:          VIOS_SEAHA_PRIMARY
    IDENTIFIER:     E48A73A4
    [..]
    Description
    BECOME PRIMARY
    [..]
    padmin@vios2$ errlog -ls | more
    LABEL:          VIOS_SEAHA_BACKUP
    IDENTIFIER:     1FE2DD91
    [..]
    Description
    BECOME PRIMARY
    [..]
    ABSTRACT
    Transition from INIT to BACKUP
    [..]
    seahap->state= 0x00000003
    Become the Backup SEA
    

    Removing the control channel adapter from an existing Shared Ethernet Adapter

    A “classic” Shared Ethernet Adapter can be modified to work without a dedicated Control Channel Adapter. This modification requires a network outage and the Shared Ethernet Adapter needs to be in defined state. I DO NOT LIKE doing administration as root on Virtual I/O Servers but I’ll do it here because of the use of the mkdev command :

    • On both Virtual I/O Servers put the Shared Ethernet Adapter in defined state :
    padmin@vios1$ oem_setup_env
    root@vios1# rmdev -l ent18
    ent18 Defined
    padmin@vios2$ oem_setup_env
    root@vios2# rmdev -l ent18
    ent18 Defined
    
    • On both Virtual I/O Servers remove the dedicated Control Channel Adapter for both Shared Ethernet Adapters :
    root@vios1# lsattr -El ent18 -a ctl_chan
    ctl_chan ent12 Control Channel adapter for SEA failover True
    root@vios1# chdev -l ent18 -a ctl_chan=""
    ent18 changed
    root@vios1# lsattr -El ent18 -a ctl_chan
    ctl_chan  Control Channel adapter for SEA failover True
    root@vios2# lsattr -El ent18 -a ctl_chan
    ctl_chan ent12 Control Channel adapter for SEA failover True
    root@vios2# chdev -l ent18 -a ctl_chan=""
    ent18 changed
    root@vios2# lsattr -El ent18 -a ctl_chan
    ctl_chan  Control Channel adapter for SEA failover True
    
    • Put each Shared Ethernet Adapter in available state by using the mkdev command :
    root@vios1# mkdev -l ent18
    ent18 Available
    root@vios2# mkdev -l ent18
    ent18 Available
    
    • Verify that the Shared Ethernet Adapter is now using vlan 4095 as Control Channel PVID :
    padmin@vios1$ entstat -all ent18 | grep -i "Control Channel PVID"
        Control Channel PVID: 4095
    padmin@vios2$ entstat -all ent18 | grep -i "Control Channel PVID"
        Control Channel PVID: 4095
    

    The first step to a global PowerVM simplification

    Be aware that this simplification is one of the first steps of a much larger project. With the latest version of the HMC V8R8.1.0 a lot of new features will be available (June 2014). I can’t wait to test the “single point of management” for Virtual I/O Servers. Anyway, creating a Shared Ethernet Adapter is now easier than before. Use this method to avoid human errors and misconfiguration of your Shared Ethernet Adapters. As always I hope this post will help you understand this simplification. :-)

    PowerVC Express using local storage : Overview, tips and tricks, and lessons learned from experience

    Everybody has been talking about PowerVC since the October 8th announcement, but after seeing a few videos and reading a few articles about it, I didn’t find anything telling what the product really has in its guts. I had the chance to deploy and test a PowerVC express version (using local storage), faced a lot of problems and found some interesting things to share with you. Rather than boiling the ocean :-) and asking for new features (oh, everybody wants new features !), here is a practical how-to, some tips and tricks and the lessons I’ve learned about it. After a few weeks of work I can say that PowerVC is really good and pretty simple to use and deploy. Here we go :

    Preparing the PowerVC express host

    Setting SELinux from enforcing to permissive

    cool1

    Please refer to my previous post about installing Linux On Power if you have any doubt about this. Before trying to install PowerVC express edition you first have to disable selinux or at least set the policy from enforcing to permissive. Please note that a reboot is mandatory for this modification to be taken into account (an immediate stopgap is sketched after the code below) :

    # sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
    # grep ^SELINUX /etc/selinux/config
    SELINUX=permissive
    SELINUXTYPE=targeted
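
    If you do not want to wait for the reboot to get the running policy in permissive mode, setenforce switches it immediately (the config file change above is still needed to make it persistent across reboots) :

    # getenforce
    Enforcing
    # setenforce 0
    # getenforce
    Permissive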
    

    Reboot the PowerVC express host :

    [root@powervc-standard ~]# shutdown -fh now
    Broadcast message from root@powervc-standard
    (/dev/pts/0) at 15:43 ...
    The system is going down for halt NOW!
    

    Yum repository

    cool2

    Before running the installer you have to configure your yum repository because the installer needs to install rpms shipped with the Red Hat Enterprise installation cdrom. I chose to use the cdrom as the repository but it can also be served through http without any problem :

    # mkdir /mnt/cdrom ; mount -o loop /dev/cdrom /mnt/cdrom
    # cat /etc/yum.repos.d/rhel-cdrom.repo
    [rhel-cdrom]
    name=RHEL Cdrom
    baseurl=file:///mnt/cdrom
    gpgcheck=0
    enabled=1
    # yum update
    # yum upgrade
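
    Before launching the installer you can quickly verify that the repository is usable :

    # yum repolist
    [..]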
    

    If using x86 version : noop scheduler

    If you are using the x86 version of PowerVC express you may experience some slowness while installing the product. In my case I had to change the I/O scheduler from cfq to noop. My advice is to enable it only temporarily. My installation of PowerVC express took hours (no joke, almost 5 hours) before I changed the I/O scheduler to noop; enabling this option reduced the time to half an hour (in my case) :

    # cat /sys/block/vda/queue/scheduler
    noop anticipatory deadline [cfq]
    # echo "noop" > /sys/block/vda/queue/scheduler
    # cat /sys/block/vda/queue/scheduler
    [noop] anticipatory deadline cfq
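
    This echo does not survive a reboot, which is fine here since the change is only needed during the installation. If you ever want it to be permanent, the scheduler can also be set at boot time with the elevator kernel parameter (a sketch; the kernel line below is a hypothetical RHEL 6 grub legacy entry, yours will differ) :

    # grep -m1 kernel /boot/grub/grub.conf
    kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root elevator=noop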
    

    PATH modification

    Add /opt/ibm/powervc/bin to your PATH to be able to run the PowerVC commands such as powervc-console-term, powervc-services, powervc-get-token, and so on :

    # more /root/.bash_profile
    PATH=$PATH:$HOME/bin:/opt/ibm/powervc/bin
    

    I’ll not detail the installation here, just run the installer and answer its questions :

    # ./install
    Select the offering type to install:
       1 - Express  (IVM support)
       2 - Standard (HMC support)
       9 - Exit
    1
    Extracting license content
    International Program License Agreement
    [..]
    

    Preparing the Virtual I/O Server and the IVM

    Before trying to do anything you have to configure the Virtual I/O Server and the IVM, check that all the points below are ok before registering the host :

    • You need at least one Shared Ethernet Adapter to use PowerVC express; on an IVM you can have up to four Shared Ethernet Adapters.
    • A virtual media repository created with at least 40Gb free.
    • # lsrep
      Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
         40795    29317 rootvg                   419328           342272
      
    • A PowerVM enterprise edition or PowerVM for IBM PowerLinux key.
    • The maximum number of virtual adapters correctly configured (in my case 256).
    • The maximum number of ssh sessions opened on the Virtual I/O Server has to be at least 20.
    • # grep MaxSessions /etc/ssh/sshd_config
      MaxSessions 20
      # stopsrc -s sshd
      # startsrc -s sshd
      
    • FTP transfers allowed and FTP ports open between the PowerVC host and the IVM.

    PowerVC usage

    Host Registering

    It’s very easy, but registering the host is one of the most important steps of this configuration. Just set your IVM hostname, user and password. The tricky part is the box to check to use local storage : you then have to choose the directory where images will be stored. Be careful when choosing this directory, it can’t be changed on the fly; you have to remove and re-register the host if you want to change it. My advice is not to choose the default /home/padmin directory, but to create a dedicated logical volume for it (a sketch follows this paragraph).
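
    Here is what the dedicated filesystem can look like (a hypothetical sketch, run as root on the Virtual I/O Server after oem_setup_env; the mount point and size are mine, pick your own) :

    # crfs -v jfs2 -g rootvg -m /powervc_images -a size=60G -A yes
    # mount /powervc_images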

    deckard

    If the host registration fails, check all the Virtual I/O Server prerequisites, then retry. If it fails again, check /var/log/nova/api.log and /var/log/nova/compute_xxx.log.

    host1

    Manage existing Virtual Machines

    Unlike VMcontrol, PowerVC allows you to manage existing machines, so if your IVM is correctly configured you’ll have no trouble importing existing machines and managing them with PowerVC. This is one of the strengths of PowerVC : it assures backward compatibility for your existing Virtual I/O Clients. And it’s simple to use (look at the images below) :

    manage_existing
    manage_existing1

    Network Definition

    Create a network using one of your Shared Ethernet Adapter to be able to deploy machines :

    network_1

    First installation with ISO image

    Importing ISO images

    For the first installation (if you do not have any system already installed) you first need to import an iso into PowerVC; be very careful to read the next steps because I had a lot of space problems with this. Importing images is managed by glance, so if you have any problem, checking the log files in /var/log/glance can be useful (as well as enabling verbose mode in /etc/glance/glance.conf). Just use the powervc-iso-import command to do so :

    # powervc-iso-import --name aix-7100-02-02 --os-distro aix --location /root/AIX_7.1_Base_Operating_System_TL_7100-02-02_DVD_1_of_2_32013.iso 
    Password: 
    +----------------------------+--------------------------------------+
    | Property                   | Value                                |
    +----------------------------+--------------------------------------+
    | Property 'architecture'    | ppc64                                |
    | Property 'hypervisor_type' | powervm                              |
    | Property 'os_distro'       | aix                                  |
    | checksum                   | df548a0cc24dbec196d0d3ead92feaca     |
    | container_format           | bare                                 |
    | created_at                 | 2014-02-04T19:45:29.125109           |
    | deleted                    | False                                |
    | deleted_at                 | None                                 |
    | disk_format                | iso                                  |
    | id                         | ee0a6544-c065-4ab7-aec8-7d6ee4248672 |
    | is_public                  | True                                 |
    | min_disk                   | 0                                    |
    | min_ram                    | 0                                    |
    | name                       | aix-7100-02-02                       |
    | owner                      | 437b161186414e2bb0d4778cbd6fa14c     |
    | protected                  | False                                |
    | size                       | 3835723776                           |
    | status                     | active                               |
    | updated_at                 | 2014-02-04T19:49:29.031481           |
    +----------------------------+--------------------------------------+
    

    importing_iso

    The output of the command above does not tell you anything about what is really done behind the scenes.

    Images are stored permanently in /var/lib/glance/images, where they are copied by powervc-iso-import : this is the place where you need free space. Don’t forget to remove your source image from the PowerVC host or you’ll need even more space (double the space, in fact :-)). Watching /var/lib/glance/images while powervc-iso-import runs shows the image being copied :

    # ls -lh /var/lib/glance/images
    total 3.3G
    -rw-r-----. 1 glance glance 3.3G Feb  4 22:08 3b95401b-85b4-4682-a7a5-332ea9e48348
    # ls -lh /var/lib/glance/images
    total 3.4G
    -rw-r-----. 1 glance glance 3.4G Feb  4 22:09 3b95401b-85b4-4682-a7a5-332ea9e48348
    

    image_import2

    Deploying a Virtual Machine with ISO image :

    Be careful when deploying images to have enough space in the /home/padmin directory of the Virtual I/O Server : images are first copied to this directory before being made available in the Virtual I/O Server media repository in /var/vio/VMLibrary (they are -apparently- removed later). On the PowerVC host itself, be careful to have enough space in /var/lib/nova/images and /var/lib/glance/images. On the PowerVC host images are stored by glance, so DON’T DELETE IMAGES in /var/lib/glance/images ! My understanding is that images are copied on the fly from glance (/var/lib/glance/images), where they are stored by powervc-iso-import, to nova (/var/lib/nova/images), from where they are sent to the Virtual I/O Server and added to its media repository. PowerVC uses ftp to copy files to the Virtual I/O Server, so be sure to have the ports open between the PowerVC host and the Virtual I/O Server.

    • Here is an example of the iso files present in /home/padmin on the Virtual I/O Server when deploying a server with an image; below we can see that the image was copied to /var/lib/nova/images before being copied to the Virtual I/O Server :
    • padmin@deckard# ls *.iso
      config                                                  rhel-server-6.4-beta-ppc64-dvd.iso                      smit.script
      89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso  rhel-server-ppc-6.4-boot.iso                            smit.transaction
      ioscli.log                                              smit.log                                                tivoli
      [root@powervc-express ~]# ls -l /var/lib/nova/images/
      total 4579236
      -rw-r--r--. 1 nova nova 4689133568 Feb  4 22:31 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso
      
    • Once images are copied from PowerVC they are imported to the Virtual I/O Server repository :
    • padmin@deckard# ps -ef | grep mkvopt
        padmin  6422716  8519802   0 05:41:42      -  0:00 ioscli mkvopt -name 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e -file /home/padmin/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso -ro
        padmin  8519802 10485798   0 05:41:42      -  0:00 rksh -c ioscli mkvopt -name 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e -file /home/padmin/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso -ro;echo $?
        padmin 10158232  9699504   2 05:42:30  pts/0  0:00 grep mkvopt
      
    • The Virtual Optical Device is then used to load the CDROM to the partition :
    • padmin@deckard# lsmap -all
      SVSA            Physloc                                      Client Partition ID
      --------------- -------------------------------------------- ------------------
      vhost0          U8203.E4A.06E7E53-V1-C11                     0x00000002
      
      VTD                   vtopt0
      Status                Available
      LUN                   0x8200000000000000
      Backing device        /var/vio/VMLibrary/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e
      Physloc               
      Mirrored              N/A
      
      VTD                   vtscsi0
      Status                Available
      LUN                   0x8100000000000000
      Backing device        lv00
      Physloc               
      Mirrored              N/A
      
      padmin@deckard# lsrep
      Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
         40795    29317 rootvg                   419328           342272
      
      Name                                                  File Size Optical         Access 
      89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e                       4472 vtopt0          ro     
      fa9b3cf0-a649-4bf0-b309-5f2bab6379ea                       3659 None            ro     
      rhel-server-ppc-6.4-boot.iso                                227 None            ro     
      rhel-server-ppc-6.4.iso                                    3120 None            ro     
      
      

    When deploying an iso image, after all the steps above are finished the newly created Virtual Machine will be in shutoff state :

    shutoff_before_start

    Run the console term before starting the Virtual Machine, then start the Virtual Machine (from PowerVC) :

    # powervc-console-term tyler61
    Password: 
    Starting terminal.
    
  • When deploying multiple hosts with the same image it is possible that some virtual machines will have the same name; in this case the powervc-console-term will warn you :
  • # powervc-console-term --f mary
    Password: 
    Multiple servers were found with the same name. Specify the server ID.
    089ecbc5-5bed-4d06-8659-bf7c57529c95 mary
    231ad074-7557-42b5-82b9-82ae2483fccd mary
    powervc-console-term --f 089ecbc5-5bed-4d06-8659-bf7c57529c95
    
    padmin@deckard# ps -ef | grep -i mkvt
      padmin  2556048  8323232   0 06:39:15  pts/1  0:00 rksh -c ioscli rmvt -id 2 && ioscli mkvt -id 2 && exit
    

    starting_first

    Then follow the instructions on the screen to finish this first installation (as if you were installing AIX from the cdrom) :

    IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM 
    IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM 
    -
    Elapsed time since release of system processors: 151 mins 48 secs
    /
    Elapsed time since release of system processors: 151 mins 57 secs
    -------------------------------------------------------------------------------
                                    Welcome to AIX.
                       boot image timestamp: 18:20:10 02/04/2013
                     The current time and date: 05:00:31 02/05/2014
            processor count: 1;  memory size: 2048MB;  kernel size: 29153194
    boot device: /vdevice/v-scsi@30000002/disk@8200000000000000:\ppc\chrp\bootfile.exe
                           kernel debugger setting: enabled
    -------------------------------------------------------------------------------
    
    AIX Version 6.1
    
    

    Preparing the capture of the first installed Virtual Machine

    The Activation Engine

    Before capturing the Virtual Machine, run the Activation Engine. This script allows the machine to be captured and the captured image to be automatically reconfigured on the fly at first boot. Be careful when running the Activation Engine : the Virtual Machine will be shut off just by running this script.

    # scp 192.168.0.98:/opt/ibm/powervc/activation-engine/vmc.vsae.tar .
    # tar xvf vmc.vsae.tar
    x activation-engine-2.2-106.aix5.3.noarch.rpm, 1240014 bytes, 2422 media blocks.
    [..]
    x aix-install.sh, 2681 bytes, 6 media blocks.
    # rm /opt/ibm/ae/AP/*
    # cp /opt/ibm/ae/AS/vmc-network-restore/resetenv /opt/ibm/ae/AP/ovf-env.xml
    # export JAVA_HOME=/usr/java5/jre
    # ./aix-install.sh
    Install VSAE and VMC extensions
    package activation-engine-jython-2.2-106 is already installed
    package activation-engine-2.2-106 is already installed
    package vmc-vsae-ext-2.4.4-1 is already installed
    # /opt/ibm/ae/AE.sh --reset
    JAVA_HOME=/usr/java5/jre
    [..]
    [2014-02-04 23:49:51,980] INFO: OS: AIX Version: 6
    [..]
    [2014-02-04 23:51:20,095] INFO: Cleaning AR and AP directories
    [2014-02-04 23:51:20,125] INFO: Shutting down the system
    
    SHUTDOWN PROGRAM
    Tue Feb  4 23:51:21 CST 2014
    
    
    Broadcast message from root@tyler (tty) at 23:51:21 ... 
    
    shutdown: PLEASE LOG OFF NOW !!!
    System maintenance is in progress.
    All processes will be killed now. 
    
    Broadcast message from root@tyler (tty) at 23:51:21 ... 
    
    shutdown: THE SYSTEM IS BEING SHUT DOWN NOW
    
    [..]
    
    Wait for '....Halt completed....' before stopping. 
    Error reporting has stopped.
    

    Capturing the host

    Just select the virtual machine you want to capture, and click capture ;-) :

    powervc_capture_ted
    snapshot1

    Here are the steps performed by PowerVC when running a capture (so be careful to have enough space on the PowerVC host and on the Virtual I/O Server before running it) :

    • Looking at the Virtual I/O Server itself, the main capture process is a simple dd command capturing the logical volume used as the rootvg backing device; once the dd is finished, the result is gzipped (in /home/padmin) :
    • padmin@deckard# ps -ef | grep dd      
          root  5832754  7078058   9 06:54:09      -  0:01 dd if=/dev/lv00 bs=1024k
          root  7078058  9043976  82 06:54:09      -  0:14 dd if=/dev/lv00 bs=1024k
        padmin  8388674  9699504   2 06:59:20  pts/0  0:00 grep dd
      padmin@deckard# ls -l /home/padmin/5154d176-6c3b-4eda-aa20-998deb207ca8.gz
      -rw-r--r--    1 root     staff    6102452605 Feb 05 07:13 5154d176-6c3b-4eda-aa20-998deb207ca8.gz
      
    • The captured image is then transferred to nova with ftp (an ftpd process is spawned on the Virtual I/O Server) :
    • padmin@deckard# ps -ef | grep ftp
        padmin  7012516  9699504   1 07:20:47  pts/0  0:00 grep ftp
        padmin  7078072  4587660  47 07:14:18      -  0:11 ftpd
      [root@powervc-express ~]#  ls -l /var/lib/nova/images
      total 4666504
      -rw-r--r--. 1 nova nova 4778496000 Feb  5 00:22 5154d176-6c3b-4eda-aa20-998deb207ca8.gz
      
    • The image is then unzipped into the glance store :
    • # ls -l /var/lib/glance/images
      total 6453096
      -rw-r-----. 1 glance glance 1918828544 Feb  5 00:30 5154d176-6c3b-4eda-aa20-998deb207ca8
      -rw-r-----. 1 glance glance 4689133568 Feb  4 22:26 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e
      
    • During all these steps you can check that the Virtual Machine is in snapshot mode :
    • snapshot2

    • After the capture completes, you can have a look at the details :
    • image_3

    Deploying

    Deploying a host is very easy, just follow the instructions :

    • Here is an example of a deploy screen (I like visual thing when reading documents :-)) :
    • deploy1

    • Choose on which host you want to deploy the machine. At this step you can select the number of instances to deploy (you have to have a dhcp network configured for multiple instances), and select the size of the machine (a few sizes are pre-defined by default but you can define your own templates) :
    • deploy_1
      choose

    • PowerVC is smart enough to show you a prediction of your machine usage : it shows in yellow the whole usage of your Power Server after the machine deployment (practical and visual, love it !) :
    • deploy_3

    • Then just wait for the deployment to finish; the steps are the same as for an iso deployment, but the activation engine is started at first boot to reconfigure the virtual machine :
    • deploy2

    Here is an image to sum up the capture, the ISO deployment, and the deployment of a Virtual Machine; I think it’ll be easier to understand with a picture :

    powervc-deploy-capture

    Tips and tricks

    Using PowerVC express on Power 6 machines

    There are a few things not written in the manual. By looking at the source code you can find hidden options to add to the /etc/nova/nova.conf file. One of them is very interesting for PowerVC express : it allows you to try the product on a Power6 server. If you want to do this, just add ivm_power6_enabled = true to /etc/nova/nova.conf, and restart the PowerVC services before adding any Power6 server. The piece of code can be found in the /usr/lib/python2.6/site-packages/powervc_discovery/registration/compute/ivm_powervm_registrar.py file :

    LOG.info("ivm_power6_enabled set to TRUE in nova.conf, "
             "so POWER6 will be allowed for testing")
    

    If you want to do so, just add it in the /etc/nova/nova.conf file in the [DEFAULT] section

    # grep power6 /etc/nova/nova.conf
    ivm_power6_enabled = true
    

    Just for the story : I was sure this was possible because the first presentation I found on the internet about PowerVC showed PowerVC Express on an 8203-E4A machine, which is a Power6 machine; the screenshots provided in this presentation were enough to tell me it was possible (don’t blame anybody for this). Then grep was my best friend to find where this option was hidden. Be aware that this option is only available for test purposes, so don’t open a PMR about it or it’ll be directly closed by IBM. Once again, if IBMers are reading this, tell me if it is ok to publish this option. If not I can remove it from the post.

    Enabling verbose and debug output

    PowerVC is not verbose at all; when something goes wrong it’s sometimes difficult to check what is going on. First of all, the product is based on OpenStack so you have access to all the OpenStack log files. These files are located in /var/log/nova, /var/log/glance and so on. By default debug and verbose output are disabled for each OpenStack component. This is not supported by PowerVC, but you can enable verbose and debug output anyway. For instance I had a problem with nova when registering a host : enabling verbose and debug mode in /etc/nova/nova.conf helped me a lot and let me check the command run on the Virtual I/O Server (look at the example below) :

    # grep -iE "verbose|debug" /etc/nova/nova.conf
    verbose=true
    debug=true
    # vi /var/log/nova/compute-192_168_0_100.log
    2014-02-16 22:07:48.523 13090 ERROR powervc_nova.virt.ibmpowervm.ivm.exception [req-a4bee79a-5eb8-43fd-8ca6-ed75ebee880f 04c4ca89f32046ed91e0493c9e554d1d 437b161186414e2bb0d4778cbd6fa14c] Unexpected exception while running IVM command.
    Command: mksyscfg -r lpar -i "max_virtual_slots=64,max_procs=4,lpar_env=aixlinux,desired_procs=1,min_procs=1,proc_mode=shared,virtual_eth_adapters=\"36/0/1//0/0\",desired_proc_units=0.100000000000000,sharing_mode=uncap,min_mem=512,desired_mem=512,virtual_eth_mac_base_value=fa3f3d3cae,max_proc_units=4,lpar_proc_compat_mode=default,name=priss-6712136f-000000cd,max_mem=4096,min_proc_units=0.1"
    Exit code: 1
    Stdout: []
    Stderr: ['[VIOSE01040181-0025] Value for attribute desired_proc_units is not valid.', '']
    

    Using the PowerVC Rest API

    Systems engineers and systems administrators like me rarely use REST APIs. If you want to automate some PowerVC actions, such as deploying virtual machines without going through the web interface, you have to use the REST API provided with PowerVC. First of all, here are the places where you’ll find some useful documentation for the PowerVC REST API :

    • On the PowerVC infocenter, you’ll find good tips and tricks for using the REST API :
    • The PowerVC programming guide :

    PowerVC provides a script that uses the REST API. It generates an API token used for each call to the API. This script is written in Python, so I took it as a reference to develop my own scripts based on it :

    • You first have to use powervc-get-token to generate a token used to call the API. In general GET requests are used to query PowerVC (list virtual machines, list networks), and POST requests to create things (create a network, create a virtual machine); a curl equivalent is sketched after the next code block.
    • Get an API token to begin, by using powervc-get-token :
    • # powervc-get-token 
      Password: 
      323806024c70455d84a7a1db900a4f89
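
    • The same calls can be made from the shell with curl if you don’t want to write Python (a sketch; the -k flag is there because PowerVC uses a self-signed certificate by default, and I assume powervc-get-token prompts for the password on the tty) :
    • # TOKEN=$(powervc-get-token)
      Password: 
      # curl -sk -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" https://localhost/powervc/openstack/identity/v2.0/tenants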
      
    • To create a virtual machine you’ll need to know three things : the tenant, the network on which the VM will be deployed, and the image used to deploy the server.
    • Here is the script I used to get the tenant (url : /powervc/openstack/identity/v2.0/tenants) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          body = ""
      
          conn.request("GET", "/powervc/openstack/identity/v2.0/tenants", body, headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • By running the script I get the tenant id 437b161186414e2bb0d4778cbd6fa14c :
    • # ./powervc-get-tenants
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      {
          "tenants": [
              {
                  "description": "IBM Default Tenant", 
                  "enabled": true, 
                  "id": "437b161186414e2bb0d4778cbd6fa14c", 
                  "name": "ibm-default"
              }
          ], 
          "tenants_links": []
      }
      
    • Here is the script I used to get the network id (url :/powervc/openstack/network/v2.0/networks) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          print "Tenant ID = "+tenant_id
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          body = ""
      
          conn.request("GET", "/powervc/openstack/network/v2.0/networks", body, headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • By running the script I get the network id 83e233a7-34ef-4bf2-ae95-958046da770f :
    • # ./powervc-list-networks
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
      Tenant ID = 437b161186414e2bb0d4778cbd6fa14c
      {
          "networks": [
              {
                  "admin_state_up": true, 
                  "id": "83e233a7-34ef-4bf2-ae95-958046da770f", 
                  "name": "local_net", 
                  "provider:network_type": "vlan", 
                  "provider:physical_network": "default", 
                  "provider:segmentation_id": 1, 
                  "shared": false, 
                  "status": "ACTIVE", 
                  "subnets": [
                      "6b76f7e6-02fa-427f-9032-e8d28aaa6ef4"
                  ], 
                  "tenant_id": "437b161186414e2bb0d4778cbd6fa14c"
              }
          ]
      }
      
    • Here is the script I used to get the image id (url : /powervc/openstack/compute/v2/"+tenant_id+"/images) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          print "Tenant ID ="+tenant_id
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          body = ""
      
          conn.request("GET", "/powervc/openstack/compute/v2/"+tenant_id+"/images", body, headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • By running the script I get the image id 0537da41-8542-41a0-b1b0-84ed75c6ed27 :
    • # ./powervc-list-images
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
      Tenant ID = 437b161186414e2bb0d4778cbd6fa14c
      {
          "images": [
              {
                  "id": "0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                  "links": [
                      {
                          "href": "http://localhost:8774/v2/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                          "rel": "self"
                      }, 
                      {
                          "href": "http://localhost:8774/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                          "rel": "bookmark"
                      }, 
                      {
                          "href": "http://192.168.0.12:9292/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                          "rel": "alternate", 
                          "type": "application/vnd.openstack.image"
                      }
                  ], 
                  "name": "ted_capture_201402161858"
              }
          ]
      }
      
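    • Once you have the id of an image, you can also fetch the full record of that single image. This is a sketch only, assuming PowerVC exposes the standard OpenStack GET images/<image_id> compute call (I have only verified the listing call above); the show_image helper is mine :
    • import httplib
      import json
      
      def show_image(token, tenant_id, image_id):
          # Fetch the detailed record of a single image
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          conn.request("GET", "/powervc/openstack/compute/v2/"+tenant_id+"/images/"+image_id, "", headers)
          raw_response = conn.getresponse().read()
          conn.close()
          print json.dumps(json.loads(raw_response), indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          token = raw_input("Please enter PowerVC token : ")
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          image_id = raw_input("Please enter image id : ")
          show_image(token, tenant_id, image_id)
      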
    • With all this information in hand, the token (a3a9904fa5a24a24aa6833358f54c7ce), the tenant id (437b161186414e2bb0d4778cbd6fa14c), the network id (83e233a7-34ef-4bf2-ae95-958046da770f) and the image id (0537da41-8542-41a0-b1b0-84ed75c6ed27), I wrote a script to create a virtual machine (url : /powervc/openstack/compute/v2/<tenant_id>/servers) :
    • import httplib
      import json
      
      def main():
          # Ask for the token and tenant id retrieved earlier
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          print "Tenant ID = "+tenant_id
      
          # Every request is authenticated by the X-Auth-Token header
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
      
          # Definition of the virtual machine to create : the flavor (cpu,
          # memory, disk), the image to deploy and the network to connect,
          # using the ids found by the two previous scripts
          body = {
            "server": {
              "flavor": {
                "OS-FLV-EXT-DATA:ephemeral": 10,
                "disk": 10,
                "extra_specs": {
                  "powervm:proc_units": 1
                },
                "ram": 512,
                "vcpus": 1
              },
              "imageRef": "0537da41-8542-41a0-b1b0-84ed75c6ed27",
              "max_count": 1,
              "name": "api",
              "networkRef": "83e233a7-34ef-4bf2-ae95-958046da770f",
              "networks": [
                {
                "fixed_ip": "192.168.0.21",
                "uuid": "83e233a7-34ef-4bf2-ae95-958046da770f"
                }
              ]
            }
          }
      
          # POST the definition to the servers endpoint to start the deployment
          conn.request("POST", "/powervc/openstack/compute/v2/"+tenant_id+"/servers",
                       json.dumps(body), headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • Running the script finally creates the virtual machine; you can check in the PowerVC web interface that the virtual machine is in deploying state :
    • # ./powervc-create-vm
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
      Tenant ID =437b161186414e2bb0d4778cbd6fa14c
      {
          "server": {
              "OS-DCF:diskConfig": "MANUAL", 
              "adminPass": "LE2bqbA2y87X", 
              "id": "0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
              "links": [
                  {
                      "href": "http://localhost:8774/v2/437b161186414e2bb0d4778cbd6fa14c/servers/0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
                      "rel": "self"
                  }, 
                  {
                      "href": "http://localhost:8774/437b161186414e2bb0d4778cbd6fa14c/servers/0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
                      "rel": "bookmark"
                  }
              ], 
              "security_groups": [
                  {
                      "name": "default"
                  }
              ]
          }
      }
      
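    • The create call returns immediately with the server id (0c7521d1-… in the output above) while the deployment runs in the background. If you want a script to wait for the end of the deployment, you can poll the server record until its status leaves BUILD. This is a sketch only, assuming PowerVC exposes the standard OpenStack GET servers/<server_id> compute call with the usual BUILD/ACTIVE/ERROR status values; the wait_for_server helper is mine :
    • import httplib
      import json
      import time
      
      def wait_for_server(token, tenant_id, server_id, delay=30):
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          while True:
              # Poll the server record until PowerVC reports a final status
              conn = httplib.HTTPSConnection('localhost')
              conn.request("GET", "/powervc/openstack/compute/v2/"+tenant_id+"/servers/"+server_id, "", headers)
              server = json.loads(conn.getresponse().read())["server"]
              conn.close()
              print "status = "+server["status"]
              if server["status"] != "BUILD":
                  return server["status"]
              time.sleep(delay)
      
      if __name__ == "__main__":
          token = raw_input("Please enter PowerVC token : ")
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          server_id = raw_input("Please enter server id : ")
          print "final status = "+wait_for_server(token, tenant_id, server_id)
      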

    Backup and restore

    PowerVC does not offer any HA solution, so my advice is to run it on an IP alias and to keep a second, dormant PowerVC instance ready to be set up the day you need it. To do so, run powervc-backup regularly (why not from crontab; a sketch of such a wrapper follows the examples below). If you need to restore PowerVC on the dormant instance, the only thing to do is to restore the backup (put it in /var/opt/ibm/powervc/backups before running powervc-restore). The backup/restore is just an export/import of each DB2 database (cinder, glance, nova, …), so it can take space and time (in my case the backup weighs 8 GB and restoring it on the dormant instance takes about one hour).

    • Backing up :
    • # powervc-backup 
      Continuing with this operation will stop all PowerVC services.  Do you want to continue?  (y/N):y
      PowerVC services stopped.
      Database CINDER backup completed.
      Database QTM_IBM backup completed.
      Database NOSQL backup completed.
      Database NOVA backup completed.
      Database GLANCE backup completed.
      Database KEYSTONE backup completed.
      Database and file backup completed. Backup data is in archive /var/opt/ibm/powervc/backups/20142199544840294/powervc_backup.tar.gz.
      PowerVC services started.
      PowerVC backup completed successfully.
      
    • Restoring :
    • # powervc-restore --noPrompt
      
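    • As said above, the backup can be scheduled from crontab. Here is a minimal wrapper sketch; the KEEP value and the prune_old_backups helper are arbitrary choices of mine, and the "y" answer is piped to powervc-backup because the command asks for confirmation before stopping the services :
    • import glob
      import os
      import shutil
      import subprocess
      
      # Example crontab entry (hypothetical path) :
      # 0 2 * * * /opt/scripts/powervc-cron-backup
      
      BACKUP_DIR = "/var/opt/ibm/powervc/backups"
      KEEP = 7  # number of backup archives to keep (arbitrary choice)
      
      def run_backup():
          # powervc-backup asks for a confirmation before stopping the
          # services, so the "y" answer is piped to its standard input
          proc = subprocess.Popen(["powervc-backup"], stdin=subprocess.PIPE)
          proc.communicate("y\n")
          return proc.returncode
      
      def prune_old_backups():
          # Each run creates a timestamped directory under BACKUP_DIR ;
          # keep only the KEEP most recent ones
          dirs = sorted(glob.glob(os.path.join(BACKUP_DIR, "*")))
          for old in dirs[:-KEEP]:
              shutil.rmtree(old)
      
      if __name__ == "__main__":
          if run_backup() == 0:
              prune_old_backups()
      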

    Places to check

    Finding information about PowerVC is not so simple; the product is still young and there is not much feedback or information about it yet. Here are a few places to check if you have any problems. Keep in mind that the community is very active and is growing day by day :

    If I had one last word to say, it would be : future. In my opinion PowerVC is the future of deployment on Power Systems. I had the chance to use both VMcontrol and PowerVC : both are powerful, but the second is so simple to use that I can easily say it will be adopted by IBM customers (has anyone used VMcontrol in production outside of PureSystems ?). Where VMcontrol has failed, PowerVC can succeed … but if you look in the code you'll find some parts of VMcontrol (in the Activation Engine). So the ghost of VMcontrol is not so far away, and it will surely be kicked out by PowerVC. Once again, I hope this helps; comments are welcome, I really need them to be sure my posts are useful and simple to understand.
