Configuration of a Remote Restart Capable partition

How can we move a partition to another machine if the machine or the data center hosting it is totally unavailable? This question is often asked by managers and technical people. Live Partition Mobility can't answer it because the source machine needs to be running to initiate the mobility. I'm sure most of you have implemented a manual solution based on a bunch of scripts recreating the partition profile by hand, but this is hard to maintain, not fully automated, and not supported by IBM. A solution to this problem is to set up your partitions as Remote Restart Capable partitions. This PowerVM feature has been available since the release of VMcontrol (the IBM Systems Director plugin). Unfortunately this powerful feature is not well documented, but over the next months or the next year it will probably become a must-have on your newly deployed AIX machines. One last word: with the new Power8 machines things are going to change for remote restart; the functionality will be easier to use and a lot of prerequisites are going to disappear. Just to be clear, this post was written using Power7+ 9117-MMD machines; the only thing you can't do with these machines (compared to Power8 ones) is change an existing partition to be remote restart capable without having to delete and recreate its profile.

Prerequisites

To create and use a remote restart partition on Power7+/Power8 machines you'll need the following prerequisites (a quick command-line check is sketched right after the list):

  • A PowerVM Enterprise license (the capability "PowerVM remote restart capable" must be true; be careful, there is another capability named "Remote restart capable" which was used by VMcontrol only, so double check you are looking at the right one).
  • Firmware level 780 or later (all Power8 firmware levels are OK; on Power7, anything >= 780 is OK).
  • Your source and destination machines must be connected to the same Hardware Management Console; you can't remote restart between two HMCs at the moment.
  • The minimum HMC version is V8R8.0.0. Check that you have the rrstartlpar command (not the rrlpar command, which is used by VMcontrol only).
  • Better than a long post, check this video (don't laugh at me, I'm trying to do my best but this is one of my first videos... hope it is good):
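
A quick way to double check most of these prerequisites is from the HMC command line. This is just a minimal sketch (the exact capability string shown by your HMC level may differ, so look at the full capability list if the grep returns nothing):

$ lssyscfg -r sys -m source-machine -F capabilities | grep -i remote_restart
$ lshmc -V
$ rrstartlpar --help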

What is a remote restart capable virtual machine?

Better than a long text explaining what it is, check the picture below and follow each number from 1 to 4 to understand what a remote restart partition is:

remote_restart_explanation

Create the profile of your remote restart capable partition: Power7 vs Power8

A good reason to move to Power8 faster than you planned is that you can change a virtual machine to be remote restart capable without having to recreate the whole profile. I don't know why, but at the time of writing this post changing a non remote restart capable LPAR into a remote restart capable one is only available on Power8 systems. If you are using a Power7 machine (like me in all the examples below) be careful to check this option while creating the machine. Keep in mind that if you forget to check the option you will not be able to enable the remote restart capability afterwards and you will unfortunately have to remove your profile and recreate it, sad but true:

  • Don't forget to tick the check box to allow the partition to be remote restart capable:
  • remote_restart_capable_enabled1

  • After the partition is created you can notice in the I/O tab that remote restart capable partitions cannot own any physical I/O adapters:
  • rr2_nophys

  • You can check in the properties that the remote restart capable feature is activated:
  • remote_restart_capable_activated

  • If you try to modify an existing profile on a Power7 machine you'll get the error message below. On a Power8 machine there will be no problem:
  • # chsyscfg -r lpar -m XXXX-9117-MMD-658B2AD -p test_lpar -i remote_restart_capable=1
    An error occurred while changing the partition named test_lpar.
    The managed system does not support changing the remote restart capability of a partition. You must delete the partition and recreate it with the desired remote restart capability.
    
  • You can verify which of your LPARs are remote restart capable (a one-liner to spot the ones that are not is shown at the end of this section):
  • lssyscfg -r lpar -m source-machine -F name,remote_restart_capable
    [..]
    lpar1,1
    lpar2,1
    lpar3,1
    remote-restart,1
    [..]
    
  • On a Power7 machine the best way to enable remote restart on an already created machine is to delete the profile and recreate it by hand, adding the remote restart attribute:
  • Get the current partition profile:
  • $ lssyscfg -r prof -m s00ka9927558-9117-MMD-658B2AD --filter "lpar_names=temp3-b642c120-00000133"
    name=default_profile,lpar_name=temp3-b642c120-00000133,lpar_id=11,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/2/s00ia9927560/32/0,virtual_eth_adapters=32/0/1659//0/0/vdct/facc157c3e20/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/1/s00ia9927559/32/c050760727c5007a,c050760727c5007b/0"",""4/client/1/s00ia9927559/35/c050760727c5007c,c050760727c5007d/0"",""5/client/2/s00ia9927560/34/c050760727c5007e,c050760727c5007f/0"",""6/client/2/s00ia9927560/35/c050760727c50080,c050760727c50081/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
    
  • Remove the partition:
  • $ chsysstate -r lpar -o shutdown --immed -m source-server -n temp3-b642c120-00000133
    $ rmsyscfg -r lpar -m source-server -n temp3-b642c120-00000133
    
  • Recreate the partition with the remote restart attribute enabled:
  • mksyscfg -r lpar -m s00ka9927558-9117-MMD-658B2AD -i 'name=temp3-b642c120-00000133,profile_name=default_profile,remote_restart_capable=1,lpar_id=11,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/2/s00ia9927560/32/0,virtual_eth_adapters=32/0/1659//0/0/vdct/facc157c3e20/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/1/s00ia9927559/32/c050760727c5007a,c050760727c5007b/0"",""4/client/1/s00ia9927559/35/c050760727c5007c,c050760727c5007d/0"",""5/client/2/s00ia9927560/34/c050760727c5007e,c050760727c5007f/0"",""6/client/2/s00ia9927560/35/c050760727c50080,c050760727c50081/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,sriov_eth_logical_ports=none'
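
As mentioned above, a quick way to spot the partitions that are not yet remote restart capable (and will therefore need this delete/recreate treatment on Power7) is to filter the same lssyscfg output; a minimal sketch:

$ lssyscfg -r lpar -m source-machine -F name,remote_restart_capable | grep ",0$"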
    

Creating a reserved storage device

The reserved storage device pool is used to store the configuration data of the remote restart partitions. At the time of writing this post these devices are mandatory and as far as I know they are used only to store the configuration, not the memory state of the virtual machines themselves (maybe in the future, who knows?). You can't create or boot any remote restart partition if you do not have a reserved storage device pool, so create it before doing anything else:

  • You first have to find a bunch of devices visible on all the Virtual I/O Servers of both machines (the source and destination machines used for the remote restart operation). These devices have to be the same on all the Virtual I/O Servers involved in the remote restart operation. The lsmemdev command is used to find them:
  • vios1$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios2$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios3$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios4$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    
    $ lsmemdev -r avail -m source-machine -p vios1,vios2
    [..]
    device_name=hdisk988,redundant_device_name=hdisk988,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,redundant_capable=1
    device_name=hdisk989,redundant_device_name=hdisk989,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E6000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E6000000000000,redundant_capable=1
    device_name=hdisk990,redundant_device_name=hdisk990,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E7000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E7000000000000,redundant_capable=1
    [..]
    $ lsmemdev -r avail -m dest-machine -p vios3,vios4
    [..]
    device_name=hdisk988,redundant_device_name=hdisk988,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E5000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E5000000000000,redundant_capable=1
    device_name=hdisk989,redundant_device_name=hdisk989,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E6000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E6000000000000,redundant_capable=1
    device_name=hdisk990,redundant_device_name=hdisk990,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E7000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E7000000000000,redundant_capable=1
    [..]
    
  • Create the reserved storage device pool using the chhwres command on the Hardware Management Console (it has to be created on all the machines used by the remote restart operation; the source machine is shown here, and the equivalent commands for the destination machine are sketched after this list):
  • $ chhwres -r rspool -m source-machine -o a -a vios_names=\"vios1,vios2\"
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk988 --manual
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk989 --manual
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk990 --manual
    $ lshwres -r rspool -m source-machine --rsubtype rsdev
    device_name=hdisk988,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,is_redundant=1,redundant_device_name=hdisk988,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,lpar_id=none,device_selection_type=manual
    device_name=hdisk989,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E6000000000000,is_redundant=1,redundant_device_name=hdisk989,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E6000000000000,lpar_id=none,device_selection_type=manual
    device_name=hdisk990,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E7000000000000,is_redundant=1,redundant_device_name=hdisk990,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E7000000000000,lpar_id=none,device_selection_type=manual
    $ lshwres -r rspool -m source-machine
    "vios_names=vios1,vios2","vios_ids=1,2"
    
  • You can also create the reserved storage device pool from the Hardware Management Console GUI:
  • After selecting the Virtual I/O Servers, click select devices:
  • rr_rsd_pool_p

  • Choose the maximum and minimum size to filter the devices you can select for the creation of the reserved storage device:
  • rr_rsd_pool2_p

  • Choose the disks you want to put in your reserved storage device pool (put all the devices used by remote restart partitions in manual mode; automatic devices are used by suspend/resume operations or AMS pools. One device cannot be shared by two remote restart partitions):
  • rr_rsd_pool_waiting_3_p
    rr_pool_create_7_p

  • You can check afterwards that your reserved storage device pool is created and is composed of three devices:
  • rr_pool_create_9
    rr_pool_create_8_p
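
As noted above, the reserved storage device pool has to exist on the destination machine too. Here is a sketch of the equivalent commands for the destination side, assuming the same vios3/vios4 names used in the lsmemdev example:

$ chhwres -r rspool -m dest-machine -o a -a vios_names=\"vios3,vios4\"
$ chhwres -r rspool -m dest-machine -o a -p vios3 --rsubtype rsdev --device hdisk988 --manual
$ chhwres -r rspool -m dest-machine -o a -p vios3 --rsubtype rsdev --device hdisk989 --manual
$ chhwres -r rspool -m dest-machine -o a -p vios3 --rsubtype rsdev --device hdisk990 --manual
$ lshwres -r rspool -m dest-machine --rsubtype rsdev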

Select a storage device for each remote restart partition before starting it

After creating the reserved storage device pool you have to select a device from the pool for every partition. This device will be used to store the configuration data of the partition:

  • Note that you cannot start the partition if no device was selected!
  • To select the correct device size you first have to calculate the needed space for every partition using the lsrsdevsize command. This size is roughly the max memory value set in the partition profile (don't ask me why):
  • $ lsrsdevsize -m source-machine -p temp3-b642c120-00000133
    size=8498
    
  • Select the device you want to assign to your machine (in my case there was already a device selected for this machine):
  • rr_rsd_pool_assign_p

  • Then select the machine you want to assign the device to:
  • rr_rsd_pool_assign2_p

  • Or do this on the command line (a way to check all your partitions at once is sketched at the end of this section):
  • $ chsyscfg -r lpar -m source-machine -i "name=temp3-b642c120-00000133,primary_rs_vios_name=vios1,secondary_rs_vios_name=vios2,rs_device_name=hdisk988"
    $ lssyscfg -r lpar -m source-machine --filter "lpar_names=temp3-b642c120-00000133" -F primary_rs_vios_name,secondary_rs_vios_name,curr_rs_vios_name
    vios1,vios2,vios1
    $ lshwres -r rspool -m source-machine --rsubtype rsdev
    device_name=hdisk988,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Active,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,is_redundant=1,redundant_device_name=hdisk988,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Active,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,lpar_name=temp3-b642c120-00000133,lpar_id=11,device_selection_type=manual
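
If you have a lot of remote restart partitions, a single lssyscfg call on the HMC can confirm that each one has its reserved storage device and Virtual I/O Servers assigned. This is a minimal sketch reusing the fields shown above; on my HMC level, partitions without an assignment simply show empty or null values:

$ lssyscfg -r lpar -m source-machine -F name,remote_restart_capable,primary_rs_vios_name,secondary_rs_vios_name,curr_rs_vios_name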
    

Launch the remote restart operation

All the remote restart operations are launched from the Hardware Management Console with the rrstartlpar command. At the time of writing this post there is no GUI function to remote restart a machine; you can only do it from the command line:

Validation

Just like a Live Partition Mobility move, you can validate a remote restart operation before running it. You can only perform the actual remote restart if the machine hosting the partition is shut down or in error, so the validation is very useful and mandatory to check that your remote restart machines are well configured without having to stop the source machine:

$ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar
$ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar -d 5
$ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar --redundantvios 2 -d 5 -v
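
If you want to validate all your remote restart capable partitions in one go, a small loop on the HMC does the job. This is a minimal sketch, assuming the usual grep/cut filters and shell loops are available in your HMC restricted shell:

$ for lpar in $(lssyscfg -r lpar -m source-machine -F name,remote_restart_capable | grep ",1$" | cut -d, -f1); do rrstartlpar -o validate -m source-machine -t dest-machine -p $lpar -d 5 -v; done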

Execution

As I said before, the remote restart operation can only be performed if the source machine is in a particular state. The states that allow a remote restart operation are:

  • Power Off.
  • Error.
  • Error – Dump in progress state.

So the only way to test a remote restart operation today is to shut down your source machine:

  • Shut down the source machine:
  • step1

    $ chsysstate -m source-machine -r sys  -o off --immed
    

    rr_step2_mod

  • You can then check on the Hardware Management Console that the Virtual I/O Servers and the remote restart LPAR are in the "Not available" state. You're now ready to remote restart the LPAR (if the partition id is already in use on the destination machine, the next available one will be used; you also have to wait a little before remote restarting the partition, as shown below). A sketch for restarting several partitions in one go follows this list:
  • $ rrstartlpar -o restart -m source-machine -t dest-machine -p rrlpar -d 5 -v
    HSCLA9CE The managed system is not in a valid state to support partition remote restart operations.
    $ rrstartlpar -o restart -m source-machine -t dest-machine -p rrlpar -d 5 -v
    Warnings:
    HSCLA32F The specified partition ID is no longer valid. The next available partition ID will be used.
    

    step3
    rr_step4_mod
    step5
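
If several partitions have to be restarted, the same kind of loop works for the restart operation. Since the source machine is down at that point, the list of remote restart capable partitions has to be saved beforehand: in this sketch the first command is run while the machine is still up and rr_lpars.txt is just a hypothetical file name in the HMC user's home directory:

$ lssyscfg -r lpar -m source-machine -F name,remote_restart_capable | grep ",1$" | cut -d, -f1 > rr_lpars.txt
$ for lpar in $(cat rr_lpars.txt); do rrstartlpar -o restart -m source-machine -t dest-machine -p $lpar -d 5 -v; done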

Cleanup

When the source machine is ready to come back up (after an outage for instance), just boot the machine and its Virtual I/O Servers. After the machine is up you will notice that the rrlpar profile is still there, and this can be a huge problem if somebody tries to boot this partition, because it is already running on the other machine after the remote restart operation. To prevent such an error you have to clean up your remote restart partition by using the rrstartlpar command again. Be careful not to check the option to boot the partitions after the machine is started:

  • Restart the source machine and its Virtual I/O Servers:
  • $ chsysstate -m source-machine -r sys -o on
    $ chsysstate -r lpar -m source-machine -n vios1 -o on -f default_profile
    $ chsysstate -r lpar -m source-machine -n vios2 -o on -f default_profile
    

    rr_step6_mod

  • Perform the cleanup operation to remove the profile of the remote restart partition (if you want to LPM the machine back later you have to keep the device of the reserved storage device pool in the pool; if you do not use the --retaindev option the device will be automatically removed from the pool). A loop version of this cleanup is sketched after this list:
  • $ rrstartlpar -o cleanup -m source-machine -p rrlpar --retaindev -d 5 -v --force
    

    rr_step7_mod
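
The same saved list of partition names can be reused here to clean up everything in one go (a sketch, with the same assumptions as the restart loop above):

$ for lpar in $(cat rr_lpars.txt); do rrstartlpar -o cleanup -m source-machine -p $lpar --retaindev -d 5 -v --force; done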

Refresh the partition and profile data

During my tests I encountered a problem: the configuration was not correctly synced between the device used in the reserved storage device pool and the current partition profile. I had to use a command named refdev (for refresh device) to synchronize the partition and profile data to the storage device.

$ refdev -m source-machine -p temp3-b642c120-00000133 -v

What's in the reserved storage device?

I'm a curious guy. After playing with remote restart I asked myself a question: what is really stored in the reserved storage device assigned to the remote restart partition? Looking at the documentation on the internet did not answer my question, so I had to look at it on my own. By "dd-ing" the reserved storage device assigned to a partition I realized that the profile is stored in XML format. Maybe this format is the same as the one used by the HMC 8 template library. For the moment, and during my tests on a Power7+ machine, the memory state of the partition is not transferred to the destination machine, maybe because I had to shut down the whole source machine to test. Maybe the memory state is transferred if the source machine is in error state or is dumping; I did not have a chance to test this:

root@vios1:/home/padmin# dd if=/dev/hdisk17 of=/tmp/hdisk17.out bs=1024 count=10
10+0 records in
10+0 records out
root@vios1:/home/padmin# more hdisk17.out
[..]
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BwEAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACgDIAZAAAQAEAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" Profile="H4sIAAAAA
98VjxbxEAhNaZEqpEptPS/iMJO4cTJBdHVj38zcYvu619fTGQlQVmxY0AUICSH4A5XYorJgA1I3sGMBCx5Vs4RNd2zgXI89tpNMxslIiRzPufec853zfefk/t/osMfRBYPZRbpuF9ueUTQsShxR1NSl9dvEEPPMMgnfvPnVk
a2ixplLuOiVCHaUKn/yYMv/PY/ydTRuv016TbgOzdVv4w6+KM0vyheMX62jgq0L7hsCXtxBH6J814WoZqRh/96+4a+ff3Br8+o3uTE0pqJZA7vYoKKnOgYnNoSsoiPECp7KzHfELTQV/lnBAgt0/Fbfs4Wd1sV+ble7Lup/c
be0LQj01FJpoVpecaNP15MhHxpcJP8al6b7fg8hxCnPY68t8LpFjn83/eKFhcffjqF8DRUshs0almioaFK0OfHaUKCue/1GcN0ndyfg9/fwsyzQ6SblellXK6RDDaIIwem6L4iXCiCfCuBZxltFz6G4eHed2EWD2sVVx6Mth
eEOtnzSjQoVwLbo2+uEf3T/s2emPv3z4xA16eD0AC6oRN3FXNnYoA6U7y3OfFc1g5hOIiTQsVUHSusSc43QVluEX2wKdKJZq4q2YmJXEF7hhuqYJA0+inNx3YTDab2m6T7vEGpBlAaJnU0qjWofTkj+uT2Tv3Rl69prZx/9s
thQTBMK42WK7XSzrizqFhPL5E6FeHGVhnSJQLlKKreab1l6z9MwF0C/jTi3OfmKCsoczcJGwITgy+f74Z4Lu2OU70SDyIdXg1+JAApBWZoAbLaEj4InyonZIDbjvZGwv3H5+tb7C5tPThQA9oUdsCN0HsnWoLxWLjPHAdJSp
Ja45pBarVb3JDyUJOn3aemXcIqtUfgPi3wCuiw76tMh6mVtNVDHOB+BxqEUDWZGtPgPrFc9oBgBhhJzEdsEVI9zC1gr0JTexhwgThzIwYEG7lLbt3dcPyHQLKQqfGzVsSNzVSvenkDJU/lUoiXGRNrdxLy2soyhtcNX47INZ
nHKOCjYfsoeR3kpm58GdYDVxipIZXDgSmhfCDCPlKZm4dZoVFORzEX0J6CLvK4py6N7Pz94yiXlPBAArd3zqIEtjXFZ4izJzQ44sCv7hh3bTnY5TbKdnOtHGtatTjrEynTuWFNXV3ouaUKIIKfDgE5XrrpWb/SHWyWCbXMM5
DkaHNzXVJws6csK57jnpToLopiQLZdgHJJh9wm+M+wbof7GzSRJBYvAAaV0RvE8ZlA5yxSob4fAiJiNNwwQAwu2y5/O881fvvz3HxgK70ZDwc1FS8JezBgKR0e/S4XR3ta8OwmdS56akXJITAmYBpElF5lZOdlXuO+8N0opU
m0HeJTw76oiD8PS9QfRECUYqk0B1KGkZ+pRGQPUhPFEb12XIoe7u4WXuwdVqTAnZT8gyYrvAPlL/sYG4RkDmAx5HFZpFIVnAz9Lrlyh9tFIc4nZAColOLNGdFRKmE8GJd5zZx++zMiAoTOWNrJvBjODNo1UOGuXngzcHWjrn
LgmkxjBXLj+6Fjy1DHFF0zV6lVH/p+VYO6pbZzYD9/ORFLouy6MwvlGuRz8Qz10ugawprAdtJ4GxWAOtmQjZXJ+Lg58T/fDy4K74bYWr9CyLIVdQiplHPLbjinZRu4BZuAENE6jxTP2zNkBVgfiWiFcv7f3xYjFqxs/7vb0P
 lpar_name="rrlpar" lpar_uuid="0D80582A44F64B43B2981D632743A6C8" lpar_uuid_gen_method="0"><SourceLparConfig additional_mac_addr_bases="" ame_capability="0" auto_start_e
rmal" conn_monitoring="0" desired_proc_compat_mode="default" effective_proc_compat_mode="POWER7" hardware_mem_encryption="10" hardware_mem_expansion="5" keylock="normal
"4" lpar_placement="0" lpar_power_mgmt="0" lpar_rr_dev_desc="	<cpage>		<P>1</P>
		<S>51</S>
		<VIOS_descri
00010E0000000000003FB04214503IBMfcp</VIOS_descriptor>
	</cpage>
" lpar_rr_status="6" lpar_tcc_slot_id="65535" lpar_vtpm_status="65535" mac_addres
x_virtual_slots="10" partition_type="rpa" processor_compatibility_mode="default" processor_mode="shared" shared_pool_util_authority="0" sharing_mode="uncapped" slb_mig_
ofile="1" time_reference="0" uncapped_weight="128"><VirtualScsiAdapter is_required="false" remote_lpar_id="2" src_vios_slot_number="4" virtual_slot_number="4"/><Virtual
"false" remote_lpar_id="1" src_vios_slot_number="3" virtual_slot_number="3"/><Processors desired="4" max="8" min="1"/><VirtualFibreChannelAdapter/><VirtualEthernetAdapt
" filter_mac_address="" is_ieee="0" is_required="false" mac_address="82776CE63602" mac_address_flags="0" qos_priority="0" qos_priority_control="false" virtual_slot_numb
witch_id="1" vswitch_name="vdct"/><Memory desired="8192" hpt_ratio="7" max="16384" memory_mode="ded" min="256" mode="ded" psp_usage="3"><IoEntitledMem usage="auto"/></M
 desired="200" max="400" min="10"/></SourceLparConfig></SourceLparInfo></SourceInfo><FileInfo modification="0" version="1"/><SriovEthMappings><SriovEthVFInfo/></SriovEt
VirtualFibreChannelAdapterInfo/></VfcMappings><ProcPools capacity="0"/><TargetInfo concurr_mig_in_prog="-1" max_msp_concur_mig_limit_dynamic="-1" max_msp_concur_mig_lim
concur_mig_limit="-1" mpio_override="1" state="nonexitent" uuid_override="1" vlan_override="1" vsi_override="1"><ManagerInfo/><TargetMspInfo port_number="-1"/><TargetLp
ar_name="rrlpar" processor_pool_id="-1" target_profile_name="mig3_9117_MMD_10C94CC141109224549"><SharedMemoryConfig pool_id="-1" primary_paging_vios_id="0"/></TargetLpa
argetInfo><VlanMappings><VlanInfo description="VkVSU0lPTj0xClZJT19UWVBFPVZFVEgKVkxBTl9JRD0zMzMxClZTV0lUQ0g9dmRjdApCUklER0VEPXllcwo=" vlan_id="3331" vswitch_mode="VEB" v
ibleTargetVios/></VlanInfo></VlanMappings><MspMappings><MspInfo/></MspMappings><VscsiMappings><VirtualScsiAdapterInfo description="PHYtc2NzaS1ob3N0PgoJPGdlbmVyYWxJbmZvP
mVyc2lvbj4KCQk8bWF4VHJhbmZlcj4yNjIxNDQ8L21heFRyYW5mZXI+CgkJPGNsdXN0ZXJJRD4wPC9jbHVzdGVySUQ+CgkJPHNyY0RyY05hbWU+VTkxMTcuTU1ELjEwQzk0Q0MtVjItQzQ8L3NyY0RyY05hbWU+CgkJPG1pb
U9TcGF0Y2g+CgkJPG1pblZJT1Njb21wYXRhYmlsaXR5PjE8L21pblZJT1Njb21wYXRhYmlsaXR5PgoJCTxlZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4xPC9lZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4KCTwvZ2VuZ
TxwYXJ0aXRpb25JRD4yPC9wYXJ0aXRpb25JRD4KCTwvcmFzPgoJPHZpcnREZXY+CgkJPHZEZXZOYW1lPnJybHBhcl9yb290dmc8L3ZEZXZOYW1lPgoJCTx2TFVOPgoJCQk8TFVBPjB4ODEwMDAwMDAwMDAwMDAwMDwvTFVBP
FVOU3RhdGU+CgkJCTxjbGllbnRSZXNlcnZlPm5vPC9jbGllbnRSZXNlcnZlPgoJCQk8QUlYPgoJCQkJPHR5cGU+dmRhc2Q8L3R5cGU+CgkJCQk8Y29ubldoZXJlPjE8L2Nvbm5XaGVyZT4KCQkJPC9BSVg+CgkJPC92TFVOP
gkJCTxyZXNlcnZlVHlwZT5OT19SRVNFUlZFPC9yZXNlcnZlVHlwZT4KCQkJPGJkZXZUeXBlPjE8L2JkZXZUeXBlPgoJCQk8cmVzdG9yZTUyMD50cnVlPC9yZXN0b3JlNTIwPgoJCQk8QUlYPgoJCQkJPHVkaWQ+MzMyMTM2M
DAwMDAwMDAwMDNGQTA0MjE0NTAzSUJNZmNwPC91ZGlkPgoJCQkJPHR5cGU+VURJRDwvdHlwZT4KCQkJPC9BSVg+CgkJPC9ibG9ja1N0b3JhZ2U+Cgk8L3ZpcnREZXY+Cjwvdi1zY3NpLWhvc3Q+" slot_number="4" sou
_slot_number="4"><PossibleTargetVios/></VirtualScsiAdapterInfo><VirtualScsiAdapterInfo description="PHYtc2NzaS1ob3N0PgoJPGdlbmVyYWxJbmZvPgoJCTx2ZXJzaW9uPjIuNDwvdmVyc2lv
NjIxNDQ8L21heFRyYW5mZXI+CgkJPGNsdXN0ZXJJRD4wPC9jbHVzdGVySUQ+CgkJPHNyY0RyY05hbWU+VTkxMTcuTU1ELjEwQzk0Q0MtVjEtQzM8L3NyY0RyY05hbWU+CgkJPG1pblZJT1NwYXRjaD4wPC9taW5WSU9TcGF0
YXRhYmlsaXR5PjE8L21pblZJT1Njb21wYXRhYmlsaXR5PgoJCTxlZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4xPC9lZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4KCTwvZ2VuZXJhbEluZm8+Cgk8cmFzPgoJCTxwYXJ0
b25JRD4KCTwvcmFzPgoJPHZpcnREZXY+CgkJPHZEZXZOYW1lPnJybHBhcl9yb290dmc8L3ZEZXZOYW1lPgoJCTx2TFVOPgoJCQk8TFVBPjB4ODEwMDAwMDAwMDAwMDAwMDwvTFVBPgoJCQk8TFVOU3RhdGU+MDwvTFVOU3Rh
cnZlPm5vPC9jbGllbnRSZXNlcnZlPgoJCQk8QUlYPgoJCQkJPHR5cGU+dmRhc2Q8L3R5cGU+CgkJCQk8Y29ubldoZXJlPjE8L2Nvbm5XaGVyZT4KCQkJPC9BSVg+CgkJPC92TFVOPgoJCTxibG9ja1N0b3JhZ2U+CgkJCTxy
UlZFPC9yZXNlcnZlVHlwZT4KCQkJPGJkZXZUeXBlPjE8L2JkZXZUeXBlPgoJCQk8cmVzdG9yZTUyMD50cnVlPC9yZXN0b3JlNTIwPgoJCQk8QUlYPgoJCQkJPHVkaWQ+MzMyMTM2MDA1MDc2ODBDODAwMDEwRTAwMDAwMDAw
ZmNwPC91ZGlkPgoJCQkJPHR5cGU+VURJRDwvdHlwZT4KCQkJPC9BSVg+CgkJPC9ibG9ja1N0b3JhZ2U+Cgk8L3ZpcnREZXY+Cjwvdi1zY3NpLWhvc3Q+" slot_number="3" source_vios_id="1" src_vios_slot_n
tVios/></VirtualScsiAdapterInfo></VscsiMappings><SharedMemPools find_devices="false" max_mem="16384"><SharedMemPool/></SharedMemPools><MigrationSession optional_capabil
les" recover="na" required_capabilities="veth_switch,hmc_compatibilty,proc_compat_modes,remote_restart_capability,lpar_uuid" stream_id="9988047026654530562" stream_id_p
on>
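
A side note on this dump: the description attributes you can see in the XML are just base64-encoded text. For example, decoding the VLAN description visible above (assuming openssl is available in your Virtual I/O Server root shell, which is normally the case) gives back a readable virtual ethernet description; the same trick works for the virtual scsi descriptions:

# echo "VkVSU0lPTj0xClZJT19UWVBFPVZFVEgKVkxBTl9JRD0zMzMxClZTV0lUQ0g9dmRjdApCUklER0VEPXllcwo=" | openssl base64 -d
VERSION=1
VIO_TYPE=VETH
VLAN_ID=3331
VSWITCH=vdct
BRIDGED=yes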

About the state of the source machine?

You have to know this before using remote restart: at the time of writing this post the remote restart feature is still young and has to evolve before being usable in real life. I'm saying this because the FSP of the source machine has to be up to perform a remote restart operation. To be clear, the remote restart feature does not answer the total loss of one of your sites. It is just useful to restart the partitions of a system with a problem that is not an FSP problem (a problem with memory DIMMs or with CPUs, for instance). It can be used in your DRP exercises, but not if your whole site is totally down, which is -in my humble opinion- one of the key scenarios that remote restart needs to address. Don't be afraid, read the conclusion...

Conclusion

This post has been written using Power7+ machines; my goal was to give you an example of remote restart operations: a summary of what it is, how it works, and where and when to use it. I'm pretty sure that a lot of things are going to change about remote restart. First, on Power8 machines you don't have to recreate the partitions to make them remote restart aware. Second, I know that changes are on the way for remote restart on Power8 machines, especially about reserved storage devices and about the state of the source machine. I'm sure this feature has a bright future, and used with PowerVC it can be a killer feature. Hope to see all these changes in the near future ;-). Once again I hope this post helps you.

An overview of the IBM Technical Collaboration Council for PowerSystems 2014

Eight to ten months ago I decided to change my job, for better or for worse. Talking about the better, I had the chance to be enrolled in the Technical Collaboration Council for Power Systems (I'll not talk about the worst... this could take me hours to explain). The Technical Collaboration Council is not well known in Europe, and not well known for Power Systems, and I think writing this blog post may offer better worldwide visibility to the Technical Collaboration Council. It deserves a blog post :-).

To be clear, and to avoid any problem: to participate in the meeting you first have to sign a Non-Disclosure Agreement. A lot of presentations are still IBM confidential. That said, I signed this NDA, so I cannot talk about the content of the meeting. Sure, there is a lot to say, but I have to keep it to myself... :-)

3
But what exactly is the Technical Collaboration Council? This annual meeting takes place in Austin, Texas, at the home of Power Systems :-). It lasts one week, from Monday to Friday. The Technical Collaboration Council invites the biggest IBM customers from all over the world. For a guy like me, so involved in this community, coming here was a great opportunity and a way to spread the word about my blog and my participation in the Power community. In fact we were just a few people coming from Europe and a lot of US guys. The TCC looks like an IBM Technical University, but better... because you can participate during the meeting and answer a lot of surveys about the shape of things to come for Power Systems :-) :-) :-).

Here is what you can see and do when you come to the TCC Power. And for me it's exciting!!!

  • Meetings about trends and directions for Power Systems (an overview of new products (hardware and software), new functionality and new releases coming in the next year).
  • Direct access to the IBM lab. You can go and ask the lab about a particular feature you need, or about something you didn't understand. For instance I had a quick meeting with the PowerVC guys (not only guys, sorry Christine) about my needs for the next few months. Another one: I had the chance to talk to the head manager of AIX and ask him about a few things I'd like to see in the new version of AIX (who said an installation over http?).
  • Big "names" of Power are here, they share and talk: Doug Ballog, Satya Sharma. Seeing them is always impressive!
  • Interaction and sharing with other customers: like me, a lot of customers were at the TCC, and sharing how they do things and how they use their Power Systems is ALWAYS useful. I had a few interesting conversations with guys from another big bank with the same constraints as me.
  • You can say what you think. IBM is waiting for your feedback... positive or negative.
  • Demos and hands-on with new products and new functionality (remember the IBM Provisioning Toolkit for PowerVM and a cool LPM scheduler presented by STG Lab Services guys).
  • The possibility to enroll in beta programs... (in my case HMC).
  • You can finally meet in person the people you have had on the phone or by mail for a couple of years. It's always useful!
  • And of course lots of fun :-)

I had the chance to talk about my experience with PowerVC in front of all the TCC members. It was very stressful for a French guy like me... and I just had a few minutes to prepare... Hope it was good, but it was a great experience. You can do things like this at the TCC... if you think PowerVC is good, just go on stage and give a 15-minute talk about it... :-)

4

The Technical Collaboration Council is not just about technical stuff and work. You can also have a lot of fun talking to IBM guys and customers. There are a lot of moments when people can eat and drink together, and the possibility to share about everything is always there. If I had to remember only one thing about the Technical Collaboration Council, it would be that it is a great moment of sharing with others, and not just about work and Power Systems. This said, I wanted to thank IBM and a lot of people for their kindness, their availability and all the fun they gave us during this week. So thanks to: Philippe H., Patrice P., Satya S., Jay K., Carl B., Eddy S., Natalie M, Christine W, François L, Rosa D... and sorry for those I've forgotten :-). And never forget that Power is performance redefined.

OK, one last word. Maybe some of the customers who were there this year are going to read this post, and I encourage you to react to it and post comments. Red Hat's motto is "We grow when we share", but at such events I am (and we are) growing when IBM is sharing. People may think that IBM does not share... I disagree :-). They are doing it and they are doing it well! And never forget that the Power community is still alive and ready to rock! So please raise your voice about it. In these times of social media we have to prove to IBM and to the world that this community is growing, is great, and is ready to share.
One last thing: the way of working in the US seems to be very different from the way we do it in Europe... it could be cool to move to the US.

Exploit the full potential of PowerVC by using Shared Storage Pools & Linked Clones | PowerVC secrets about Linked Clones (pooladm,mksnap,mkclone,vioservice)

My journey into PowerVC continues :-). The blog was not updated for two months but I've been busy these days, got sick... and so on; I have another post in the pipe but that one has to be approved by IBM before posting. Since the latest version (1.2.1.2 at the time of writing this post) PowerVC is capable of managing Shared Storage Pools (SSP). It's a huge announcement because a lot of customers do not have a Storage Volume Controller and supported fibre channel switches. By using PowerVC in conjunction with SSPs you will reveal the true and full potential of the product. There are two major enhancements brought by SSPs. The first is the deployment time of new virtual machines: by using an SSP you'll move from minutes to... seconds. The second huge enhancement: by using an SSP you'll automatically -without knowing it- be using a feature called "Linked Clones". For those who have been following my blog since the very beginning, you're probably aware that Linked Clones have been usable and available since SSPs were managed by the IBM Systems Director VMcontrol module. You can still refer to my blog posts about it... even if ISD VMcontrol is now a little bit outdated by PowerVC: here. Using PowerVC with Shared Storage Pools is easy, but how does it work behind the scenes? After analysing the deployment process I found some cool things: PowerVC is using secret undocumented commands, pooladm, vioservice, secret mkdev arguments...:

Discovering Shared Storage Pool on your PowerVC environment

The first step is to discover the Shared Storage Pool in PowerVC. I'm taking the time to explain this because it's so easy that people (like me) can think there is much to do about it... but no, PowerVC is simple. You have nothing to do. I'm not going to explain here how to create a Shared Storage Pool; please refer to my previous posts about this: here and here. After the Shared Storage Pool is created it will be automatically added into PowerVC... nothing to do. Keep in mind that you will need the latest version of the Hardware Management Console (V8R8.1.0). If you are having trouble discovering the Shared Storage Pool, check that the Virtual I/O Servers' RMC connections are OK. In general, if you can query and perform any action on the Shared Storage Pool from the HMC, there will be no problem on the PowerVC side.

  • You don't have to reload PowerVC after creating the Shared Storage Pool, just check that you can see it from the storage tab:
  • pvc_ssp1

  • You will get more details by clicking on the Shared Storage Pool ….
  • pvc_ssp2

  • ...such as images captured on the Shared Storage Pool...
  • pvc_ssp3

  • volumes created on it …
  • pvc_ssp4

What is a linked clone?

Think before you start: you have to understand what a Linked Clone is before reading the rest of this post. Linked Clones are not well described in the documentation and Redbooks. Linked Clones are based on Shared Storage Pool snapshots. No Shared Storage Pool = no Linked Clones. Here is what goes on behind the scenes when you deploy a Linked Clone:

  1. The captured rootvg underlying disk is a Shared Storage Pool Logical Unit.
  2. When the image is captured the rootvg Logical Unit is copied and is known as a “Logical (Client Image) Unit”.
  3. When deploying a new machine a snapshot is created from the Logical (Client Image) Unit.
  4. A “special Logical Unit” is created from the snapshot. This Logical Unit seems to be a pointer to the snapshot. We call it a clone.
  5. The machine is booted and the activation engine runs and reconfigures the network.
  6. When a block is modified on the new machine, the block is duplicated and the modification is written to a new block on the Shared Storage Pool.
  7. That said, if no blocks are modified, all the machines created from this capture share the same blocks on the Shared Storage Pool.
  8. Only modified blocks are not shared between Linked Clones. The more you change on your rootvg, the more space you use on the Shared Storage Pool.
  9. That's why these machines are called Linked Clones: they are all connected to the same source Logical Unit.
  10. You save TIME (just a snapshot creation on the storage side) and SPACE (the rootvg blocks are shared by all the deployed machines) by using Linked Clones.

An image is sometimes better than a long text, so here is a schema explaining all about Linked Clones:

LinkedClones

You have to capture an SSP-based VM to deploy on the SSP

Be aware of one thing: you can't deploy a virtual machine on the SSP if you don't have a captured image on the SSP, and you can't use your Storwize images to deploy on the SSP. You first have to create, on your own, a machine whose rootvg is running on the SSP:

  • Create an image based on an SSP virtual machine:
  • pvc_ssp_capture1

  • Shared Storage Pool Logical Units are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
  • Shared Storage Pool Logical (Client Image) Units are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • The Logical Unit of the captured virtual machine is copied with the dd command from the VOL1 (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1) directory to the IM directory (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM) (so from volumes to images).
  • If you do this yourself using the dd command you can see that the captured image is not shown in the output of the snapshot command (with Linked Clones the snapshot command output is separated into two categories: the actual "real" Logical Units, and the Logical (Client Image) Units which are the PowerVC images).
  • A secret API driven by a secret command called vioservice adds your newly created image to the Shared Storage Pool soliddb.
  • After the “registration” the Client Image is visible with the snapshot command.

Deployment

After the image is captured and stored in the Shared Storage Pool images directory, you can deploy virtual machines based on this image. Keep in mind that blocks are shared by all linked clones; you'll be surprised to see that deploying machines does not use up the free space on the shared storage pool (a quick way to check this is sketched below the picture). But be aware that you can't deploy any machine if there is no "blank" space left in the PowerVC space bar (check the image below):

deploy
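
A simple way to see this for yourself is to compare the pool free space before and after a deployment: for a linked clone it barely moves. This is just a sketch, run as padmin on one of the nodes of the cluster; compare the free space value reported before and after deploying a machine from the image:

$ lssp -clustername powervc_cluster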

Step by step deployment by example

  • A snapshot of the image is created through the pooladm command. If you check the output of the snapshot command after this step you'll see a new snapshot derived from the Logical (Client Image) Unit.
  • This snapshot is cloned (my understanding is that a clone is a normal logical unit sharing blocks with an image). After the snapshot is cloned a new volume is created in the shared storage pool volume directory, but at this step it is not visible with the lu command because creating a clone does not create meta-data on the shared storage pool.
  • A dummy logical unit is created. Then the clone is moved onto the dummy logical unit to replace it.
  • The clone logical unit is mapped to the client.

dummy

You can do it yourself without PowerVC (not supported)

Just to understand what PowerVC does behind the scenes, I decided to try to do all the steps on my own. These steps work but are not supported at all by IBM.

  • Before starting to read this you need to know that $ prompts are for padmin commands and # prompts are for root commands. You'll need the cluster id and the pool id to build some xml files:
  • $ cluster -list -field CLUSTER_ID
    CLUSTER_ID:      c50a291c18ab11e489f46cae8b692f30
    $ lssp -clustername powervc_cluster -field POOL_ID
    POOL_ID:         000000000AFFF80C0000000053DA327C
    
  • So the cluster id will be c50a291c18ab11e489f46cae8b692f30 and the pool id (as used in the xml files, i.e. the cluster id followed by the pool id) will be c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C. These ids are often prefixed by two characters (I don't know what they are for, but it works in all cases...).
  • Image files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • Logical units files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
  • Create the "envelope" of the Logical (Client Image) Unit by creating an xml file (the udids are built from the cluster udid and the pool udid) used as the standard input of the vioservice command:
  • # cat create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="8">
                            <Image label="chmod666-homemade-image"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    # /usr/ios/sbin/vioservice lib/libvio/lu < create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="29c50a291c18ab11e489f46cae8b692f30d87113d5be9004791d28d44208150874" capacity="55296" physicalUsage="0" unusedCommitment="0" type="8" derived="" thick="0" tmoveState="0"><Image label="chmod666-homemade-image" relPath=""/></LU></Tier></Pool></Cluster></Response></VIO>
    # ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     staff    57982058496 Sep  8 19:00 chmod666-homemade-image.d87113d5be9004791d28d44208150874
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    
  • You can now see with the snapshot command that a new Logical (Client Image) Unit is there:
  • $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c055296          THIN               100% 0              ac671df86edaf07e96e399e3a2dbd425
    chmod666-homemade-image  55296          THIN                 0% 55299          d87113d5be9004791d28d44208150874
    volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be38155296          THIN               100% 55299          e525b8eb474f54e1d34d9d02cb0b49b4
                    Snapshot
                    2631012f1a558e51d1af7608f3779a1bIMSnap
                    09a6c90817d24784ece38f71051e419aIMSnap
                    e400827d363bb86db7984b1a7de08495IMSnap
                    5fcef388618c9a512c0c5848177bc134IMSnap
    
  • Copy the source image (the stopped virtual machine with the activation engine enabled) onto this newly created image; it will be the new reference for all the virtual machines created with this image as source. Use the dd command to do it (and don't forget the block size). While the dd is running you can check that the used percentage is increasing:
  • # dd if=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-aaaa95f8317c666549c4809264281db536dd.a2b7ed754030ca97668b30ab6cff5c45 of=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 bs=1M
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN                23% 0              d87113d5be9004791d28d44208150874
    [..]
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN                40% 0              d87113d5be9004791d28d44208150874
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN               100% 0              d87113d5be9004791d28d44208150874
    
  • You now have a new reference image. It will be used as the reference for all your linked clone deployed virtual machines. A linked clone is created from a snapshot, so you first have to create a snapshot of the newly created image by using the pooladm command (keep in mind that you can't use the snapshot command to work on a Logical (Client Image) Unit). The snapshot is identified by the logical unit name suffixed with "@" and the snapshot name. Use mksnap to create the snap, and lssnap to show it. The snapshot will be visible in the output of the snapshot command:
  • # pooladm file mksnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap
    # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN               100% 55299  d87113d5be9004791d28d44208150874
                    Snapshot
                    chmod666IMSnap
    [..]
    
  • You can now create the clone from the snap (snaps are identified by a '@' character prefixed by the image name). Name the clone the way you want, because it will be renamed and moved to replace a normal logical unit; I'm using here the PowerVC convention (IMtmp). The creation of the clone creates a new file in the VOL1 directory without shared storage pool meta-data, so this clone will not be visible in the output of the lu command:
  • $ pooladm file mkclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    $ ls -l  /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/*chmod666-IM*
    -rwx------    1 root     system   57982058496 Sep  9 16:27 /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    
  • Using vioservice, create a logical unit on the shared storage pool. This will create a new logical unit with a newly generated udid. If you check the volume directory you can notice that the clone does not have the meta-data file needed by the shared storage pool (this file is prefixed by a dot (.)). After creating this logical unit, replace it with your clone with a simple move:
  • $ cat create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="1">
                            <Disk label="volume-boot-9117MMD_658B2AD-chmod666"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" capacity="55296" physicalUsage="0" unusedCommitment="0" type="1" derived="" thick="0" tmoveState="0"><Disk label="volume-boot-9117MMD_658B2AD-chmod666"/></LU></Tier></Pool></Cluster></Response></VIO>
    $ mv /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • You are now ready to use your linked clone: you have a source image, a snap of it, and a clone of this snap:
  • # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    # pooladm file lsclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    Snapshot             Clone name
    ----------------------------------
    chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • Then, using vioservice or the mkdev command, map the clone to your virtual scsi adapter (identified by its physloc name). Do this on both Virtual I/O Servers (a quick way to double check the mapping is sketched after this list):
  • $ cat map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="5">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Map label="" udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" drcname="U9117.MMD.658B2AD-V2-C99"/>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"/></Response></VIO>
    

    or

    # mkdev -t ngdisk -s vtdev -c virtual_target -aaix_tdev=volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777 -audid_info=4d360832b29be950824d3e5bf57d77 -apath_name=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1 -p vhost5 -acluster_id=c50a291c18ab11e489f46cae8b692f30
    
  • Boot the machine... this one is a linked clone created by yourself without PowerVC.
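
Before booting you can double check from the Virtual I/O Servers that the clone is correctly mapped to the client adapter (a quick sanity check; vhost5 is the adapter used in the mkdev example above):

$ lsmap -vadapter vhost5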

About the activation engine?

Your captured image has the activation engine enabled. To reconfigure the network and the hostname, PowerVC copies an iso from the PowerVC server to the Virtual I/O Server. This iso contains an ovf file needed by the activation engine to customize your virtual machine. To customize the linked clone virtual machine I created on my own, I decided to re-use an old iso file created by PowerVC for another deployment:

  • Mount the image located in /var/vio/VMLibrary, and modify the xml ovf file to fit your needs:
  • # ls -l /var/vio/VMLibrary
    total 840
    drwxr-xr-x    2 root     system          256 Jul 31 20:17 lost+found
    -r--r-----    1 root     system       428032 Sep  9 18:11 vopt_c07e6e0bab6048dfb23586aa90e514e6
    # loopmount -i vopt_c07e6e0bab6048dfb23586aa90e514e6 -o "-V cdrfs -o ro" -m /mnt
    
  • Copy the content of the cd to a directory:
  • # mkdir /tmp/mycd
    # cp -r /mnt/* /tmp/mycd
    
  • Edit the ovf file to fit your needs (in my case, for instance, I'm changing the hostname of the machine and its ip address):
  • # cat /tmp/mycd/ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.218.238.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="homemadelinkedclone"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value=""/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.218.238.140"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    </Environment>
    
  • Recreate the cd using the mkdvd command and put it in the /var/vio/VMLibrary directory:
  • # mkdvd -r /tmp/mycd -S
    Initializing mkdvd log: /var/adm/ras/mkcd.log...
    Verifying command parameters...
    Creating temporary file system: /mkcd/cd_images...
    Creating Rock Ridge format image: /mkcd/cd_images/cd_image_19267708
    Running mkisofs ...
    
    mkrr_fs was successful.
    # mv /mkcd/cd_images/cd_image_19267708 /var/vio/VMLibrary
    $ lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
        1017     1015 rootvg                   279552           171776
    
    Name                                                  File Size Optical         Access
    cd_image_19267708                                             1 None            rw
    vopt_c07e6e0bab6048dfb23586aa90e514e6                         1 vtopt1          ro
    
  • Load the cdrom and map it to the linked clone :
  • $ mkvdev -fbo -vadapter vhost11
    $ loadopt -vtd vtopt0 -disk cd_image_19267708
    
  • When the linked clone virtual machine boots, the cd is mounted and the activation engine takes the ovf file as parameter and reconfigures the network. For instance you can check that the hostname has changed :
  • # hostname
    homemadelinkedclone.localdomain
    

A view on the layout ?

I asked myself a question about Linked Clones : how can we check that Shared Storage Pool blocks (or PPs ?) are shared between the captured machine (the captured LU) and one of its linked clones ? To answer this question I had to play with the pooladm command (which is unsupported for customer use) to check the logical unit layout of the captured virtual machine and of the deployed linked clone, and then compare them. Please note that this is my understanding of the linked clones. This is not validated by any IBM support, do this at your own risk, and feel free to correct my interpretation of what I'm seeing here :-) :

  • Get the layout of the captured VM by getting the layout of the logical unit (the captured image is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4) :
  • root@vios:/home/padmin# ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4 | tee /tmp/captured_vm.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • Get the layout of the linked clone (the linked clone is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba)
  • # ls /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba | tee /tmp/linked_clone.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • At this step you can compare the two files and already see some useful information, but do not misread this output : you first have to sort it before drawing conclusions. You can nevertheless be sure of one thing : some PPs have been modified on the linked clone and cannot be shared anymore, while others are still shared between the linked clone and the captured image :
  • sdiff_layout1_modifed_1

  • You can have a better view of shared and non-shared PPs by sorting the output of these files; here are the commands I used to do it (a small counting sketch follows) :
  • #grep PP linked_clone.layout | tr -s " " | sort -k1 > /tmp/pp_linked_clone.layout
    #grep PP captured_vm.layout | tr -s " " | sort -k1 > /tmp/pp_captured_vm.layout
    
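  • If you just want to count the shared and clone-only PPs, comm can do the job on these sorted files (a quick sketch; comm expects plain lexical sorting, so re-sort the files without -k1 if it complains about the order) :
  • # comm -12 /tmp/pp_captured_vm.layout /tmp/pp_linked_clone.layout | wc -l
    # comm -23 /tmp/pp_linked_clone.layout /tmp/pp_captured_vm.layout | wc -l
    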
  • By sdiffing these two files I can now check which PPs are shared and which are not :
  • sdiff_layout2_modifed_1

  • The pooladm command can also give you stats about a linked clone. My understanding of the owned block count tells me that 78144 SSP blocks (not PPs, so blocks of 4k) are unique to this linked clone and not shared with the captured image (a quick size conversion is sketched after the output below) :
  • vios1#pooladm file stat /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Path: /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Size            57982058496
    Number Blocks   14156655
    Data Blocks     14155776
    Pool Block Size 4096
    
    Tier: SYSTEM
    Owned Blocks    78144
    Max Blocks      14156655
    Block Debt      14078511
    
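  • To put a rough number on it, 78144 owned blocks of 4k (the pool block size reported above) represent roughly 305 MB of data unique to this linked clone; the rest of the boot disk is still shared with the captured image :
  • # echo "78144 * 4096 / 1024 / 1024" | bc
    305
    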

Mixed NPIV & SSP deployment

For some machines with an I/O intensive workload, it can be useful to put your data luns on an NPIV adapter. I'm currently working on a project involving PowerVC and the question was asked : why not mix an SSP lun for the rootvg and NPIV based luns for the data volume groups ? One more time it's very simple with PowerVC, just attach a volume, this time by choosing your Storage Volume Controller provider ... easy :

mixed1_masked

This will create the NPIV adapters and the new zoning and masking on the fibre channel switches. One more time, easy ....

Debugging ?

I'll not lie : I had a lot of problems with Shared Storage Pools and PowerVC, but these problems were related to my configuration changing a lot during the tests. Always remember that you'll learn from these errors, and in my case it helped me a lot to learn how to debug PowerVC :

  • From the Virtual I/O Server side check you have no core file in the /home/ios/logs directory. A core file in this directory indicates that one of the commands run by PowerVC just "cored" :
  • root@vios1:/home/ios/logs# ls core*
    core.9371682.18085943
    
  • From the Virtual I/O Server side check the /home/ios/logs/viosvc.log file. You can see all the xml files and all the outputs used by the vioservice command; most PowerVC actions are performed through the vioservice command (a quick grep sketch follows the output below) ....
  • root@vios1:/home/ios/logs# ls -l viosvc.log
    -rw-r--r--    1 root     system     10240000 Sep 11 00:28 viosvc.log
    
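  • A quick grep on this log is often enough to spot a failing vioservice call (a simple sketch, adjust the patterns to what you are looking for) :
  • # grep -i -E "error|fail" /home/ios/logs/viosvc.log | tail -10
    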
  • Step by step, check that all PowerVC actions are ok. For instance verify with the lsrep command that the iso has been copied from PowerVC to the Virtual I/O Server media library, and check there is space left on the Shared Storage Pool ....
  • Sometimes the secret vioservice api is stuck and not responding. In some cases it can be useful to rebuild the SolidDB ... I'm using this script to do it (run it as root) :
  • # cat rebuilddb.sh
    #!/usr/bin/ksh
    set -x
    stopsrc -s vio_daemon
    sleep 30
    rm -rf /var/vio/CM
    startsrc -s vio_daemon
    
  • EDIT : I had more information from IBM regarding the method to rebuild the SolidDB : using my script as is won't bring the SolidDB back up properly and could leave you in a bad state. Just add this at the end of the script (the complete script is sketched below) :
  • pid=$(lssrc -s vio_daemon | awk 'NR==2 {print $2}')
    kill -1 $pid  
    
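  • Putting the original script and this addition together, the complete script looks like this (same caveat as above : this is my own unsupported way of doing it, run it as root at your own risk) :
  • # cat rebuilddb.sh
    #!/usr/bin/ksh
    set -x
    # stop the vio_daemon and wipe the local SolidDB copy
    stopsrc -s vio_daemon
    sleep 30
    rm -rf /var/vio/CM
    # restart the daemon and send it a HUP as recommended by IBM
    startsrc -s vio_daemon
    pid=$(lssrc -s vio_daemon | awk 'NR==2 {print $2}')
    kill -1 $pid
    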
  • On the PowerVC side, when you have a problem it is always good to increase the verbosity of the logs (located in /var/log), in this case for nova (restart PowerVC after setting the verbosity level; a sketch for following the log is given below) :
  • # openstack-config --set /etc/nova/nova-9117MMD_658B2AD.conf DEFAULT default_log_levels powervc_nova=DEBUG,powervc_k2=DEBUG,nova=DEBUG
    
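  • Once the debug level is set and PowerVC restarted, you can follow the host specific nova log while replaying the failing action (a sketch : the exact path and file name depend on your PowerVC version and on your managed system name, hence the wildcard) :
  • # tail -f /var/log/nova/*9117MMD_658B2AD*.log | grep -i -E "error|exception"
    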

Conclusion

It took me more than two months to write this post. Why ? Just because the PowerVC design is not documented. It works like a charm, but nobody will explain to you HOW. I hope this post will help you to understand how PowerVC is working. I'm a huge fan of PowerVC and SSP, try it by yourself and you'll see that it is a pleasure to use. It's simple, efficient, and powerful. Can anybody give me access to a PowerKVM host to write about and prove that PowerVC is also simple and efficient with PowerKVM ... ?

Deep dive into PowerVC Standard 1.2.1.0 using Storage Volume Controller and Brocade 8510-4 FC switches in a multifabric environment

Before reading this post I highly encourage you to read my first post about PowerVC because this one will be focused on the standard edition specifics. I had the chance to work on PowerVC express with IVM and local storage, and now on PowerVC standard with an IBM Storage Volume Controller & Brocade fibre channel switches. A few things are different between these two versions (particularly the storage management). Virtual machines created by PowerVC standard will use NPIV (virtual fibre channel adapters) instead of virtual vSCSI adapters. Using local storage or using an SVC in a multi fabric environment are two different things, and the PowerVC ways to capture/deploy and manage virtual machines are totally different. The PowerVC configuration is more complex and you have to manage the fibre channel port configuration, the storage connectivity groups and the storage templates. Last but not least, the PowerVC standard edition is Live Partition Mobility aware. Let's have a look at all the standard version specifics. But before you start reading this post I have to warn you that this one is very long (it's always hard for me to write short posts :-)). Last thing, this post is the result of one month of work on PowerVC mostly on my own, but I have to thank the IBM guys for helping with a few problems (Paul, Eddy, Jay, Phil, …). Cheers guys!

Prerequisites

PowerVC standard needs to connect to the Hardware Management Console, to the Storage Provider, and to the Fibre Channel Switches. Be sure these ports are open between PowerVC, the HMC, the Storage Array, and the Fibre Channel Switches (a quick way to test them from the PowerVC host is sketched after the list below) :

  • Port TCP 12443 between PowerVC and the HMC (PowerVC is using the HMC K2 Rest API to communicate with the HMC)
  • Port TCP 22 (ssh) between PowerVC and the Storage Array.
  • Port TCP 22 (ssh) between PowerVC and the Fibre Channel Switches.

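A quick way to test these ports from the PowerVC host is to use the bash /dev/tcp pseudo-device (a sketch : myhmc, mysvc and myswitch_a are placeholders to replace with your own hostnames) :

    # timeout 5 bash -c "cat < /dev/null > /dev/tcp/myhmc/12443" && echo "HMC K2 API reachable"
    # timeout 5 bash -c "cat < /dev/null > /dev/tcp/mysvc/22" && echo "storage array ssh reachable"
    # timeout 5 bash -c "cat < /dev/null > /dev/tcp/myswitch_a/22" && echo "fibre channel switch ssh reachable"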
pvcch

Check that your storage array is compatible with PowerVC standard (for the moment only IBM Storwize storage and the IBM Storage Volume Controller are supported). All Brocade switches with a firmware 7 are supported. Be careful, the PowerVC Redbook is not up-to-date about this : all Brocade switches are supported (an APAR and a PMR are opened about this mistake).

This post was written with this PowerVC configuration :

  • PowerVC 1.2.0.0 x86 version & PowerVC 1.2.1.0 ppc version.
  • Storage Volume Controller with EMC VNX2 Storage array.
  • Brocade DCX 8510-4.
  • Two Power 770+ with the latest AM780 firmware.

PowerVC standard storage specifics and configuration

PowerVC needs to control the storage to create or delete luns and to create hosts, and it also needs to control the fibre channel switches to create and delete zones for the virtual machines. If you are working with multiple fibre channel adapters with many ports you also have to configure the storage connectivity groups and the fibre channel ports to tell PowerVC which port to use and in which case (you may want to create virtual machines for development only on two virtual fibre channel adapters and production ones on four). Let's see how to do this :

Adding storage and fabric

  • Add the storage provider (in my case a Storage Volume Controller but it can be any IBM Storwize family storage array) :
  • blog_add_storage

  • PowerVC will ask you a few questions while adding the storage provider (for instance which pool will be the default pool for the deployment of the virtual machines). You can then check in this view the actual size and the remaining size of the used pool :
  • blog_storage_added

  • Add each fibre channel switch (in my case two switches, one for fabric A and the second one for fabric B). Be very careful with the fabric designation (A or B), it will be used later when creating storage templates and storage connectivity groups :
  • blog_add_frabric

  • Each fabric can be viewed and modified afterwards :
  • blog_fabric_added

Fibre Channel Port Configuration

If you are working in a multi fabric environment you have to configure the fibre channel ports. For each port the first step is to tell PowerVC on which fabric the port is connected. In my case here is the configuration (you can refer to the colours on the image below and to the explanations below) :

pb connectivty_zenburn

  • Each Virtual I/O Server has 2 fibre channel adapters with four ports.
  • For the first adapter : first port is connected to Fabric A, and last port is connected to Fabric B.
  • For the second adapter : first port is connected to Fabric B, and last port is connected to Fabric A.
  • Two ports (port 1 and 2) are remaining free for future usage (future growing).
  • For each port I have to tell PowerVC where the port is connected (with PowerVC 1.2.0.0 you have to do this manually and check on the fibre channel switch where the ports are connected; with PowerVC 1.2.1.0 it is automatically detected by PowerVC :-)) :
  • 17_choose_fabric_for_each_port

    • Connected on Fabric A ? (check the image below, and use the nodefind command on the switch to check if the port is logged in on this fibre channel switch)
    • blog_connected_fabric_A

      switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
      No device found
      switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
      Local:
       Type Pid    COS     PortName                NodeName                 SCR
       N    01fb40;    2,3;10:00:00:90:fa:3e:c6:ce;20:00:01:20:fa:3e:c6:ce; 0x00000003
          Fabric Port Name: 20:12:00:27:f8:79:ce:01
          Permanent Port Name: 10:00:00:90:fa:3e:c6:ce
          Device type: Physical Unknown(initiator/target)
          Port Index: 18
          Share Area: Yes
          Device Shared in Other AD: No
          Redirect: No
          Partial: No
          Aliases: XXXXX59_3ec6ce
      
    • Connected on Fabric B ? (check the image below, and use the nodefind command on the switch to check if the port is logged in on this fibre channel switch)
    • blog_connected_fabric_B

      switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
      Local:
       Type Pid    COS     PortName                NodeName                 SCR
       N    02fb40;    2,3;10:00:00:90:fa:3e:c6:d1;20:00:01:20:fa:3e:c6:d1; 0x00000003
          Fabric Port Name: 20:12:00:27:f8:79:d0:01
          Permanent Port Name: 10:00:00:90:fa:3e:c6:d1
          Device type: Physical Unknown(initiator/target)
          Port Index: 18
          Share Area: Yes
          Device Shared in Other AD: No
          Redirect: No
          Partial: No
          Aliases: XXXXX59_3ec6d1
      switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
      No device found
      
    • Free, not connected ? (check the image below)
    • blog_not_connected

  • At the end each fibre channel port has to be configured with one of these three choices (connected on Fabric A, connected on Fabric B, Free/not connected).

Port Tagging and Storage Connectivity Group

Fibre channel ports are now configured, but we have to be sure that when deploying a new virtual machine :

  • Each virtual machine will be deployed with four fibre channel adapters (I am in a CHARM configuration).
  • Each virtual machine is connected on the first Virtual I/O Server to the Fabric A and Fabric B on different adapters (each adapter on a different CEC).
  • Each virtual machine is connected to the second Virtual I/O Server to Fabric A and Fabric B on different adapters.
  • I can choose to deploy the virtual machine using fcs0 (Fabric A) and fcs7 (Fabric B) on each Virtual I/O Server or using fcs3 (Fabric B) and fcs4 (Fabric A). Ideally half of the machines will be created with the first configuration and the second half with the second configuration.

To do this you have to tag each port with a tag name of your choice, and then create a storage connectivity group. A storage connectivity group is a constraint that is used for the deployment of virtual machines :

pb_port_tag_zenburn

  • Two tags are created and set on the ports : fcs0(A)_fcs7(B), and fcs3(B)_fcs4(A) :
  • blog_port_tag

  • Two connectivity groups are created to force the usage of tagged fibre channel ports when deploying a virtual machine.
    • When creating a connectivity group you have to choose the Virtual I/O Server(s) used when deploying a virtual machine with this connectivity group. It can be useful to tell PowerVC to deploy development machines on a single Virtual I/O Server, and production ones on dual Virtual I/O Servers :
    • blog_vios_connectivity_group

    • In my case connectivity groups are created to restrict the usage of fibre channel adapters. I want to deploy on fibre channel ports fcs0/fcs7 or fibre channel ports fcs3/fcs4. Here are my connectivity groups :
    • blog_connectivity_1
      blog_connectivity_2

    • You can check a summary of your connectivity group. I wanted to add this image because I think the two images (provided by PowerVC) are better than text to explain what a connectivity group is :-) :
    • 22_create_connectivity_group_3

Storage Template

If you are using different pools or different storage arrays (for example, in my case I can have different storage arrays behind my Storage Volume Controller) you may want to tell PowerVC to deploy virtual machines on a specific pool or with a specific lun type (I want, for instance, my machines to be created on compressed luns, on thin provisioned luns, or on thick provisioned luns). In my case I've created two different templates to create machines on thin or compressed luns. Easy !

  • When creating a storage template you first have to choose the storage pool :
  • blog_storage_template_select_storage_pool

  • Then choose the type of lun for this storage template :
  • blog_storage_template_create

  • Here is an example with my two storage templates :
  • blog_storage_list

A deeper look at the VM capture

If you read my last article about the PowerVC express version you know that capturing an image can take some time when using local storage : "dding" a whole disk is long, and copying the file to the PowerVC host is long. But don't worry, PowerVC standard solves this problem easily by using all the potential of the IBM storage (in my case a Storage Volume Controller) … the solution is FlashCopies, more specifically what we call a FlashCopy-Copy (to be clear : a FlashCopy-Copy is a full copy of a lun; once finished there is no more relationship between the source lun and the FlashCopy lun (the FlashCopy is created with the autodelete argument)). Let me explain to you how PowerVC standard manages the virtual machine capture :

  • The activation engine has been run and the virtual machine to be captured is stopped.
  • The user launches the capture using PowerVC.
  • A FlashCopy-Copy is created from the storage side, we can check it from the GUI interface :
  • blog_flash_copy_pixelate_1

  • Checking with the SVC command line we can see the following (use the catauditlog command to check this) :
    • A new volume called volume-Image-[name_of_the_image] is created (all captured images are called volume-Image-[name]), taking into account the storage template (diskgroup/pool, grainsize, rsize ….) :
    • # mkvdisk -name volume-Image_7100-03-03-1415 -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
      
    • A FlashCopy-Copy is created with the id of the boot volume of the virtual machine to capture as source, and the id of the image's lun as target :
    • # mkfcmap -source 865 -target 880 -autodelete
      
    • We can check that vdisk 865 is the boot volume of the captured machine and has a FlashCopy running :
    • # lsvdisk -delim :
      id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:RC_name:vdisk_UID:fc_map_count:copy_count:fast_write_state:se_copy_count:RC_change:compressed_copy_count
      865:_BOOT:0:io_grp0:online:0:VNX_00086_SAS_POOL_1:60.00GB:striped:0:fcmap0:::600507680184879C2800000000000431:1:1:empty:1:no:0
      
    • The FlashCopy-Copy is prepared and started (at this step we can already use our captured image, the copy is running in background) :
    • # prestartfcmap 0
      # startfcmap 0
      
    • While the copy of the FlashCopy is running we can check its progress (we can also check it by logging on the GUI) :
    • IBM_2145:SVC:powervcadmin>lsfcmap
      id name   source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name            group_id group_name status  progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time   rc_controlled
      0  fcmap0 865             XXXXXXXXX7_BOOT 880             volume-Image_7100-03-03-1415                     copying 54       50        100            off                                       no        140620002138 no
      
      IBM_2145:SVC:powervcadmin>lsfcmapprogress fcmap0
      id progress
      0  54
      
    • After the FlashCopy-Copy is finished, there is no more relationship between the source volume and the finished FlashCopy. The captured image is a plain vdisk :
    • IBM_2145:SVC:powervcadmin>lsvdisk 880
      id 880
      name volume-Image_7100-03-03-1415
      IO_group_id 0
      IO_group_name io_grp0
      status online
      mdisk_grp_id 0
      mdisk_grp_name VNX_XXXXX_SAS_POOL_1
      capacity 60.00GB
      type striped
      [..]
      vdisk_UID 600507680184879C280000000000044C
      [..]
      fc_map_count 0
      [..]
      
    • There is no more fcmap for the source volume :
    • IBM_2145:SVC:powervcadmin>lsvdisk 865
      [..]
      fc_map_count 0
      [..]
      

Deployment mechanism

blog_deploy3_pixelate

Deploying a virtual machine with the standard version is very similar to deploying a machine with the express version. The only difference is the possibility to choose the storage template (within the constraints of the storage connectivity group).

View from the Hardware Management Console

PowerVC uses the new Hardware Management Console K2 REST API to create the virtual machine. If you want to go further and check the commands run on the HMC, you can do it with the lssvcevents command, as sketched below :

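A minimal sketch of the kind of invocation you can use (assuming, as in the trace below, that the HMC user dedicated to PowerVC is named powervc; -d selects how many days to look back) :

    hscroot@hmc1:~> lssvcevents -t console -d 1 | grep "powervc"

Here is the trace of one of my deployments :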
time=06/21/2014 17:49:12,text=HSCE2123 User name powervc: chsysstate -m XXXX58-9117-MMD-658B2AD -r lpar -o on -n deckard-e9879213-00000018 command was executed successfully.
time=06/21/2014 17:47:29,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
time=06/21/2014 17:46:51,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 29 --id 1 -a remote_slot_num=6,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:46:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""6/CLIENT/1//29//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:46:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:46:17,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 28 --id 1 -a remote_slot_num=5,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:46:06,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""5/CLIENT/1//28//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:45:57,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:45:46,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 30 --id 2 -a remote_slot_num=4,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:45:36,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 29 --id 1 command was executed successfully.
time=06/21/2014 17:45:27,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""4/CLIENT/2//30//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:45:18,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:45:08,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 28 --id 1 command was executed successfully.
time=06/21/2014 17:45:07,text=User powervc has logged off from session id 42151 for the reason:  The user ran the Disconnect task.
time=06/21/2014 17:45:07,text=User powervc has disconnected from session id 42151 for the reason:  The user ran the Disconnect task.
time=06/21/2014 17:44:50,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o a -m 9117-MMD*658B2AD -s 23 --id 1 -a adapter_type=server,remote_lpar_id=8,remote_slot_num=3
command was executed successfully."
time=06/21/2014 17:44:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,virtual_scsi_adapters+=3/CLIENT/1//23/0,name=last*valid*c
onfiguration -o apply --override command was executed successfully."
time=06/21/2014 17:44:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:44:22,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 25 --id 2 -a remote_slot_num=2,remote_lpar_id=8,adapter_type=server co
mmand was executed successfully."
time=06/21/2014 17:44:11,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""2/CLIENT/2//25//0"""""",name=l
ast*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:44:02,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:43:50,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o r -m 9117-MMD*658B2AD -s 23 --id 1 command was executed successfully.
time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 2 -o off command was executed successfully.
time=06/21/2014 17:42:57,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_eth_adapters+=""""32/0/1665//0/0/zvdc4/fabbb99d
e420/all/"""""",name=last*valid*configuration -o apply --override command was executed successfully."
time=06/21/2014 17:42:49,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:41:53,text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r lpar -p deckard-e9879213-00000018 -n default_profile -o apply command was executed successfully.
time=06/21/2014 17:41:42,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
time=06/21/2014 17:41:36,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=deckard-e9879213-00000018,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,max_mem=8192,p
rofile_name=default_profile,max_virtual_slots=64,lpar_proc_compat_mode=default,proc_mode=shared,min_procs=4,desired_procs=4,max_procs=4,min_proc_units=2,desired_proc_units=2,max_proc_units=2,s
haring_mode=uncap,uncap_weight=128,lpar_avail_priority=127,sync_curr_profile=1 command was executed successfully."
time=06/21/2014 17:41:01,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=FAKE_1403368861661,profile_name=default,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,
max_mem=8192,max_virtual_slots=4,virtual_eth_adapters=5/0/1//0/1/,virtual_scsi_adapters=2/client/1//2/0,""virtual_serial_adapters=0/server/1/0//0/0,1/server/1/0//0/0"",""virtual_fc_adapters=3/
client/1//2//0,4/client/1//2//0"" -o query command was executed successfully."

blog_deploy3_hmc1

As you can see on the picture below, four virtual fibre channel adapters are created, honouring the constraints of the storage connectivity groups created earlier (looking on the Virtual I/O Server, the vfcmaps are ok …) :

blog_deploy3_hmc2_pixelate

padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost14 -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost14     U9117.MMD.658B2AD-V1-C28                8 deckard-e98792 AIX

Status:LOGGED_IN
FC name:fcs3                    FC loc code:U2C4E.001.DBJN916-P2-C1-T4
Ports logged in:2
Flags:a
VFC client name:fcs2            VFC client DRC:U9117.MMD.658B2AD-V8-C5

padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost15 -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost15     U9117.MMD.658B2AD-V1-C29                8 deckard-e98792 AIX

Status:LOGGED_IN
FC name:fcs4                    FC loc code:U2C4E.001.DBJO029-P2-C1-T1
Ports logged in:2
Flags:a
VFC client name:fcs3            VFC client DRC:U9117.MMD.658B2AD-V8-C6

View from the Storage Volume Controller

The SVC side is pretty simple, two steps : a FlashCopy-Copy of the volume-Image (the one created at the capture step; the source of the FlashCopy is the volume-Image-[name] lun), and a host creation for the new virtual machine :

  • Creation of a FlashCopy-Copy with the volume used for the capture as source :
  • blog_deploy3_flashcopy1

    # mkvdisk -name volume-boot-9117MMD_658B2AD-deckard-e9879213-00000018 -iogrp 0 -mdiskgrp VNX_00086_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
    # mkfcmap -source 880 -target 881 -autodelete
    # prestartfcmap 0
    # startfcmap 0
    
  • The host is created using the eight wwpns of the newly created virtual machine (I paste here the lssyscfg output to check the wwpns are the same :-) :
  • hscroot@hmc1:~> lssyscfg -r prof -m XXXXX58-9117-MMD-658B2AD --filter "lpar_names=deckard-e9879213-00000018"
    name=default_profile,lpar_name=deckard-e9879213-00000018,lpar_id=8,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/1/XXXXX60/29/0,virtual_eth_adapters=32/0/1665//0/0/zvdc4/fabbb99de420/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/2/XXXXX59/30/c050760727c5004a,c050760727c5004b/0"",""4/client/2/XXXXX59/25/c050760727c5004c,c050760727c5004d/0"",""5/client/1/XXXXX60/28/c050760727c5004e,c050760727c5004f/0"",""6/client/1/XXXXX60/23/c050760727c50050,c050760727c50051/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
    
    # mkhost -name deckard-e9879213-00000018-06976900 -hbawwpn C050760727C5004A -force
    # addhostport -hbawwpn C050760727C5004B -force 11
    # addhostport -hbawwpn C050760727C5004C -force 11
    # addhostport -hbawwpn C050760727C5004D -force 11
    # addhostport -hbawwpn C050760727C5004E -force 11
    # addhostport -hbawwpn C050760727C5004F -force 11
    # addhostport -hbawwpn C050760727C50050 -force 11
    # addhostport -hbawwpn C050760727C50051 -force 11
    # mkvdiskhostmap -host deckard-e9879213-00000018-06976900 -scsi 0 881
    

    blog_deploy3_svc_host1
    blog_deploy3_svc_host2

View from fibre channel switches

On each of the two fibre channel switches four zones are created (do not forget the zones used for Live Partition Mobility). These zones can be easily identified by their names : all PowerVC zones are prefixed by "powervc" (unfortunately names are truncated) :

  • Four zones are created on the fibre channel switch of the fabric A :
  • switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
                    c0:50:76:07:27:c5:00:51; 50:05:07:68:01:10:f3:2c
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
                    c0:50:76:07:27:c5:00:4c; 50:05:07:68:01:10:f3:85
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
                    c0:50:76:07:27:c5:00:4d; 50:05:07:68:01:10:f3:85
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
                    c0:50:76:07:27:c5:00:50; 50:05:07:68:01:10:f3:2c
    
  • Four zones are created on the fibre channel switch of the fabric B :
  • switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
                    c0:50:76:07:27:c5:00:4e; 50:05:07:68:01:20:f3:85
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
                    c0:50:76:07:27:c5:00:4a; 50:05:07:68:01:20:f3:2c
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
                    c0:50:76:07:27:c5:00:4b; 50:05:07:68:01:20:f3:2c
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
                    c0:50:76:07:27:c5:00:4f; 50:05:07:68:01:20:f3:85
    

Activation Engine and Virtual Optical Device

All my deployed virtual machines are connected to one of the Virtual I/O Servers by a vSCSI adapter. This vSCSI adapter is used to connect the virtual machine to a virtual optical device (a virtual cdrom) needed by the activation engine to reconfigure the virtual machine. Looking at the Virtual I/O Server, the virtual media repository is filled with the customized iso files needed to activate the virtual machines :

  • Here is the output of the lsrep command on one of my Virtual I/O Servers used by PowerVC :
  • padmin@XXX60:/home/padmin$ lsrep 
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free 
        1017     1014 rootvg                   279552           110592 
    
    Name                                                  File Size Optical         Access 
    vopt_1c967c7b27a94464bebb6d043e6c7a6e                         1 None            ro 
    vopt_b21849cc4a32410f914a0f6372a8f679                         1 None            ro 
    vopt_e9879213dc90484bb3c5a50161456e35                         1 None            ro
    
  • At the time of writing this post the vSCSI adapter is not deleted after the virtual machine's activation, but it is only used at the first boot of the machine :
  • blog_adapter_for_ae_pixelate

  • Even better, you can mount this iso and check it is used by the activation engine. The network configuration to be applied at reboot is written in an xml file. For those -like me- who have ever played with VMcontrol, it may remind you of the deploy command used in VMcontrol :
  • root@XXXX60:# cd /var/vio/VMLibrary
    root@XXXX60:/var/vio/VMLibrary# loopmount -i vopt_1c967c7b27a94464bebb6d043e6c7a6e -o "-V cdrfs -o ro" -m /mnt
    root@XXXX60:/var/vio/VMLibrary# cd /mnt
    root@XXXX60:/mnt# ls
    ec2          openstack    ovf-env.xml
    root@XXXX60:/mnt# cat ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.244.17.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="deckard"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value="10.10.20.10 10.10.20.11"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.244.17.35"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    

Shared Ethernet Adapters auto management

This part is not specific to the standard version of PowerVC but I wanted to talk about it here. You probably already know that PowerVC is built on top of OpenStack, and OpenStack is clever : the product doesn't want to keep unnecessary objects in your configuration. I was very impressed by the management of the networks and of the vlans; PowerVC is managing and taking care of your Shared Ethernet Adapters for you. You don't have to remove unused vlans or to add new vlans by hand (just add the network in PowerVC). Here are a few examples :

  • If you are adding a vlan in PowerVC you have the choice to select the Shared Ethernet Adapter for this vlan. For instance you can choose not to deploy this vlan on a particular host :
  • blog_network_do_not_use_pixelate

  • If you deploy a virtual machine on this vlan, the vlan will be automatically added to the Shared Ethernet Adapter if this is the first machine using it (you can verify it from the Virtual I/O Server side, see the sketch after the command below) :
  • # chhwres -r virtualio --rsubtype vnetwork -o a -m 9117-MMD*658B2AD --vnetwork 1503-zvdc4 -a vlan_id=1503,vswitch=zvdc4,is_tagged=1
    
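  • You can check from the Virtual I/O Server side that the vlan is now carried by the Shared Ethernet Adapter (a sketch : ent10 is a placeholder for your own SEA device) :
  • $ entstat -all ent10 | grep -i "vlan"
    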
  • If you are moving a machine from one host to another and this machine is the last one to use this vlan, the vlan will be automatically cleaned up and removed from the Shared Ethernet Adapter.
  • I have in my configuration two Shared Ethernet Adapters, each one on a different virtual switch. Good news : PowerVC is vswitch aware :-)
  • This link explains this in detail (not the Redbook) : Click here

Mobility

PowerVC standard is able to manage the mobility of your virtual machines. Machines can be relocated to any host in the PowerVC pool. You no longer have to remember the long and complicated migrlpar command, PowerVC takes care of this for you, just by clicking the migrate button :

blog_migrate_1_pixelate

  • Looking at the Hardware Management Console lssvcevents, you can check that the migrlpar command takes care of the storage connectivity group created earlier, and is going to map the lpar on adapters fcs3 and fcs4 :
  • # migrlpar -m XXX58-9117-MMD-658B2AD -t XXX55-9117-MMD-65ED82C --id 8 -i ""virtual_fc_mappings=2//1//fcs3,4//1//fcs4,5//2//fcs3,6//2//fcs4""
    
  • On the Storage Volume Controller side, the Live Partition Mobility wwpns of the created host are correctly activated while the machine is moving to the other host :
  • blog_migrate_svc_lpm_wwpns_greened

About supported fibre channel switches : all FOS >= 6.4 are ok !

At the time of writing this post, things are not very clear about this. Checking in the Redbook, the only supported models of fibre channel switches are the IBM SAN24B-5 and IBM SAN48B-5. I'm using Brocade 8510-4 fibre channel switches and they are working well with PowerVC. After a couple of calls and mails with the PowerVC development team it seems that all Fabric OS versions greater than or equal to 6.4 are ok. Don't worry if the PowerVC validator is failing, it may happen; just open a call to get the validator working with your switch model (I had problems with version 1.2.0.1 but no more problems with the latest 1.2.1.0 :-))

Conclusion

PowerVC is impressive. In my opinion PowerVC is already production ready. Building a machine with four virtual NPIV fibre channel adapters in five minutes is something every AIX system administrator has dreamed of. Tell your boss this is the right way to build machines, and invest in the future by deploying PowerVC : it's a must have :-) :-) :-) :-)! Need advice about it, need someone to deploy it ? Hire me !

sitckers_resized