Exploit the full potential of PowerVC by using Shared Storage Pools & Linked Clones | PowerVC secrets about Linked Clones (pooladm,mksnap,mkclone,vioservice)

My journey into PowerVC continues :-). The blog has not been updated for two months, but I've been busy, got sick, and so on; I have another post in the pipe, but that one has to be approved by IBM before publishing. Since the latest version (1.2.1.2 at the time of writing) PowerVC is now capable of managing Shared Storage Pools (SSP). This is a big deal, because many customers do not have a Storage Volume Controller and supported fibre channel switches. Using PowerVC in conjunction with an SSP reveals the true and full potential of the product. There are two major enhancements brought by SSP. The first is the deployment time of new virtual machines: with an SSP you move from minutes to seconds. The second is that, by using an SSP, you automatically (without even knowing it) use a feature called "Linked Clones". If you have followed my blog from the very beginning, you probably know that Linked Clones have been usable since SSPs were managed by the IBM Systems Director VMcontrol module. You can still refer to my blog posts about it, even if ISD VMcontrol is now somewhat superseded by PowerVC: here. Using PowerVC with Shared Storage Pools is easy, but how does it work behind the scenes? After analysing the deployment process I found some cool things: PowerVC uses secret, undocumented commands such as pooladm and vioservice, and secret mkdev arguments:

Discovering Shared Storage Pool on your PowerVC environment

The first step is to discover the Shared Storage Pool in PowerVC. I'm taking the time to explain this because it is so easy that people (like me) may think there is a lot to do about it, but no, PowerVC is simple: you have nothing to do. I'm not going to explain here how to create a Shared Storage Pool; please refer to my previous posts about this: here and here. Once the Shared Storage Pool is created, it is automatically added to PowerVC, nothing more to do. Keep in mind that you need the latest version of the Hardware Management Console (v8r8.1.0). If you have trouble discovering the Shared Storage Pool, check that the Virtual I/O Servers' RMC connections are OK. In general, if you can query and perform any action on the Shared Storage Pool from the HMC, there will be no problem on the PowerVC side.
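Before blaming PowerVC, a quick sanity check can save time. The two commands below are a minimal sketch (the managed system name is a placeholder from my environment, and I assume the Virtual I/O Server partition names contain "vios"): the first one, run on the HMC, shows the RMC state of the partitions, and the second one, run on a Virtual I/O Server as padmin, confirms the cluster is healthy:

    hscroot@hmc:~> lssyscfg -r lpar -m my-managed-system -F name,rmc_state,rmc_ipaddr | grep -i vios
    $ cluster -status -clustername powervc_cluster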

  • You don't have to reload PowerVC after creating the Shared Storage Pool; just check that you can see it in the storage tab:
  • pvc_ssp1

  • You will get more details by clicking on the Shared Storage Pool ….
  • pvc_ssp2

  • such as the images captured on the Shared Storage Pool …
  • pvc_ssp3

  • volumes created on it …
  • pvc_ssp4

What is a linked clone ?

Think before you start: you have to understand what a Linked Clone is before reading the rest of this post. Linked Clones are not well described in the documentation and Redbooks. Linked Clones are based on Shared Storage Pool snapshots: no Shared Storage Pool = no Linked Clones. Here is what happens behind the scenes when you deploy a Linked Clone:

  1. The captured rootvg underlying disk is a Shared Storage Pool Logical Unit.
  2. When the image is captured the rootvg Logical Unit is copied and is known as a “Logical (Client Image) Unit”.
  3. When deploying a new machine a snapshot is created from the Logical (Client Image) Unit.
  4. A “special Logical Unit” is created from the snapshot. This Logical Unit seems to be a pointer to the snapshot. We call it a clone.
  5. The machine is booted, the activation engine runs and reconfigures the network.
  6. When a block is modified on the new machine, it is duplicated and the modification is written to a new block on the Shared Storage Pool.
  7. That said, if no blocks are modified, all the machines created from this capture share the same blocks on the Shared Storage Pool.
  8. Only modified blocks are not shared between Linked Clones: the more you change on your rootvg, the more space you use on the Shared Storage Pool.
  9. That's why these machines are called Linked Clones: they are all linked to the same source Logical Unit.
  10. You save TIME (just a snapshot creation on the storage side) and SPACE (the rootvg blocks are shared by all the deployed machines) by using Linked Clones (see the worked example just after this list).
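To put some figures on this (taken from the layout section at the end of this post): my captured image is a 55296 MB Logical Unit, and one of the deployed linked clones owns only 78144 blocks of 4 KB, which is roughly 305 MB. In other words, that clone consumes about 0.5% of the space a full copy of the rootvg would have used; everything else is shared with the captured image.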

An image is sometimes better than long text, so here is a schema explaining all about Linked Clones :

LinkedClones

You have to capture an SSP base VM to deploy on the SSP

Be aware of one thing: you can't deploy a virtual machine on the SSP if you don't have a captured image on the SSP, and you can't reuse your Storwize images to deploy on the SSP. You first have to build, on your own, a machine whose rootvg is running on the SSP:
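If you don't already have such a machine, a minimal sketch of how its boot disk could be created by hand is shown below (the pool, LU name, size and vhost adapter are assumptions from my environment; run it as padmin on one of the Virtual I/O Servers of the cluster, then install AIX on that disk the usual way and enable the activation engine):

    $ mkbdsp -clustername powervc_cluster -sp powervc_sp 60G -bd volume-boot-base-vm -vadapter vhost0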

  • Create an image based on an SSP virtual machine :
  • pvc_ssp_capture1

  • Shared Storage Pool Logical Unit are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
  • Shared Storage Pool Logical (Client Image) Unit are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • The Logical Unit of the captured virtual machine is copied with the dd command from the VOL1 (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1) directory to the IM directory (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM) (so from volumes to images).
  • If you do this yourself with the dd command, you can see that the captured image is not shown in the output of the snapshot command (with Linked Clones the snapshot output is split into two categories: the actual, "real" Logical Units, and the Logical (Client Image) Units, which are the PowerVC images).
  • A secret API, driven by a secret command called vioservice, adds your newly created image to the Shared Storage Pool SolidDB database.
  • After the “registration” the Client Image is visible with the snapshot command.

Deployment

After the image is captured and stored in the Shared Storage Pool images directory, you can deploy virtual machines based on this image. Keep in mind that blocks are shared between linked clones, so you may be surprised to see that deploying machines does not consume the free space of the shared storage pool. Be aware, however, that you can't deploy any machine if there is no "blank" space left in the PowerVC space bar (check the image below):
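If you want to watch the pool usage from the Virtual I/O Server side while deploying, a simple check is sketched below (the cluster and pool names are the ones used throughout this post); the free space barely moves when a linked clone is created and only shrinks once the clones start writing their own blocks:

    $ lssp -clustername powervc_cluster
    $ lssp -clustername powervc_cluster -sp powervc_sp -bd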

deploy

Step by step deployment by example

  • A snapshot of the image is created through the pooladm command. If you check the output of the snapshot command after this step, you'll see a new snapshot derived from the Logical (Client Image) Unit.
  • This snapshot is cloned (my understanding is that a clone is a normal logical unit sharing its blocks with an image). After the snapshot is cloned, a new file is created in the shared storage pool volume directory, but at this step it is not yet visible with the lu command because creating a clone does not create any metadata on the shared storage pool.
  • A dummy logical unit is created, then the clone is moved over the dummy logical unit to replace it.
  • The clone logical unit is mapped to the client.

dummy

You can do it yourself without PowerVC (not supported)

Just to understand what PowerVC is doing behind the scenes, I decided to try to do all the steps on my own. These steps work, but they are not supported at all by IBM.
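You will need the cluster id, the pool id and the SSP directories over and over in the steps below, so here is a small helper sketch to build them once (the awk filters assume the output format shown in the first step, and the two-character prefixes 22 and 24 are simply reused as observed in the vioservice xml files):

    $ CL_ID=$(cluster -list -field CLUSTER_ID | awk '/CLUSTER_ID/ {print $2}')
    $ SP_ID=$(lssp -clustername powervc_cluster -field POOL_ID | awk '/POOL_ID/ {print $2}')
    $ CLUSTER_UDID="22${CL_ID}"
    $ POOL_UDID="24${CL_ID}${SP_ID}"
    $ SSP_DIR=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310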

  • Before starting to read this you need to know that $ prompts are for padmin commands, # prompts are for root commands. You’ll need the cluster id and the pool id to build some xml files :
  • $ cluster -list -field CLUSTER_ID
    CLUSTER_ID:      c50a291c18ab11e489f46cae8b692f30
    $ lssp -clustername powervc_cluster -field POOL_ID
    POOL_ID:         000000000AFFF80C0000000053DA327C
    
  • So the cluster id is c50a291c18ab11e489f46cae8b692f30 and the pool id is c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C. These ids are prefixed by two characters in the xml files below (I don't know what these prefixes are for, but it works in all cases).
  • Image files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • Logical units files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
  • Create the "envelope" of the Logical (Client Image) Unit by writing an xml file (the udids are built from the cluster id and the pool id obtained above), used as the standard input of the vioservice command:
  • # cat create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="8">
                            <Image label="chmod666-homemade-image"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    # /usr/ios/sbin/vioservice lib/libvio/lu < create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="29c50a291c18ab11e489f46cae8b692f30d87113d5be9004791d28d44208150874" capacity="55296" physicalUsage="0" unusedCommitment="0" type="8" derived="" thick="0" tmoveState="0"><Image label="chmod666-homemade-image" relPath=""/></LU></Tier></Pool></Cluster></Response></VIO>
    # ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     staff    57982058496 Sep  8 19:00 chmod666-homemade-image.d87113d5be9004791d28d44208150874
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    
  • You can now see with the snapshot command that a new Logical (Client Image) Unit is here :
  • $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c055296          THIN               100% 0              ac671df86edaf07e96e399e3a2dbd425
    chmod666-homemade-image  55296          THIN                 0% 55299          d87113d5be9004791d28d44208150874
    volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be38155296          THIN               100% 55299          e525b8eb474f54e1d34d9d02cb0b49b4
                    Snapshot
                    2631012f1a558e51d1af7608f3779a1bIMSnap
                    09a6c90817d24784ece38f71051e419aIMSnap
                    e400827d363bb86db7984b1a7de08495IMSnap
                    5fcef388618c9a512c0c5848177bc134IMSnap
    
  • Copy the source image (the stopped virtual machine with the activation engine enabled) to this newly created image; it will be the new reference for all the virtual machines created from it. Use the dd command to do it (and don't forget the block size). While the dd is running you can check that the %Used percentage is increasing:
  • # dd if=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-aaaa95f8317c666549c4809264281db536dd.a2b7ed754030ca97668b30ab6cff5c45 of=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 bs=1M
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN                23% 0              d87113d5be9004791d28d44208150874
    [..]
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN                40% 0              d87113d5be9004791d28d44208150874
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN               100% 0              d87113d5be9004791d28d44208150874
    
  • You now have a new reference image; it will be used as the reference for all your linked-clone deployed virtual machines. A linked clone is created from a snapshot, so you first have to create a snapshot of the newly created image using the pooladm command (keep in mind that you can't use the snapshot command to work on a Logical (Client Image) Unit). The snapshot is identified by the logical unit name suffixed with "@" and the snapshot name. Use mksnap to create the snap and lssnap to list it; the snapshot will then be visible in the output of the snapshot command:
  • # pooladm file mksnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap
    # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN               100% 55299  d87113d5be9004791d28d44208150874
                    Snapshot
                    chmod666IMSnap
    [..]
    
  • You can now create the clone from the snap (snaps are identified by the image name followed by a '@' character and the snapshot name). Name the clone whatever you want, because it will be renamed and moved to replace a normal logical unit; I'm using the PowerVC convention here (IMtmp). Creating the clone creates a new file in the VOL1 directory with no shared storage pool metadata, so this clone will not be visible in the output of the lu command:
  • $ pooladm file mkclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    $ ls -l  /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/*chmod666-IM*
    -rwx------    1 root     system   57982058496 Sep  9 16:27 /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    
  • Using vioservice, create a logical unit on the shared storage pool. This creates a new logical unit with a newly generated udid. If you look in the volume directory, you can see that the clone does not have the metadata file needed by the shared storage pool (this file is prefixed by a dot (.)). After creating this logical unit, replace it with your clone with a simple move:
  • $ cat create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="1">
                            <Disk label="volume-boot-9117MMD_658B2AD-chmod666"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" capacity="55296" physicalUsage="0" unusedCommitment="0" type="1" derived="" thick="0" tmoveState="0"><Disk label="volume-boot-9117MMD_658B2AD-chmod666"/></LU></Tier></Pool></Cluster></Response></VIO>
    $ mv /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • You are now ready to use your linked clone: you have a source image, a snap of it, and a clone of this snap:
  • # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    # pooladm file lsclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    Snapshot             Clone name
    ----------------------------------
    chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • Then, using vioservice or the mkdev command, map the clone to your virtual SCSI adapter (identified by its physical location name); do this on both Virtual I/O Servers:
  • $ cat map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="5">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Map label="" udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" drcname="U9117.MMD.658B2AD-V2-C99"/>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"/></Response></VIO>
    

    or

    # mkdev -t ngdisk -s vtdev -c virtual_target -aaix_tdev=volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777 -audid_info=4d360832b29be950824d3e5bf57d77 -apath_name=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1 -p vhost5 -acluster_id=c50a291c18ab11e489f46cae8b692f30
    
  • Boot the machine: this one is a linked clone created by yourself, without PowerVC. A consolidated sketch of the whole sequence follows just after this list.
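To wrap up the manual procedure, here is a condensed sketch of the whole sequence, still completely unsupported; the names and the clone udid are the ones used in the steps above, and the two xml files are the ones shown earlier:

    #!/usr/bin/ksh
    # Unsupported sketch: manual linked clone creation, recapping the steps above.
    SSP_DIR=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310
    IMG=chmod666-homemade-image.d87113d5be9004791d28d44208150874
    CLONE=volume-boot-9117MMD_658B2AD-chmod666
    CLONE_UDID=e4d360832b29be950824d3e5bf57d777   # taken from the vioservice output of the LU creation step
    # 1. snapshot the Logical (Client Image) Unit
    pooladm file mksnap ${SSP_DIR}/IM/${IMG}@chmod666IMSnap
    # 2. clone the snapshot into a temporary file in the volume directory
    pooladm file mkclone ${SSP_DIR}/IM/${IMG}@chmod666IMSnap ${SSP_DIR}/VOL1/${CLONE}-IMtmp
    # 3. register a "real" logical unit in the SSP database (create_client_lu.xml shown above)
    /usr/ios/sbin/vioservice lib/libvio/lu < create_client_lu.xml
    # 4. replace the freshly registered logical unit file with the clone
    mv ${SSP_DIR}/VOL1/${CLONE}-IMtmp ${SSP_DIR}/VOL1/${CLONE}.${CLONE_UDID}
    # 5. map the clone to the client vhost adapter (map_clone.xml shown above)
    /usr/ios/sbin/vioservice lib/libvio/lu < map_clone.xml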

About the activation engine ?

Your captured image has the activation engine enabled. To reconfigure the network and the hostname, PowerVC copies an iso from the PowerVC server to the Virtual I/O Server. This iso contains an ovf file needed by the activation engine to customize your virtual machine. To customize the linked clone I created on my own, I decided to reuse an old iso file created by PowerVC for another deployment:

  • Mount the image located in /var/vio/VMLibrary, and modify the xml ovf file to fit your needs :
  • # ls -l /var/vio/VMLibrary
    total 840
    drwxr-xr-x    2 root     system          256 Jul 31 20:17 lost+found
    -r--r-----    1 root     system       428032 Sep  9 18:11 vopt_c07e6e0bab6048dfb23586aa90e514e6
    # loopmount -i vopt_c07e6e0bab6048dfb23586aa90e514e6 -o "-V cdrfs -o ro" -m /mnt
    
  • Copy the content of the cd to a directory :
  • # mkdir /tmp/mycd
    # cp -r /mnt/* /tmp/mycd
    
  • Edit the ovf file to fit your needs (in my case, for instance, I'm changing the hostname of the machine and its IP address):
  • # cat /tmp/mycd/ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.218.238.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="homemadelinkedclone"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value=""/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.218.238.140"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    </Environment>
    
  • Recreate the cd using the mkdvd command and put it in the /var/vio/VMLibrary directory :
  • # mkdvd -r /tmp/mycd -S
    Initializing mkdvd log: /var/adm/ras/mkcd.log...
    Verifying command parameters...
    Creating temporary file system: /mkcd/cd_images...
    Creating Rock Ridge format image: /mkcd/cd_images/cd_image_19267708
    Running mkisofs ...
    
    mkrr_fs was successful.
    # mv /mkcd/cd_images/cd_image_19267708 /var/vio/VMLibrary
    $ lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
        1017     1015 rootvg                   279552           171776
    
    Name                                                  File Size Optical         Access
    cd_image_19267708                                             1 None            rw
    vopt_c07e6e0bab6048dfb23586aa90e514e6                         1 vtopt1          ro
    
  • Load the cdrom and map it to the linked clone :
  • $ mkvdev -fbo -vadapter vhost11
    $ loadopt -vtd vtopt0 -disk cd_image_19267708
    
  • When the linked clone virtual machine boots, the cd is mounted and the activation engine takes the ovf file as a parameter and reconfigures the network. You can, for instance, check that the hostname has changed:
  • # hostname
    homemadelinkedclone.localdomain
    

A view on the layout ?

I asked myself a question about Linked Clones: how can we check that Shared Storage Pool blocks (or PPs?) are shared between the captured machine (the captured LU) and one of its linked clones? To answer this question I had to play with the pooladm command (which is unsupported for customer use) to get the logical unit layout of the captured virtual machine and of the deployed linked clone, and then compare them. Please note that this is my understanding of linked clones; it is not validated by IBM support, so do this at your own risk, and feel free to correct my interpretation of what I'm seeing here :-) :

  • Get the layout of the captured VM by getting the layout of the logical unit (the captured image is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4) :
  • root@vios:/home/padmin# ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4 | tee /tmp/captured_vm.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • Get the layout of the linked clone (the linked clone is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba)
  • # ls /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba | tee /tmp/linked_clone.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • At this step you can do a first comparison of the two files; you can see some useful information, but do not misread this output, you first have to sort it before drawing conclusions. One thing is sure though: some PPs have been modified on the linked clone and cannot be shared anymore, while others are shared between the linked clone and the captured image:
  • sdiff_layout1_modifed_1

  • You can get a better view of shared and non-shared PPs by sorting the output of these files; here are the commands I used to do it:
  • #grep PP linked_clone.layout | tr -s " " | sort -k1 > /tmp/pp_linked_clone.layout
    #grep PP captured_vm.layout | tr -s " " | sort -k1 > /tmp/pp_captured_vm.layout
    
  • By sdiffing these two files I can now check which PPs are shared and which are not (a comm-based count is sketched just after this list):
  • sdiff_layout2_modifed_1

  • The pooladm command can also give you stats about a linked clone. My understanding of the owned block count is that 78144 SSP blocks (not PPs, so 4k blocks) are unique to this linked clone and not shared with the captured image:
  • vios1#pooladm file stat /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Path: /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Size            57982058496
    Number Blocks   14156655
    Data Blocks     14155776
    Pool Block Size 4096
    
    Tier: SYSTEM
    Owned Blocks    78144
    Max Blocks      14156655
    Block Debt      14078511
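As a complement to the sdiff comparison above, here is a small sketch that counts the shared and clone-only PP entries from the two sorted files created earlier (this is only my way of summarizing the layouts, not an official method):

    # comm -12 /tmp/pp_captured_vm.layout /tmp/pp_linked_clone.layout | wc -l   # PP entries present in both layouts (shared)
    # comm -13 /tmp/pp_captured_vm.layout /tmp/pp_linked_clone.layout | wc -l   # PP entries only in the linked clone (no longer shared)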
    

Mixed NPIV & SSP deployment

For some machines with an I/O intensive workload, it can be useful to put the data luns on NPIV adapters. I'm currently working on a project involving PowerVC and the question was asked: why not mix an SSP lun for the rootvg and NPIV-based luns for the data volume groups? One more time it's very simple with PowerVC: just attach a volume, this time choosing your Storage Volume Controller provider. Easy:

mixed1_masked

This will create the NPIV adapters and the new zoning and masking on the fibre channel switches. One more time: easy.

Debugging ?

I'll not lie: I had a lot of problems with Shared Storage Pools and PowerVC, but these problems were related to my configuration moving a lot during the tests. Keep in mind that you learn from these errors, and in my case they helped me a lot to learn how to debug PowerVC:

  • On the Virtual I/O Server side, check that you have no core file in the /home/ios/logs directory. A core file in this directory indicates that one of the commands run by PowerVC just "cored":
  • root@vios1:/home/ios/logs# ls core*
    core.9371682.18085943
    
  • On the Virtual I/O Server side, check the /home/ios/logs/viosvc.log file. There you can see all the xml requests and outputs used by the vioservice command; most PowerVC actions are performed through vioservice.
  • root@vios1:/home/ios/logs# ls -l viosvc.log
    -rw-r--r--    1 root     system     10240000 Sep 11 00:28 viosvc.log
    
  • Check step by step that all PowerVC actions are OK. For instance, verify with the lsrep command that the iso has been copied from PowerVC to the Virtual I/O Server media library, and check that there is space left on the Shared Storage Pool.
  • Sometimes the secret vioservice api is stuck and not responding. In some cases it can be useful to rebuild the SolidDB; I'm using this script to do it (run it as root):
  • # cat rebuilddb.sh
    #!/usr/bin/ksh
    set -x
    stopsrc -s vio_daemon
    sleep 30
    rm -rf /var/vio/CM
    startsrc -s vio_daemon
    
  • EDIT: I had additional information from IBM regarding the method to rebuild the SolidDB: my script alone won't bring the SolidDB back up properly and could leave you in a bad state. Just add this at the end of the script:
  • pid=$(lssrc -s vio_daemon | awk 'NR==2 {print $2}')
    kill -1 $pid  
    
  • On the PowerVC side, when you have a problem it is always good to increase the verbosity of the logs (located in /var/log, in this case nova); restart PowerVC after setting the verbosity level:
  • # openstack-config --set /etc/nova/nova-9117MMD_658B2AD.conf DEFAULT default_log_levels powervc_nova=DEBUG,powervc_k2=DEBUG,nova=DEBUG
    

Conclusion

It took me more than two months to write this post. Why? Simply because the PowerVC design is not documented: it works like a charm, but nobody will explain you HOW. I hope this post helps you understand how PowerVC works. I'm a huge fan of PowerVC and SSP; try it yourself and you'll see that it is a pleasure to use: simple, efficient, and powerful. Can anybody give me access to a PowerKVM host so I can write about (and prove) that PowerVC is also simple and efficient with PowerKVM?

Virtual I/O Server 2.2.3.1 Part 1 : Shared Storage Pool enhancements

Everybody knows that I'm a huge fan of Shared Storage Pools; you can check my previous posts on this subject on the blog. With the new 2.2.3.1 version of the Virtual I/O Server, Shared Storage Pools have been enhanced with some cool features: simplified management, pool mirroring, pool shrinking and the long awaited unicast mode to get rid of multicast. This post will show you that Shared Storage Pools are now powerful and ready to be used on production servers (this is my own opinion). I will not come back to it later, but be aware that the maximum pool size is now 16TB and a pool can now serve 250 Virtual I/O clients. Here we go:

Rolling updates

Rolling updates have been available since Virtual I/O Server 2.2.2.0, but this is the first time I'm using them. This feature is not new to version 2.2.3.1, but it is still good to write about it :-). Rolling updates allow you to update the Virtual I/O Servers (members of a Shared Storage Pool) one by one without causing an outage of the entire cluster. Each Virtual I/O Server has to leave the cluster before starting the update, and rejoins the cluster after its update and reboot. A special node called the database primary node (DBN) checks every ten minutes whether the Virtual I/O Servers are ON_LEVEL or UP_LEVEL. Before trying the new Shared Storage Pool features I had to update my Virtual I/O Servers, so here is a little reminder on how to do it with rolling updates:

  • Find the current database primary node in the cluster (this is the last one I'll update):
  • # cluster -clustername vio-cluster -status -verbose -fmt : -field pool_name pool_state node_name node_mtm node_partition_num node_upgrade_status node_roles
    vio-ssp:OK:vios1.domain.test:8202-E4C02064099R:2:ON_LEVEL:
    vio-ssp:OK:vios2.domain.test:8202-E4C02064099R:1:ON_LEVEL:
    vio-ssp:OK:vios3.domain.test:8202-E4C02064011R:2:ON_LEVEL:DBN
    vio-ssp:OK:vios4.domain.test:8202-E4C02064011R:1:ON_LEVEL:
    
  • The DBN role moves from one Virtual I/O Server to another at the moment the current DBN leaves the cluster:
  • vio-ssp:OK:vios1.domain.test:8202-E4C02064099R:2:UP_LEVEL:
    vio-ssp:OK:vios2.domain.test:8202-E4C02064099R:1:UP_LEVEL:
     : :vios3.domain.test:8202-E4C02064011R:2:ON_LEVEL:
    vio-ssp:OK:vios4.domain.test:8202-E4C02064011R:1:UP_LEVEL:DBN
    
  • On the Virtual I/O Server you want to update, leave the cluster before running the update:
  • # clstartstop -stop -n vio-cluster -m $(hostname)
    # mount nim_server:/export/nim/lpp_source/vios2231-fp27-lpp_source /mnt
    # updateios -dev /mnt -accept -install
    
  • When the update is finished, re-join the cluster :
  • # ioslevel
    2.2.3.1
    # clstartstop -start -n vio-cluster -m $(hostname)
    
  • On any Virtual I/O Server you can list the cluster and check if Virtual I/O Servers are ON_LEVEL or UP_LEVEL :
  • # cluster -clustername vio-cluster -status -verbose -fmt : -field pool_name pool_state node_name node_mtm node_partition_num node_upgrade_status node_roles
    vio-ssp:OK:vios1.domain.test:8202-E4C02064099R:2:ON_LEVEL:
    vio-ssp:OK:vios2.domain.test:8202-E4C02064099R:1:UP_LEVEL:
    vio-ssp:OK:vios3.domain.test:8202-E4C02064011R:2:ON_LEVEL:DBN
    vio-ssp:OK:vios4.domain.test:8202-E4C02064011R:1:UP_LEVEL:
    
  • All backing devices served by the Shared Storage Pool remain available on each node, no matter whether it is ON_LEVEL or UP_LEVEL.
  • When all the Virtual I/O Servers are updated (including the last one, in my case the DBN), the node_upgrade_status of every Virtual I/O Server goes back to ON_LEVEL. Remember that you may have to wait up to 10 minutes before all Virtual I/O Servers show ON_LEVEL:
  • $ cluster -clustername vio-cluster -status -verbose -fmt : -field pool_name pool_state node_name node_mtm node_partition_num node_upgrade_status node_roles
    vio-ssp:OK:vios1.domain.test:8202-E4C02064099R:2:2.2.3.1 ON_LEVEL:
    vio-ssp:OK:vios2.domain.test:8202-E4C02064099R:1:2.2.3.1 ON_LEVEL:DBN
    vio-ssp:OK:vios3.domain.test:8202-E4C02064011R:2:2.2.3.1 ON_LEVEL:
    vio-ssp:OK:vios4.domain.test:8202-E4C02064011R:1:2.2.3.1 ON_LEVEL:
    
  • The Shared Storage Pool upgrade itself is performed by the root user through the crontab. You can run the sspupgrade command by hand if you want to check whether a Shared Storage Pool upgrade is running:
  • # crontab -l | grep sspupgrade
    0,10,20,30,40,50 * * * * /usr/sbin/sspupgrade -upgrade
    # sspupgrade -status
    No Upgrade in progress
    
  • If all nodes are not at the same ioslevel, you won't be able to use the new commands (check the output below):
  • # failgrp -list
    The requested operation can not be performed since the software capability is currently not enabled.
    Please upgrade all nodes within the cluster and retry the request once the upgrade has completed successfully.
    
  • If you are moving from any earlier version to 2.2.3.1, the communication mode of the cluster will change from multicast to unicast as part of the rolling upgrade operation. You have nothing to do.

Mirroring the Shared Storage Pool with failgrp command

One of the major drawbacks of Shared Storage Pools was resilience: the Shared Storage Pool was not able to mirror its luns from one site to another. The failgrp command introduced in this new version is used to mirror the Shared Storage Pool across different SAN arrays and easily answers resiliency questions. In my opinion this was the missing feature needed to deploy Shared Storage Pools in a production environment. One of the coolest things here is that you have NOTHING to do at the lpar level: all the mirroring is performed by the Virtual I/O Servers and the Shared Storage Pool themselves, with no need to verify the mirroring of the volume groups on each lpar. The single point of management for mirroring is now the failure group. :-)

  • By default, after upgrading all nodes to 2.2.3.1, all luns are assigned to a failure group called Default; you can rename it if you want to. Notice that the Shared Storage Pool is not mirrored at this stage:
  • $ failgrp -list -fmt : -header
    POOL_NAME:TIER_NAME:FG_NAME:FG_SIZE:FG_STATE
    vio-ssp:SYSTEM:Default:55744:ONLINE
    $ failgrp -modify -clustername vio-cluster -sp vio-ssp -fg Default -attr FG_NAME=failgrp_site1
    Given attribute(s) modified successfully.
    $  failgrp -list -fmt : -header
    POOL_NAME:TIER_NAME:FG_NAME:FG_SIZE:FG_STATE
    vio-ssp:SYSTEM:failgrp_site1:55744:ONLINE
    $ cluster -clustername vio-cluster -status -verbose 
    [..]
        Pool Name:            vio-ssp
        Pool Id:              000000000AFD672900000000529641F4
        Pool Mirror State:    NOT_MIRRORED
    [..]
    
  • Create the second failure group on the second site (be careful with the command syntax):
  • $ failgrp -create -clustername vio-cluster -sp vio-ssp -fg failgrp_site2: hdiskpower141
    failgrp_site2 FailureGroup has been created successfully.
    $ failgrp -list -fmt : -header
    POOL_NAME:TIER_NAME:FG_NAME:FG_SIZE:FG_STATE
    vio-ssp:SYSTEM:failgrp_site1:223040:ONLINE
    vio-ssp:SYSTEM:failgrp_site2:223040:ONLINE
    
  • While the failgrp command is mirroring the pool (and when it has finished), you can check that your pool is in SYNCED state:
  • $ cluster -status -clustername vio-cluster -verbose
        Pool Name:            vio-ssp
        Pool Id:              000000000AFD672900000000529641F4
        Pool Mirror State:    SYNCED
    
  • To identify luns in the Shared Storage Pool, a new command called pv is introduced. You can quickly list the luns used by the Shared Storage Pool with their respective failure groups:
  • $ pv -list
    
    POOL_NAME: vio-ssp
    TIER_NAME: SYSTEM
    FG_NAME: failgrp_site1
    PV_NAME          SIZE(MB)    STATE            UDID
    hdiskpower156    55770       ONLINE           1D06020F6709SYMMETRIX03EMCfcp
    hdiskpower1      111540      ONLINE           1D06020F6909SYMMETRIX03EMCfcp
    
    POOL_NAME: vio-ssp
    TIER_NAME: SYSTEM
    FG_NAME: failgrp_site2
    PV_NAME          SIZE(MB)    STATE            UDID
    hdiskpower2      223080      ONLINE           1D06020F6B09SYMMETRIX03EMCfcp
    

Mapping made easier with the lu command

If, like me, you have ever used a Shared Storage Pool, you probably know that mapping a device was not so easy with the mkbdsp command. Once again, the new Virtual I/O Server version tries to simplify logical unit and backing device management with a new command called lu. Its purpose is to handle logical unit creation, listing, mapping and removal in a single, easy to use command. I'll not describe the full command here, but here are a few nice examples (removal and unmapping are sketched after the examples below):

  • Create a thin backing device called bd_lu01 and map it to vhost4 :
  • $ lu -create -clustername vio-cluster -sp vio-ssp -lu lu01 -vadapter vhost4 -vtd bd_lu01 -size 10G
    Lu Name:lu0011
    Lu Udid:e982bf85313bcafe0af7653e8e39c3d9
    
    Assigning logical unit 'lu01' as a backing device.
    
    VTD:bd_lu01
    
  • Do not forget to map it on the second Virtual I/O Server :
  • $ lu -map -clustername vio-cluster -sp vio-ssp -lu lu01 -vadapter vhost4 -vtd bd_lu01
    Assigning logical unit 'lu01' as a backing device.
    
    VTD:bd_lu01
    
  • List all logical units viewed in the Shared Storage Pool :
  • $ lu -list
    POOL_NAME: vio-ssp
    TIER_NAME: SYSTEM
    LU_NAME                 SIZE(MB)    UNUSED(MB)  UDID
    lu01                  10240       10240       e982bf85313bcafe0af7653e8e39c3d9
    [..]
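The removal side is not shown above; assuming the same cluster, pool and adapter names, the corresponding calls should look like this sketch (check the lu command help on your ioslevel before relying on the exact flags):

    $ lu -unmap -clustername vio-cluster -lu lu01 -vadapter vhost4
    $ lu -remove -clustername vio-cluster -sp vio-ssp -lu lu01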
    

Physical volume management made easier with the pv command

Before this release, the Shared Storage Pool was only able to replace a lun, using the chsp command; you were not able to remove a lun from the Shared Storage Pool. The new release aims to simplify Shared Storage Pool management, and a new command called pv is introduced to add, replace, remove and list luns of the Shared Storage Pool from a single, easy command.

  • Adding a disk to the Shared Storage Pool :
  • $ pv -add -clustername vio-cluster -sp vio-ssp -fg failgrp_site1: hdiskpower1
    Given physical volume(s) have been added successfully.
    
  • Replacing a disk of the Shared Storage Pool (note that you can't replace a disk with a smaller one, even if the original is not fully used):
  • $ pv -replace -clustername vio-cluster -sp vio-ssp -oldpv hdiskpower2 -newpv hdiskpower1
    Current request action progress: % 5
    Current request action progress: % 100
    The capacity of the new device(s) is less than the capacity
    of the old device(s). The old device(s) cannot be replaced.
    $ pv -replace -clustername vio-cluster -sp vio-ssp -oldpv hdiskpower156 -newpv hdiskpower1
    Current request action progress: % 5
    Current request action progress: % 6
    Current request action progress: % 100
    Given physical volume(s) have been replaced successfully.
    
  • Unfortunately the repository disk cannot be replaced with this command; you have to use the chrepos command:
  • $ lspv | grep hdiskpower0
    hdiskpower0      00f7407858d6a19d                     caavg_private    active
    $ pv -replace -oldpv hdiskpower0 -newpv hdiskpower156
    The specified PV is not part of the storage pool
    
  • The pvs in use can be easily listed with this command:
  • $ pv -list -verbose -fmt : -header
    POOL_NAME:TIER_NAME:FG_NAME:PV_NAME:PV_SIZE:PV_STATE:PV_UDID:PV_DESC
    vio-ssp:SYSTEM:failgrp_site1:hdiskpower1:111540:ONLINE:1D06020F6909SYMMETRIX03EMCfcp:PowerPath Device
    vio-ssp:SYSTEM:failgrp_site2:hdiskpower2:223080:ONLINE:1D06020F6B09SYMMETRIX03EMCfcp:PowerPath Device
    

    Removing a disk from the Shared Storage Pool

    It is now possible to remove a disk from the Shared Storage Pool; this new feature is known as pool shrinking.

  • Removing a disk from the Shared Storage Pool :
  • $ pv -remove -clustername vio-cluster -sp vio-ssp -pv hdiskpower1
    Given physical volume(s) have been removed successfully.
    

    Repository disk failure and replacement

    The Shared Storage Pool is now able to stay up without its repository disk. Having an error on the repository disk is not a problem at all: it can be replaced with the chrepos command as soon as you find an error on it, and it's pretty easy to do. Here is a little reminder:

    • To identify the repository disk you can use a CAA command as root:
    • # /usr/lib/cluster/clras lsrepos
      [..]
      hdisk558 has a cluster repository signature.
      hdisk559 has a cluster repository signature.
      hdisk560 has a cluster repository signature.
      Cycled 680 disks.
      hdisk561 has a cluster repository signature.
      hdiskpower139 has a cluster repository signature.
      Cycled 690 disks.
      Found 5 cluster repository disks.
      [..]
      
    • For the example we are going to write some zeros to the repository disk, and show that it's not readable anymore by the Shared Storage Pool:
    • # lsvg -p caavg_private
      caavg_private:
      PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
      hdiskpower0       active            15          8           02..00..00..03..03
      # lsvg -l caavg_private
      caavg_private:
      LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
      caalv_private1      boot       1       1       1    closed/syncd  N/A
      caalv_private2      boot       1       1       1    closed/syncd  N/A
      caalv_private3                 4       4       1    open/syncd    N/A
      powerha_crlv        boot       1       1       1    closed/syncd  N/A
      #  dd if=/dev/zero of=/dev/hdiskpower0 bs=1024 count=1024
      1024+0 records in
      1024+0 records out
      # lsvg -l caavg_private
      0516-066 : Physical volume is not a volume group member.
              Check the physical volume name specified.
      
    • The repository disk cannot be completely removed; you have to replace it with another lun in case of emergency:
    • $ chrepos -n vio-cluster -r -hdiskpower0
      chrepos: The removal of repository disks is not currently supported.
      
    • If the repository disk has a problem, you can move it to another disk by using the chrepos command:
    • $ chrepos -r -hdiskpower0,+hdiskpower141
      chrepos: Successfully modified repository disk or disks.
      

    Cluster unicast communication by default

    Using unicast in a Cluster Aware AIX (CAA) cluster is a long awaited feature asked for by IBM customers. As everybody knows, Shared Storage Pools are based on a CAA cluster, which used multicast before the 2.2.3.1 release. After updating all Virtual I/O Servers that are members of a Shared Storage Pool, the communication_mode attribute used by CAA is changed to unicast. This mode does not exist at all in previous versions, so do not try to modify it there. With this new feature a new command called clo is available as padmin; it lets you check and change the CAA tunables, communication_mode included:

    • Before updating the Virtual I/O Server to 2.2.3.1, the communication_mode tunable does not exist and you have to check the tunables with the CAA command clctrl (you have to be root):
    • # clctrl -tune -a
         vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).config_timeout = 480
           vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).deadman_mode = a
           vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).link_timeout = 0
        vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).node_down_delay = 10000
           vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).node_timeout = 20000
             vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).packet_ttl = 32
       vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).remote_hb_factor = 10
             vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).repos_mode = e
      vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).site_merge_policy = p
      
    • As you can see in the output below, running the lscluster command on a Shared Storage Pool before version 2.2.3.1 shows the multicast address used by the CAA cluster:
    • $ lscluster -i | grep MULTICAST
                      IPv4 MULTICAST ADDRESS: 228.253.103.41
                      IPv4 MULTICAST ADDRESS: 228.253.103.41
                      IPv4 MULTICAST ADDRESS: 228.253.103.41
                      IPv4 MULTICAST ADDRESS: 228.253.103.41
      
    • After updating a node to version 2.2.3.1, a new command named clo is available in the /usr/ios/utils directory, together with a new tunable called communication_mode:
    • $ oem_setup_env
      # ls -l /usr/ios/utils/clo
      lrwxrwxrwx    1 root     system           20 Dec  2 10:20 /usr/ios/utils/clo -> /usr/lib/cluster/clo
      # exit
      $ clo -L communication_mode
      NAME                      DEF    MIN    MAX    UNIT           SCOPE
           ENTITY_NAME(UUID)                                                CUR
      --------------------------------------------------------------------------------
      communication_mode        m                                   c
           vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648)                m
      --------------------------------------------------------------------------------
      
    • If all nodes in the cluster are not at the same level (and your update is not finished), you won't be able to use this tunable:
    • # clo -o communication_mode
      clo: 1485-506 The current cluster level does not support tunable communication_mode.
      
    • After updating all nodes to 2.2.3.1 (all nodes will be ON_LEVEL), the communication_mode will automatically be configured to unicast :
    • $  clo -o communication_mode
      vio-cluster(791d19c8-5796-11e3-ac2c-5cf3fcea5648).communication_mode = u
      

    To sum up: in my opinion Shared Storage Pools are now production ready. I have never seen them in production nor in a development environment, but Shared Storage Pools really deserve to be used. Things are drastically simplified, and even more so with this new version. Please try the new Shared Storage Pools and give me your feedback in the comments. Once again, I hope it helps.

    IBM Oct 8 2013 Announcement : A technical point of view after Enterprise 2013

    PowerVM and Power Systems lovers like me have probably heard a lot of things since Oct 8 2013. Unfortunately, with the big mass of information, some of you may, like me, be lost with all these new things coming. I personally feel the need to clarify things from a technical point of view, and to take a deeper look at these announcements. Finding technical details in the announcement was not easy, but Enterprise 2013 gave us a lot of information about the shape of things to come. Is this a new era for Power Systems? Are our jobs going to change with the rise of the cloud? Will PowerVM be replaced by KVM in a few years? Will AIX be replaced by Linux on Power? I can easily give you an answer: NO! Have a look below and you will be as excited as I am; Power Systems are awesome, and they are going to be even better with all these new products :-).

    PowerVC

    PowerVC stands for Power Virtualization Center. PowerVC lets you manage your virtualized infrastructure by capturing, deploying, creating and moving virtual machines. It is based on OpenStack and has to be installed on a Linux machine (Red Hat Enterprise Linux 6, running on x86 or Power). PowerVC runs on top of a Power Systems infrastructure (Power6 or Power7 hardware, Hardware Management Console or IVM). It comes in two editions: standard (allowing Power6 and Power7 management with an HMC) and express (allowing only Power7 management with an IVM). At the time of writing this post, PowerVC manages storage only for the IBM V-series and IBM SVC, and Brocade SAN switches are the only ones supported. In a few words, here is what you have to remember:

    • PowerVC runs on top of Hardware and HMC/IVM, like VMcontrol.
    • PowerVC allows you to manage and create (capture, deploy, …) virtual machines.
    • PowerVC allows you to move and automatically place virtual machines (resource pooling, dynamic virtual machines placement).
    • PowerVC is based on OpenStack API. IBM modifications and enhancements to OpenStack are committed to the community.
    • PowerVC only runs on RHEL 6.4 x86 or Power, RHEL support is not included.
    • PowerVC express manages Power7, Power7+ hardware.
    • PowerVC standard manages Power6, Power7 and Power7+ hardware.
    • Storage Systems SVC-family is mandatory (SVC/V7000/V3700/V3500)
    • PowerVC installs neither the Virtual I/O Servers nor the IVM; the virtualization infrastructure has to be installed beforehand and is a mandatory prerequisite.
    • PowerVC express edition needs at least Virtual I/O Server 2.2.1.5; storage can be pre-zoned (I don't know if it can be pre-masked), with a limit of five managed hosts and a maximum of 100 managed LPARs.
    • PowerVC standard edition needs HMC 7.7.8 and Virtual I/O Server 2.2.3.0; storage can't be pre-zoned and the only supported SAN switches are Brocade. It is limited to ten managed hosts with a total of 400 LPARs.
    • PowerVC1

      In my opinion PowerVC will replace VMcontrol in the near future. It's a shame for me because I spent so much time on VMcontrol, but I think it's a good thing for Power Systems. PowerVC seems easy to deploy, easy to manage, and seems to have a nice look and feel; my only regret: it only runs on Linux :-(. So keep an eye on this product, because it's the future of Power Systems management. You can see it as a VMware vCenter for Power Systems ;-)

      PowerVC2

    PowerVP

    PowerVP was first known as Sleuth and started as an internal IBM tool. It's a performance analysis tool that looks at the whole machine down to the lpar level. No joke, you can compare it to the lssrad command; it seems to be a graphical version of it. Nice visuals tell you how the hardware resources are assigned and consumed by the lpars. Three views are available:

    • The System Topology view shows the hardware topology, how busy the chips are, and how busy the traffic between the chips is:
    • PowerVP2
      PowerVP1

    • The Node View shows you how the cores of each chip are consumed and gives you information on the memory controllers, busses, and traffic from/to remote I/O:
    • PowerVP3

    • The partition view seems more classical and gives you information on CPU, memory, disk, Ethernet, cache and memory affinity. You can drill down on each of these statistics :-) :
    • PowerVP4

    Some agents are needed by PowerVP. For partition drill-down an agent seems to be mandatory (oh no, not again); for whole-system monitoring a kind of "super-agent" is also needed (to be installed on one partition?).

    PowerVPcollectors

    I don't know why, but the PowerVP graphical interface is a Java/Swing application. In 2013 everybody wants a web interface, so I really do not understand this choice. All data can be monitored and recorded (for later analysis and comparison) on the fly. The product is included in PowerVM Enterprise Edition and needs at least firmware 770, and 780 for high end machines.

    PowerVM Virtual I/O Server 2.2.3.0

    Shared Storage Pool 4 (SSP4)

    As a big fan of Shared Storage Pools, SSPv4 was long, long awaited. It enables a few cool things; the one I was waiting for the most is SSP mirroring. You can now present luns from two different SAN arrays, and the mirroring is performed by the Virtual I/O Server, with nothing to do in the Virtual I/O client :-). This new feature comes with a new command: failgrp. By default, when the SSP is created, the failure group is named Default. You can also check which pv belongs to which failure group with the new pv command; this command lets you check the SAN array id by looking at the pv's udid, and also whether pvs are capable (for example, pvs coming from iscsi are not SSP capable). Here are a few command examples (sorry, without the output; these are deduced from recent Nigel G. tests :-)).

    # failgrp -list
    # pv -list -capable
    # failgrp -create -fg SANARRAY2: hdisk12 hdisk13 hdisk14
    # failgrp -modify -fg Default -attr fg_name=SANARRAY1
    # pv -add -fg SANARRAY1: hdisk15 hdisk16 SANARRAY2: hdisk17 hdisk18
    # pv -list
    

    SSP4 also simplifies SSP management with new commands. The lu command allows you to create a "lu" in the SSP and to map it to a vhost in one command.

    • lu creation and mapping in one command :
    # lu -create -lu lpar1rootvg -size 32G -vadapter vhost3
    
    • lu creation and mapping in two commands :
    # lu -create -lu lpar1rootvg
    # lu -map -lu lpar1rootvg -vadapter vhost3
    

    The last thing to say about SSPv4: you can now remove a lun from the storage pool! So with all these new features, SSP can finally be used on production servers.

    Throwing away Control Channel from Shared Ethernet Adapter

    To simplify Virtual I/O Server management and SEA creation, it is now possible to create an SEA failover configuration without creating and specifying any control channel adapter. Both Virtual I/O Servers can now detect that an SEA failover pair is being created and use their default virtual adapter for the control channel. You can find more details on this subject on Scott Vetter's blog here. A funny thing to notice is that an APAR was created this year on this subject; this APAR leaks the name of Project K2 (the IBM internal name for PowerVM simplification): IV37193. OK, to sum up, here are the technical things to remember about this Shared Ethernet Adapter simplification:

    • The creation of the SEA requires HMC V7.7.8, Virtual I/O Server 2.2.3.0, and firmware 780 (it seems to be supported only on Power7 and Power7+).
    • The end user can’t see it but the control channel function is using the SEA‘s default adapter.
    • The end user can’t see it but the control channel function is using a special VLAN id : 4095.
    • Standard SEA with classic control channel adapter can still be created.
    • My understanding of this feature is that it has a limitation : you can create only one Shared Ethernet Adapter per virtual switch. It seems that the HMC checks that there is only one adapter with priority 1 and one adapter with priority 2 per virtual switch.
    • The command to create the SEA is still the same; if you omit the ctl_chan attribute, the SEA will automatically use the management VLAN 4095 for the control channel (see the hedged sketch after this list).
    • A special discovery protocol is implemented on the Virtual I/O Server to automatically enable the “ghost” control channel.
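
    Here is a minimal sketch of what the simplified SEA creation could look like on each Virtual I/O Server; the adapter names (ent0 physical, ent4 virtual) and the PVID are illustrative assumptions, not taken from a real build :

    # mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto   # no ctl_chan attribute specified

    As no ctl_chan attribute is given, both Virtual I/O Servers should negotiate the control channel over VLAN 4095 by themselves.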

    … a few other things

    The VIOS advisor has been modified : it can now monitor Shared Storage Pools and NPIV adapters, and the part command has been updated to do so. There are also some additional statistics and listings for the SEA.
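
    For example, a quick run of the advisor over a 30 minute interval (a minimal sketch; the command should generate a tar file containing the advisor report that you can download and open in a browser) :

    # part -i 30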

    Virtual network adapters are updated with a ping feature to check if the adapter is up or down : a virtual adapter can now be considered down and can detect a physical link loss/failure. Is this feature the end of the netmon.cf file in PowerHA ?
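
    My understanding is that this link-status reporting is exposed on the AIX client through the poll_uplink attribute of the virtual Ethernet adapter; here is a hedged sketch, assuming ent0 is the client virtual adapter and that your AIX level supports the attribute :

    # chdev -l ent0 -a poll_uplink=yes -P   # ent0 is an example adapter, the change is applied at the next reboot
    # entstat -d ent0 | grep -i bridge      # after the reboot, the bridge/link status should be visible here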

    Conclusion, OpenPower, Power8, KVM

    I can’t finish this post without writing a few lines about the huge global things to come. A new consortium called OpenPower has been created to work on Power8. It means that some third-party manufacturers will be able to build their own implementations of Power8 processors. Talking about Power8, it is on the way and seems to be a real performance monster; the first entry-class systems will probably be available in Q3/Q4 2014. But Power8 is not going to be released alone : KVM is going to be ported to Power8 hardware. Keep in mind that this will not provide nested virtualization (the purpose is not to run KVM on AIX, or to run AIX on KVM). Power8 hardware will be able to run a special firmware letting you run a Linux KVM hypervisor and build and run Linux PPC virtual machines. From what I know at the time of writing this post, users will be able to switch between this firmware and the classic one running PowerVM. It is pretty exciting !

    I hope this post clarifies the current situation about all these awesome new announcements.

    Adventures in IBM Systems Director in System P environment. Part 6 : VMcontrol and Shared Storage Pool Linked Clones

    As many of you already know, Virtual I/O Server Shared Storage Pools come with one very cool feature : snapshots ! If you have read Part 5 of these adventures, you know how to use VMcontrol to deploy a new Virtual Appliance. Part 5 tells you how to deploy a Virtual Appliance through NIM using a mksysb image or an lpp_source. Using a mksysb or an lpp_source can take time depending on the lpar configuration (entitlement capacity, virtual processors ..) or on the NIM network speed (for instance a NIM server with a 100 Mbits network adapter). In my case an rte installation takes approximately twenty to thirty minutes. By using the Shared Storage Pool feature, VMcontrol can create a snapshot of an existing Workload and use it to create a new one. This is called a Linked Clone (because the new Virtual Appliance will obviously be linked to its source Workload by its snapshot). By using a linked clone, a new Virtual Appliance deployment takes twenty seconds, no joke ….

    Here are the four needed steps to use linked clones, each one will be described in details in this post :

    1. Create a linked clones repository. This VMcontrol repository is created on a Virtual I/O Server participating in the Shared Storage Pool.
    2. On an existing Workload, deploy the Activation Engine.
    3. Capture the Workload (the one with the Activation Engine installed) to create a new Virtual Appliance. At this point a snapshot of the Workload is created on the Shared Storage Pool (a hedged sketch of the underlying snapshot command is shown after this list).
    4. Deploy the Virtual Appliance to create a new Workload; the Workload will be booted and reconfigured by the Activation Engine. The Activation Engine will set the new hostname and the IP address.
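
    To give an idea of what happens under the covers, here is a minimal sketch of how such a snapshot could be created and listed by hand on a Virtual I/O Server of the cluster; the cluster, pool and LU names match the examples used later in this post, and the snapshot name is a placeholder :

    # snapshot -clustername vio-cluster -create pyrite_manual_snap -spname vio-ssp -lu IBMsvsp22
    # snapshot -clustername vio-cluster -list -spname vio-ssp -lu IBMsvsp22

    VMcontrol drives the equivalent operations for you during the capture and deploy steps.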

    Repository creation

    • Get the OID of one of the Virtual I/O Servers participating in the Shared Storage Pool, this OID is needed for the creation of the repository :
    • # smcli lssys -oT vios03
      vios03, Server, 0x506a2
      vios3, OperatingSystem, 0x50654
      
    • A repository has to be created on a storage location (this location can be on a NIM server or, in our case for linked clones, on a Shared Storage Pool). A list of available storage locations can be found by using the mkrepos command. In this case the storage OID is 326512 :
    • # mkrepos -C | grep -ip vios03
      [..]
      vios03 (329300)
      repositorystorage
              Min:    1
              Max:    1
              Description:    null
              Options:
              Key,    Storage,        Storage location,       Type,   Available GB,   Total GB,       Description,    OID
              [tst-ssp]       tst-ssp tst-cluster     SAN     6       68              326512
      [..]
      
    • With the storage OID and the Virtual I/O Server OID create the repository and give it a name :
    • # smcli mkrepos -S 326512 -O 0x50654 -n linked-clones-repository
      
    • List the repositories and check the new one is created :
    • # smcli lsrepos
      nim-repostory
      linked-clones-repository
      

    Activation Engine

    The Activation Engine is a script used to customize newly deployed Virtual Appliances. By default it changes the IP address and the hostname to new ones (you have to set the IP and hostname of the new Virtual Appliance when you deploy it). The Activation Engine can be customized, but this post will not talk about it. Here is a link to the documentation : click here

    • The Activation Engine can be found on the Director itself, in /opt/ibm/director/proddata/activation-engine/vmc.vsae.tar. Copy it to the Workload you want to capture :
    • # scp /opt/ibm/director/proddata/activation-engine/vmc.vsae.tar pyrite:/root
      root@pyrite's password:
      vmc.vsae.tar                                                                                                                                                                  100% 7950KB   7.8MB/s   00:01
      
    • Unpack it and run the installation :
    • # tar xvf vmc.vsae.tar
      x activation-engine-2.1-1.13.aix5.3.noarch.rpm, 86482 bytes, 169 tape blocks
      [..]
      x aix-install.sh, 2198 bytes, 5 tape blocks
      
      # export JAVA_HOME=/usr/java5/jre
      # ./aix-install.sh
      Install VSAE and VMC extensions
      JAVA_HOME=/usr/java5/jre
      [..]
      [2013-06-03 11:18:04,871] INFO: Looking for platform initialization commands
      [2013-06-03 11:18:04,905] INFO:  Version: AIX pyrite 1 6 00XXXXXXXX00
      [..]
      [2013-06-03 11:18:15,082] INFO: Created system services for activation.
      
    • Prepare the capture by running the newly installed script AE.sh. Be aware that running this command will shut down your host, so be sure all the customizations you want have been made on this host :
    • # /opt/ibm/ae/AE.sh --reset
      JAVA_HOME=/usr/java5/jre
      [2013-06-03 11:23:43,575] INFO: Looking for platform initialization commands
      [2013-06-03 11:23:43,591] INFO:  Version: AIX pyrite 1 6 00C0CE744C00
      [..]
      [2013-06-03 11:23:52,476] INFO: Cleaning AR and AP directories
      [2013-06-03 11:23:52,492] INFO: Shutting down the system
      
      SHUTDOWN PROGRAM
      Mon Jun  3 11:23:53 CDT 2013
      
      
      Broadcast message from root@pyrite.prodinfo.gca (tty) at 11:23:54 ...
      
      PLEASE LOG OFF NOW ! ! !
      System maintenance in progress.
      All processes will be killed now.
      ! ! ! SYSTEM BEING BROUGHT DOWN NOW ! ! !
      

    Capture

    We’re now ready to capture the host. You’ll need the Server’s OID and the repository’s OID :

    1. The repository OID is 0x6ec9b :
    2. # smcli lsrepos -o | grep linked-clones-repository
      linked-clones-repository, 453787 (0x6ec9b)
      
    3. The OID of the server to capture is 0x6ef4e :
    4. # smcli lssys -oT pyrite
      pyrite, Server, 0x6ef4e
      
    5. Capture the server with the captureva command or through the GUI (be sure you have both a Server and an Operating System object for this one) :
    6. # smcli captureva -r 0x6ec9b -s 0x6ef4e -n pyrite-vmc-va -D "imported from server pyrite"
      Mon Jun 03 19:27:17 CEST 2013  captureva Operation started.
      Get capture customization data
      Call capture function
      DNZLOP411I Capturing virtual server pyrite to virtual appliance pyrite-vmc-va in repository linked-clones-repository.
      DNZLOP912I Disk group to be captured: DG_05.29.2013-13:26:28:062
      DNZLOP900I Requesting SAN volume(s)
      DNZLOP948I New disk group: DG_06.03.2013-19:27:21:609
      DNZLOP413I The virtual appliance is using disk group DG_06.03.2013-19:27:21:609 with the following SAN volumes: [pyrite-vmc-va4].
      DNZLOP414I The virtual server is using disk group DG_05.29.2013-13:26:28:062 with the following SAN volumes: [IBMsvsp22].
      DNZLOP909I Copying disk images
      DNZLOP409I Creating the OVF for the virtual appliance.
      Call capture command executed. Return code= 456,287
      Mon Jun 03 19:27:28 CEST 2013  captureva Operation took 11 seconds.
      
    7. This output tells you two things about the storage : the captured virtual server is using a backing device on the Shared Storage Pool called IBMsvsp22, and a snapshot of this backing device has been created and will be used by the virtual appliance; this snapshot is called pyrite-vmc-va4. On the Shared Storage Pool you can check, by using the snapshot command, that a snapshot of IBMsvsp22 has been created :
    8. # snapshot -clustername vio-cluster -list -spname vio-ssp -lu IBMsvsp22
      Lu Name          Size(mb)    ProvisionType      %Used Unused(mb)  Lu Udid
      IBMsvsp22        9537        THIN                 41% 9537        7d2895ede7cb14dab04b988064616ff2
      Snapshot
      e68b174abd27e3fa0beb4c8d30d76f92IMSnap
      
      Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
      pyrite-vmc-va4           9537           THIN                41% 9537           e68b174abd27e3fa0beb4c8d30d76f92
      
    9. Optionally you can check the creation of the Virtual Appliance on the Director itself :
    10. # smcli lsva -o | grep -i pyrite  
      pyrite-vmc-va, 456287 (0x6f65f)
      # smcli lsva -l -V 0x6f65f
      pyrite-vmc-va
              TrunkId:13
              Notifiable:true
              ClassName:com.ibm.usmi.datamodel.virtual.VirtualAppliance
              RevisionVersion:1.1
              Description:imported from server pyrite
              ChangedDate:2013-06-03T19:27:27+02:00
              TrunkName:pyrite-vmc-va
              DisplayName:pyrite-vmc-va
              CreatedDate:2013-06-03T19:27:26+02:00
              SpecificationId:1
              SpecificationVersion:1.1
              OID:456287
              Guid:25C4F588982E3A8C8249871DDFB15031
              ApplianceId:5c1e6c95-68bc-4697-a9ce-3b5641c4f48f
              ObjectType:VirtualAppliance
              DisplayNameSpecified:true
      

    Deploy

    We’re now ready to deploy the Virtual Appliance. For this one you’ll need the Virtual Appliance OID (we already have it : 0x6f65f), a system or a system pool where the Virtual Appliance will be deployed, and a deploymentplanid. (Please don’t ask why we need a deploymentplanid, I don’t know; if an IBMer is reading this one please tell us why … :-)):

    1. In my case I’m using a system pool with OID 0x57e20 (by deploying a Virtual Appliance in a system pool, it can be made resilient and automatically moved between systems, for instance in case of a hardware failure). Use lssys if you’re deploying on a system, or lssyspool if you’re deploying on a system pool :
    2. # smcli lssyspool 
      Show server system pool list. 1 Server system pool(s) found.
      --------------------------------
      ID:359968 (0x57e20)
      Name:FRMTST-systempool
      Description:Server System Pool
      Type:PowerHMC
      Status:Critical
      State:Active
      Resilience:Capable
      Server system pool properties
      AutoOptimization:0
      FarmType:PowerHMC
      LEMEnsembleId:0009ED4C0DCD4B5CA83CE5F0232989D4
      OperatingState:20
      OptimizationInterval:30
      Platform:3
      
      Storage Pool Case
      Storage Pool:326512 (0x4fb70),  vio-ssp
      Storage Pool owning Subsystem:vio-cluster
      --------------------------------
      
    3. Use the lscustomization command to find the deploymentplanid (the -H option tells VMcontrol that my Workload will be resilient) :
    4. # smcli lscustomization -a deploy_new -V 0x6f65f -g 0x57e20 -H true
      [..]
      deploymentplanid
              Value:  -7980877749837517784_01
              Description:    null
      [..]
      
    5. It’s now time to deploy, this operation takes 30 seconds, no joke :
    6. # smcli deployva -v -g 0x57e20 -V 0x6f65f -m -7980877749837517784_01 -a deploy_new -A poolstorages=326512,product.vs0.com.ibm.ovf.vmcontrol.system.networking.hostname=ruby,product.vs0.com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.5=10.10.10.209,product.vs0.com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.5=255.255.255.0,product.vs0.com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway=10.240.122.254,product.vs0.com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses=134.227.74.196,134.227.2.251,product.vs0.com.ibm.ovf.vmcontrol.system.networking.domainname=prodinfo.gca
      Mon June 03 20:01:52 CEST 2013  deployva Operation started.
      Attempt to get the default customization data for deploy_new.
      Attempt to get the deploy_new customization data.
      Update collection with user entered attributes.
      Attempt to validate the deploy request for 456,287.
      Attempt to deploy new.
      Workload pyrite-vmc-va_52529 was created.
      Virtual server ruby added to workload pyrite-vmc-va_52529.
      Workload pyrite-vmc-va_52529 is stopped.
      DNZIMC094I Deployed Virtual Appliance pyrite-vmc-va to new Server ruby hosted by system .
      Mon May 27 18:31:41 CEST 2013  deployva Operation took 30 seconds.
      
    7. Have a look at the Shared Storage Pool and check that the newly created server is using the snapshot created during the capture, called pyrite-vmc-va4 :
      # lsmap -vadapter vhost1
      SVSA            Physloc                                      Client Partition ID
      --------------- -------------------------------------------- ------------------
      vhost1          U8203.E4A.060CE74-V2-C12                     0x00000004
      
      VTD                   deploy504c27f14
      Status                Available
      LUN                   0x8200000000000000
      Backing device        /var/vio/SSP/vio-cluster/D_E_F_A_U_L_T_061310/VOL1/AEIM.47bab2102f7794906a65be98d9f126bf
      Physloc
      Mirrored              N/A
      
      VTD                   vtscsi1
      Status                Available
      LUN                   0x8100000000000000
      Backing device        IBMsvsp24.8c380f19e79706b992f9a970301f944a
      Physloc
      Mirrored              N/A
      # snapshot -clustername vio-cluster -list -spname vio-ssp -lu IBMsvsp24
      Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
      pyrite-vmc-va4           9537           THIN                41% 9537           e68b174abd27e3fa0beb4c8d30d76f92
                      Snapshot
                      157708e20d49cbd00f21767f3aeda35eIMSnap
      


    Hope this can help !