Exploit the full potential of PowerVC by using Shared Storage Pools & Linked Clones | PowerVC secrets about Linked Clones (pooladm,mksnap,mkclone,vioservice)

My journey into PowerVC continues :-). The blog has not been updated for two months, but I've been busy these days, got sick, and so on; I have another post in the pipe, but that one has to be approved by IBM before publishing. Since the latest version (1.2.1.2 at the time of writing), PowerVC is capable of managing Shared Storage Pools (SSP). It's a huge announcement, because a lot of customers do not have a Storage Volume Controller and supported fibre channel switches. By using PowerVC in conjunction with an SSP you will reveal the true and full potential of the product. There are two major enhancements brought by SSPs. The first is the deployment time of new virtual machines: with an SSP you'll move from minutes to seconds. The second huge enhancement: with an SSP you'll automatically be using, without even knowing it, a feature called "Linked Clones". If you have been following my blog since the very beginning, you're probably aware that Linked Clones have been usable and available since SSPs were managed by the IBM Systems Director VMControl module. You can still refer to my blog posts about it, even if ISD VMControl is now a little bit outdated compared to PowerVC: here. Using PowerVC with Shared Storage Pools is easy, but how does it work behind the scenes? After analysing the deployment process I've found some cool features: PowerVC uses secret, undocumented commands, such as pooladm, vioservice, and mkdev with secret arguments:

Discovering Shared Storage Pool on your PowerVC environment

The first step before beginning is to discover the Shared Storage Pool in PowerVC. I'm taking the time to explain this because it's so easy that people (like me) may think there is much to do about it, but no: PowerVC is simple. You have nothing to do. I'm not going to explain here how to create a Shared Storage Pool; please refer to my previous posts about this: here and here. After the Shared Storage Pool is created, it will be automatically added into PowerVC; nothing to do. Keep in mind that you will need the latest version of the Hardware Management Console (v8r8.1.0). If you are having trouble discovering the Shared Storage Pool, check that the Virtual I/O Servers' RMC connections are OK. In general, if you can query and perform any action on the Shared Storage Pool from the HMC, there will be no problem on the PowerVC side.
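
If the Shared Storage Pool does not show up, here is the kind of quick check I run first (a hedged sketch: managed_system is a placeholder for your managed system name, powervc_cluster the cluster name used throughout this post). Verify the RMC state of the Virtual I/O Servers from the HMC, then the cluster state from one of the Virtual I/O Servers:

  • hscroot@hmc:~> lssyscfg -r lpar -m managed_system -F name,rmc_state,rmc_ipaddr
    $ cluster -status -clustername powervc_cluster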

  • You don’t have to reload PowerVC after creating the Shared Storage Pool, just check you can see it from the storage tab :
  • pvc_ssp1

  • You will get more details by clicking on the Shared Storage Pool ….
  • pvc_ssp2

  • such as the captured images on the Shared Storage Pool ….
  • pvc_ssp3

  • volumes created on it …
  • pvc_ssp4

What is a Linked Clone?

Think before you start. You have to understand what a Linked Clone is before reading the rest of this post. Linked Clones are not well described in the documentation and Redbooks. Linked Clones are based on Shared Storage Pool snapshots: no Shared Storage Pool, no Linked Clones. Here is what goes on behind the scenes when you deploy a Linked Clone:

  1. The captured rootvg underlying disk is a Shared Storage Pool Logical Unit.
  2. When the image is captured the rootvg Logical Unit is copied and is known as a “Logical (Client Image) Unit”.
  3. When deploying a new machine a snapshot is created from the Logical (Client Image) Unit.
  4. A “special Logical Unit” is created from the snapshot. This Logical Unit seems to be a pointer to the snapshot. We call it a clone.
  5. The machine is booted; the activation engine runs and reconfigures the network.
  6. When a block is modified on the new machine, it is first copied to a new block on the Shared Storage Pool and modified there (copy-on-write).
  7. That said, if no blocks are modified, all the machines created from this capture share the same blocks on the Shared Storage Pool.
  8. Only modified blocks are not shared between Linked Clones: the more you change on your rootvg, the more space you use on the Shared Storage Pool.
  9. That's why these machines are called Linked Clones: they are all linked to the same source Logical Unit.
  10. By using Linked Clones you save TIME (just a snapshot creation on the storage side) and SPACE (the rootvg blocks are shared by all the deployed machines); see the quick sketch below.
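
To get a feel for the space savings, here is a small back-of-envelope sketch in ksh; the numbers are made up (a 55296 MB image, ten clones, each modifying about 2 GB of unique blocks):

  • image_mb=55296; clones=10; delta_mb=2048
    echo "full copies:   $((image_mb * clones)) MB"             # 552960 MB: ten full rootvg copies
    echo "linked clones: $((image_mb + clones * delta_mb)) MB"  # 75776 MB: one image plus ten deltas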

An image is sometimes better than a long text, so here is a diagram explaining Linked Clones:

LinkedClones

You have to capture an SSP base VM to deploy on the SSP

Be aware of one thing: you can't deploy a virtual machine on the SSP if you don't have a captured image on the SSP, and you can't use your Storwize images to deploy on the SSP. You first have to create, on your own, a machine which has its rootvg running on the SSP:

  • Create an image based on an SSP virtual machine :
  • pvc_ssp_capture1

  • Shared Storage Pool Logical Units are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
  • Shared Storage Pool Logical (Client Image) Units are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • The Logical Unit of the captured virtual machine is copied with the dd command from the VOL1 directory (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1) to the IM directory (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM), so from volumes to images.
  • If you do this yourself with the dd command, you can see that the captured image is not shown in the output of the snapshot command (with Linked Clones the snapshot output is split into two categories: the actual "real" Logical Units, and the Logical (Client Image) Units which are the PowerVC images) …
  • A hidden API, driven by an undocumented command called vioservice, adds your newly created image to the Shared Storage Pool SolidDB.
  • After the “registration” the Client Image is visible with the snapshot command.

Deployment

After the image is captured and stored in the Shared Storage Pool images directory, you can deploy virtual machines based on this image. Keep in mind that blocks are shared by all linked clones: you'll be surprised to see that deploying machines does not consume the free space of the shared storage pool. But be aware that you can't deploy any machine if there is no "blank" space left in the PowerVC space bar (check the image below):

deploy

Step by step deployment by example

  • A snapshot of the image is created through the pooladm command. If you check the output of the snapshot command after this step, you'll see a new snapshot derived from the Logical (Client Image) Unit.
  • This snapshot is cloned (my understanding is that a clone is a normal logical unit sharing blocks with an image). After the snapshot is cloned, a new volume is created in the shared storage pool volume directory, but at this step it is not visible with the lu command, because creating a clone does not create metadata on the shared storage pool.
  • A dummy logical unit is created. Then the clone is moved over the dummy logical unit to replace it.
  • The clone logical unit is mapped to the client.

dummy

You can do it yourself without PowerVC (not supported)

Just for my own understanding of what PowerVC is doing behind the scenes, I decided to try to do all the steps on my own. These steps work, but they are not supported at all by IBM.

  • Before reading further, note that $ prompts are padmin commands and # prompts are root commands. You'll need the cluster id and the pool id to build some of the xml files:
  • $ cluster -list -field CLUSTER_ID
    CLUSTER_ID:      c50a291c18ab11e489f46cae8b692f30
    $ lssp -clustername powervc_cluster -field POOL_ID
    POOL_ID:         000000000AFFF80C0000000053DA327C
    
  • So the cluster id will be c50a291c18ab11e489f46cae8b692f30 and the pool id will be c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C (the cluster id followed by the pool id). In the xml files below these ids are prefixed by two characters ("22" for the cluster, "24" for the pool); I don't know the purpose of these prefixes, but they work in all cases …
  • Image files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • Logical units files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
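  • If you want to script it, here is a hedged root-shell helper that rebuilds the two udid strings used in the xml payloads below (build_udids.sh is my own name; the "22"/"24" prefixes are simply the values observed on my system):
  • # cat build_udids.sh
    #!/usr/bin/ksh
    # rebuild the udids expected by vioservice from the cluster and pool ids
    cl=$(su - padmin -c "ioscli cluster -list -field CLUSTER_ID" | awk '/CLUSTER_ID/ {print $2}')
    sp=$(su - padmin -c "ioscli lssp -clustername powervc_cluster -field POOL_ID" | awk '/POOL_ID/ {print $2}')
    echo "cluster udid: 22${cl}"
    echo "pool udid:    24${cl}${sp}"
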
  • Create the "envelope" of the Logical (Client Image) Unit by creating an xml file (the udids are built from the cluster udid and the pool udid) used as the standard input of the vioservice command:
  • # cat create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="8">
                            <Image label="chmod666-homemade-image"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    # /usr/ios/sbin/vioservice lib/libvio/lu < create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="29c50a291c18ab11e489f46cae8b692f30d87113d5be9004791d28d44208150874" capacity="55296" physicalUsage="0" unusedCommitment="0" type="8" derived="" thick="0" tmoveState="0"><Image label="chmod666-homemade-image" relPath=""/></LU></Tier></Pool></Cluster></Response></VIO>
    # ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     staff    57982058496 Sep  8 19:00 chmod666-homemade-image.d87113d5be9004791d28d44208150874
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    
  • You can now see with the snapshot command that the new Logical (Client Image) Unit is there:
  • $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c055296          THIN               100% 0              ac671df86edaf07e96e399e3a2dbd425
    chmod666-homemade-image  55296          THIN                 0% 55299          d87113d5be9004791d28d44208150874
    volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be38155296          THIN               100% 55299          e525b8eb474f54e1d34d9d02cb0b49b4
                    Snapshot
                    2631012f1a558e51d1af7608f3779a1bIMSnap
                    09a6c90817d24784ece38f71051e419aIMSnap
                    e400827d363bb86db7984b1a7de08495IMSnap
                    5fcef388618c9a512c0c5848177bc134IMSnap
    
  • Copy the source image (the stopped virtual machine with the activation engine enabled) onto this newly created image; it will be the reference for all the virtual machines created from this image. Use the dd command to do it (and don't forget the block size). You can check while the dd is running that the used percentage (%Used) is increasing (a small watch loop is shown after the output below):
  • # dd if=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-aaaa95f8317c666549c4809264281db536dd.a2b7ed754030ca97668b30ab6cff5c45 of=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 bs=1M
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN                23% 0              d87113d5be9004791d28d44208150874
    [..]
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN                40% 0              d87113d5be9004791d28d44208150874
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN               100% 0              d87113d5be9004791d28d44208150874
    
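  • If you want to monitor the copy without retyping the command, a simple hedged watch loop from a root shell does the job:
  • # while true; do su - padmin -c "ioscli snapshot -clustername powervc_cluster -list -spname powervc_sp" | grep chmod666-homemade-image; sleep 60; done
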
  • You now have a new reference image. It will be used as the reference for all your deployed linked clone virtual machines. A linked clone is created from a snapshot, so you first have to create a snapshot of the newly created image using the pooladm command (keep in mind that you can't use the snapshot command to work on a Logical (Client Image) Unit). The snapshot is identified by the logical unit name suffixed with "@" and the snapshot name. Use mksnap to create the snap, and lssnap to show it. The snapshot will then be visible in the output of the snapshot command:
  • # pooladm file mksnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap
    # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN               100% 55299  d87113d5be9004791d28d44208150874
                    Snapshot
                    chmod666IMSnap
    [..]
    
  • You can now create the clone from the snap (snaps are identified by the image name followed by a '@' character and the snapshot name). Name the clone any way you want, because it will be renamed and moved to replace a normal logical unit; I'm using here the PowerVC naming convention (IMtmp). The creation of the clone creates a new file in the VOL1 directory without any shared storage pool metadata, so this clone will not be visible in the output of the lu command:
  • $ pooladm file mkclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    $ ls -l  /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/*chmod666-IM*
    -rwx------    1 root     system   57982058496 Sep  9 16:27 /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    
  • By using vioservice, create a logical unit on the shared storage pool. This will create a new logical unit with a newly generated udid. If you check the volume directory, you can notice that the clone does not have the metadata file needed by the shared storage pool (this file is prefixed by a dot (.)). After creating this logical unit, replace it with your clone with a simple move:
  • $ cat create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="1">
                            <Disk label="volume-boot-9117MMD_658B2AD-chmod666"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" capacity="55296" physicalUsage="0" unusedCommitment="0" type="1" derived="" thick="0" tmoveState="0"><Disk label="volume-boot-9117MMD_658B2AD-chmod666"/></LU></Tier></Pool></Cluster></Response></VIO>
    $ mv /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
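  • After the move, the clone has taken the identity of the registered logical unit, so it should now be visible like any other volume; a quick hedged check:
  • $ lu -list | grep volume-boot-9117MMD_658B2AD-chmod666
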
  • You are ready to use your linked clone, you have a source image, a snap of this one, and a clone of this snap :
  • # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    # pooladm file lsclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    Snapshot             Clone name
    ----------------------------------
    chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • Then, using vioservice or the mkdev command, map the clone to your virtual scsi adapter (identified by its physloc name). Do this on both Virtual I/O Servers:
  • $ cat map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="5">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Map label="" udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" drcname="U9117.MMD.658B2AD-V2-C99"/>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"/></Response></VIO>
    

    or

    # mkdev -t ngdisk -s vtdev -c virtual_target -aaix_tdev=volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777 -audid_info=4d360832b29be950824d3e5bf57d77 -apath_name=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1 -p vhost5 -acluster_id=c50a291c18ab11e489f46cae8b692f30
    
  • Boot the machine ... this one is a linked clone created by yourself, without PowerVC.
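  • One last verification you can do (before or after booting): check the mapping from each Virtual I/O Server with lsmap (vhost5 as in the mkdev example above):
  • $ lsmap -vadapter vhost5
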

About the activation engine?

Your captured image has the activation engine enabled. To reconfigure the network and the hostname, PowerVC copies an iso from the PowerVC server to the Virtual I/O Server. This iso contains an ovf file needed by the activation engine to customize your virtual machine. To customize the linked clone virtual machine I created on my own, I decided to re-use an old iso file created by PowerVC for another deployment:

  • Mount the image located in /var/vio/VMLibrary, and modify the xml ovf file to fit your needs :
  • # ls -l /var/vio/VMLibrary
    total 840
    drwxr-xr-x    2 root     system          256 Jul 31 20:17 lost+found
    -r--r-----    1 root     system       428032 Sep  9 18:11 vopt_c07e6e0bab6048dfb23586aa90e514e6
    # loopmount -i vopt_c07e6e0bab6048dfb23586aa90e514e6 -o "-V cdrfs -o ro" -m /mnt
    
  • Copy the content of the cd to a directory :
  • # mkdir /tmp/mycd
    # cp -r /mnt/* /tmp/mycd
    
  • Edit the ovf file to fit your needs (in my case, for instance, I'm changing the hostname of the machine and its ip address):
  • # cat /tmp/mycd/ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.218.238.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="homemadelinkedclone"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value=""/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.218.238.140"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    </Environment>
    
  • Recreate the cd using the mkdvd command and put it in the /var/vio/VMLibrary directory :
  • # mkdvd -r /tmp/mycd -S
    Initializing mkdvd log: /var/adm/ras/mkcd.log...
    Verifying command parameters...
    Creating temporary file system: /mkcd/cd_images...
    Creating Rock Ridge format image: /mkcd/cd_images/cd_image_19267708
    Running mkisofs ...
    
    mkrr_fs was successful.
    # mv /mkcd/cd_images/cd_image_19267708 /var/vio/VMLibrary
    $ lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
        1017     1015 rootvg                   279552           171776
    
    Name                                                  File Size Optical         Access
    cd_image_19267708                                             1 None            rw
    vopt_c07e6e0bab6048dfb23586aa90e514e6                         1 vtopt1          ro
    
  • Load the cdrom and map it to the linked clone :
  • $ mkvdev -fbo -vadapter vhost11
    $ loadopt -vtd vtopt0 -disk cd_image_19267708
    
  • When the linked clone virtual machine boots, the cd is mounted and the activation engine takes the ovf file as a parameter and reconfigures the network. For instance, you can check that the hostname has changed:
  • # hostname
    homemadelinkedclone.localdomain
    

A view on the layout?

I asked myself a question about Linked Clones: how can we check that Shared Storage Pool blocks (or PPs?) are shared between the captured machine (the captured LU) and a linked clone? To answer this question I had to play with the pooladm command (which is unsupported for customer use) to get the logical unit layout of the captured virtual machine and of the deployed linked clone, and then compare them. Please note that this is my understanding of linked clones; it is not validated by any IBM support. Do this at your own risk, and feel free to correct my interpretation of what I'm seeing here :-) :

  • Get the layout of the captured VM by getting the layout of the logical unit (the captured image is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4) :
  • root@vios:/home/padmin# ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4 | tee /tmp/captured_vm.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • Get the layout of the linked clone (the linked clone is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba)
  • # ls /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba | tee /tmp/linked_clone.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • At this step you can compare the two files and already see some useful information, but do not misread this output: you first have to sort it before drawing conclusions. Still, you can be sure of one thing: some PPs have been modified on the linked clone and cannot be shared anymore, while others are shared between the linked clone and the captured image:
  • sdiff_layout1_modifed_1

  • You can get a better view of shared and non-shared PPs by sorting the output of these files; here are the commands I used to do it:
  • # grep PP /tmp/linked_clone.layout | tr -s " " | sort -k1 > /tmp/pp_linked_clone.layout
    # grep PP /tmp/captured_vm.layout | tr -s " " | sort -k1 > /tmp/pp_captured_vm.layout
    
  • By sdiffing these two files I can now check which PPs are shared and which are not :
  • sdiff_layout2_modifed_1
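  • For reference, this is the kind of comparison behind the screenshot above; with sdiff -s (suppress common lines), a quick wc -l gives the number of non-shared PP entries:
  • # sdiff -s /tmp/pp_captured_vm.layout /tmp/pp_linked_clone.layout | wc -l
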

  • The pooladm command can also give you stats about a linked clone. My understanding of the owned block count tells me that 78144 SSP blocks (not PPs, so blocks of 4k) are unique to this linked clone and not shared with the captured image:
  • vios1#pooladm file stat /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Path: /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Size            57982058496
    Number Blocks   14156655
    Data Blocks     14155776
    Pool Block Size 4096
    
    Tier: SYSTEM
    Owned Blocks    78144
    Max Blocks      14156655
    Block Debt      14078511
    
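  • A quick arithmetic check of what this means, using the 4096 byte pool block size reported above:
  • # echo "$((78144 * 4096 / 1024 / 1024)) MB unique to this clone"
    305 MB unique to this clone
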

Mixed NPIV & SSP deployment

For machines with an I/O intensive workload, it can be useful to put your data luns on an NPIV adapter. I'm currently working on a project involving PowerVC, and the question was asked: why not mix an SSP lun for the rootvg with NPIV-based luns for the data volume groups? One more time, it's very simple with PowerVC: just attach a volume, this time choosing your Storage Volume Controller provider ... easy:

mixed1_masked

This will create the NPIV adapters and the new zoning and masking on the fibre channel switches. One more time: easy ...

Debugging ?

I'll not lie: I had a lot of problems with Shared Storage Pools and PowerVC, but these problems were related to my configuration moving a lot during the tests. Always remember that you learn from these errors; in my case they helped me a lot to debug PowerVC:

  • From the Virtual I/O Server side, check that you have no core file in the /home/ios/logs directory. A core file in this directory indicates that one of the commands run by PowerVC just "cored":
  • root@vios1:/home/ios/logs# ls core*
    core.9371682.18085943
    
  • From the Virtual I/O Server side, check the /home/ios/logs/viosvc.log file. In it you can see all the xml files and all the outputs used by the vioservice command. Most PowerVC actions are performed through the vioservice command ...
  • root@vios1:/home/ios/logs# ls -l viosvc.log
    -rw-r--r--    1 root     system     10240000 Sep 11 00:28 viosvc.log
    
  • Step by step, check that all PowerVC actions are ok. For instance, verify with the lsrep command that the iso has been copied from PowerVC to the Virtual I/O Server library, and check that there is space left on the Shared Storage Pool; see the examples just below.
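  • For example, both checks from the Virtual I/O Server (lsrep for the media library, lssp for the pool free space):
  • $ lsrep
    $ lssp -clustername powervc_cluster
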
  • Sometimes the secret vioservice api is stuck and not responding. In some cases it can be useful to rebuild the soliddb ... I'm using this script to do it (run it as root):
  • # cat rebuilddb.sh
    #!/usr/bin/ksh
    set -x
    stopsrc -s vio_daemon
    sleep 30
    rm -rf /var/vio/CM
    startsrc -s vio_daemon
    
  • EDIT: I got additional information from IBM regarding the method to rebuild the SolidDB: using my script alone won't bring the SolidDB back up properly and could leave you in a bad state. Just add this at the end of the script:
  • pid=$(lssrc -s vio_daemon | awk 'NR==2 {print $2}')
    kill -1 $pid  
    
  • On the PowerVC side, when you have a problem it is always good to increase the verbosity of the logs (located in /var/log; in this case nova). Restart PowerVC after setting the verbosity level:
  • # openstack-config --set /etc/nova/nova-9117MMD_658B2AD.conf DEFAULT default_log_levels powervc_nova=DEBUG,powervc_k2=DEBUG,nova=DEBUG
    
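  • Then restart the PowerVC services so that the new log level is taken into account (hedged: powervc-services is the control script on my install; adapt the path to your version):
  • # /opt/ibm/powervc/bin/powervc-services restart
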

Conclusion

It took me more than two months to write this post. Why? Just because the PowerVC design is not documented. It works like a charm, but nobody will explain to you HOW. I hope this post will help you understand how PowerVC works. I'm a huge fan of PowerVC and SSP; try it by yourself and you'll see that it is a pleasure to use. It's simple, efficient, and powerful. Can anybody give me access to a PowerKVM host so that I can write about and prove that PowerVC is also simple and efficient with PowerKVM ...?

