Adventures in IBM Systems Director in System P environment. Part 6 : VMcontrol and Shared Storage Pool Linked Clones

As many of you already know, Virtual I/O Server Shared Storage Pools come with one very cool feature : snapshots ! If you have read Part 5 of these adventures, you know how to use VMcontrol to deploy a new Virtual Appliance. Part 5 shows how to deploy a Virtual Appliance through NIM using a mksysb image or an lpp_source. Using a mksysb or an lpp_source can take time depending on the lpar configuration (entitled capacity, virtual processors ..) or on the NIM network speed (for instance a NIM server with a 100 Mbits network adapter). In my case an rte installation takes approximately twenty to thirty minutes. With the Shared Storage Pool feature, VMcontrol can create a snapshot of an existing Workload and use it to create a new one. This is called a Linked Clone (because the new Virtual Appliance will obviously be linked to its source Workload by its snapshot). With a linked clone, a new Virtual Appliance deployment takes twenty seconds, no joke ….

Here are the four steps needed to use linked clones; each one will be described in detail in this post :

  1. Create a linked clones repository. This VMcontrol repository is created on a Virtual I/O Server participating in the Shared Storage Pool.
  2. On an existing Workload, deploy the Activation Engine.
  3. Capture the Workload (the one with the Activation Engine installed), to create a new Virtual Appliance. At this point a snapshot of the Workload is created on the Shared Storage Pool.
  4. Deploy the Virtual Appliance to create a new Workload; the Workload will be booted and reconfigured by the Activation Engine, which sets the new hostname and IP address.
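
The four steps above can be sketched as VMcontrol CLI calls. This is a hedged outline using the example names and OIDs from this post (step 2 happens on the LPAR itself, not through smcli); DRY_RUN=1 (the default here) only prints the commands, since smcli exists only on the Systems Director server :

```shell
#!/bin/sh
# Dry-run sketch of the linked-clone flow; OIDs are the examples from this post.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

linked_clone_flow() {
  # 1. Create the repository on a VIOS participating in the Shared Storage Pool
  run smcli mkrepos -S 326512 -O 0x50654 -n linked-clones-repository
  # 2. Install and reset the Activation Engine on the source LPAR (done there directly)
  # 3. Capture the (shut down) Workload into a Virtual Appliance
  run smcli captureva -r 0x6ec9b -s 0x6ef4e -n pyrite-vmc-va
  # 4. Deploy the Virtual Appliance; the Activation Engine reconfigures the new Workload
  run smcli deployva -g 0x57e20 -V 0x6f65f -a deploy_new
}
linked_clone_flow
```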

Repository creation

  • Get the OID of one of the Virtual I/O Servers participating in the Shared Storage Pool; this OID is needed for the creation of the repository :
  • # smcli lssys -oT vios03
    vios03, Server, 0x506a2
    vios03, OperatingSystem, 0x50654
  • A repository has to be created on a storage location (this can be on a NIM server or, in our case for linked clones, on a Shared Storage Pool). A list of available storage locations can be found by using the mkrepos command. In this case the storage OID is 326512 :
  • # mkrepos -C | grep -ip vios03
    vios03 (329300)
            Min:    1
            Max:    1
            Description:    null
            Key,    Storage,        Storage location,       Type,   Available GB,   Total GB,       Description,    OID
        [vio-ssp]       vio-ssp vio-cluster     SAN     6       68              326512
  • With the storage OID and the Virtual I/O Server OID, create the repository and give it a name :
  • # smcli mkrepos -S 326512 -O 0x50654 -n linked-clones-repository
  • List the repositories and check that the new one has been created :
  • # smcli lsrepos
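
Picking the storage OID out of the mkrepos -C output by hand gets tedious. Here is a hypothetical helper (not part of VMcontrol) that extracts the OID, which is the last column of the data line shown above :

```shell
# Extract the storage OID (last column) for a given storage pool name from
# `smcli mkrepos -C` style output, whose data line looks like:
#   [vio-ssp]   vio-ssp vio-cluster   SAN   6   68   326512
storage_oid() {
  awk -v p="[$1]" '$1 == p { print $NF; exit }'
}

# Assumed usage on the Director server:
#   STG=$(smcli mkrepos -C | storage_oid vio-ssp)
#   smcli mkrepos -S "$STG" -O 0x50654 -n linked-clones-repository
```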

Activation Engine

The Activation Engine is a script used to customize newly deployed Virtual Appliances. By default it changes the IP address and the hostname (you set the IP and hostname of the new Virtual Appliance when you deploy it). The Activation Engine can be customized, but this post will not cover that. Here is a link to the documentation : click here

  • The Activation Engine can be found on the director itself, in /opt/ibm/director/proddata/activation-engine/vmc.vsae.tar. Copy it to the Workload you want to capture :
  • # scp /opt/ibm/director/proddata/activation-engine/vmc.vsae.tar pyrite:/root
    root@pyrite's password:
    vmc.vsae.tar                                                                                                                                                                  100% 7950KB   7.8MB/s   00:01
  • Unpack it and run the installation :
  • # tar xvf vmc.vsae.tar
    x activation-engine-2.1-1.13.aix5.3.noarch.rpm, 86482 bytes, 169 tape blocks
    x, 2198 bytes, 5 tape blocks
    # export JAVA_HOME=/usr/java5/jre
    # ./
    Install VSAE and VMC extensions
    [2013-06-03 11:18:04,871] INFO: Looking for platform initialization commands
    [2013-06-03 11:18:04,905] INFO:  Version: AIX pyrite 1 6 00XXXXXXXX00
    [2013-06-03 11:18:15,082] INFO: Created system services for activation.
  • Prepare the capture by running the newly installed script. Be aware that running this command will shut down your host, so be sure all the customization you want has been made on this host :
  • # /opt/ibm/ae/ --reset
    [2013-06-03 11:23:43,575] INFO: Looking for platform initialization commands
    [2013-06-03 11:23:43,591] INFO:  Version: AIX pyrite 1 6 00C0CE744C00
    [2013-06-03 11:23:52,476] INFO: Cleaning AR and AP directories
    [2013-06-03 11:23:52,492] INFO: Shutting down the system
    Mon Jun  3 11:23:53 CDT 2013
    Broadcast message from root@pyrite.prodinfo.gca (tty) at 11:23:54 ...
    System maintenance in progress.
    All processes will be killed now.
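
If you capture often, the staging steps above can be scripted from the Director server. This is a hedged sketch : AE_INSTALL and AE_RESET are placeholders for the script names truncated in the output above, and DRY_RUN=1 (the default here) only prints the commands instead of running scp/ssh :

```shell
#!/bin/sh
# Dry-run sketch for staging the Activation Engine on the LPAR to capture.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

stage_ae() {
  host=$1
  run scp /opt/ibm/director/proddata/activation-engine/vmc.vsae.tar "$host:/root"
  run ssh "$host" "cd /root && tar xvf vmc.vsae.tar && JAVA_HOME=/usr/java5/jre ./\$AE_INSTALL"
  # The reset step SHUTS THE LPAR DOWN -- run it only when ready to capture:
  # run ssh "$host" "/opt/ibm/ae/\$AE_RESET --reset"
}
stage_ae pyrite
```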


We’re now ready to capture the host. You’ll need the Server’s OID and the repository’s OID :

  1. The repository OID is 0x6ec9b :
  2. # smcli lsrepos -o | grep linked-clones-repository
    linked-clones-repository, 453787 (0x6ec9b)
  3. The OID of the server to capture is 0x6ef4e :
  4. # smcli lssys -oT pyrite
    pyrite, Server, 0x6ef4e
  5. Capture the server with the captureva command or through the GUI (be sure you have a Server and an Operating System object for this one) :
  6. # smcli captureva -r 0x6ec9b -s 0x6ef4e -n pyrite-vmc-va -D "imported from server pyrite"
    Mon Jun 03 19:27:17 CEST 2013  captureva Operation started.
    Get capture customization data
    Call capture function
    DNZLOP411I Capturing virtual server pyrite to virtual appliance pyrite-vmc-va in repository linked-clones-repository.
    DNZLOP912I Disk group to be captured: DG_05.29.2013-13:26:28:062
    DNZLOP900I Requesting SAN volume(s)
    DNZLOP948I New disk group: DG_06.03.2013-19:27:21:609
    DNZLOP413I The virtual appliance is using disk group DG_06.03.2013-19:27:21:609 with the following SAN volumes: [pyrite-vmc-va4].
    DNZLOP414I The virtual server is using disk group DG_05.29.2013-13:26:28:062 with the following SAN volumes: [IBMsvsp22].
    DNZLOP909I Copying disk images
    DNZLOP409I Creating the OVF for the virtual appliance.
    Call capture command executed. Return code= 456,287
    Mon Jun 03 19:27:28 CEST 2013  captureva Operation took 11 seconds.
  7. This output tells you two things about the storage : the captured virtual server is using a backing device on the Shared Storage Pool called IBMsvsp22, and a snapshot of this backing device, called pyrite-vmc-va4, has been created and will be used by the virtual appliance. On the Shared Storage Pool you can check, by using the snapshot command, that a snapshot of IBMsvsp22 has been created :
  8. # snapshot -clustername vio-cluster -list -spname vio-ssp -lu IBMsvsp22
    Lu Name          Size(mb)    ProvisionType      %Used Unused(mb)  Lu Udid
    IBMsvsp22        9537        THIN                 41% 9537        7d2895ede7cb14dab04b988064616ff2
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    pyrite-vmc-va4           9537           THIN                41% 9537           e68b174abd27e3fa0beb4c8d30d76f92
  9. Optionally you can check the creation of the Virtual Appliance on the Director itself :
  10. # smcli lsva -o | grep -i pyrite  
    pyrite-vmc-va, 456287 (0x6f65f)
    # smcli lsva -l -V 0x6f65f
            Description:imported from server pyrite


We’re now ready to deploy the Virtual Appliance. For this one you’ll need the Virtual Appliance OID (we already have it : 0x6f65f), a system or a system pool where the Virtual Appliance will be deployed, and a deploymentplanid. (Please don’t ask why we need a deploymentplanid, I don’t know; if an IBMer is reading this one, please tell us why … :-)) :

  1. In my case I’m using a system pool with OID 0x57e20 (by deploying a Virtual Appliance in a system pool, it can be made resilient and automatically moved between systems, for instance in case of a hardware failure). Use lssys if you’re deploying on a system, or lssyspool if you’re deploying on a system pool :
  2. # smcli lssyspool 
    Show server system pool list. 1 Server system pool(s) found.
    ID:359968 (0x57e20)
    Description:Server System Pool
    Server system pool properties
    Storage Pool Case
    Storage Pool:326512 (0x4fb70),  vio-ssp
    Storage Pool owning Subsystem:vio-cluster
  3. Use the lscustomization command to find the deploymentplanid (the -H option specifies that my Workload will be resilient) :
  4. # smcli lscustomization -a deploy_new -V 0x6f65f -g 0x57e20 -H true
            Value:  -7980877749837517784_01
            Description:    null
  5. It’s now time to deploy; this operation takes 30 seconds, no joke :
  6. # smcli deployva -v -g 0x57e20 -V 0x6f65f -m -7980877749837517784_01 -a deploy_new -A "poolstorages=326512,,,,,,,"
    Mon June 03 20:01:52 CEST 2013  deployva Operation started.
    Attempt to get the default customization data for deploy_new.
    Attempt to get the deploy_new customization data.
    Update collection with user entered attributes.
    Attempt to validate the deploy request for 456,287.
    Attempt to deploy new.
    Workload pyrite-vmc-va_52529 was created.
    Virtual server ruby added to workload pyrite-vmc-va_52529.
    Workload pyrite-vmc-va_52529 is stopped.
    DNZIMC094I Deployed Virtual Appliance pyrite-vmc-va to new Server ruby hosted by system .
    Mon May 27 18:31:41 CEST 2013  deployva Operation took 30 seconds.
  7. Have a look at the Shared Storage Pool and check that the newly created server is using the snapshot created by the capture, called pyrite-vmc-va4 :
    # lsmap -vadapter vhost1
    SVSA            Physloc                                      Client Partition ID
    --------------- -------------------------------------------- ------------------
    vhost1          U8203.E4A.060CE74-V2-C12                     0x00000004
    VTD                   deploy504c27f14
    Status                Available
    LUN                   0x8200000000000000
    Backing device        /var/vio/SSP/vio-cluster/D_E_F_A_U_L_T_061310/VOL1/AEIM.47bab2102f7794906a65be98d9f126bf
    Mirrored              N/A
    VTD                   vtscsi1
    Status                Available
    LUN                   0x8100000000000000
    Backing device        IBMsvsp24.8c380f19e79706b992f9a970301f944a
    Mirrored              N/A
    # snapshot -clustername vio-cluster -list -spname vio-ssp -lu IBMsvsp24
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    pyrite-vmc-va4           9537           THIN                41% 9537           e68b174abd27e3fa0beb4c8d30d76f92
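
That last verification can also be scripted on the Virtual I/O Server. This hypothetical helper pulls the Shared Storage Pool LU name (e.g. IBMsvsp24) out of the lsmap output, skipping the Activation Engine image that is also mapped, so it can be fed straight to the snapshot command :

```shell
# Print the SSP backing LU name from `lsmap -vadapter vhostN` output on stdin,
# ignoring the /var/vio/SSP/... Activation Engine image device.
ssp_lu() {
  awk '$1 == "Backing" && $3 ~ /^IBMsvsp/ { split($3, a, "."); print a[1]; exit }'
}

# Assumed usage on the Virtual I/O Server:
#   LU=$(lsmap -vadapter vhost1 | ssp_lu)
#   snapshot -clustername vio-cluster -list -spname vio-ssp -lu "$LU"
```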

Hope this can help !
