Adventures in IBM Systems Director in System P environment. Part 5: VMcontrol and Shared Storage Pool

I have been working with IBM Systems Director for almost a year. I remember how frustrated I was when someone from IBM told me that I could not use VMcontrol because of our SAN environment (Cisco switches + EMC 40K storage array). My first reaction was “OK, no problem, I’ll use it over a Shared Storage Pool”; I was even more frustrated when the answer was “Uh, not supported yet”. Fortunately there was a “yet” in that sentence. That was six months ago. With the IBM Systems Director 6.3.2 update and the new Virtual I/O Server version, this functionality is now supported. I have successfully implemented VMcontrol over a Shared Storage Pool and I don’t have enough words to tell you how incredible it is. AWESOME. Here are my tips and tricks to set up VMcontrol over a Shared Storage Pool (v3). Enjoy :


Before trying to deploy a Virtual appliance, or even to capture one, check all points one by one :

  • Update IBM System Director to 6.3.2 :
  • # smcli lsver
  • Ensure that all the Virtual I/O Servers participating in the Shared Storage Pool are at the required Virtual I/O Server level (I highly encourage you to install ifixes IV31624m0a and IV32091s0a) :
  • # ioslevel
    # oem_setup_env
    # emgr -l
    1    S    IV31624m0a 12/17/12 14:30:52            VIOS + ifixes
    2    S    IV32091s0a 12/17/12 14:31:10            cleanup fails on source vios
  • On all your Virtual I/O Servers ensure that the common agent is correctly installed with the viocluster subagent in 6.3.2 version :
  • # /opt/ibm/director/agent/bin/ -listFeatures | grep -i vios
    Disabled Enabled Enabled
  • On your NIM (Network Installation Manager) server, ensure that the common agent is correctly installed with the common repository subagent, and the nim subagent :
  • # /opt/ibm/director/agent/bin/ -listFeatures | grep -E "nim|cr"
    Enabled Disabled Enabled
  • dsm, openssl, and openssh filesets have to be installed on the NIM server :
  • # lslpp -Lc | grep -E "dsm.core|openssh|openssl"
    dsm:dsm.core: : :C:F:Distributed Systems Management Core: : : : : : :0:0:/:1241
    openssh.base:openssh.base.client: : :C: :Open Secure Shell Commands: : : : : : :0:0:/:
    openssh.base:openssh.base.server: : :C: :Open Secure Shell Server: : : : : : :0:0:/:
    openssh.license:openssh.license: : :C: :Open Secure Shell License: : : : : : :0:0:/: : :C: :Open Secure Shell Documentation - U.S. English: : : : : : :0:0:/:
    openssl.base:openssl.base: : :C: :Open Secure Socket Layer: : : : : : :0:0:/:
    openssl.license:openssl.license: : :C: :Open Secure Socket License: : : : : : :0:0:/: : :C: :Open Secure Socket Layer: : : : : : :0:0:/:
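These fileset checks are easy to script. Here is a minimal sketch that parses `lslpp -Lc` output and reports any missing fileset; since I can’t assume you are on a live NIM server, the here-document below holds a sample copied from the output above, and on a real server you would pipe the live `lslpp -Lc` command instead.

```shell
# Sample of `lslpp -Lc` output (copied from above); on a real NIM server,
# replace the here-document with: lslpp -Lc
lslpp_sample=$(cat <<'EOF'
dsm:dsm.core: : :C:F:Distributed Systems Management Core: : : : : : :0:0:/:1241
openssh.base:openssh.base.client: : :C: :Open Secure Shell Commands: : : : : : :0:0:/:
openssh.base:openssh.base.server: : :C: :Open Secure Shell Server: : : : : : :0:0:/:
openssl.base:openssl.base: : :C: :Open Secure Socket Layer: : : : : : :0:0:/:
EOF
)

missing=""
for fs in dsm.core openssh.base openssl.base; do
    # the second colon-separated field of lslpp -Lc is the fileset name
    printf '%s\n' "$lslpp_sample" | cut -d: -f2 | grep -q "^${fs}" || missing="$missing $fs"
done
[ -z "$missing" ] && echo "all required filesets present" || echo "missing:$missing"
```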

As always “discover, access, inventory”

On all IBM Systems Director objects involved in VMcontrol (Virtual I/O Servers, NIM server, the Pseries on which Virtual Appliances will be deployed, the Pseries on which Virtual Appliances will be captured, and the HMC controlling these Pseries) ensure that access is granted and that a full inventory has been collected :

  • Virtual I/O Servers (if someone knows how to get the “Last Collected Inventory” from the command line, it would be useful) :
  • # smcli lssys -oT -A AccessState vios1,vios2,vios3,vios4
    vios1, Server, 0x5067e: Unlocked
    vios1, OperatingSystem, 0x50646: Unlocked
    vios2, OperatingSystem, 0x5064c: Unlocked
    vios2, Server, 0x50707: Unlocked
    vios3, Server, 0x506a2: Unlocked
    vios3, OperatingSystem, 0x50654: Unlocked
    vios4, Server, 0x506e3: Unlocked
    vios4, OperatingSystem, 0x5065c: Unlocked

  • NIM Server :
  • # smcli lssys -oT -A AccessState nim
    nim, OperatingSystem, 0x28a28: Unlocked
    nim, Server, 0x47d73: Unlocked
  • Pseries :
  • # smcli lssys -oT -A AccessState P520-TST-1,P520-TST-2
    P520-TST-1, Server, 0x18312: Unlocked
    P520-TST-2, Server, 0x1830c: Unlocked

  • HMC :
  • # smcli lssys -oT -A AccessState hmc1
    hmc1, HardwareManagementConsole, 0x33bfe: Unlocked
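Rather than eyeballing each list, the access state of every endpoint can be checked in one pass. The sketch below parses `smcli lssys -oT -A AccessState` output and flags anything that is not “Unlocked”; the here-document is a hypothetical sample (note the deliberately Locked hmc1 line), and on a live server you would feed it the real command instead.

```shell
# Hypothetical sample of `smcli lssys -oT -A AccessState` output; on a real
# Director server, replace the here-document with the live smcli command.
lssys_sample=$(cat <<'EOF'
vios1, Server, 0x5067e: Unlocked
vios1, OperatingSystem, 0x50646: Unlocked
nim, OperatingSystem, 0x28a28: Unlocked
hmc1, HardwareManagementConsole, 0x33bfe: Locked
EOF
)

# list every endpoint whose access state is not "Unlocked"
locked=$(printf '%s\n' "$lssys_sample" | awk -F': ' '$2 != "Unlocked" {print $1}')
[ -z "$locked" ] && echo "all endpoints unlocked" || printf 'not unlocked:\n%s\n' "$locked"
```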

If everything goes well, IBM Systems Director has now discovered a new Virtual I/O Cluster and a new Shared Storage Pool associated with it :

  • Virtual I/O Cluster :
  • # smcli lssys -le vio000tst-cluster
        DisplayName (Name) : vio000tst-cluster (vio000tst-cluster)
        Description (Description) : Storage Manageable Endpoint (Storage System)
        SerialNumber (Serial Number) : a549bcf4c77311e18f5400215e487480 (a549bcf4c77311e18f5400215e487480)
        MachineType (Machine Type) : VIOS Cluster (VIOS Cluster)
        PrimaryHostName (Primary Host Name) : (
        Manufacturer (Manufacturer) : IBM (IBM)
        AccessState (Access State) : Unlocked (Full Access)
        CommunicationState (Communication State) : 2 (Communication OK)
        Model (Model) : VIOS Cluster (VIOS Cluster)
        CreatedDate (Created Date) : 2012-12-17T19:35:08+01:00 (2012-12-17T19:35:08+01:00)
        ChangedDate (Changed Date) : 2012-12-31T11:14:00+01:00 (2012-12-31T11:14:00+01:00)
        CurrentTimeZone (Agent Time Zone Offset) : -1 ()
        IPv4Address (IP Addresses) : { '', '', '', '' } (,,,
        HostName (IP Hosts) : { '', '', '', '' } (,,,
        OperatingState (State) : 0 (Unknown)
        DisplayPingTime (Query Vital Properties Interval) : 2 (Every hour)
        DisplayOperationalStatusTime (Verify Connection Interval) : 3 (Every 15 minutes)

  • Shared Storage Pool :
  • # smcli lssspstoragepool -C 0x4fb69 -l
            OID : 0x4fb70
            Capacity : 65,280
            RemainingManagedSpace : 30,574
            Threshold : 95
            CreatedDate : 12/17/12 7:35 PM
            ChangedDate : 12/31/12 11:12 AM
    # smcli lssharedstpool -l
            OID : 0x4fb69
            PrimaryHostName :
            CreatedDate : 12/17/12 7:35 PM
            ChangedDate : 12/31/12 11:14 AM
    # smcli lssspviosvs -C 0x4fb69
    vios1, vios2, vios3, vios4
    # smcli lssspphysvol -C 0x4fb69 -l
    Repository Volumes
            OID : 0x522bb
            UDID : 1D0667520609SYMMETRIX03EMCfcp
            TotalSize (MB) : 32768
            DeviceID on end-point : hdisk5
            CreatedDate : 12/18/12 11:12 AM
            ChangedDate : 12/31/12 11:15 AM
    Storage Pool Volumes
            OID : 0x522c3
            UDID : 1D0667520709SYMMETRIX03EMCfcp
            TotalSize (MB) : 32768
            DeviceID on end-point : hdisk6
            CreatedDate : 12/18/12 11:12 AM
            ChangedDate : 12/31/12 11:15 AM
            OID : 0x522c1
            UDID : 1D0667520809SYMMETRIX03EMCfcp
            TotalSize (MB) : 32768
            DeviceID on end-point : hdisk7
            CreatedDate : 12/18/12 11:12 AM
            ChangedDate : 12/31/12 11:15 AM

  • Check that the Shared Storage Pool is present in the storage management tab :
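As a side note, the `lssspstoragepool` output above is enough to work out how full the pool is before a deploy. A small arithmetic sketch, using the Capacity and RemainingManagedSpace values shown earlier (MB, with the thousands separators stripped):

```shell
# Values copied from the `smcli lssspstoragepool` output above (MB)
capacity=65280
remaining=30574

used=$((capacity - remaining))
pct=$(awk -v u="$used" -v c="$capacity" 'BEGIN { printf "%.1f", u * 100 / c }')
# Threshold is 95 in the output above, so this pool still has headroom
echo "used: ${used} MB (${pct}% of pool)"
```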

Step by step VMcontrol workflow

Here is a step by step workflow for the deployment of a new Workload :

  • 1/ Prerequisites summary :
    • Ensure that IBM Systems Director version is 6.3.2.
    • Install the IBM System Director Common Agent with the Virtual I/O Server Cluster Subagent on each Shared Storage Pool’s Virtual I/O Server.
    • Install the IBM System Director Common Agent with the NIM Subagent and Common Repository Subagent on NIM Server.
    • Run a full inventory on each IBM Systems Director object used by VMcontrol.
    • Check that the Virtual I/O Cluster and the Shared Storage Pool are discovered and accessible.
  • 2/ Create a Common Repository on the NIM server using the ‘mkrepos’ command from IBM Systems Director. (The repository can also be created directly on the Virtual I/O Cluster.)
  • 3/ Capture or import a Virtual Appliance using the ‘captureva’ or ‘importva’ command from IBM Systems Director. A Virtual Appliance can be captured from :
    • a mksysb.
    • an lpp_source.
    • a Virtual Server (an lpar, which must be powered off to be captured).
    • an existing Virtual Appliance.
  • 4/ Using the ‘deployva’ command from IBM Systems Director, deploy the previously captured Virtual Appliance. Use the ‘lscustomization’ command to check the available and compatible parameters. A Virtual Appliance can be deployed on :
    • a Server (if deployed on a Server, Virtual Appliance resilience will not be active).
    • a System Pool (if deployed on a System Pool, Virtual Appliance resilience will be active, and the Virtual Appliance can automatically be relocated based on user-defined criteria).
  • 5/ Through the Hardware Management Console, IBM Systems Director creates the new logical partition :
    • If you’re using dual Virtual I/O Servers, two client scsi adapters will be created on the new logical partition.
    • On each Virtual I/O Server, a server scsi adapter will be created.
    • A new backing device will be created in the Shared Storage Pool.
  • 6/ IBM Systems Director ‘prepares’ the NIM server through the NIM subagent :
    • Management objects will be created (an hmc object, a cec object).
    • If this is the first deployment of the Virtual Appliance, an associated spot will be created.
    • A machine object will be created (be careful, NIM has to resolve the hostname).
    • scripts, resolv_conf, and bosinst_data objects will be created.
  • 7/ The previously created resources are exported to enable the installation of the new Workload.
  • 8/ The logical partition is booted and installed through the Hardware Management Console (lpar_netboot).
  • 9/ The resources are unexported.
  • 10/ An inventory is collected on the newly created Workload.
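From the command line, the whole workflow boils down to a handful of smcli calls. The sketch below rehearses the sequence using the same OIDs and arguments as the examples later in this post; since smcli only exists on an IBM Systems Director server, it is stubbed out with a shell function so the trace can be followed anywhere. Delete the stub on a real installation.

```shell
# Stub so the sequence can be rehearsed without a Director server;
# remove this function on a real IBM Systems Director installation.
smcli() { echo "would run: smcli $*"; }

# 2/ create the common repository (storage OID and OS OID from lssys/mkrepos -C)
smcli mkrepos -S 326512 -O 0x50646 -n vio-common-repository

# 3/ capture a Virtual Appliance from a mksysb into repository 340561
smcli captureva -r 340561 -F repos://export/nim/images/moonstone \
      -n 7100-02-00-1241-virtual_appliance

# 4/ check the tunable parameters, then deploy on server 0x1830c
smcli lscustomization -a deploy_new -V 340914 -s 0x1830c
smcli deployva -s 0x1830c -V 340914 -a deploy_new -A "poolstorages=326512,,,,,,"
```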

Click on the image to enlarge it; this is how VMcontrol works :

Common image repository creation

VMcontrol needs an Image Repository to store captured Virtual Appliances; a Common Image Repository can be created on a NIM server or on a Virtual I/O Server. I’ve created two repositories, one on the NIM server and the other on a Virtual I/O Server. Here is an example of how to create a Common Image Repository on a Virtual I/O Server. My “main” Common Image Repository was created on my NIM server; it is the one used for the rest of this post.

  • Use the ‘mkrepos -C’ command to identify the storage OID, in this case 326512 :
  • # smcli mkrepos -C | grep -ip vios1
    vios1 (329286)
            Min:    1
            Max:    1
            Description:    null
            Key,    Storage,        Storage location,       Type,   Available GB,   Total GB,       Description,    OID
            [vio000tst-ssp]         vio000tst-ssp vio000tst-cluster     SAN     23      68              326512
  • Use the ‘lssys’ command to identify the Operating System on which the Common Image Repository will be created :
  • # smcli lssys  -oT  vios1
    vios1, Server, 0x5067e
    vios1, OperatingSystem, 0x50646
  • Using the Operating System’s OID and the Storage’s OID build the ‘mkrepos’ command and create the common repository :
  • # smcli mkrepos -S 326512 -O 0x50646 -n vio-common-repository
  • All repositories can be listed with ‘lsrepos’ command :
  • # smcli lsrepos -l
            SourceTokens:{ 'NO_IR_DELETE' }

Virtual appliance capture

A captured Virtual Appliance is stored in a repository. Before trying to capture a new Virtual Appliance, the first thing to do is identify the common repository that will be used. As always, I work with IDs and OIDs, not with names :

  • Identify the common repository with the ‘lsrepos’ command :
  • # smcli lsrepos -o
    nim, 340561 (0x53251)
    vio-repository, 338309 (0x52985)
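These OIDs can be pulled out programmatically rather than by eye. A minimal sketch that extracts the decimal OID of a repository by name from `lsrepos -o` output; the here-document holds the sample shown just above, and on a live server you would pipe the real command instead.

```shell
# Sample of `smcli lsrepos -o` output (copied from above); on a real
# Director server, replace the here-document with the live command.
lsrepos_sample=$(cat <<'EOF'
nim, 340561 (0x53251)
vio-repository, 338309 (0x52985)
EOF
)

# decimal OID of the repository named "nim"
# (second comma-separated field, first word before the hex form)
repo_oid=$(printf '%s\n' "$lsrepos_sample" \
    | awk -F', ' '$1 == "nim" {print $2}' | awk '{print $1}')
echo "repository OID: $repo_oid"   # this is the value fed to captureva -r
```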

After the repository has been identified, use the ‘captureva’ command to capture the Virtual Appliance. Here are two examples, one using a mksysb, the second one using an lpp_source on the NIM server :

  • Capturing a virtual appliance from a mksysb :
  • # smcli captureva -v -r 340561 -F repos://export/nim/images/moonstone -n 7100-02-00-1241-virtual_appliance -D "imported from mksysb 7100-02-00-1241" -A "cpushare=0.1,memsize=512"
    Wed Dec 19 16:26:49 CET 2012  captureva Operation started.
    Attempt to get capture object data from file repos://export/nim/image/moonstone
    Update collection with user entered attributes.
    Call captureFile function
    Call capture command executed. Return code= 340,914
    Wed Dec 19 16:27:40 CET 2012  captureva Operation took 51 seconds.
  • Capturing a virtual appliance from an lpp_source :
  • # smcli captureva -vvvv -r 340561 -F repos:6100-08-01-1245-lpp_source -n 6100-08-01-1245-virtual_appliance -D "imported from 6100-08-01-1245-lpp_source" -A "cpushare=0.1,memsize=512"
    Thu Dec 27 18:01:05 CET 2012  captureva Operation started.
    Attempt to get capture object data from file repos:6100-08-01-1245-lpp_source
    Update collection with user entered attributes.
    Call captureFile function
    Call capture command executed. Return code= 350,520
    Thu Dec 27 18:01:38 CET 2012  captureva Operation took 32 seconds.
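A detail worth noticing in these transcripts: the “Return code” printed by captureva is the decimal OID of the newly captured Virtual Appliance (340,914 and 350,520 here are exactly the values passed later to ‘deployva -V’). A small sketch to extract it and strip the thousands separator, fed from a log line copied from the output above:

```shell
# Log line copied from the captureva output above
capture_log="Call capture command executed. Return code= 340,914"

# keep everything after "Return code=" and drop the thousands separator
va_oid=$(printf '%s\n' "$capture_log" | sed -n 's/.*Return code= *//p' | tr -d ',')
echo "virtual appliance OID: $va_oid"   # usable as: smcli deployva -V $va_oid ...
```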

If a mksysb is used for the capture, a new NIM Object is created :

# lsnim -l appliance-1_image-1
   class          = resources
   type           = mksysb
   Rstate         = ready for use
   prev_state     = unavailable for use
   location       = /export/nim/appliances/84dd48b5-2eaa-416c-b70b-fe4fe3c5c6c1/moonstone
   version        = 7
   release        = 1
   mod            = 2
   oslevel_r      = 7100-02
   alloc_count    = 0
   server         = master
   extracted_spot = nimrf-0000000000000005-spot
   creation_date  = Wed Dec 19 16:28:48 2012

All captured Virtual Appliances are stored in /export/nim/appliances :

# ls /export/nim/appliances
84dd48b5-2eaa-416c-b70b-fe4fe3c5c6c1  bf6c42a3-45c8-4764-835e-0b4dc10a90a4  d5227bc5-85ef-4e82-ba86-b7835652a5f7  lost+found
b907734c-84a5-41eb-a112-db7df014984d  d03f6420-d0e7-4756-8e02-7c2e350cfabb  da0b9051-4046-4341-9b41-b8c6dbefb9e6  version

Each Virtual Appliance is described in an OVF (Open Virtualization Format) file. This file can be edited by hand :

# more da0b9051-4046-4341-9b41-b8c6dbefb9e6.ovf

Deploy a new Virtual Appliance

Before trying to deploy a new Virtual Appliance you have to collect some information :

  • What is the OID of the Virtual Appliance to be deployed (use ‘lsva’ command to list virtual appliances):
  • # smcli lsva -o
    5300-12-05-1140-virtual_appliance, 346436 (0x54944)
    6100-08-01-1245-virtual_appliance, 350520 (0x55938)
    7100-02-00-1241-virtual_appliance, 359718 (0x57d26)
  • Is this a new Virtual Appliance (deploy_new : the lpar will be created) or an existing one (deploy_existing : an existing lpar will be used to deploy the Virtual Appliance)?
  • On which host or System Pool will the Virtual Appliance be deployed (use ‘lsdeploytargets’ to check the eligible hosts) :
  • On a server :
  • # smcli lsdeploytargets -v -a deploy_new -V 340914 | grep TST
    P520-TST-1, (0x18312) (P520-TST-1)
    P520-TST-2, (0x1830c) (P520-TST-2)
  • On a systems pool :
  • # smcli lsdeploytargets -v -o -a deploy_new -V 350520 | grep TST
    FRMTST-systempool, 359968 (0x57e20) (FRMTST-systempool)
  • Some parameters can be tuned (storage pool used, hostname, IP, etc.). Use the ‘lscustomization’ command to check the tunable parameters :
  • # smcli lscustomization -a deploy_new -V 350520 -s 0x1830c
            Description:    Network Mapping
            Changeable Columns:
                    Column Name*    CLI Attribute
                    Virtual Networks on Host        hostVnet
            Key,    Network Name,   Description,    Virtual Networks on Host*
            [Network 1]     Network 1       Default network Discovered/1122/0
            Options:        Discovered/1122/0 (Discovered/1122/0 (VLAN 1122, Bridged)),
                            ETHERNET0/1122 (Discovered/1122/0 (VLAN 1122, Bridged)),
                            Discovered/4094/0 (Discovered/4094/0 (VLAN 4094, Not Bridged)),
                            ETHERNET0/4094 (Discovered/4094/0 (VLAN 4094, Not Bridged)),
                            Discovered/999/0 (Discovered/999/0 (VLAN 999, Bridged)),
                            ETHERNET0/999 (Discovered/999/0 (VLAN 999, Bridged))
            Min:    1
            Max:    1
            Description:    The storage pools available for virtual disk allocation. Used together with the storagemapping parameter.
            Key,    Name,   Location,       VIOS Count,     Maximum Allocation (MB),        Description
            [326512]        vio000tst-ssp VIOS Cluster: vio000tst-cluster       2       18209   Shared Storage Pool accessed through one or more VIOS.
  • Here are two examples of a Virtual Appliance deployment, one on a Server, the second one on a System Pool :
  • On a Server :
  • # smcli deployva -v -s 0x1830c -V 340914 -a deploy_new -A "poolstorages=326512,,,,,,"
    Fri Dec 21 11:48:03 CET 2012  deployva Operation started.
    Attempt to get the default customization data for deploy_new.
    Attempt to get the deploy_new customization data.
    Update collection with user entered attributes.
    Attempt to validate the deploy request for 340,914.
    Attempt to deploy new.
    Workload 7100-02-00-1241-virtual_appliance_56750 was created.
    DNZLOP412I Deploying virtual appliance 7100-02-00-1241-virtual_appliance to server P520-TST-2.
    DNZLOP412I Deploying virtual appliance 7100-02-00-1241-virtual_appliance to server carbon.
    DNZLOP401I Booting virtual server carbon to the Open Firmware state.
    DNZLOP402I Gathering network adapter information for virtual server carbon.
    DNZLOP405I Initiating deploy processing on the NIM master.
    Virtual server carbon added to workload 7100-02-00-1241-virtual_appliance_56750.
    Workload 7100-02-00-1241-virtual_appliance_56750 is stopped.
    DNZIMC094I Deployed Virtual Appliance 7100-02-00-1241-virtual_appliance to new Server carbon hosted by system P520-TST-2.
    Fri Dec 21 12:13:14 CET 2012  deployva Operation took 1510 seconds.
  • On a System Pool :
  • smcli deployva -v -V 350520 -g 0x57e20 -m -1093167946908409598_01 -a deploy_new -A "poolstorages=326512,,,,,,"

As you can see from the 1510-second run above, the new Virtual Appliance is created in about 25 minutes, not bad at all. Here is a screenshot with some deployed Virtual Appliances :

What’s next

In my opinion, VMcontrol is very powerful; deploying a new AIX lpar in about 25 minutes is incredible. Combined with a Shared Storage Pool, VMcontrol can easily be installed and used by everyone. In part 6 of “Adventures in IBM Systems Director” I’ll post about how to create a resilient Workload. A resilient Workload has to be created on a System Pool and can be automatically relocated between the System Pool’s hosts. These Workloads are monitored against a resiliency policy; if problems are detected, actions and relocations are taken to maintain Workload resilience. I do not want to say too much about that in this post; you’ll have to wait for the next one.

Hope this can help.
