Tips and tricks for PowerVC 1.2.3 (PVID, ghostdev, clouddev, rest API, growing volumes, deleting boot volume) | PowerVC 1.2.3 Redbook

Writing a Redbook was one of my main goals. After working days and nights for more than 6 years on Power Systems, IBM gave me the opportunity to write a Redbook. I had been watching the Redbook residencies page for a very, very long time to find the right one. As there was nothing new on AIX and PowerVM (which are my favorite topics) I decided to give the latest PowerVC Redbook a try (this Redbook is an update, but a huge one: PowerVC is moving fast). I have been a Redbook reader since I started working on AIX. Almost all Redbooks are good; most of them are the best source of information for AIX and Power administrators. I'm sure that, like me, you have seen the part about becoming an author every time you read a Redbook. I can now say THAT IT IS POSSIBLE (for everyone). I'm now one of those guys and you can become one too. Just find the Redbook that fits you and apply on the Redbook webpage (http://www.redbooks.ibm.com/residents.nsf/ResIndex). I want to say a BIG thank you to all the people who gave me the opportunity to do this, especially Philippe Hermes, Jay Kruemcke, Eddie Shvartsman, Scott Vetter, Thomas R Bosthworth. In addition to these people I also want to thank my teammates on this Redbook: Guillermo Corti, Marco Barboni and Liang Xu; they are all true professionals, very skilled and open ... this was a great team! One more time, thank you guys. Last, I take the opportunity here to thank the people who believed in me since the very beginning of my AIX career: Julien Gabel, Christophe Rousseau, and JL Guyot. Thank you guys! You deserve it, stay as you are. I'm not an anonymous guy anymore.

redbook

You can download the Redbook at this address: http://www.redbooks.ibm.com/redpieces/pdfs/sg248199.pdf. I learned something while writing the Redbook and talking to the members of the team: Redbooks are not there to tell and explain what's "behind the scenes". A Redbook cannot be too long and has to be written in roughly three weeks, so there is no room for everything. Some topics fit better in a blog post than in a Redbook, and Scott told me that a couple of times during the writing session. I totally agree with him. So here is this long awaited blog post. These are advanced topics about PowerVC; read the Redbook before reading this post.

One last thanks to IBM (and just IBM) for believing in me :-). THANK YOU SO MUCH.

ghostdev, clouddev and cloud-init (ODM wipe if using inactive live partition mobility or remote restart)

Everybody who is using cloud-init should be aware of this: cloud-init is only supported on AIX versions that have the clouddev attribute available on sys0. To be totally clear, at the time of writing this blog post you will be supported by IBM only if you use AIX 7.1 TL3 SP5 or AIX 6.1 TL9 SP5. All other versions are not supported by IBM. Let me explain why, and how you can still use cloud-init on older versions with a little trick. But let's first explain what the problem is:

Let's say you have different machines, some of them running AIX 7100-03-05 and some of them running 7100-03-04, and both use cloud-init for the activation. By looking at the cloud-init code at this address here we can say that:

  • After the cloud-init installation, cloud-init is:
  • Changing clouddev to 1 if sys0 has a clouddev attribute:
  • # oslevel -s
    7100-03-05-1524
    # lsattr -El sys0 -a ghostdev
    ghostdev 0 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    clouddev 1 N/A True
    
  • Changing ghostdev to 1 if sys0 doesn't have a clouddev attribute:
  • # oslevel -s
    7100-03-04-1441
    # lsattr -El sys0 -a ghostdev
    ghostdev 1 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    lsattr: 0514-528 The "clouddev" attribute does not exist in the predefined
            device configuration database.
    

This behavior can directly be observed in the cloud-init code:

ghostdev_clouddev_cloudinit
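
For readers who cannot see the image, here is a rough shell equivalent of what that installation logic does (my own sketch, not the actual cloud-init source):

# sketch of the cloud-init install-time logic: use clouddev if sys0 exposes it,
# otherwise fall back to ghostdev
if lsattr -El sys0 -a clouddev >/dev/null 2>&1 ; then
    chdev -l sys0 -a clouddev=1
else
    chdev -l sys0 -a ghostdev=1
fi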

Now that we are aware of that, let's make a remote restart test between two P8 boxes. I take the opportunity here to present one of the coolest features of PowerVC 1.2.3: you can now remote restart your virtual machines directly from the PowerVC GUI if one of your hosts is in a failure state. I highly encourage you to check my latest post about this subject if you don't know how to set up remote restartable partitions http://chmod666.org/index.php/using-the-simplified-remote-restart-capability-on-power8-scale-out-servers/:

  • Only simplified remote restart can be managed by PowerVC 1.2.3; the "normal" version of remote restart is not handled by PowerVC 1.2.3.
  • In the compute template configuration there is now a checkbox allowing you to create remote restartable partitions. Be careful: you can't go back to a P7 box without rebooting the machine, so be sure your Virtual Machines will stay on a P8 box if you check this option.
  • remote_restart_compute_template

  • When the machine is shut down or there is a problem with it you can click the "Remotely Restart Virtual Machines" button:
  • rr1

  • Select the machines you want to remote restart:
  • rr2
    rr3

  • While the Virtual Machines are remote restarting, you can check the states of the VM and the state of the host:
  • rr4
    rr5

  • After the evacuation the host is in “Remote Restart Evacuated State”:

rr6

Let’s now check the state of our two Virtual Machines:

  • The ghostdev one (the sys0 message in the errpt indicates that the partition ID has changed AND DEVICES ARE RECREATED (ODM wipe); note that no ip address is set on en0 anymore):
  • # errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A6DF45AA   0803171115 I O RMCdaemon      The daemon is started.
    1BA7DF4E   0803171015 P S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    D872C399   0803171015 I O sys0           Partition ID changed and devices recreat
    # ifconfig -a
    lo0: flags=e08084b,c0
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
    
  • The clouddev one (the sys0 message in the errpt indicates that the partition ID has changed) (note that the errpt message does not indicate that the devices are recreated):
  • # errpt |more
    60AFC9E5   0803232015 I O sys0           Partition ID changed since last boot.
    # ifconfig -a
    en0: flags=1e084863,480
            inet 10.10.10.20 netmask 0xffffff00 broadcast 10.244.248.63
             tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
    lo0: flags=e08084b,c0
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
    

VSAE is designed to manage ghostdev-only operating systems; cloud-init, on the other hand, is designed to manage clouddev operating systems. To be perfectly clear, here is how ghostdev and clouddev work. But we first need to answer a question: why do we need to set clouddev or ghostdev to 1? The answer is pretty obvious: one of these attributes needs to be set to 1 before capturing the Virtual Machine. When you deploy a new Virtual Machine, this flag is needed to wipe the ODM before reconfiguring the virtual machine with the parameters set in the PowerVC GUI (ip, hostname). In both the clouddev and ghostdev cases it is obvious that we need to wipe the ODM at build/deploy time. Then VSAE or cloud-init (using the config drive datasource) sets the hostname and ip address that were wiped by the clouddev or ghostdev mechanism. This works well for a new deploy because we need to wipe the ODM in all cases, but what about an inactive live partition mobility or a remote restart operation? The Virtual Machine has moved (it is not on the same host, and not with the same lpar ID) and we need to keep the ODM as it is. Here is how it works:

  • If you are using VSAE, it manages the ghostdev attribute for you. At capture time ghostdev is set to 1 by VSAE (when you run the pre-capture script). When deploying a new VM, at activation time, VSAE sets ghostdev back to 0. Inactive live partition mobility and remote restart operations will work fine with ghostdev set to 0.
  • If you are using cloud-init on a supported system, clouddev is set to 1 at cloud-init installation time. As cloud-init does nothing with either attribute at activation time, IBM needed to find a way to avoid wiping the ODM after a remote restart operation. That is why the clouddev device was introduced: it writes a flag in the NVRAM. When a new VM is built there is no flag in the NVRAM for it, so the ODM is wiped. When an already existing VM is remote restarted, the flag exists in the NVRAM, so the ODM is not wiped. By using clouddev there is no post-deploy action needed.
  • If you are using cloud-init on an unsupported system, ghostdev is set to 1 at cloud-init installation time. As cloud-init does nothing at post-deploy time, ghostdev will remain set to 1 in all cases and the ODM will always be wiped.

cloudghost

There is a way to use cloud-init on unsupported systems. Keep in mind that in this case you will not be supported by IBM, so do this at your own risk. To be totally honest, I'm using this method in production to use the same activation engine for all my AIX versions:

  1. Pre-capture, set ghostdev to 1. Whatever happens THIS IS MANDATORY (see the sketch after this list).
  2. Post-capture, reboot the captured VM and set ghostdev to 0.
  3. Post-deploy, set ghostdev to 0 on every Virtual Machine. You can put this in the activation input to do the job:
  4. #cloud-config
    runcmd:
     - chdev -l sys0 -a ghostdev=0
    
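As announced in step 1, the pre- and post-capture steps are nothing more than two chdev commands run around the capture (a minimal sketch; the capture itself is done from PowerVC in between, and step 3 is handled by the activation input above):

# chdev -l sys0 -a ghostdev=1
... capture the Virtual Machine with PowerVC, then reboot the captured VM ...
# chdev -l sys0 -a ghostdev=0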

The PVID problem

I realized I had this problem after using PowerVC for a while. As PowerVC images for the rootvg and other volume groups are created using Storage Volume Controller FlashCopy (in the case of an SVC configuration, but there are similar mechanisms for other storage providers), the PVIDs for the rootvg and additional volume groups will always be the same for each new virtual machine (all new virtual machines will have the same PVID for their rootvg, and the same PVID for each captured volume group). I did contact IBM about this and the PowerVC team told me that this behavior is totally normal and has been observed since the release of VMcontrol. They haven't had any issues related to this, so if you don't care about it, just do nothing and keep this behavior as it is. I recommend doing nothing about this!

It's a shame, but most AIX administrators like to keep things as they are and don't want any changes. (In my humble opinion this is one of the reasons AIX is so outdated compared to Linux; we need a community, not narrow-minded people keeping their knowledge to themselves just to stay in their daily job routine without having anything to learn.) If you are in this case, facing angry colleagues about this particular point, you can use the solution proposed below to calm the passions of the few who do not want to change! :-). This is my rant: CHANGE!

By default, if you build two virtual machines and check the PVIDs of each one, you will notice that the PVIDs are the same:

  • Machine A:
  • root@machinea:/root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00c7102d00d14660                    appsvg          active
    
  • Machine B:
  • root@machineb:root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00c7102d00d14660                    appsvg         active
    

For the rootvg the PVID is always set to 00c7102d2534adac and for the appsvg the PVID is always set to 00c7102d00d14660.

For the rootvg the solution is to change ghostdev (only ghostdev) to 2, and to reboot the machine. Setting ghostdev to 2 will change the PVID of the rootvg at reboot time (after the PVID is changed, ghostdev is automatically set back to 0):

# lsattr -El sys0 -a ghostdev
ghostdev 2 Recreate ODM devices on system change / modify PVID True
# lsattr -l sys0 -R -a ghostdev
0...3 (+1)
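
For reference, the change itself is just a chdev followed by a reboot; the rootvg PVID is regenerated during the boot and ghostdev goes back to 0 on its own:

# chdev -l sys0 -a ghostdev=2
# shutdown -Fr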

For the non-rootvg volume groups this is a little bit trickier but still possible: the solution is to use the recreatevg command (-d option) to change the PVID of all the physical volumes of your volume group. Before rebooting the server, ensure that you:

  • Unmount all the filesystems in the volume group on which you want to change the PVID.
  • Varyoff the volume group.
  • Get the names of the physical volumes composing the volume group.
  • Export the volume group.
  • Recreate the volume group with recreatevg (this action will change the PVIDs).
  • Re-import the volume group.

Here are the shell commands doing the trick:

# vg=appsvg
# lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "fuser -k " $NF }' | sh
# lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "umount " $NF }' | sh
# varyoffvg $vg
# pvs=$(lspv | awk -v my_vg=$vg '$3 == my_vg {print $1}')
# exportvg $vg
# recreatevg -y $vg -d $pvs
# importvg -y $vg $(echo ${pvs} | awk '{print $1}')

We now agree that you want to do this, but as you are a smart person you want to do it automatically using cloud-init and the activation input. There are two ways to do it: the silly way (using shell) and the noble way (using cloud-init syntax):

PowerVC activation engine (shell way)

Use this short ksh script in the activation input. This is not my recommendation, but you can do it for simplicity:

activation_input_shell
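
If the image above does not load for you, the idea is simply to push a small ksh script as the activation input that flips ghostdev to 2 and reboots the machine. A hypothetical minimal version (not the exact script shown in the screenshot) could look like this:

#!/usr/bin/ksh
# regenerate the rootvg PVID on first boot: set ghostdev to 2 and reboot
chdev -l sys0 -a ghostdev=2
shutdown -Fr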

PowerVC activation engine (cloudinit way)

Here is the cloud-init way. Important note: use the latest version of cloud-init; the first one I used had a problem with cc_power_state_change.py not using the right parameters for AIX:

activation_input_ci
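
Again, if the image does not load: the idea is to let cloud-init run the chdev itself and trigger the reboot through its power_state module (which is why a fixed cc_power_state_change.py matters). A hypothetical minimal activation input, not the exact one shown in the screenshot, would look like:

#cloud-config
runcmd:
 - chdev -l sys0 -a ghostdev=2
power_state:
 mode: reboot
 message: rebooting to regenerate the rootvg PVID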

Working with REST Api

I will not show you here how to work with the PowerVC RESTful API; I prefer to share a couple of scripts on my github account. Nice examples are often better than how-to tutorials, so check the scripts on github if you want a detailed how-to ... the scripts are well commented. Just a couple of things to say before closing this topic: the best way to work with a RESTful api is to code in python, as there are a lot of existing python libs for it (httplib2, pycurl, requests). For my own understanding I prefer using the simple httplib in my scripts. I will put all my command line tools in a github repository called pvcmd (for PowerVC command line). You can download the scripts at this address, or just use git to clone the repo. One more time, it is a community project, feel free to change and share anything: https://github.com/chmod666org/pvcmd:
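
If you just want to poke at the API from a shell before diving into the Python scripts, plain curl is enough: get a token from the PowerVC Keystone (port 5000, as in the powervcrc file later in this post) and pass it to the service you want to call. This is only a rough sketch; the user, password, project and tenant id are placeholders to adapt to your environment:

# get a token from the PowerVC identity service (Keystone v3); the token is returned
# in the X-Subject-Token response header
TOKEN=$(curl -sk -D - -o /dev/null https://powervc.lab.chmod666.org:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"root","domain":{"name":"Default"},"password":"mysecretpassword"}}},"scope":{"project":{"name":"ibm-default","domain":{"name":"Default"}}}}}' \
  | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
# list the virtual machines through the nova endpoint (port 8774); replace <tenant_id>
# with the project id returned in the token response
curl -sk -H "X-Auth-Token: $TOKEN" https://powervc.lab.chmod666.org:8774/v2/<tenant_id>/servers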

Growing data lun

To be totally honest, here is what I do when I'm creating a new machine with PowerVC. My customers always need one additional volume group for applications (we will call it appsvg). I've created a multi-volume image with this volume group already in it (with a bunch of filesystems). As most customers ask for this volume group to be 100 GB large, the capture was made with this size. Unfortunately for me, we often get requests to create bigger volume groups, let's say 500 or 600 GB. Instead of creating a new lun and extending the volume group, PowerVC allows you to grow the lun to the desired size. For volume groups other than the boot one you must use the RESTful API to extend the volume. To do this I've created a python script called pvcgrowlun (feel free to check the code on github) https://github.com/chmod666org/pvcmd/blob/master/pvcgrowlun. At each virtual machine creation I check whether the customer needs a larger volume group and extend it using the command provided below.

While coding this script I hit a problem using the os-extend parameter in my http request. PowerVC does not use exactly the same parameters as Openstack, so if you want to code this yourself be aware of it and check in the PowerVC online documentation whether you are using "extended attributes" (thanks to Christine L Wang for this one); a curl sketch of the call is given after the list below:

  • In the Openstack documentation the attribute is “os-extend” link here:
  • os-extend

  • In the PowerVC documentation the attribute is “ibm-extend” link here:
  • ibm-extend

  • Identify the lun you want to grow (the script takes the name of the volume as a parameter) (I have an unpublished one to list all the volumes, tell me if you want it). In my case the volume name is multi-vol-bf697dfa-0000003a-828641A_XXXXXX-data-1, and I want to change its size from 60 to 80. This is not stated in the official PowerVC documentation, but this works for both boot and data luns.
  • Check that the current size of the lun is smaller than the desired size:
  • before_grow

  • Run the script:
  • # pvcgrowlun -v multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 -s 80 -p localhost -u root -P mysecretpassword
    [info] growing volume multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 with id 840d4a60-2117-4807-a2d8-d9d9f6c7d0bf
    JSON Body: {"ibm-extend": {"new_size": 80}}
    [OK] Call successful
    None
    
  • Check the size is changed after the command execution:
  • aftergrow_grow

  • Don't forget to do the job in the operating system by running a "chvg -g" (check the TOTAL PPs value here):
  • # lsvg appsvg
    VOLUME GROUP:       appsvg                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      239 (61184 megabytes)
    MAX LVs:            256                      FREE PPs:       239 (61184 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    MIRROR POOL STRICT: off
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    # chvg -g appsvg
    # lsvg appsvg
    VOLUME GROUP:       appsvg                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      319 (81664 megabytes)
    MAX LVs:            256                      FREE PPs:       319 (81664 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    MIRROR POOL STRICT: off
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    
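As announced above, here is a rough curl equivalent of what pvcgrowlun sends. This is only a sketch: the block-storage endpoint, port, tenant id and volume id are placeholders you have to take from the service catalog and from a volume listing, and $TOKEN is the token obtained as shown in the REST API section above:

curl -sk -X POST \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"ibm-extend": {"new_size": 80}}' \
  https://<powervc_host>:<volume_port>/v2/<tenant_id>/volumes/<volume_id>/action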

My own script to create VMs

I'm creating Virtual Machines every week, sometimes just a couple and sometimes 10 Virtual Machines in a row. We use different storage connectivity groups, and different storage templates depending on whether the machine is in production, in development, and so on. We also have to choose the primary copy on the SVC side if the machine is in production (I am using a stretched cluster between two distant sites, so I have to choose different storage templates depending on the site where the Virtual Machine is hosted). I make mistakes almost every time I use the PowerVC gui (sometimes I forget to put the machine name, sometimes the connectivity group). I'm a lazy guy, so I decided to code a script using the PowerVC rest api to create new machines based on a template file. We are planning to give the script to our outsourced teams to allow them to create machines without knowing what PowerVC is \o/. The script takes a file as parameter and creates the virtual machine:

  • Create a file like the one below with all the information needed for your new virtual machine creation (name, ip address, vlan, host, image, storage connectivity group, ….):
  • # cat test.vm
    name:test
    ip_address:10.16.66.20
    vlan:vlan666
    target_host:Default Group
    image:multi-vol
    storage_connectivity_group:npiv
    virtual_processor:1
    entitled_capacity:0.1
    memory:1024
    storage_template:storage1
    
  • Launch the script, the Virtual Machine will be created:
  • pvcmkvm -f test.vm -p localhost -u root -P mysecretpassword
    name: test
    ip_address: 10.16.66.20
    vlan: vlan666
    target_host: Default Group
    image: multi-vol
    storage_connectivity_group: npiv
    virtual_processor: 1
    entitled_capacity: 0.1
    memory: 1024
    storage_template: storage1
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found network vlan666 with id 5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5
    [info] found host aggregation Default Group with id 1
    [info] found storage template storage1 with id bfb4f8cc-cd68-46a2-b3a2-c715867de706
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found a volume with id b3783a95-822c-4179-8c29-c7db9d060b94
    [info] found a volume with id 9f2fc777-eed3-4c1f-8a02-00c9b7c91176
    JSON Body: {"os:scheduler_hints": {"host_aggregate_id": 1}, "server": {"name": "test", "imageRef": "041d830c-8edf-448b-9892-560056c450d8", "networkRef": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5", "max_count": 1, "flavor": {"OS-FLV-EXT-DATA:ephemeral": 10, "disk": 60, "extra_specs": {"powervm:max_proc_units": 32, "powervm:min_mem": 1024, "powervm:proc_units": 0.1, "powervm:max_vcpu": 32, "powervm:image_volume_type_b3783a95-822c-4179-8c29-c7db9d060b94": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:image_volume_type_9f2fc777-eed3-4c1f-8a02-00c9b7c91176": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:min_proc_units": 0.1, "powervm:storage_connectivity_group": "npiv", "powervm:min_vcpu": 1, "powervm:max_mem": 66560}, "ram": 1024, "vcpus": 1}, "networks": [{"fixed_ip": "10.244.248.53", "uuid": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5"}]}}
    {u'server': {u'links': [{u'href': u'https://powervc.lab.chmod666.org:8774/v2/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'self'}, {u'href': u'https://powervc.lab.chmod666.org:8774/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'bookmark'}], u'OS-DCF:diskConfig': u'MANUAL', u'id': u'fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'security_groups': [{u'name': u'default'}], u'adminPass': u'u7rgHXKJXoLz'}}
    

One of the major advantages of using this is batching Virtual Machine creation. By using the script you can create one hundred Virtual Machines in a couple of minutes. Awesome!

Working with Openstack commands

PowerVC is based on Openstack, so why not use the Openstack commands to work with PowerVC? It is possible, but I repeat one more time that this is not supported by IBM at all; use this trick at your own risk. I was working with Cloud Manager with Openstack (ICMO), and a script including shell variables is provided to "talk" to the ICMO Openstack. Based on that file I created the same one for PowerVC. Before using any Openstack commands, create a powervcrc file that matches your PowerVC environment:

# cat powervcrc
export OS_USERNAME=root
export OS_PASSWORD=mypasswd
export OS_TENANT_NAME=ibm-default
export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_CACERT=/etc/pki/tls/certs/powervc.crt
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default

Then source the powervcrc file, and you are ready to play with all Openstack commands:

# source powervcrc

You can then play with Openstack commands, here are a few nice examples:

  • List virtual machines:
  • # nova list
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    | ID                                   | Name                  | Status | Task State | Power State | Networks               |
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    | dc5c9fce-c839-43af-8af7-e69f823e57ca | ghostdev0clouddev1    | ACTIVE | -          | Running     | vlan666=10.16.66.56    |
    | d7d0fd7e-a580-41c8-b3d8-d7aab180d861 | ghostdevto1cloudevto1 | ACTIVE | -          | Running     | vlan666=10.16.66.57    |
    | bf697dfa-f69a-476c-8d0f-abb2fdcb44a7 | multi-vol             | ACTIVE | -          | Running     | vlan666=10.16.66.59    |
    | 394ab4d4-729e-44c7-a4d0-57bf2c121902 | deckard               | ACTIVE | -          | Running     | vlan666=10.16.66.60    |
    | cd53fb69-0530-451b-88de-557e86a2e238 | priss                 | ACTIVE | -          | Running     | vlan666=10.16.66.61    |
    | 64a3b1f8-8120-4388-9d64-6243d237aa44 | rachael               | ACTIVE | -          | Running     |                        |
    | 2679e3bd-a2fb-4a43-b817-b56ead26852d | batty                 | ACTIVE | -          | Running     |                        |
    | 5fdfff7c-fea0-431a-b99b-fe20c49e6cfd | tyrel                 | ACTIVE | -          | Running     |                        |
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    
  • Reboot a machine:
  • # nova reboot multi-vol
    
  • List the hosts:
  • # nova hypervisor-list
    +----+---------------------+-------+---------+
    | ID | Hypervisor hostname | State | Status  |
    +----+---------------------+-------+---------+
    | 21 | 828641A_XXXXXXX     | up    | enabled |
    | 23 | 828641A_YYYYYYY     | up    | enabled |
    +----+---------------------+-------+---------+
    
  • Migrate a virtual machine (run a live partition mobility operation):
  • # nova live-migration ghostdevto1cloudevto1 828641A_YYYYYYY
    
  • Set a server in maintenance mode and evacuate it, moving all its partitions to another host:
  • # nova maintenance-enable --migrate active-only --target-host 828641A_XXXXXX 828641A_YYYYYYY
    
  • Virtual Machine creation (output truncated):
  • # nova boot --image 7100-03-04-cic2-chef --flavor powervm.tiny --nic net-id=5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5,v4-fixed-ip=10.16.66.51 novacreated
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    | Property                            | Value                                                                                                                                            |
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    | OS-DCF:diskConfig                   | MANUAL                                                                                                                                           |
    | OS-EXT-AZ:availability_zone         | nova                                                                                                                                             |
    | OS-EXT-SRV-ATTR:host                | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:instance_name       | novacreated-bf704dc6-00000040                                                                                                                    |
    | OS-EXT-STS:power_state              | 0                                                                                                                                                |
    | OS-EXT-STS:task_state               | scheduling                                                                                                                                       |
    | OS-EXT-STS:vm_state                 | building                                                                                                                                         |
    | accessIPv4                          |                                                                                                                                                  |
    | accessIPv6                          |                                                                                                                                                  |
    | adminPass                           | PDWuY2iwwqQZ                                                                                                                                     |
    | avail_priority                      | -                                                                                                                                                |
    | compliance_status                   | [{"status": "compliant", "category": "resource.allocation"}]                                                                                     |
    | cpu_utilization                     | -                                                                                                                                                |
    | cpus                                | 1                                                                                                                                                |
    | created                             | 2015-08-05T15:56:01Z                                                                                                                             |
    | current_compatibility_mode          | -                                                                                                                                                |
    | dedicated_sharing_mode              | -                                                                                                                                                |
    | desired_compatibility_mode          | -                                                                                                                                                |
    | endianness                          | big-endian                                                                                                                                       |
    | ephemeral_gb                        | 0                                                                                                                                                |
    | flavor                              | powervm.tiny (ac01ba9b-1576-450e-a093-92d53d4f5c33)                                                                                              |
    | health_status                       | {"health_value": "PENDING", "id": "bf704dc6-f255-46a6-b81b-d95bed00301e", "value_reason": "PENDING", "updated_at": "2015-08-05T15:56:02.307259"} |
    | hostId                              |                                                                                                                                                  |
    | id                                  | bf704dc6-f255-46a6-b81b-d95bed00301e                                                                                                             |
    | image                               | 7100-03-04-cic2-chef (96f86941-8480-4222-ba51-3f0c1a3b072b)                                                                                      |
    | metadata                            | {}                                                                                                                                               |
    | name                                | novacreated                                                                                                                                      |
    | operating_system                    | -                                                                                                                                                |
    | os_distro                           | aix                                                                                                                                              |
    | progress                            | 0                                                                                                                                                |
    | root_gb                             | 60                                                                                                                                               |
    | security_groups                     | default                                                                                                                                          |
    | status                              | BUILD                                                                                                                                            |
    | storage_connectivity_group_id       | -                                                                                                                                                |
    | tenant_id                           | 1471acf124a0479c8d525aa79b2582d0                                                                                                                 |
    | uncapped                            | -                                                                                                                                                |
    | updated                             | 2015-08-05T15:56:02Z                                                                                                                             |
    | user_id                             | 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                                                                 |
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    
    

LUN order, remove a boot lun

If you are moving to PowerVC you will probably need to migrate existing machines to your PowerVC environment. One of my customers is asking to move their machines from old boxes using vscsi to new PowerVC-managed boxes using NPIV. I am doing it with the help of an SVC on the storage side. Instead of creating the Virtual Machine profile on the HMC and then doing the zoning and masking on the Storage Volume Controller and on the SAN switches, I decided to let PowerVC do the job for me. Unfortunately, PowerVC can't just "carve" a Virtual Machine: if you want one you have to build it entirely (rootvg included). This is what I am doing. During the migration process I have to replace the PowerVC-created lun by the lun used for the migration .... and finally delete the PowerVC-created boot lun. There is a trick to know if you want to do this:

  • Let's say the lun created by PowerVC is the one named "volume-clouddev-test…." and the original rootvg is named "good_rootvg". The Virtual Machine is booted on the "good_rootvg" lun and I want to remove the "volume-clouddev-test…." one:
  • root1

  • You first have to click the “Edit Details” button:
  • root2

  • Then toggle the boot set to "YES" for the "good_rootvg" lun and click move up (the rootvg order must be set to 1; this is mandatory, as the lun at order 1 can't be deleted):
  • root3

  • Toggle the boot set to “NO” for the PowerVC created rootvg:
  • root4

  • If you try to detach the volume in first position you will get an error:
  • root5

  • When the order is ok, you can detach and delete the lun created by PowerVC:
  • root6
    root7

Conclusion

There are always good things to learn about PowerVC and related AIX topics. Tell me if these tricks are useful for you and I will continue to write posts like this one. You don't need to understand all these details to work with PowerVC; most customers don't. But I'm sure you prefer to understand what is going on "behind the scenes" instead of just clicking a nice GUI. I hope this helps you to better understand what PowerVC is made of. And don't be shy, share your tricks with me. Next: more to come about Chef! Up the irons!

Configuration of a Remote Restart Capable partition

How can we move a partition to another machine if the machine or the data center hosting the partition is totally unavailable? This question is often asked by managers and technical people. Live Partition Mobility can't answer it because the source machine needs to be running to initiate the mobility. I'm sure that most of you have implemented a manual solution based on a bunch of scripts recreating the partition profile by hand, but this is hard to maintain, not fully automated, and not supported by IBM. A solution to this problem is to set up your partitions as Remote Restart Capable partitions. This PowerVM feature has been available since the release of VMcontrol (the IBM Systems Director plugin). Unfortunately this powerful feature is not well documented, but it will probably become a must-have on your newly deployed AIX machines in the coming months or year. One last word: with the new Power8 machines things are going to change about remote restart; the functionality will be easier to use and a lot of prerequisites are going to disappear. Just to be clear, this post has been written using Power7+ 9117-MMD machines; the only thing you can't do with these machines (compared to Power8 ones) is change an existing partition to be remote restart capable without having to delete and recreate its profile.

Prerequisites

To create and use a remote restart partition on Power7+/Power8 machines you'll need these prerequisites:

  • A PowerVM Enterprise license (the capability "PowerVM remote restart capable" must be true; be careful, there is another capability named "Remote restart capable" which was used by VMcontrol only, so double check you are looking at the right one; see the check after this list).
  • Firmware 780 or later (all Power8 firmware levels are ok; for Power7, >= 780 is ok).
  • Your source and destination machines must be connected to the same Hardware Management Console; you can't remote restart between two HMCs at the moment.
  • The minimum HMC version is 8r8.0.0. Check that you have the rrstartlpar command (not the rrlpar command, which is used by VMcontrol only).
  • Better than a long post, check this video (don't laugh at me, I'm trying to do my best but this is one of my first videos .... hope it is good):
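
To check the capability from the HMC command line, list the capabilities of the managed system and look for the PowerVM remote restart one (a quick sketch; the exact capability strings may vary slightly between HMC levels):

$ lssyscfg -r sys -m source-machine -F capabilities | tr ',' '\n' | grep -i remote_restart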

What is a remote restart capable virtual machine ?

Better than a long text explaining what it is, check the picture below and follow each number from 1 to 4 to understand what a remote restart partition is:

remote_restart_explanation

Create the profile of your remote restart capable partition: Power7 vs Power8

A good reason to move to Power8 faster than you planned is that you can change a virtual machine to be remote restart capable without having to recreate the whole profile. I don't know why, but at the time of writing this post changing a non remote restart capable lpar into a remote restart capable lpar is only available on Power8 systems. If you are using a Power7 machine (like me in all the examples below) be careful to check this option while creating the machine. Keep in mind that if you forget to check the option you will not be able to enable the remote restart capability afterwards and you will unfortunately have to remove your profile and recreate it, sad but true ...:

  • Don’t forget to check the check box to allow the partition to be remote restart capable :
  • remote_restart_capable_enabled1

  • After the partition is created you can notice in the I/O tab that remote restart capable partitions are not able to own any physical I/O adapters:
  • rr2_nophys

  • You can check in the properties that the remote restart capable feature is activated :
  • remote_restart_capable_activated

  • If you try to modify an existing profile on a Power7 machine you'll get this error message. On a Power8 machine there will be no problem:
  • # chsyscfg -r lpar -m XXXX-9117-MMD-658B2AD -p test_lpar -i remote_restart_capable=1
    An error occurred while changing the partition named test_lpar.
    The managed system does not support changing the remote restart capability of a partition. You must delete the partition and recreate it with the desired remote restart capability.
    
  • You can verify that some of your lpars are remote restart capable:
  • lssyscfg -r lpar -m source-machine -F name,remote_restart_capable
    [..]
    lpar1,1
    lpar2,1
    lpar3,1
    remote-restart,1
    [..]
    
  • On a Power7 machine the best way to enable remote restart on an already created machine is to delete the profile and recreate it by hand, adding the remote restart attribute:
  • Get the current partition profile :
  • $ lssyscfg -r prof -m s00ka9927558-9117-MMD-658B2AD --filter "lpar_names=temp3-b642c120-00000133"
    name=default_profile,lpar_name=temp3-b642c120-00000133,lpar_id=11,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/2/s00ia9927560/32/0,virtual_eth_adapters=32/0/1659//0/0/vdct/facc157c3e20/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/1/s00ia9927559/32/c050760727c5007a,c050760727c5007b/0"",""4/client/1/s00ia9927559/35/c050760727c5007c,c050760727c5007d/0"",""5/client/2/s00ia9927560/34/c050760727c5007e,c050760727c5007f/0"",""6/client/2/s00ia9927560/35/c050760727c50080,c050760727c50081/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
    
  • Remove the partition :
  • $ chsysstate -r lpar -o shutdown --immed -m source-server -n temp3-b642c120-00000133
    $ rmsyscfg -r lpar -m source-server -n temp3-b642c120-00000133
    
  • Recreate the partition with the remote restart attribute enabled :
  • mksyscfg -r lpar -m s00ka9927558-9117-MMD-658B2AD -i 'name=temp3-b642c120-00000133,profile_name=default_profile,remote_restart_capable=1,lpar_id=11,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/2/s00ia9927560/32/0,virtual_eth_adapters=32/0/1659//0/0/vdct/facc157c3e20/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/1/s00ia9927559/32/c050760727c5007a,c050760727c5007b/0"",""4/client/1/s00ia9927559/35/c050760727c5007c,c050760727c5007d/0"",""5/client/2/s00ia9927560/34/c050760727c5007e,c050760727c5007f/0"",""6/client/2/s00ia9927560/35/c050760727c50080,c050760727c50081/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,sriov_eth_logical_ports=none'
    

Creating a reserved storage device pool

The reserved storage device pool is used to store the configuration data of the remote restart partitions. At the time of writing this post those devices are mandatory and, as far as I know, they are used just to store the configuration and not the state (memory state) of the virtual machine itself (maybe in the future, who knows?). (You can't create or boot any remote restart partition if you do not have a reserved storage device pool created, so do this before doing anything else):

  • You first have to find, on the Virtual I/O Servers of both machines (the source and destination machines used for the remote restart operation), a bunch of devices. These have to be the same on all the Virtual I/O Servers used for the remote restart operation. The lsmemdev command is used to find those devices:
  • vios1$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios2$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios3$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios4$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    
    $ lsmemdev -r avail -m source-machine -p vios1,vios2
    [..]
    device_name=hdisk988,redundant_device_name=hdisk988,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,redundant_capable=1
    device_name=hdisk989,redundant_device_name=hdisk989,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E6000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E6000000000000,redundant_capable=1
    device_name=hdisk990,redundant_device_name=hdisk990,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E7000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E7000000000000,redundant_capable=1
    [..]
    $ lsmemdev -r avail -m dest-machine -p vios3,vios4
    [..]
    device_name=hdisk988,redundant_device_name=hdisk988,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E5000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E5000000000000,redundant_capable=1
    device_name=hdisk989,redundant_device_name=hdisk989,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E6000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E6000000000000,redundant_capable=1
    device_name=hdisk990,redundant_device_name=hdisk990,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E7000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E7000000000000,redundant_capable=1
    [..]
    
  • Create the reserved storage device pool using the chhwres command on the Hardware Management Console (create on all machines used by the remote restart operation) :
  • $ chhwres -r rspool -m source-machine -o a -a vios_names=\"vios1,vios2\"
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk988 --manual
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk989 --manual
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk990 --manual
    $ lshwres -r rspool -m source-machine --rsubtype rsdev
    device_name=hdisk988,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,is_redundant=1,redundant_device_name=hdisk988,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,lpar_id=none,device_selection_type=manual
    device_name=hdisk989,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E6000000000000,is_redundant=1,redundant_device_name=hdisk989,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E6000000000000,lpar_id=none,device_selection_type=manual
    device_name=hdisk990,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E7000000000000,is_redundant=1,redundant_device_name=hdisk990,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E7000000000000,lpar_id=none,device_selection_type=manual
    $ lshwres -r rspool -m source-machine
    "vios_names=vios1,vios2","vios_ids=1,2"
    
  • You can also create the reserved storage device pool from Hardware Management Console GUI :
  • After selecting the Virtual I/O Server, click select devices :
  • rr_rsd_pool_p

  • Choose the maximum and minimum size to filter the devices you can select for the creation of the reserved storage device :
  • rr_rsd_pool2_p

  • Choose the disks you want to put in your reserved storage device pool (put all the devices used by remote restart partitions in manual mode; automatic devices are used by suspend/resume operations or the AMS pool. One device cannot be shared by two remote restart partitions):
  • rr_rsd_pool_waiting_3_p
    rr_pool_create_7_p

  • You can check afterwards that your reserved storage device pool is created and is composed of three devices:
  • rr_pool_create_9
    rr_pool_create_8_p

Select a storage device for each remote restart partition before starting it :

After creating the reserved storage device pool you have to select a device from the pool for every remote restart partition. This device will be used to store the configuration data of the partition:

  • Note that you cannot start the partition if no device has been selected!
  • To select a device of the correct size you first have to calculate the needed space for every partition using the lsrsdevsize command. This size is roughly the max memory value set in the partition profile (don't ask me why):
  • $ lsrsdevsize -m source-machine -p temp3-b642c120-00000133
    size=8498
    
  • Select the device you want to assign to your machine (in my case there was already a device selected for this machine) :
  • rr_rsd_pool_assign_p

  • Then select the machine you want to assign for the device :
  • rr_rsd_pool_assign2_p

  • Or do this in command line :
  • $ chsyscfg -r lpar -m source-machine -i "name=temp3-b642c120-00000133,primary_rs_vios_name=vios1,secondary_rs_vios_name=vios2,rs_device_name=hdisk988"
    $ lssyscfg -r lpar -m source-machine --filter "lpar_names=temp3-b642c120-00000133" -F primary_rs_vios_name,secondary_rs_vios_name,curr_rs_vios_name
    vios1,vios2,vios1
    $ lshwres -r rspool -m source-machine --rsubtype rsdev
    device_name=hdisk988,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Active,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,is_redundant=1,redundant_device_name=hdisk988,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Active,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,lpar_name=temp3-b642c120-00000133,lpar_id=11,device_selection_type=manual
    

Launch the remote restart operation

All the remote restart operations are launched from the Hardware Management Console with the rrstartlpar command. At the time of writing this post there is no GUI function to remote restart a machine; you can only do it from the command line:

Validation

Just as with a Live Partition Mobility move, you can validate a remote restart operation before running it. You can only perform the remote restart operation itself if the machine hosting the remote restart partition is shut down or in error, so the validation is very useful and mandatory to check that your remote restart machines are well configured without having to stop the source machine:

$ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar
$ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar -d 5
$ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar --redundantvios 2 -d 5 -v

Execution

As I said before, the remote restart operation can only be performed if the source machine is in a particular state; the states that allow a remote restart operation are:

  • Power Off.
  • Error.
  • Error – Dump in progress state.

So the only way to test a remote restart operation today is to shut down your source machine:

  • Shutdown the source machine :
  • step1

    $ chsysstate -m source-machine -r sys  -o off --immed
    

    rr_step2_mod

  • You can next check on the Hardware Management Console that the Virtual I/O Servers and the remote restart lpar are in the "Not available" state. You're now ready to remote restart the lpar (if the partition id is already in use on the destination machine, the next available one will be used) (you have to wait a little before remote restarting the partition, as the output below shows):
  • $ rrstartlpar -o restart -m source-machine -t dest-machine -p rrlpar -d 5 -v
    HSCLA9CE The managed system is not in a valid state to support partition remote restart operations.
    $ rrstartlpar -o restart -m source-machine -t dest-machine -p rrlpar -d 5 -v
    Warnings:
    HSCLA32F The specified partition ID is no longer valid. The next available partition ID will be used.
    

    step3
    rr_step4_mod
    step5

Cleanup

When the source machine is ready to come back up (after an outage for instance) just boot the machine and its Virtual I/O Servers. After the machine is up you will notice that the rrlpar profile is still there, and this can be a huge problem if somebody tries to boot this partition because it is already running on the other machine after the remote restart operation. To prevent such an error you have to clean up your remote restart partition by using the rrstartlpar command again. Be careful not to check the option that boots the partitions after the machine is started:

  • Restart the source machine and its Virtual I/O Servers :
  • $ chsysstate -m source-machine -r sys -o on
    $ chsysstate -r lpar -m source-machine -n vios1 -o on -f default_profile
    $ chsysstate -r lpar -m source-machine -n vios2 -o on -f default_profile
    

    rr_step6_mod

  • Perform the cleanup operation to remove the profile of the remote restart partition (if you later want to LPM your machine back, you have to keep the device of the reserved storage device pool in the pool; if you do not use the --retaindev option the device will be automatically removed from the pool):
  • $ rrstartlpar -o cleanup -m source-machine -p rrlpar --retaindev -d 5 -v --force
    

    rr_step7_mod

Refresh the partition and profile data

During my tests I encountered a problem: the configuration was not correctly synced between the device used in the reserved storage device pool and the current partition profile. I had to use a command named refdev (for refresh device) to synchronize the partition and profile data to the storage device.

$ refdev -m sys1 -p temp3-b642c120-00000133 -v

What’s in the reserved storage device ?

I’m a curious guy. After playing with remote restart I asked myself a question: what is really stored in the reserved storage device assigned to the remote restart partition? The documentation available on the internet does not answer this, so I had to look at it on my own. By ‘dd-ing’ the reserved storage device assigned to a partition I realized that the profile is stored in XML format. Maybe this format is the same as the one used by the HMC 8 template library. During my tests on Power7+ machines the memory state of the partition was not transferred to the destination machine, maybe because I had to shut down the whole source machine to test. Maybe the memory state is transferred when the source machine is in error state or is dumping; I had no chance to test this:

root@vios1:/home/padmin# dd if=/dev/hdisk17 of=/tmp/hdisk17.out bs=1024 count=10
10+0 records in
10+0 records out
root@vios1:/home/padmin# more hdisk17.out
[..]
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BwEAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACgDIAZAAAQAEAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" Profile="H4sIAAAAA
98VjxbxEAhNaZEqpEptPS/iMJO4cTJBdHVj38zcYvu619fTGQlQVmxY0AUICSH4A5XYorJgA1I3sGMBCx5Vs4RNd2zgXI89tpNMxslIiRzPufec853zfefk/t/osMfRBYPZRbpuF9ueUTQsShxR1NSl9dvEEPPMMgnfvPnVk
a2ixplLuOiVCHaUKn/yYMv/PY/ydTRuv016TbgOzdVv4w6+KM0vyheMX62jgq0L7hsCXtxBH6J814WoZqRh/96+4a+ff3Br8+o3uTE0pqJZA7vYoKKnOgYnNoSsoiPECp7KzHfELTQV/lnBAgt0/Fbfs4Wd1sV+ble7Lup/c
be0LQj01FJpoVpecaNP15MhHxpcJP8al6b7fg8hxCnPY68t8LpFjn83/eKFhcffjqF8DRUshs0almioaFK0OfHaUKCue/1GcN0ndyfg9/fwsyzQ6SblellXK6RDDaIIwem6L4iXCiCfCuBZxltFz6G4eHed2EWD2sVVx6Mth
eEOtnzSjQoVwLbo2+uEf3T/s2emPv3z4xA16eD0AC6oRN3FXNnYoA6U7y3OfFc1g5hOIiTQsVUHSusSc43QVluEX2wKdKJZq4q2YmJXEF7hhuqYJA0+inNx3YTDab2m6T7vEGpBlAaJnU0qjWofTkj+uT2Tv3Rl69prZx/9s
thQTBMK42WK7XSzrizqFhPL5E6FeHGVhnSJQLlKKreab1l6z9MwF0C/jTi3OfmKCsoczcJGwITgy+f74Z4Lu2OU70SDyIdXg1+JAApBWZoAbLaEj4InyonZIDbjvZGwv3H5+tb7C5tPThQA9oUdsCN0HsnWoLxWLjPHAdJSp
Ja45pBarVb3JDyUJOn3aemXcIqtUfgPi3wCuiw76tMh6mVtNVDHOB+BxqEUDWZGtPgPrFc9oBgBhhJzEdsEVI9zC1gr0JTexhwgThzIwYEG7lLbt3dcPyHQLKQqfGzVsSNzVSvenkDJU/lUoiXGRNrdxLy2soyhtcNX47INZ
nHKOCjYfsoeR3kpm58GdYDVxipIZXDgSmhfCDCPlKZm4dZoVFORzEX0J6CLvK4py6N7Pz94yiXlPBAArd3zqIEtjXFZ4izJzQ44sCv7hh3bTnY5TbKdnOtHGtatTjrEynTuWFNXV3ouaUKIIKfDgE5XrrpWb/SHWyWCbXMM5
DkaHNzXVJws6csK57jnpToLopiQLZdgHJJh9wm+M+wbof7GzSRJBYvAAaV0RvE8ZlA5yxSob4fAiJiNNwwQAwu2y5/O881fvvz3HxgK70ZDwc1FS8JezBgKR0e/S4XR3ta8OwmdS56akXJITAmYBpElF5lZOdlXuO+8N0opU
m0HeJTw76oiD8PS9QfRECUYqk0B1KGkZ+pRGQPUhPFEb12XIoe7u4WXuwdVqTAnZT8gyYrvAPlL/sYG4RkDmAx5HFZpFIVnAz9Lrlyh9tFIc4nZAColOLNGdFRKmE8GJd5zZx++zMiAoTOWNrJvBjODNo1UOGuXngzcHWjrn
LgmkxjBXLj+6Fjy1DHFF0zV6lVH/p+VYO6pbZzYD9/ORFLouy6MwvlGuRz8Qz10ugawprAdtJ4GxWAOtmQjZXJ+Lg58T/fDy4K74bYWr9CyLIVdQiplHPLbjinZRu4BZuAENE6jxTP2zNkBVgfiWiFcv7f3xYjFqxs/7vb0P
 lpar_name="rrlpar" lpar_uuid="0D80582A44F64B43B2981D632743A6C8" lpar_uuid_gen_method="0"><SourceLparConfig additional_mac_addr_bases="" ame_capability="0" auto_start_e
rmal" conn_monitoring="0" desired_proc_compat_mode="default" effective_proc_compat_mode="POWER7" hardware_mem_encryption="10" hardware_mem_expansion="5" keylock="normal
"4" lpar_placement="0" lpar_power_mgmt="0" lpar_rr_dev_desc="	<cpage>		<P>1</P>
		<S>51</S>
		<VIOS_descri
00010E0000000000003FB04214503IBMfcp</VIOS_descriptor>
	</cpage>
" lpar_rr_status="6" lpar_tcc_slot_id="65535" lpar_vtpm_status="65535" mac_addres
x_virtual_slots="10" partition_type="rpa" processor_compatibility_mode="default" processor_mode="shared" shared_pool_util_authority="0" sharing_mode="uncapped" slb_mig_
ofile="1" time_reference="0" uncapped_weight="128"><VirtualScsiAdapter is_required="false" remote_lpar_id="2" src_vios_slot_number="4" virtual_slot_number="4"/><Virtual
"false" remote_lpar_id="1" src_vios_slot_number="3" virtual_slot_number="3"/><Processors desired="4" max="8" min="1"/><VirtualFibreChannelAdapter/><VirtualEthernetAdapt
" filter_mac_address="" is_ieee="0" is_required="false" mac_address="82776CE63602" mac_address_flags="0" qos_priority="0" qos_priority_control="false" virtual_slot_numb
witch_id="1" vswitch_name="vdct"/><Memory desired="8192" hpt_ratio="7" max="16384" memory_mode="ded" min="256" mode="ded" psp_usage="3"><IoEntitledMem usage="auto"/></M
 desired="200" max="400" min="10"/></SourceLparConfig></SourceLparInfo></SourceInfo><FileInfo modification="0" version="1"/><SriovEthMappings><SriovEthVFInfo/></SriovEt
VirtualFibreChannelAdapterInfo/></VfcMappings><ProcPools capacity="0"/><TargetInfo concurr_mig_in_prog="-1" max_msp_concur_mig_limit_dynamic="-1" max_msp_concur_mig_lim
concur_mig_limit="-1" mpio_override="1" state="nonexitent" uuid_override="1" vlan_override="1" vsi_override="1"><ManagerInfo/><TargetMspInfo port_number="-1"/><TargetLp
ar_name="rrlpar" processor_pool_id="-1" target_profile_name="mig3_9117_MMD_10C94CC141109224549"><SharedMemoryConfig pool_id="-1" primary_paging_vios_id="0"/></TargetLpa
argetInfo><VlanMappings><VlanInfo description="VkVSU0lPTj0xClZJT19UWVBFPVZFVEgKVkxBTl9JRD0zMzMxClZTV0lUQ0g9dmRjdApCUklER0VEPXllcwo=" vlan_id="3331" vswitch_mode="VEB" v
ibleTargetVios/></VlanInfo></VlanMappings><MspMappings><MspInfo/></MspMappings><VscsiMappings><VirtualScsiAdapterInfo description="PHYtc2NzaS1ob3N0PgoJPGdlbmVyYWxJbmZvP
mVyc2lvbj4KCQk8bWF4VHJhbmZlcj4yNjIxNDQ8L21heFRyYW5mZXI+CgkJPGNsdXN0ZXJJRD4wPC9jbHVzdGVySUQ+CgkJPHNyY0RyY05hbWU+VTkxMTcuTU1ELjEwQzk0Q0MtVjItQzQ8L3NyY0RyY05hbWU+CgkJPG1pb
U9TcGF0Y2g+CgkJPG1pblZJT1Njb21wYXRhYmlsaXR5PjE8L21pblZJT1Njb21wYXRhYmlsaXR5PgoJCTxlZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4xPC9lZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4KCTwvZ2VuZ
TxwYXJ0aXRpb25JRD4yPC9wYXJ0aXRpb25JRD4KCTwvcmFzPgoJPHZpcnREZXY+CgkJPHZEZXZOYW1lPnJybHBhcl9yb290dmc8L3ZEZXZOYW1lPgoJCTx2TFVOPgoJCQk8TFVBPjB4ODEwMDAwMDAwMDAwMDAwMDwvTFVBP
FVOU3RhdGU+CgkJCTxjbGllbnRSZXNlcnZlPm5vPC9jbGllbnRSZXNlcnZlPgoJCQk8QUlYPgoJCQkJPHR5cGU+dmRhc2Q8L3R5cGU+CgkJCQk8Y29ubldoZXJlPjE8L2Nvbm5XaGVyZT4KCQkJPC9BSVg+CgkJPC92TFVOP
gkJCTxyZXNlcnZlVHlwZT5OT19SRVNFUlZFPC9yZXNlcnZlVHlwZT4KCQkJPGJkZXZUeXBlPjE8L2JkZXZUeXBlPgoJCQk8cmVzdG9yZTUyMD50cnVlPC9yZXN0b3JlNTIwPgoJCQk8QUlYPgoJCQkJPHVkaWQ+MzMyMTM2M
DAwMDAwMDAwMDNGQTA0MjE0NTAzSUJNZmNwPC91ZGlkPgoJCQkJPHR5cGU+VURJRDwvdHlwZT4KCQkJPC9BSVg+CgkJPC9ibG9ja1N0b3JhZ2U+Cgk8L3ZpcnREZXY+Cjwvdi1zY3NpLWhvc3Q+" slot_number="4" sou
_slot_number="4"><PossibleTargetVios/></VirtualScsiAdapterInfo><VirtualScsiAdapterInfo description="PHYtc2NzaS1ob3N0PgoJPGdlbmVyYWxJbmZvPgoJCTx2ZXJzaW9uPjIuNDwvdmVyc2lv
NjIxNDQ8L21heFRyYW5mZXI+CgkJPGNsdXN0ZXJJRD4wPC9jbHVzdGVySUQ+CgkJPHNyY0RyY05hbWU+VTkxMTcuTU1ELjEwQzk0Q0MtVjEtQzM8L3NyY0RyY05hbWU+CgkJPG1pblZJT1NwYXRjaD4wPC9taW5WSU9TcGF0
YXRhYmlsaXR5PjE8L21pblZJT1Njb21wYXRhYmlsaXR5PgoJCTxlZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4xPC9lZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4KCTwvZ2VuZXJhbEluZm8+Cgk8cmFzPgoJCTxwYXJ0
b25JRD4KCTwvcmFzPgoJPHZpcnREZXY+CgkJPHZEZXZOYW1lPnJybHBhcl9yb290dmc8L3ZEZXZOYW1lPgoJCTx2TFVOPgoJCQk8TFVBPjB4ODEwMDAwMDAwMDAwMDAwMDwvTFVBPgoJCQk8TFVOU3RhdGU+MDwvTFVOU3Rh
cnZlPm5vPC9jbGllbnRSZXNlcnZlPgoJCQk8QUlYPgoJCQkJPHR5cGU+dmRhc2Q8L3R5cGU+CgkJCQk8Y29ubldoZXJlPjE8L2Nvbm5XaGVyZT4KCQkJPC9BSVg+CgkJPC92TFVOPgoJCTxibG9ja1N0b3JhZ2U+CgkJCTxy
UlZFPC9yZXNlcnZlVHlwZT4KCQkJPGJkZXZUeXBlPjE8L2JkZXZUeXBlPgoJCQk8cmVzdG9yZTUyMD50cnVlPC9yZXN0b3JlNTIwPgoJCQk8QUlYPgoJCQkJPHVkaWQ+MzMyMTM2MDA1MDc2ODBDODAwMDEwRTAwMDAwMDAw
ZmNwPC91ZGlkPgoJCQkJPHR5cGU+VURJRDwvdHlwZT4KCQkJPC9BSVg+CgkJPC9ibG9ja1N0b3JhZ2U+Cgk8L3ZpcnREZXY+Cjwvdi1zY3NpLWhvc3Q+" slot_number="3" source_vios_id="1" src_vios_slot_n
tVios/></VirtualScsiAdapterInfo></VscsiMappings><SharedMemPools find_devices="false" max_mem="16384"><SharedMemPool/></SharedMemPools><MigrationSession optional_capabil
les" recover="na" required_capabilities="veth_switch,hmc_compatibilty,proc_compat_modes,remote_restart_capability,lpar_uuid" stream_id="9988047026654530562" stream_id_p
on>
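
The description attributes embedded in this XML are themselves base64 encoded. You can decode them to see what they contain; for example, decoding the VlanInfo description captured above gives back a small virtual network definition. This is just a quick check done on a Linux box with the GNU coreutils base64 tool (on AIX you may have to go through openssl or perl instead):

# echo "VkVSU0lPTj0xClZJT19UWVBFPVZFVEgKVkxBTl9JRD0zMzMxClZTV0lUQ0g9dmRjdApCUklER0VEPXllcwo=" | base64 -d
VERSION=1
VIO_TYPE=VETH
VLAN_ID=3331
VSWITCH=vdct
BRIDGED=yes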

About the state of the source machine ?

You have to know this before using remote restart: at the time of writing this post the feature is still young and has to evolve before being usable in real life. I am saying this because the FSP of the source machine has to be up to perform a remote restart operation. To be clear, remote restart does not address the total loss of one of your sites. It is only useful to restart partitions of a system with a problem that is not an FSP problem (a failed memory DIMM or failed CPUs, for instance). It can be used in your DRP exercises, but not if your whole site is totally down, which is, in my humble opinion, one of the key cases that remote restart still needs to address. Don’t be afraid, read the conclusion ....

Conclusion

This post has been written using Power7+ machines; my goal was to give you an example of remote restart operations: a summary of what it is, how it works, and where and when to use it. I’m pretty sure a lot of things are going to change about remote restart. First, on Power8 machines you don’t have to recreate the partitions to make them remote restart aware. Second, I know changes are on the way for remote restart on Power8 machines, especially around reserved storage devices and the state of the source machine. I’m sure this feature has a bright future, and used with PowerVC it can be a killer feature. Hope to see all these changes in the near future ;-). Once again I hope this post helps you.

IBM Oct 8 2013 Announcement : A technical point of view after Enterprise 2013

PowerVM and Power Systems lovers like me have probably heard a lot of things since the Oct 8 2013 announcement. Unfortunately, with such a mass of information, some of you, like me, may be lost among all these new things. I personally feel the need to clarify things from a technical point of view and have a deeper look at these announcements. Finding technical details in the announcement was not easy, but Enterprise 2013 gave us a lot of information about the shape of things to come. Is this a new era for Power Systems? Are our jobs going to change with the rise of the cloud? Will PowerVM be replaced by KVM in a few years? Will AIX be replaced by Linux on Power? I can easily give you an answer: NO! Have a look below and you will be as excited as I am; Power Systems are awesome, and they are going to be even better with all these new products :-).

PowerVC

PowerVC stands for Power Virtualization Center. PowerVC lets you manage your virtualized infrastructure by capturing, deploying, creating and moving virtual machines. It is based on OpenStack and has to be installed on a Linux machine (Red Hat Enterprise Linux 6, running on x86 or Power). PowerVC runs on top of your Power Systems infrastructure (Power6 or Power7 hardware, Hardware Management Console or IVM). It comes in two editions: standard (allowing Power6 and Power7 management with an HMC) and express (allowing only Power7 management with an IVM). At the time of writing this post PowerVC manages storage only for the IBM Storwize V series and IBM SVC, and Brocade SAN switches are the only ones supported. In a few words, here is what you have to remember:

  • PowerVC runs on top of Hardware and HMC/IVM, like VMcontrol.
  • PowerVC allows you to manage and create (capture, deploy, …) virtual machines.
  • PowerVC allows you to move and automatically place virtual machines (resource pooling, dynamic virtual machines placement).
  • PowerVC is based on the OpenStack API. IBM’s modifications and enhancements to OpenStack are committed back to the community.
  • PowerVC only runs on RHEL 6.4 (x86 or Power); RHEL support is not included.
  • PowerVC Express manages Power7 and Power7+ hardware.
  • PowerVC Standard manages Power6, Power7 and Power7+ hardware.
  • An SVC-family storage system is mandatory (SVC/V7000/V3700/V3500).
  • PowerVC installs neither the Virtual I/O Servers nor the IVM: the virtual infrastructure has to be installed beforehand, and this is a mandatory prerequisite.
  • PowerVC Express edition needs at least Virtual I/O Server 2.2.1.5; storage can be pre-zoned (I don’t know if it can be pre-masked); it is limited to five managed hosts with a maximum of 100 managed LPARs.
  • PowerVC Standard edition needs HMC 7.7.8 and Virtual I/O Server 2.2.3.0; storage can’t be pre-zoned and the only supported SAN switches are Brocade. It is limited to ten managed hosts with a total of 400 LPARs.
  • PowerVC1

    In my opinion PowerVC will replace VMControl in the near future. It’s a shame for me because I spent so much time on VMControl, but I think it’s a good thing for Power Systems. PowerVC seems easy to deploy, easy to manage, and seems to have a nice look and feel; my only regret: it only runs on Linux :-(. So keep an eye on this product because it is the future of Power Systems management. You can see it as a VMware vCenter for Power Systems ;-)

    PowerVC2

PowerVP

PowerVP was first known as Sleuth and started as an internal IBM tool. It is a performance analysis tool that looks at the whole machine down to the LPAR level. No joke, you can compare it to the lssrad command; it seems to be a graphical version of it (a quick lssrad sketch follows the list of views below). Nice visuals tell you how the hardware resources are assigned and consumed by the LPARs. Three views are accessible:

  • The System Topology view shows the hardware topology: how busy the chips are, and how busy the traffic between the chips is:
  • PowerVP2
    PowerVP1

  • The Node view shows you how each core of each chip is consumed and gives you information on the memory controllers, buses, and traffic from/to remote I/O:
  • PowerVP3

  • The Partition view seems more classical and gives you information on CPU, memory, disk, Ethernet, cache and memory affinity. You can drill down on each of these statistics :-) :
  • PowerVP4
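
If you have never used lssrad, the sketch below shows roughly the kind of affinity information it exposes on a single AIX partition (the numbers are purely illustrative and do not come from any particular machine); PowerVP presents this same kind of placement data graphically, and for the whole machine:

# lssrad -av
REF1   SRAD        MEM      CPU
0
          0   31806.31      0-15
1
          1   31553.75      16-31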

PowerVP needs some agents. For partition drill-down an agent seems to be mandatory (oh no, not again); for whole-system monitoring a kind of “super-agent” is also needed (to be installed on one partition?).

PowerVPcollectors

I don’t know why, but the PowerVP graphical interface is a Java/Swing application. In 2013 everybody wants a web interface; I really do not understand this choice .... All data can be monitored and recorded on the fly (for later analysis and comparison). The product is included in PowerVM Enterprise Edition and needs at least firmware 770, or 780 for high-end machines.

PowerVM Virtual I/O Server 2.2.3.0

Shared Storage Pool 4 (SSP4)

As a big fan of Shared Storage Pools, SSPv4 was long, long awaited. It enables a few cool things; the one I was waiting for the most is SSP mirroring. You can now mask LUNs from two different SAN arrays and the mirroring is performed by the Virtual I/O Server; nothing to do on the Virtual I/O client :-). This new feature comes with a new command: failgrp. By default, when the SSP is created the failure group is named default. You can also check which pv belongs to which failure group with the new pv command; this command lets you check the SAN array id by looking at the pv’s udid, and it also lets you check whether pvs are SSP capable (for example pvs coming from iSCSI are not). Here are a few command examples (sorry, without the output; these are deduced from recent Nigel G. tests :-)).

# failgrp -list
# pv -list -capable
# failgrp -create -fg SANARRAY2: hdisk12 hdisk13 hdisk14
# failgrp -modify -fg Default -attr fg_name=SANARRAY1
# pv -add -fg SANARRAY1: hdisk15 hdisk16 SANARRAY2: hdisk17 hdisk18
# pv -list

SSP4 also simplifies SSP management with new commands. The lu command allows you to create a “lu” in the SSP and map it to a vhost in a single command.

  • lu creation and mapping in one command :
# lu -create -lu lpar1rootvg -size 32G -vadapter vhost3
  • lu creation and mapping in two commands :
# lu -create -lu lpar1rootvg
# lu -map -lu lpar1rootvg -vadapter vhost3

The last thing to say about SSPv4: you can now remove a LUN from the storage pool! So with all these new features SSP can finally be used on production servers.
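
I have not tested the LUN removal myself, so take the lines below as a sketch only: assuming the pv -remove subcommand follows the same pattern as the pv -add example above (hdisk15 being just a placeholder), it should look like this. Check the pv man page on a 2.2.3.0 Virtual I/O Server before using it:

# pv -remove -pv hdisk15
# pv -list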

Throwing away Control Channel from Shared Ethernet Adapter

To simplify Virtual I/O Server management and SEA creation, it is now possible to create an SEA failover configuration without creating or specifying any control channel adapter. Both Virtual I/O Servers can now detect that an SEA failover pair has been created and use their default virtual adapter for the control channel. You can find more details on this subject on Scott Vetter‘s blog here. A funny thing to notice is that an APAR was created this year on this subject; this APAR leaks the name of Project K2 (the IBM internal name for PowerVM simplification): IV37193. OK, to sum up, here are the technical things to remember about this Shared Ethernet Adapter simplification:

  • The creation of such an SEA requires HMC V7.7.8, Virtual I/O Server 2.2.3.0, and firmware 780 (it seems to be supported only on Power7 and Power7+).
  • The end user can’t see it, but the control channel function uses the SEA‘s default adapter.
  • The end user can’t see it, but the control channel function uses a special VLAN id: 4095.
  • A standard SEA with a classic control channel adapter can still be created.
  • My understanding is that this feature has a limitation: you can create only one Shared Ethernet Adapter per virtual switch. It seems the HMC checks that there is only one adapter with priority 1 and one adapter with priority 2 per virtual switch.
  • The command to create the SEA is still the same; if you omit the ctl_chan attribute the SEA will automatically use the management VLAN 4095 for the control channel (see the sketch right after this list).
  • A special discovery protocol is implemented on the Virtual I/O Server to automatically enable the “ghost” control channel.
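
As an illustration, here is what the SEA creation could look like without any control channel. This is only a sketch: ent0 is the physical adapter and ent4 the default virtual trunk adapter in my hypothetical example, so adapt the adapter names and the default PVID to your own configuration:

$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto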

… a few other things

The VIOS advisor has been updated: it can now monitor Shared Storage Pools and NPIV adapters, and the part command is updated accordingly. There are also some additional statistics and listings for the SEA.
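
For the record, running the advisor is still a one-liner on the Virtual I/O Server; the example below asks for a 30 minute collection (the name and location of the generated report may vary depending on your level, so this is just the general idea):

$ part -i 30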

Virtual network adapters are updated with a link status feature to check whether the adapter is up or down: a virtual adapter can now be reported as down when a physical link loss/failure is detected. Is this feature the end of the netmon.cf file in PowerHA?
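
If my understanding is correct, this relies on a new attribute of the client virtual Ethernet adapter; the name I have seen mentioned is poll_uplink, but treat the two lines below as an assumption and check the attribute list at your own AIX level before relying on it (the chdev change should take effect after a reboot of the LPAR):

# lsattr -El ent0 -a poll_uplink
# chdev -l ent0 -a poll_uplink=yes -P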

Conclusion, OpenPower, Power8, KVM

I can’t finish this post without writing a few lines about the huge things to come. A new consortium called OpenPower was created to work on Power8. It means that some third-party manufacturers will be able to build their own implementations of Power8 processors. Talking about Power8, it is on the way and seems to be a real performance monster; the first entry-class systems will probably be available in Q3/Q4 2014. But Power8 is not going to be released alone: KVM is going to be ported to Power8 hardware. Keep in mind that this will not provide nested virtualization (the purpose is not to run KVM on AIX, or AIX on KVM). Power8 hardware will be able to run a special firmware letting you run a Linux KVM host and build and run Linux PPC virtual machines. As I understand it, users will be able to switch between this firmware and the classic one running PowerVM. It is pretty exciting!

I hope this post clarifies the current situation around all these awesome new announcements.

Adventures in IBM Systems Director in System P environment. Part 7 : Introduction to Smart Cloud Entry

If you’ve read the six previous parts of this series of posts you understand that managing Power Systems through VMControl and IBM Systems Director is not so easy. Nobody wants to type ugly long command lines with puzzling parameters that nobody understands. Nobody wants to check that everything is OK on the Virtual I/O Servers, on the Systems Director, on the storage side, and so on. It’s obvious: customers and administrators just want to deploy new servers, they want to do it quickly and it has to be easy. If you’ve read the previous parts you know that our VMControl is ready to deploy servers. The next step is to add Smart Cloud Entry on top of this. It answers all of the above by letting you deploy new servers through a nice and clear web interface. Smart Cloud Entry has one strength: it’s easy to deploy, easy to use, easy to manage, it’s EASY. This post will not be very technical but will show what you can do with Smart Cloud Entry. This is the last brick of a fully automated and virtualized environment. For me this is the result of one year of hard work trying to understand every brick of this configuration (this is the result of my “after hours” work; these posts and these configurations were written and designed between 18h30 and 22h. I mention this because it does not take place in my everyday work and my boss did not ask me to do it. This is pure passion.)

Installation, configuration, update

Installation

Installing Smart Cloud Entry is pretty easy. Just download the installation package from your IBM Entitled Software page, and get the latest updates from Fix Central. In my case I have two files: ESD_-_IBM_SmartCloud_Entry_for_Power_V2.4.0_102012.tar.gz for the base installation package and 2.4.0.3-IBM-SCE-FP003-201304241341.zip for the update. Extract the base installation package and run the installer:

# gunzip ESD_-_IBM_SmartCloud_Entry_for_Power_V2.4.0_102012.tar.gz
# tar xf ESD_-_IBM_SmartCloud_Entry_for_Power_V2.4.0_102012.tar
# cd ESD_-_IBM_SmartCloud_Entry_for_Power_V2.4.0_102012/install/power/aix
# sh sce240_aix_installer.bin
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

===============================================================================
Choose Locale...
----------------

    1- Deutsch
  ->2- English
[..]
Preparing CONSOLE Mode Installation...

Accept License Agreement :

===============================================================================
License Agreement
-----------------

Installation and Use of IBM SmartCloud Entry Requires Acceptance of the
Following License Agreement:

International Program License Agreement

Part 1 - General Terms
[..]
DO YOU ACCEPT THE TERMS OF THIS LICENSE AGREEMENT? (Y/N):

Choose the destination folder. My advice is to create a filesystem dedicated to Smart Cloud Entry: by default it goes under /opt/ibm, but I’m using an /app/sce filesystem to avoid filling the /opt filesystem. Please note that I’m using the same folder for the properties files:
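
For instance, a dedicated jfs2 filesystem can be created like this before launching the installer (the volume group and the size are only examples, adjust them to your environment):

# crfs -v jfs2 -g rootvg -m /app/sce -a size=2G -A yes
# mount /app/sce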

===============================================================================
Choose Property File Install Folder
-----------------------------------

Please choose a destination folder for the property files.

Where Would You Like to Install the Property Files? (DEFAULT: /root): /app/sce


===============================================================================
Pre-Installation Summary
------------------------

Please Review the Following Before Continuing:

Property File Install Folder:
    /app/sce/.SCE24/

Install Folder:
    /app/sce/SCE24

Disk Space:
    Free: 6497 MB Required: 765 MB



PRESS <ENTER> TO CONTINUE:

===============================================================================
Installing...
-------------

 [==================|==================|==================|==================]
 [------------------|------------------|------------------|------------------]

 ===============================================================================
Install Finished
----------------

Define the password for the Smart Cloud Entry administrator :

===============================================================================
Add Configuration Values
------------------------

Changing these properties is optional. All options can be changed post install in /app/list/sce/product/.SCE24/authentication.properties. Press <ENTER> to accept the default value.

Initial admin user name (DEFAULT: admin):

Initial admin name (DEFAULT: SmartCloud Entry Administrator):




===============================================================================
Add Configuration Values
------------------------


Initial administrator password:



===============================================================================
Add Configuration Values
------------------------


Verify initial administrator password:



===============================================================================
IBM SmartCloud Entry has been successfully installed
----------------------------------------------------

IBM SmartCloud Entry has been successfully installed to:

   /app/sce/SCE24

If you choose to create a silent install response file, it will be located in this directory.

    1- Create Silent Install Response File
  ->2- Do Not Create Silent Install Response File

ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT::

Update

The way to update Smart Cloud Entry is weird: you have to launch an osgi console and add a repository pointing to the directory where the update is located, then run the installupdates command:

# mkdir 2.4.0.3-IBM-SCE-FP003-201304241341
# mv 2.4.0.3-IBM-SCE-FP003-201304241341.zip 2.4.0.3-IBM-SCE-FP003-201304241341
# cd 2.4.0.3-IBM-SCE-FP003-201304241341
# unzip 2.4.0.3-IBM-SCE-FP003-201304241341.zip
# /app/sce/SCE24/skc
osgi> version
2.4.0.0-201208290130
osgi> addrepo file:/app/sce/2.4.0.3-IBM-SCE-FP003-201304241341
SmartCloud Entry update repository added
osgi> installupdates
SmartCloud Entry updates to install:
        com.ibm.cfs.product 2.4.0.0-201208290130 ==> com.ibm.cfs.product 2.4.0.3-201304241341
SmartCloud Entry update done
osgi> exit

Re-run Smart Cloud Entry and check the version has been correctly updated :

# /app/list/sce/product/SCE24/skc
osgi> version
2.4.0.3-201304241341
[08/26/13 14:01:11:529] 00000 INFO: IBM SmartCloud Entry 2.4.0.3-201304241341 is being initialized. - com.ibm.cfs.app.osgi.internal.Autostarter.start
[08/26/13 14:01:13:719] 00001 INFO: CYX5052I: Metering services are disabled by the administrator. - com.ibm.cfs.services.metering.impl.internal.MeteringServicesImpl.start
[08/26/13 14:01:13:728] 00001 INFO: CYX1223I: Current logging levels: default=FINE - com.ibm.cfs.app.osgi.internal.LoggingCommandProvider.logCurrentLevels
[08/26/13 14:01:13:732] 00001 INFO: *** IBM SmartCloud Entry 2.4.0.3-201304241341 is ready (startup time 00:16.450) ***  - com.ibm.cfs.app.impl.internal.CFSImpl.startup
[08/26/13 14:01:14:756] 00005 INFO: CYX4391I: Billing services have been disabled by the administrator. - com.ibm.cfs.services.billing.BillingServicesFactoryImpl.isBillingEnabled

Start Smart Cloud Entry at boot

The only way I’ve found to start Smart Cloud Entry at server boot is to add an inittab entry with the -nosplash option:

# mkitab "skc:2:once:/app/list/sce/product/SCE24/skc -nosplash"
# tail -1 /etc/inittab
skc:2:once:/app/list/sce/product/SCE24/skc -nosplash

If you want to access the osgi console without re-running skc, add a port number after the -console stanza in the skc.ini file:

# head -10 /app/sce/SCE24/skc.ini
-vm
jre/bin/java
-console
7777
-clean
# telnet localhost 7777
Trying...
Connected to loopback.
Escape character is '^]'.

osgi>

Accessing the web interface

Access the Smart Cloud Entry web interface through this URL (replace the hostname with yours): http://pa000sce1.domain.local:8080/cloud/web/login.html.
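
If the login page does not show up, a quick way to check that the skc process is listening is to query the URL from the command line (hostname taken from my example above, and assuming curl is available on your box):

# curl -I http://pa000sce1.domain.local:8080/cloud/web/login.html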

Adding a new cloud to Smart Cloud Entry

Adding a VMControl cloud to Smart Cloud Entry is really easy: just point to your IBM Systems Director server and enter your login and password (in my case I’m using my own user and password, but my advice is to create a dedicated Smart Cloud Entry user on the Systems Director). Be sure you can resolve the IBM Systems Director server with your DNS server, and check your firewall rules (TCP 8422) if Smart Cloud Entry and IBM Systems Director are not installed on the same network (this is my case, and it is much easier to do it this way):
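
Before adding the cloud, a couple of quick checks from the Smart Cloud Entry host never hurt (isd-server.domain.local is only a placeholder for your own IBM Systems Director hostname):

# host isd-server.domain.local
# telnet isd-server.domain.local 8422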


Once the cloud has been correctly added you should see its status on the right of your screen; if everything is fine the status should be OK (as in my image). That’s it, your VMControl cloud is ready to use. You can now deploy/create/capture servers, easy isn’t it? Be sure to select the right version of VMControl when adding the cloud. You can check your VMControl version by logging on to your IBM Systems Director server and looking in the GUI (I didn’t find any command to check this with the Systems Director CLI ...).

No DHCP : network pool creation

I’m not using any DHCP server. Smart Cloud Entry is smart and lets you define a pool of IP addresses by hand, which is very useful and easy to set up. While most cloud management tools force you to use DHCP, Smart Cloud Entry gives you the choice. For my own use I’ve defined a pool with a bunch of addresses in the VLAN served by my Virtual I/O Servers (be careful to set the network id to match your Shared Ethernet Adapter configuration).

Once again, Smart Cloud Entry gives you a choice in everything: when defining a network you can define either a range of addresses or a single IP. In the example below a range of addresses was created.

Even better, when an IP address is in use it is marked as utilized. You can also lock an IP and Smart Cloud Entry will never use it, for instance to keep the last five IPs of a range untouched (reserved for the network team). This shows you how flexible Smart Cloud Entry is.

Deploying Virtual Appliance

Deploying a Virtual Appliance is one of the easiest things to do in Smart Cloud Entry. Just select the Virtual Appliance from the list and choose deploy. Two modes are available, “basic” and “advanced”; in advanced mode you can choose the hostname, redefine processor and memory settings, and define an expiration date. Be careful when setting an expiration date: the workload will be totally deleted after it expires:


While deploying a Virtual Appliance you can check the Workload summary; one of the Workloads will be “In transition”. Logs can also be checked while the Virtual Appliance is deploying.



After a successful creation, the Workload has a green OK button next to its name; you can also check in the resources view that you now have fewer resources left ...


Approval, Accounting, Metering, …

By default the Approval, Accounting (billing), and Metering functions are not enabled; enabling these features gives Smart Cloud Entry access to new functionality. I’m not an expert in this area, but I’ll tell you how to enable each one, explain its purpose, and give you a few screenshots:

  • Approval: you can choose to approve all Smart Cloud Entry events. For instance, each time a workload is initiated the Smart Cloud Entry administrator will receive a mail and have to approve the workload initiation; it stays pending until the administrator approves it. The approval policies can be configured from the web interface in the Cloud configuration tab:

  • Accounting: by enabling accounting you can create accounts for your users; accounts are linked to a username and allow you to define each user’s credits. When creating and running appliances users have to pay, and their credits decrease over time:
  • # cd /app/sce/SCE24
    # cp billing.properties billing.properties.$(date +%s)
    # sed s/com.ibm.cfs.billing.enabled=false/com.ibm.cfs.billing.enabled=true/ billing.properties.1377519521 > billing.properties
    # grep com.ibm.cfs.billing.enabled billing.properties
    com.ibm.cfs.billing.enabled=true
    

  • Metering: enabling metering allows Smart Cloud Entry to calculate the resource usage of every appliance and every user. The billing is calculated from the data collected by the metering:
  • # cp metering.properties metering.properties.$(date +%s)
    # sed "s/com.ibm.cfs.metering.enabled=false/com.ibm.cfs.metering.enabled=true/" metering.properties.1377519318 > metering.properties
    # grep com.ibm.cfs.metering.enabled metering.properties
    com.ibm.cfs.metering.enabled=true
    

Other features, conclusion

Smart Cloud Entry allows you to capture, suspend, resume, stop and start a workload. You can also modify its size (processor and memory). Smart Cloud Entry can also be used with VMware and PureSystems, but I have not tested this myself. In my opinion it is a user-friendly interface to VMControl, simple to use and simple to manage. This is the way cloud software should be: light and easy to use. It surely can’t answer all your problems, but I’m sure it can fit most customers who want a cloud. Smart Cloud Entry does a few things, but it does them well.