A first look at SRIOV vNIC adapters

I have the chance to participate in the current Early Shipment Program (ESP) for Power Systems, especially the software part. One of my tasks is to test a new feature called SRIOV vNIC. For those who do not know anything about SRIOV, this technology is comparable to LHEA, except that it is based on an industry standard (and has a couple of other features). By using an SRIOV adapter you can divide a physical port into what we call Virtual Functions (or Logical Ports) and map a Virtual Function to a partition. You can also set Quality of Service on these Virtual Functions: at creation time you configure the Virtual Function to take a certain percentage of the physical port. This can be very useful if you want to be sure that your production server will always have a guaranteed bandwidth, instead of using a Shared Ethernet Adapter where every client partition competes for the bandwidth. Customers are also using SRIOV adapters for performance purposes: as nothing goes through the Virtual I/O Server, the latency added by that extra hop is eliminated and CPU cycles are saved on the Virtual I/O Server side (a Shared Ethernet Adapter consumes a lot of CPU cycles). If you are not aware of what SRIOV is, I encourage you to check the IBM Redbook about it (http://www.redbooks.ibm.com/abstracts/redp5065.html?Open).

Unfortunately you can't move a partition with Live Partition Mobility if it has a Virtual Function assigned to it. Using vNICs allows you to use SRIOV through the Virtual I/O Servers and makes it possible to move your partition even if you are using an SRIOV logical port. The best of both worlds: performance/QoS and virtualization. Is this the end of the Shared Ethernet Adapter ?

SRIOV vNIC, what is it ?

Before talking about the technical details it is important to understand what vNICs are. When I explain this to newbies I often refer to NPIV: imagine something similar to NPIV, but for the network part. By using SRIOV vNIC:

  • A Virtual Function (SRIOV Logical Port) is created and assigned to the Virtual I/O Server.
  • A vNIC adapter is created in the client partition.
  • The Virtual Function and the vNIC adapter are linked (mapped) together.
  • This is a one-to-one relationship between a Virtual Function and a vNIC (just as a vfcs adapter has a one-to-one relationship with the physical Fibre Channel adapter).

On the image below, the vNIC lpars are the “yellow” ones. You can see that the SRIOV adapter is divided into different Virtual Functions, and some of them are mapped to the Virtual I/O Servers. The relationship between the Virtual Function and the vNIC is handled by a vnicserver (a special Virtual I/O Server device).

One of the major advantages of using vNIC is that you eliminate the need for the Virtual I/O Server in the data path:

  • The network data flows directly between the partition memory and the SRIOV adapter; no data copy passes through the Virtual I/O Server, which eliminates the CPU cost and the latency of that copy. This is achieved by LRDMA. Pretty cool !
  • The vNIC inherits the bandwidth allocation of the Virtual Function (QoS). If the VF is configured with a capacity of 2%, the vNIC will also have this capacity.
(diagram: vNIC architecture overview)

vNIC Configuration

Before checking all the details of how to configure an SRIOV vNIC adapter you have to check all the prerequisites. As this is a new feature you will need the latest level of… everything. My advice is to stay as up to date as possible.

vNIC Prerequisites

These outputs were taken during the early shipment program; all of this may change at the GA release:

  • Hardware Management Console v840:
  • # lshmc -V
    "version= Version: 8
     Release: 8.4.0
     Service Pack: 0
    HMC Build level 20150803.3
  • POWER8 only, with at least firmware 840 (both enterprise and scale-out systems):
  • (screenshot: firmware level)

  • AIX 7.1 TL4 or AIX 7.2:
  • # oslevel -s
    # cat /proc/version
    Oct 20 2015
    @(#) _kdb_buildinfo unix_64 Oct 20 2015 06:57:03 1543A_720
  • Obviously, at least one SRIOV-capable adapter!

Using the HMC GUI

The configuration of a vNIC is done at the partition level and is only available in the enhanced version of the GUI. Select the virtual machine on which you want to add the vNIC; in the Virtual I/O tab you’ll see a new “Virtual NICs” section. Click on “Virtual NICs” and a new panel opens with a button called “Add Virtual NIC”; just click it to add a Virtual NIC:


All the SRIOV-capable ports will be displayed on the next screen. Choose the SRIOV port you want (a Virtual Function will be created on this one; don’t do anything more, the creation of a vNIC automatically creates the Virtual Function, assigns it to the Virtual I/O Server and does the mapping to the vNIC for you). Choose the Virtual I/O Server on which the vnicserver will be created (don’t worry, we will talk about vNIC redundancy later in this post) and the Virtual NIC Capacity (the percentage of the physical SRIOV port that will be dedicated to this vNIC; it has to be a multiple of 2, and be careful: it can’t be changed afterwards, you would have to delete and recreate the vNIC to change it):


The “Advanced Virtual NIC Settings” allow you to choose the Virtual NIC Adapter ID, the MAC address, and the VLAN restrictions and VLAN tagging. In the example below I’m configuring my Virtual NIC in VLAN 310:


Using the HMC Command Line

As always, the configuration can also be done from the HMC command line: lshwres to list vNICs and chhwres to create one.

List SRIOV adapters to get the adapter_id needed by the chhwres command:

# lshwres -r sriov --rsubtype adapter -m blade-8286-41A-21AFFFF
# lshwres -r virtualio  -m blade-8286-41A-21AFFFF --rsubtype vnic --level lpar --filter "lpar_names=72vm1"

Create the vNIC:

# chhwres -r virtualio -m blade-8286-41A-21AFFFF -o a -p 72vm1 --rsubtype vnic -v -a "port_vlan_id=310,backing_devices=sriov/vios2/1/1/1/2"
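
The backing_devices value packs the whole mapping into a single string. Here is my reading of the fields in the example above (an assumption on my side; check the chhwres man page on your HMC for the authoritative format):

  sriov / vios2 / 1 / 1 / 1 / 2
  = backing device type / VIOS name / VIOS partition ID / SRIOV adapter ID / physical port ID / capacity (%)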

List the vNICs after creation:

# lshwres -r virtualio  -m blade-8286-41A-21AFFFF --rsubtype vnic --level lpar --filter "lpar_names=72vm1"

System and Virtual I/O Server Side:

  • On the Virtual I/O Server you can use two commands to check your vNIC configuration. First use the lsmap command to check the one-to-one relationship between the VF and the vNIC (you can see in the output below that a VF and a vnicserver device are created, and also the name of the vNIC on the client partition side):
  • # lsdev | grep VF
    ent4             Available   PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
    # lsdev | grep vnicserver
    vnicserver0      Available   Virtual NIC Server Device (vnicserver)
    # lsmap -vadapter vnicserver0 -vnic
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver0   U8286.41A.21FFFFF-V2-C32897             6 72nim1         AIX
    Backing device:ent4
    Client device name:ent1
    Client device physloc:U8286.41A.21FFFFF-V6-C3
  • You can get more details (QoS, VLAN tagging, port states) by using the vnicstat command:
  • # vnicstat -b vnicserver0
    VNIC Server Statistics: vnicserver0
    Device Statistics:
    State: active
    Backing Device Name: ent4
    Client Partition ID: 6
    Client Partition Name: 72nim1
    Client Operating System: AIX
    Client Device Name: ent1
    Client Device Location Code: U8286.41A.21FFFFF-V6-C3
    Device ID: df1028e214103c04
    Version: 1
    Physical Port Link Status: Up
    Logical Port Link Status: Up
    Physical Port Speed: 1Gbps Full Duplex
    Port VLAN (Priority:ID): 0:3331
    VF Minimum Bandwidth: 2%
    VF Maximum Bandwidth: 100%
  • On the client side you can list your vNICs and, as always, get more details using the entstat command:
  • # lsdev -c adapter -s vdevice -t IBM,vnic
    ent0 Available  Virtual NIC Client Adapter (vnic)
    ent1 Available  Virtual NIC Client Adapter (vnic)
    ent3 Available  Virtual NIC Client Adapter (vnic)
    ent4 Available  Virtual NIC Client Adapter (vnic)
    # entstat -d ent0 | more
    Device Type: Virtual NIC Client Adapter (vnic)
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    Speed Running:  1 Gbps Full Duplex
    Jumbo Frames: Disabled
    Port VLAN ID Status: Enabled
            Port VLAN ID: 3331
            Port VLAN Priority: 0


You will certainly agree that having such a cool new feature without something fully redundant would be a shame. Fortunately we have a solution here, with the return, with great fanfare, of the Network Interface Backup (NIB). As I told you before, each time a vNIC is created a vnicserver is created on one of the Virtual I/O Servers (at vNIC creation you choose on which Virtual I/O Server it will be created). So the only way to be fully redundant and to have a failover capability is to create two vNIC adapters (one backed by the first Virtual I/O Server, the second backed by the second Virtual I/O Server) and then create a Network Interface Backup on top of them, like in the old times :-) . Here are a couple of things and best practices to know before doing this.

  • You can’t use two VFs coming from the same physical port of the same SRIOV adapter (the NIB creation will be ok, but any configuration on top of this NIB will fail).
  • You can use two VFs coming from the same SRIOV adapter but from two different physical ports (this is the example I will show below).
  • The best practice is to use two VFs coming from two different SRIOV adapters (you can then afford to lose one of the two adapters).


Verify on your partition that you have two vNIC adapters and check that their status is ok using the ‘entstat‘ command:

  • Both vNIC are available on the client partition:
  • # lsdev -c adapter -s vdevice -t IBM,vnic
    ent0 Available  Virtual NIC Client Adapter (vnic)
    ent1 Available  Virtual NIC Client Adapter (vnic)
    # lsdev -c adapter -s vdevice -t IBM,vnic -F physloc
  • For the vNIC backed by the first Virtual I/O Server, check that “Current Link State”, “Logical Port State” and “Physical Port State” are ok (all of them need to be Up):
  • # entstat -d ent0 | grep -p vnic
    Device Type: Virtual NIC Client Adapter (vnic)
    Hardware Address: ee:3b:86:f6:45:02
    Elapsed Time: 0 days 0 hours 0 minutes 0 seconds
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
  • Same for the vNIC backed by the second Virtual I/O Server:
  • # entstat -d ent1 | grep -p vnic
    Device Type: Virtual NIC Client Adapter (vnic)
    Hardware Address: ee:3b:86:f6:45:03
    Elapsed Time: 0 days 0 hours 0 minutes 0 seconds
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up

Verify on both Virtual I/O Servers that the two vNICs are backed by two different SRIOV adapters (for the purpose of this test I’m using two different ports on the same SRIOV adapter, but it works the same with two different adapters). You can see in the output below that on Virtual I/O Server 1 the vNIC is backed by the port in position 3 (T3) and that on Virtual I/O Server 2 the vNIC is backed by the port in position 4 (T4):

  • Once again, use the lsmap command on the first Virtual I/O Server to check this (note that you can also check the client name and the client device):
  • # lsmap -vadapter vnicserver0 -vnic
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver0   U8286.41A.21AFF8V-V1-C32897             6 72nim1         AIX
    Backing device:ent4
    Client device name:ent0
    Client device physloc:U8286.41A.21AFF8V-V6-C2
  • Same thing on the second Virtual I/O Server:
  • # lsmap -vadapter vnicserver0 -vnic -fmt :

Finally create the Network Interface Backup and put an IP on top of it:

# mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1
ent2 Available
# mktcpip -h 72nim1 -a -i en2 -g -m -s
inet0 changed
en2 changed
inet0 changed
# echo "vnic" | kdb
|       pACS       | Device | Link |    State     |
| F1000A0032880000 |  ent0  |  Up  |     Open     |
| F1000A00329B0000 |  ent1  |  Up  |     Open     |

Let’s now try different things to see if the redundancy is working. First let’s shut down one of the Virtual I/O Servers and ping our machine from another one:

# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=0.496 ms
64 bytes from icmp_seq=2 ttl=255 time=0.528 ms
64 bytes from icmp_seq=3 ttl=255 time=0.513 ms
64 bytes from icmp_seq=40 ttl=255 time=0.542 ms
64 bytes from icmp_seq=41 ttl=255 time=0.514 ms
64 bytes from icmp_seq=47 ttl=255 time=0.550 ms
64 bytes from icmp_seq=48 ttl=255 time=0.596 ms
--- ping statistics ---
50 packets transmitted, 45 received, 10% packet loss, time 49052ms
rtt min/avg/max/mdev = 0.457/0.525/0.596/0.043 ms
# errpt | more
59224136   1120200815 P H ent2           ETHERCHANNEL FAILOVER
F655DA07   1120200815 I S ent0           VNIC Link Down
3DEA4C5F   1120200815 T S ent0           VNIC Error CRQ
81453EE1   1120200815 T S vscsi1         Underlying transport error
DE3B8540   1120200815 P H hdisk0         PATH HAS FAILED
# echo "vnic" | kdb
(0)> vnic
|       pACS       | Device | Link |    State     |
| F1000A0032880000 |  ent0  | Down |   Unknown    |
| F1000A00329B0000 |  ent1  |  Up  |     Open     |

Same test, but with the addition of an “address to ping” on the Network Interface Backup (used for failure detection), and this time I’m only losing 4 packets:

# ping
64 bytes from icmp_seq=41 ttl=255 time=0.627 ms
64 bytes from icmp_seq=42 ttl=255 time=0.548 ms
64 bytes from icmp_seq=46 ttl=255 time=0.629 ms
64 bytes from icmp_seq=47 ttl=255 time=0.492 ms
# errpt | more
59224136   1120203215 P H ent2           ETHERCHANNEL FAILOVER
F655DA07   1120203215 I S ent0           VNIC Link Down
3DEA4C5F   1120203215 T S ent0           VNIC Error CRQ

vNIC Live Partition Mobility

You can use Live Partition Mobility with SRIOV vNICs by default; it is super simple and fully supported by IBM. As always, I’ll show you how to do it using the HMC GUI and the command line.

Using the GUI

First validate the mobility operation; this will allow you to choose the destination SRIOV adapter/port on which to map your current vNIC. You have to choose:

  • The adapter (if you have more than one SRIOV adapter).
  • The Physical port on which the vNIC will be mapped.
  • The Virtual I/O Server on which the vnicserver will be created.

New options are now available in the mobility validation panel:


Modify each vNIC to match your destination SRIOV adapters and ports (and choose the destination Virtual I/O Server here):


Then migrate:


A5E6DB96   1120205915 I S pmig           Client Partition Migration Completed
4FB9389C   1120205915 I S ent1           VNIC Link Up
F655DA07   1120205915 I S ent1           VNIC Link Down
11FDF493   1120205915 I H ent2           ETHERCHANNEL RECOVERY
4FB9389C   1120205915 I S ent1           VNIC Link Up
4FB9389C   1120205915 I S ent0           VNIC Link Up
59224136   1120205915 P H ent2           ETHERCHANNEL FAILOVER
B50A3F81   1120205915 P H ent2           TOTAL ETHERCHANNEL FAILURE
F655DA07   1120205915 I S ent1           VNIC Link Down
3DEA4C5F   1120205915 T S ent1           VNIC Error CRQ
F655DA07   1120205915 I S ent0           VNIC Link Down
3DEA4C5F   1120205915 T S ent0           VNIC Error CRQ
08917DC6   1120205915 I S pmig           Client Partition Migration Started

The ping test during the LPM shows only 9 pings lost, due to the EtherChannel failover (one of my ports was down on the destination server):

# ping
64 bytes from icmp_seq=23 ttl=255 time=0.504 ms
64 bytes from icmp_seq=31 ttl=255 time=0.607 ms

Using the command line

I’m moving the partition back using the HMC command line interface; check the man page for all the details. Here is the format of the vnic_mappings attribute: slot_num/ded/[vios_lpar_name]/[vios_lpar_id]/[adapter_id]/[physical_port_id]/[capacity] (a field-by-field breakdown follows the commands below):

  • Validate:
  • # migrlpar -o v -m blade-8286-41A-21AFFFF -t  runner-8286-41A-21AEEEE  -p 72nim1 -i 'vnic_mappings="2/ded/vios1/1/1/2/2,3/ded/vios2/2/1/3/2"'
    HSCLA291 The selected partition may have an open virtual terminal session.  The management console will force termination of the partition's open virtual terminal session when the migration has completed.
  • Migrate:
  • # migrlpar -o m -m blade-8286-41A-21AFFFF -t  runner-8286-41A-21AEEEE  -p 72nim1 -i 'vnic_mappings="2/ded/vios1/1/1/2/2,3/ded/vios2/2/1/3/2"'
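
Decoding the first mapping from the commands above, field by field (my reading of the format given earlier; check the migrlpar man page for the authoritative description):

  2 / ded / vios1 / 1 / 1 / 2 / 2
  = client slot 2 / dedicated / destination VIOS vios1 / destination VIOS partition ID 1 / destination adapter ID 1 / destination physical port ID 2 / capacity 2%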

Port Labelling

One very annoying thing about using LPM with vNICs is that you have to redo the mapping of your vNICs every time you move. The default choices are never right: the GUI will always show you the first port of the first adapter and you have to do the job yourself. Even worse, on the command line the vnic_mappings can give you some headaches :-) . Fortunately there is a feature called port labelling. You can put a label on each SRIOV physical port on all your machines. My advice is to tag the ports serving the same network and the same VLAN with the same label on all your machines. During the mobility operation, if labels match between the two machines, the adapter/port combination matching the label is automatically chosen for the mobility and you have nothing to map on your own. Super useful. The outputs below show you how to label your SRIOV ports:


# chhwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=3,phys_port_label=adapter1port3"
# chhwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=2,phys_port_label=adapter1port2"
# lshwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport --level eth -F adapter_id,phys_port_label

At validation time, source and destination ports will automatically be matched:


What about performance

One of the main reasons I’m looking at SRIOV vNIC adapters is performance. As our whole design is based on the fact that we need to be able to move every virtual machine from one host to another, we need a solution allowing both mobility and performance. If you have ever tried to run a TSM server in a virtualized environment you probably understand what I mean about performance and virtualization: in the case of TSM you need a lot of network bandwidth. My current customer and my previous one tried to do that using Shared Ethernet Adapters, and of course this solution did not work, because a classic Virtual Ethernet Adapter is not able to provide enough bandwidth for a single Virtual I/O client. I’m not an expert in network performance, but the results you will see below are pretty easy to understand and show you the power of vNIC and SRIOV (I know some optimization can be done on the SEA side, but this is just a super simple test).


I will compare here a classic Virtual Ethernet Adapter with a vNIC in the same configuration; both environments are identical: same machines, same switches and so on:

  • Two machines are used for the test. In the vNIC case both use a single vNIC backed by a 10Gb adapter; in the Virtual Ethernet Adapter case both are backed by a SEA built on top of a 10Gb adapter.
  • The two machines are running on two different S814 servers.
  • Entitlement and memory are the same for source and destination machines.
  • In the case of vNIC the capacity of the VF is set at 100% and the physical port of the SRIOV adapter is dedicated to the vNIC.
  • In the Virtual Ethernet Adapter case the SEA is dedicated to the test virtual machine.
  • In both cases an MTU of 1500 is used.
  • The tool used for the performance test is iperf (MTU 1500, window size 64K, and 10 TCP threads); a sketch of the iperf invocation follows this list.
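
For reference, here is roughly how iperf is invoked for such a test (a sketch matching the window size and thread count listed above; 10.10.10.1 is a placeholder for the server’s address):

# on the machine acting as the iperf server:
# iperf -s -w 64k
# on the client machine (10 parallel TCP threads, 60 seconds):
# iperf -c 10.10.10.1 -w 64k -P 10 -t 60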

SEA test for reference only

  • iperf server:
  • (screenshot: iperf server results, SEA case)

  • iperf client:
  • (screenshot: iperf client results, SEA case)


Now the vNIC case; we run the exact same test:

  • iperf server:
  • (screenshot: iperf server results, vNIC case)

  • iperf client:
  • (screenshot: iperf client results, vNIC case)

By using a vNIC I get 300% of the bandwidth I get with a Virtual Ethernet Adapter. Just awesome ;-) , with no tuning (out-of-the-box configuration). Nothing more to add; it’s pretty obvious that using vNICs for performance will be a must.


Are SRIOV vNICs the end of the SEAs ? Maybe, but not yet ! For some cases, like performance and QoS, they will be very useful and widely adopted (I’m pretty sure I will use them at my current customer to virtualize the TSM servers). But today, in my opinion, SRIOV still lacks a real redundancy feature at the adapter level. What I want is a heartbeat communication between the two SRIOV adapters. Having such a feature on SRIOV adapters would finally convince customers to move from SEA to SRIOV vNIC. I know nothing about the future but I hope something like that will be available in the next few years. To sum up, SRIOV vNICs are powerful, easy to use, and simplify the configuration and management of your Power Servers. Please wait for the GA and try this new killer feature. As always, I hope it helps.

IBM Technical University for PowerSystems 2015 – Cannes (both sessions files included)

I have been traveling the world since my first IBM Technical University for PowerSystems in Dublin (4 years ago as far as I remember). I had the chance to be in Budapest last year and in Cannes this year (a little bit less fun for a French guy than Dublin or Budapest), but in a different way: this year I had the opportunity to be a speaker for two sessions (and two repeats), thanks to the kindness of Alex Abderrazag (thank you for trusting me, Alex). My first plan was to go to Tokyo for the Openstack summit to talk about PowerVC, but unfortunately I was not able to make it because of confidentiality issues with my current company (the goal was to be a customer reference for PowerVC). I didn’t realize that creating two sessions from scratch, on two topics which are pretty new, would be so hard. I thought it would take me a couple of hours for each one, but it took me so many hours that I am now impressed by the people who do this as their daily job ;-) . Something that took even more time than creating the slides was the preparation of these two sessions (speaker notes, practicing (special thanks here to the people who helped me rehearse, especially the fantastic Bill Miller ;-) ) and so on).

One last thing I didn’t realize is that you have to manage your stress. As it was my first time at such a big event, I can assure you that I was super stressed. One funny thing about the stress is that it disappeared one hour before the session; before that moment I had to find ways to deal with it… and I realized that I wasn’t stressed because of the sessions themselves, but because I had to speak English in front of so many people (a super tricky thing to do for a shy French guy, trust me !). My first sessions (on both topics) were full (no more chairs available in the room) and the repeats were ok too, so I think it went well and I was not so bad at it ;-) .


I want to thank here all the people who helped me do this. Philippe Hermes (best pre-sales in France ;-) ) for believing in me and helping me (re-reading my Powerpoint, and taking care of me during the event). Alex Abderrazag for allowing me to do this. Nigel Griffiths for re-reading the PowerVC session and giving me a couple of tips and tricks about being a speaker. Bill Miller and Alain Lechevalier for the rehearsals of both sessions, and finally Rosa Davidson (she gave me the desire to do this). I’m not forgetting Jay Kruemcke, who gave me some IBM shirts for these sessions (and also for a lot of other things). Sorry to those whom I may have forgotten.

Many people asked me to share my Powerpoint files; you will find both files below in this post. Here are the two presentations:

  • PowerVC for PowerVM deep dive – Tips & Tricks.
  • Using Chef Automation on AIX.

PowerVC for PowerVM deep dive – Tips & Tricks

This session is for advanced PowerVC users. You’ll find a lot of tips and tricks allowing you to customize your PowerVC. Beyond tips and tricks, you’ll also find in this session how PowerVC works (images, activation, cloud-init, and so on). If you are not a PowerVC user this session can be a little difficult for you, but these tips and tricks are the lessons I learned from the field using PowerVC in a production environment:

Using Chef Automation on AIX

This session gives you all the basics to understand what Chef is and what you can do with this tool. You’ll also find examples of how to update service packs and technology levels on AIX using Chef. Good examples of using Chef for post-installation tasks and of how to use it with PowerVC are also provided in this session.


I hope you enjoyed the sessions if you were at Cannes this year. On my side I really enjoyed doing this; it was a very good experience for me and I hope I’ll have the opportunity to do it again. Feel free to tell me if you want to see me at future technical events like these. The next step is now to do something at Edge… not so sure this dream will come true any time soon ;-) .

Updating AIX TL and SP using Chef

Creating something to automate the update of a service pack or a technology level has always been a dream that never came true. You can trust me: almost every customer I know has tried to make that dream come true. Different customers, same story everywhere: they tried to do something and then tripped up in a miserable way. A fact that is always true in those stories is that the decision is always taken by someone who does not understand that AIX cannot be managed like a workstation or any other OS (who said Windows ?). A good example of that is an IBM tool (and you know that I’m an IBM fan) called BigFix/TEM (Tivoli Endpoint Manager). I’m not an expert on TEM (so maybe I am wrong), but you can use it to update your Windows OS, your Linux, your AIX and even your iPhones or Android devices. LET ME LAUGH! How can anybody think this makes sense: updating an iPhone the same way you update an AIX ? A good joke! (To be clear, I always have and always will support IBM, but my role is also to say what I think.) Another good example is the use of IBM Systems Director (unfortunately… or fortunately, this one has been withdrawn a few days ago). I tried it myself a few years ago (you can check this post). Systems Director was, in my humble opinion, the least worst solution to update an AIX or a Virtual I/O Server in an automated way.

So how are we going to do this in a world that is always asking to do more with fewer people ? I had to find a solution a few months ago to update more than 700 hosts from AIX 6.1 to AIX 7.1; the job was to create something that anybody could launch without knowing anything about AIX (one more time: who can even think this is possible ?). I tried things like writing scripts to automate nimadm and I’m pretty happy with that solution (almost 80% were ok without any errors, but there were tons of prerequisites before launching the scripts and we faced some problems that were inevitable (nimsh errors, sendmail configuration, broken filesets), forcing the AIX L3 team to fix tons of migrations). As everybody knows, I have now been working on Chef for a few months, and this can be the answer to what our world is asking for today: replacing hundreds of people by a single man launching a magical thing that can do everything without knowing anything about anything, and saving money! This is obviously ironic, but unfortunately this is the reality of what happens today in France. “Money” and “resources” rule everything, without any plan for the future (to be clear, I’m talking here about a generality; nothing here reflects what’s going on in my place). It is like it is, and as a good soldier I’m going to give you solutions to face the reality of this harsh world. But now it’s action time ! I don’t want to be too pessimistic, but this is unfortunately the reality of what is happening today, and my anger about it only reflects the fact that I’m living in fear: the fear of becoming bad, or the fear of doing a job I really don’t like. I think I have to find a solution to this problem. The picture below is clear enough to give you a good example of what I’m trying to do with Chef.


How do we update machines

I’m not here to teach you how to update a service pack or a technology level (I’m sure everybody knows that), but to do it in an automated way we need to talk about the method and identify each step needed to perform an update. As there is always one more way to do it, I have identified three ways to update a machine: the multibos way, the nimclient way, and the alt_disk_copy way. To be able to update using Chef we obviously need an available provider for each method (you could do this with the execute resource, but we’re here to have fun and learn new things). So we need one provider capable of managing multibos, one capable of managing nimclient, and one capable of managing alt_disk_copy. All three providers are available now and can be used to write recipes doing what is necessary to update a machine. Obviously there are pre-update and post-update steps needed (removing efixes, checking filesets). Let’s identify the required steps first:

  • Verify with lppchk the consistency of all installed packages.
  • Remove any installed efixes (using emgr provider)
  • The multibos way:
    • You don’t need to create a backup of the rootvg using the multibos way.
    • Mount the SP or TL directory from the NIM server (using Chef mount resource).
    • Create the multibos instance and update using the remote mounted directory (using multibos resource).
  • The nimclient way:
    • Create a backup of your rootvg (using the altdisk resource).
    • Use nimclient to run a cust operation (using the niminit and nimclient resources).
  • The alt_disk_copy way:
    • You don’t need to create a backup of the rootvg using the alt_disk_copy way.
    • Mount the SP or TL directory from the NIM server (using Chef mount).
    • Create the altinst_rootvg volume group and update it using the remote mounted directory (using altdisk provider).
  • Reboot the machine.
  • Remove any unwanted bos, old_rootvg.

Reminder where to download the AIX Chef cookbook:

Before trying to do all these steps in a single recipe, let’s use the resources one by one to understand what each one does.

Fixes installation

This one is simple: it allows you to install or remove fixes on your AIX machine. In the examples below we show how to do that with two Chef recipes: one for installing and the other one for removing! Super easy.

Installing fixes

In the recipe, provide all the fix names in an array and specify the name of the directory containing the filesets (this can be an NFS mount point if you want). Please note that I’m using the cookbook_file resource to download the fixes; this resource downloads a file directly from the cookbook (so from the Chef server). Imagine using this single recipe to install a fix on all your machines. Quite easy ;-)

directory "/var/tmp/fixes" do
  action :create
end

cookbook_file "/var/tmp/fixes/IV75031s5a.150716.71TL03SP05.epkg.Z" do
  source 'IV75031s5a.150716.71TL03SP05.epkg.Z'
  action :create
end

cookbook_file "/var/tmp/fixes/IV77596s5a.150930.71TL03SP05.epkg.Z" do
  source 'IV77596s5a.150930.71TL03SP05.epkg.Z'
  action :create
end

aix_fixes "installing fixes" do
  fixes ["IV75031s5a.150716.71TL03SP05.epkg.Z", "IV77596s5a.150930.71TL03SP05.epkg.Z"]
  directory "/var/tmp/fixes"
  action :install
end

directory "/var/tmp/fixes" do
  recursive true
  action :delete
end


Removing fixes

The recipe is almost the same, but with the remove action instead of the install action. Please note that you can specify which fixes to remove or use the keyword all to remove all the installed fixes (in the case of our update recipe we will use “all”, as we want to remove all fixes before launching the update).

aix_fixes "remove fixes IV75031s5a and IV77596s5a" do
  fixes ["IV75031s5a", "IV77596s5a"]
  action :remove
end

aix_fixes "remove all fixes" do
  fixes ["all"]
  action :remove
end


Alternate disks

In most AIX shops I have seen, the way to back up your system before doing anything is to create an alternate disk using the alt_disk_copy command. In some places where sysadmins love their job, this disk is even updated on the fly to do a TL or SP upgrade. The altdisk resource I’ve coded for Chef takes care of this. I will not detail every available action with examples, and will focus on create and customize:

  • create: creates an alternate disk; we will detail the attributes in the next section.
  • cleanup: cleans up the alternate disk (removes it).
  • rename: renames the alternate disk.
  • sleep: puts the alternate disk to sleep (umounts every /alt_inst/* filesystem and varyoffs the volume group).
  • wakeup: wakes up the alternate disk (varyons the volume group and mounts every filesystem).
  • customize: runs a cust operation (the current resource is coded to use a directory, updating the alternate disk with all the filesets present in that directory).


The create action creates an alternate disk and helps you find an available disk for this creation. In all cases only free disks will be chosen (disks with no PVID and no volume group defined). Different types are available to choose the disk on which the alternate disk will be created (examples follow the list):

  • Size: if type is :size, a disk with exactly the same size as the value attribute will be used.
  • Name: if type is :name, a disk with the name given in the value attribute will be used.
  • Auto: in auto mode the available values for value are bigger and equals. If bigger is chosen, the first disk found with a size bigger than the current rootvg will be used; if equals is chosen, the first disk found with a size equal to the current rootvg will be used.
aix_altdisk "cloning rootvg by name" do
  type :name
  value "hdisk3"
  action :create
end

aix_altdisk "cloning rootvg by size 66560" do
  type :size
  value "66560"
  action :create
end

aix_altdisk "removing old alternates" do
  action :cleanup
end

aix_altdisk "cloning rootvg" do
  type :auto
  value "bigger"
  action :create
end



The customize action updates the previously created alternate disk with the filesets present in an NFS-mounted directory (from the NIM server). Please note in the recipe below that we are mounting the directory over NFS. node[:nim_server] is a node attribute telling which NIM server will be mounted; for instance you can define one NIM server for the production environment and another one for the development environment.

# mounting /mnt
mount "/mnt" do
  device "#{node[:nim_server]}:/export/nim/lpp_source"
  fstype 'nfs'
  action :mount
end

# updating the alternate disk
aix_altdisk "altdisk_update" do
  image_location "/mnt/7100-03-05-1524"
  action :customize
end

mount "/mnt" do
  action :umount
end



The niminit and nimclient resources are used to register the nimclient to the NIM master and then run nimclient operations from the client. In my humble opinion this is the best way to do the update at the time of writing this blog post. One cool thing is that you can specify on which adapter the nimclient will be configured by using some ohai attributes. It’s an elegant way to do it; one more time this shows you the power of Chef ;-) . Let’s start with some examples:


aix_niminit node[:hostname] do
  action :remove
end

aix_niminit node[:hostname] do
  master "nimcloud"
  connect "nimsh"
  pif_name node[:network][:default_interface]
  action :setup
end



nimclient can first be used to install some filesets you may need. The provider is intelligent and can choose the right lpp_source for you. Please note that you will need lpp_sources with a specific naming convention if you want to use this feature: to find the next/latest available SP/TL, the provider checks the oslevel of the current machine and compares it with the lpp_sources present on your NIM server. The naming convention needed is $(oslevel -s)-lpp_source (e.g. 7100-03-05-1524-lpp_source) (the same principle applies to the spot when you need to use one):

$ lsnim -t lpp_source | grep 7100
7100-03-00-0000-lpp_source             resources       lpp_source
7100-03-01-1341-lpp_source             resources       lpp_source
7100-03-02-1412-lpp_source             resources       lpp_source
7100-03-03-1415-lpp_source             resources       lpp_source
7100-03-04-1441-lpp_source             resources       lpp_source
7100-03-05-1524-lpp_source             resources       lpp_source

If your NIM resource names are ok, the lpp_source attribute can be:

  • latest_sp: the latest available service pack.
  • next_sp: the next available service pack.
  • latest_tl: the latest available technology level.
  • next_tl: the next available technology level.
  • If you do not want to use this, you can still specify the name of the lpp_source by hand.

Here are a few examples installing packages:

aix_nimclient "installing filesets" do
  installp_flags "aXYg"
  lpp_source "7100-03-04-1441-lpp_source"
  filesets ["openssh.base.client", "openssh.base.server", "openssh.license"]
  action :cust
end

aix_nimclient "installing filesets" do
  installp_flags "aXYg"
  lpp_source "7100-03-04-1441-lpp_source"
  filesets ["bos.compat.cmds", "bos.compat.libs"]
  action :cust
end

aix_nimclient "installing filesets" do
  installp_flags "aXYg"
  lpp_source "7100-03-04-1441-lpp_source"
  filesets ["Java6_64.samples"]
  action :cust
end


Please note that some filesets were already installed, so the resource did not converge for those ;-) . Let’s now try to update to the latest service pack:

aix_nimclient "updating to latest sp" do
  installp_flags "aXYg"
  lpp_source "latest_sp"
  fixes "update_all"
  action :cust
end


Tadam! The machine was updated from 7100-03-04-1441 to 7100-03-05-1524 using a single recipe, without even knowing which service pack was available!


I really like the multibos way and I don’t know why so few people are using it today. Anyway, I know some customers who work only this way, so I thought it was worth working on a multibos resource. Here is a nice recipe creating a bos and updating it.

# creating dir for mount
directory "/var/tmp/mnt" do
  action :create
end

# mounting /var/tmp/mnt
mount "/var/tmp/mnt" do
  device "#{node[:nim_server]}:/export/nim/lpp_source"
  fstype 'nfs'
  action :mount
end

# removing standby multibos
aix_multibos "removing standby bos" do
  action :remove
end

# creating multibos and updating it
aix_multibos "creating bos" do
  action :create
end

aix_multibos "update bos" do
  update_device "/var/tmp/mnt/7100-03-05-1524"
  action :update
end

# unmounting /var/tmp/mnt
mount "/var/tmp/mnt" do
  action :umount
end

# deleting temp directory
directory "/var/tmp/mnt" do
  action :delete
end


Full recipes for updates

Let’s now write a big recipe doing everything we need for an update. Remember that if one resource fails, the recipe stops by itself. For instance, you’ll see in the recipe below that I’m running an "lppchk -vm3": if it returns something other than 0, the resource fails and so does the recipe. This is obviously normal behavior; it seems wise not to continue if there is a problem. So, to sum up, here are all the steps this recipe performs: check fileset consistency, remove all efixes, commit filesets, create an alternate disk, configure the nimclient, run the update, deallocate resources.

# if lppchk -vm3 return code is different
# from zero the recipe will fail
# no guard needed here
execute "lppchk" do
  command 'lppchk -vm3'
end

# removing any efixes
aix_fixes "removing_efixes" do
  fixes ["all"]
  action :remove
end

# committing filesets
# no guard needed here
execute 'commit' do
  command 'installp -c all'
end

# cleaning existing altdisk
aix_altdisk "cleanup alternate rootvg" do
  action :cleanup
end

# creating an alternate disk using the
# first disk bigger than the actual rootvg
# bootlist to false as this disk is just a backup copy
aix_altdisk "altdisk_by_auto" do
  type :auto
  value "bigger"
  change_bootlist true
  action :create
end

# nimclient configuration
aix_niminit node[:hostname] do
  master "nimcloud"
  connect "nimsh"
  pif_name "en1"
  action :setup
end

# update to the latest available tl/sp
aix_nimclient "updating to latest sp" do
  installp_flags "aXYg"
  lpp_source "latest_sp"
  fixes "update_all"
  action :cust
end

# deallocating resources
aix_nimclient "deallocating resources" do
  action :deallocate
end

How about a single point of management: “knife ssh” and “push jobs”

Chef is and was designed on a pull model: the client asks the server for the recipes and cookbooks and then executes them. This is the role of the chef-client. In a Linux environment people often run the client in daemonized mode: the client wakes up on a time-interval basis and runs (so every change to the cookbooks is applied by the client). I’m almost sure that every AIX shop will be against this method because it is dangerous; if you do that, run the change first in the test environment, then in dev, and finally in production. To be honest this is not the model I want to build where I am working: for some actions (like updates) we want a push model.

By default Chef is delivered with a feature called push jobs, a way to run jobs like “execute the chef-client” from your knife workstation. Unfortunately push jobs needs a plugin on the chef-client side, and this one is only available on Linux… not yet on AIX. Anyway, we have an alternative: the knife ssh plugin. This plugin, included in knife by default, lets you run commands on the nodes over ssh. Even better, if you already have an ssh gateway with key sharing enabled, knife ssh can use this gateway to communicate with the clients. Using knife ssh you have the possibility to say “run the chef-client on all my AIX 6.1 nodes” or “run this recipe installing this fix on all my AIX 7.1 nodes”; the possibilities are endless. One last note about knife ssh: it creates tunnels through your ssh gateway to communicate with the nodes, so if you use a shared key you have to copy the private key to the knife workstation (it took me some time to understand that). Here are some examples (see the sketch below, then the screenshots):
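
Since the examples below are screenshots, here is roughly what such invocations look like (a sketch: the search queries, user and gateway values are hypothetical, adapt them to your own ohai attributes and ssh setup):

$ knife ssh "platform:aix" "oslevel -s" -x root -G user@sshgateway
$ knife ssh "platform:aix AND platform_version:7.1" "chef-client" -x root -G user@sshgateway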


  • On two nodes, check the current oslevel:
  • (screenshot: knife ssh oslevel output)

  • Run the update with Chef:
  • (screenshot: knife ssh chef-client run)

  • Alternate disks have been created:
  • (screenshot: alternate disks created on both nodes)

  • Both systems are up to date:
  • (screenshot: oslevel after the update)


I think this blog post helped you better understand Chef and what Chef is capable of. We are still at the very beginning of the Chef cookbook and I’m sure plenty of new things (recipes, providers) will come in the next few months. Try it yourself and I’m sure you’ll like the way it works. I must admit that it is difficult to learn and to start with, but if you do this right you’ll get the benefit of an automation tool working on AIX… and honestly, AIX needs an automation tool. I’m almost sure it will be Chef (in fact we have no other choice). Help us write post-install recipes, update recipes and any other recipes you can think of. We need your help and it is happening now! You have the opportunity to be part of this, part of something new that will help AIX in the future. We don’t want a dying OS; Chef will give AIX the opportunity to be an OS with a fully working automation tool. Go give it a try now!

Tips and tricks for PowerVC 1.2.3 (PVID, ghostdev, clouddev, rest API, growing volumes, deleting boot volume) | PowerVC 1.2.3 Redbook

Writing a Redbook was one of my main goals. After working days and nights for more than 6 years on Power Systems, IBM gave me the opportunity to write a Redbook. I had been looking at the Redbook residencies page for a very, very long time to find the right one. As there was nothing new about AIX and PowerVM (which are my favorite topics), I decided to give the latest PowerVC Redbook a try (this Redbook is an update, but a huge one; PowerVC is moving fast). I have been a Redbook reader since I started working on AIX. Almost all Redbooks are good; most of them are the best source of information for AIX and Power administrators. I’m sure that, like me, you noticed the part about becoming an author every time you read a Redbook. I can now say THAT IT IS POSSIBLE (for everyone). I’m now one of those guys and you can become one too. Just find the Redbook that fits you and apply on the Redbook webpage (http://www.redbooks.ibm.com/residents.nsf/ResIndex). I want to say a BIG thank you to all the people who gave me this opportunity, especially Philippe Hermes, Jay Kruemcke, Eddie Shvartsman, Scott Vetter, Thomas R Bosthworth. In addition to these people I also want to thank my teammates on this Redbook: Guillermo Corti, Marco Barboni and Liang Xu; they are all true professionals, very skilled and open… this was a great team ! One more time, thank you guys. Last, I take the opportunity here to thank the people who believed in me since the very beginning of my AIX career: Julien Gabel, Christophe Rousseau, and JL Guyot. Thank you guys ! You deserve it, stay as you are. I’m not an anonymous guy anymore.


You can download the Redbook at this address: http://www.redbooks.ibm.com/redpieces/pdfs/sg248199.pdf. I learned something during the writing of this Redbook and by talking to the members of the team: Redbooks are not there to tell and explain to you what’s “behind the scenes”. A Redbook cannot be too long, and needs to be written in almost 3 weeks; there is no room for everything. Some topics are better suited to a blog post than to a Redbook, and Scott told me that a couple of times during the writing session. I totally agree with him. So here is this long-awaited blog post. These are advanced topics about PowerVC; read the Redbook before reading this post.

Last, thanks to IBM (and just IBM) for believing in me :-) . THANK YOU SO MUCH.

ghostdev, clouddev and cloud-init (ODM wipe if using inactive live partition mobility or remote restart)

Everybody using cloud-init should be aware of this: cloud-init is only supported on AIX versions that have the clouddev attribute on sys0. To be totally clear, at the time of writing this blog post you will be supported by IBM only if you use AIX 7.1 TL3 SP5 or AIX 6.1 TL9 SP5; all other versions are not supported. Let me explain why, and how you can still use cloud-init on older versions with a little trick. But let’s first explain what the problem is:

Let’s say you have different machines, some using AIX 7100-03-05 and some using 7100-03-04, and both use cloud-init for the activation. By looking at the cloud-init code at this address here, we can say that:

  • After the cloud-init installation, cloud-init is:
  • Changing clouddev to 1 if sys0 has a clouddev attribute:
  • # oslevel -s
    # lsattr -El sys0 -a ghostdev
    ghostdev 0 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    clouddev 1 N/A True
  • Changing ghostdev to 1 if sys0 doesn’t have a clouddev attribute:
  • # oslevel -s
    # lsattr -El sys0 -a ghostdev
    ghostdev 1 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    lsattr: 0514-528 The "clouddev" attribute does not exist in the predefined
            device configuration database.

This behavior can directly be observed in the cloud-init code:
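
The screenshot of that code is not reproduced here; as a shell paraphrase of the logic described above (my reading of the behavior, not the literal cloud-init source):

# run at cloud-init installation time
if lsattr -El sys0 -a clouddev >/dev/null 2>&1; then
  # AIX level with clouddev support: use clouddev
  chdev -l sys0 -a clouddev=1
else
  # older AIX level without clouddev: fall back to ghostdev
  chdev -l sys0 -a ghostdev=1
fi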


Now that we are aware of that, let’s make a remote restart test between two P8 boxes. I take the opportunity here to present one of the coolest features of PowerVC 1.2.3: you can now remote restart your virtual machines directly from the PowerVC GUI if one of your hosts is in a failure state. I highly encourage you to check my previous post on this subject if you don’t know how to set up remote restartable partitions: http://chmod666.org/index.php/using-the-simplified-remote-restart-capability-on-power8-scale-out-servers/:

  • Only simplified remote restart can be managed by PowerVC 1.2.3; the “normal” version of remote restart is not handled by PowerVC 1.2.3.
  • In the compute template configuration there is now a checkbox allowing you to create remote restartable partitions. Be careful: you can’t move back to a P7 box without rebooting the machine, so be sure your Virtual Machines will stay on P8 boxes if you check this option.
  • (screenshot: compute template with the remote restart checkbox)

  • When the machine is shutdown or there is a problem on it you can click the “Remotely Restart Virtual Machines” button:
  • (screenshot: the “Remotely Restart Virtual Machines” button)

  • Select the machines you want to remote restart:
  • (screenshot: selecting the machines to remote restart)

  • While the Virtual Machines are remote restarting, you can check the states of the VM and the state of the host:
  • (screenshot: VM and host states during the remote restart)

  • After the evacuation the host is in “Remote Restart Evacuated State”:


Let’s now check the state of our two Virtual Machines:

  • The ghostdev one (the sys0 message in the errpt indicates that the partition ID has changed AND DEVICES ARE RECREATED (ODM wipe)) (no more IP address set on en0):
  • # errpt | more
    A6DF45AA   0803171115 I O RMCdaemon      The daemon is started.
    1BA7DF4E   0803171015 P S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    D872C399   0803171015 I O sys0           Partition ID changed and devices recreat
    # ifconfig -a
    lo0: flags=e08084b,c0
            inet netmask 0xff000000 broadcast
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
  • The clouddev one (the sys0 message in the errpt indicates that the partition ID has changed; note that it does not say the devices were recreated):
  • # errpt |more
    60AFC9E5   0803232015 I O sys0           Partition ID changed since last boot.
    # ifconfig -a
    en0: flags=1e084863,480
            inet netmask 0xffffff00 broadcast
             tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
    lo0: flags=e08084b,c0
            inet netmask 0xff000000 broadcast
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

VSAE is designed to manage ghostdev-only OSes; on the other hand, cloud-init is designed to manage clouddev OSes. To be perfectly clear, here is how ghostdev and clouddev work. But we first need to answer a question: why do we need to set clouddev or ghostdev to 1 at all ? The answer is pretty obvious: one of these attributes needs to be set to 1 before capturing the Virtual Machine. When you deploy a new Virtual Machine this flag is needed to wipe the ODM before reconfiguring the machine with the parameters set in the PowerVC GUI (IP, hostname). In both the clouddev and ghostdev cases we obviously need to wipe the ODM at build/deploy time; VSAE or cloud-init (using the config drive datasource) then sets the hostname and IP address that were wiped by the clouddev or ghostdev mechanism. This works well for a new deploy because we need to wipe the ODM in that case, but what about an inactive live partition mobility or a remote restart operation ? The Virtual Machine has moved (not on the same host, and not with the same LPAR ID) and we need to keep the ODM as it is. Here is how it works:

  • If you are using VSAE, it manages the ghostdev attribute for you. At capture time ghostdev is set to 1 by VSAE (when you run the pre-capture script). When deploying a new VM, at activation time, VSAE sets ghostdev back to 0. Inactive live partition mobility and remote restart operations then work fine with ghostdev set to 0.
  • If you are using cloud-init on a supported system, clouddev is set to 1 at cloud-init installation. As cloud-init does nothing with either attribute at activation time, IBM needed a way to avoid wiping the ODM after a remote restart operation; that is why the clouddev device was introduced. It writes a flag in the NVRAM: when a new VM is built there is no flag in the NVRAM for it, so the ODM is wiped; when an existing VM is remote restarted the flag exists in the NVRAM, so the ODM is not wiped. By using clouddev, no post-deploy action is needed.
  • If you are using cloud-init on an unsupported system, ghostdev is set to 1 at cloud-init installation. As cloud-init does nothing at post-deploy time, ghostdev remains 1 in all cases and the ODM will always be wiped.


There is a way to use cloud-init on unsupported systems. Keep in mind that in this case you will not be supported by IBM, so do this at your own risk. To be totally honest, I’m using this method in production to use the same activation engine for all my AIX versions:

  1. Pre-capture, set ghostdev to 1. Whatever happens, THIS IS MANDATORY.
  2. Post-capture, reboot the captured VM and set ghostdev back to 0.
  3. Post-deploy, set ghostdev to 0 on every virtual machine. You can put this in the activation input to do the job:
  4. #cloud-config
     runcmd:
      - chdev -l sys0 -a ghostdev=0

The PVID problem

I realized I had this problem after using PowerVC for a while. As PowerVC images for the rootvg and the other volume groups are created using Storage Volume Controller FlashCopy (in the case of an SVC configuration, but there are similar mechanisms for other storage providers), the PVIDs for the rootvg and the additional volume groups will always be the same on every new virtual machine (all new virtual machines will have the same PVID for their rootvg, and the same PVID for each captured volume group). I contacted IBM about this and the PowerVC team told me that this behavior is totally normal and has been observed since the release of VMcontrol. They have had no issues related to this, so if you don’t care about it, just do nothing and keep this behavior as it is. I recommend doing nothing about this!

It’s a shame, but most AIX administrators like to keep things as they are and don’t want any changes (in my humble opinion this is one of the reasons AIX is so outdated compared to Linux; we need a community, not narrow-minded people keeping their knowledge to themselves just to stay in their daily job routine without anything to learn). If you are in this case, facing angry colleagues about this particular point, you can use the solution proposed below to calm the passions of the few who do not want to change ! :-) . This is my rant: CHANGE !

By default, if you build two virtual machines and check the PVIDs of each one, you will notice that the PVIDs are the same:

  • Machine A:
  • root@machinea:/root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00c7102d00d14660                    appsvg          active
  • Machine B:
  • root@machineb:root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
hdisk1          00c7102d00d14660                    appsvg          active

For the rootvg the PVID is always set to 00c7102d2534adac and for the appsvg the PVID is always set to 00c7102d00d14660.

For the rootvg the solution is to set ghostdev (and only ghostdev) to 2, and to reboot the machine. Setting ghostdev to 2 will change the PVID of the rootvg at reboot time (after the PVID is changed, ghostdev is automatically set back to 0); a minimal two-command sketch follows the output below.

# lsattr -El sys0 -a ghostdev
ghostdev 2 Recreate ODM devices on system change / modify PVID True
# lsattr -l sys0 -R -a ghostdev
0...3 (+1)
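
Putting it together, the rootvg part is just two commands (a minimal sketch; run this only when the machine can be rebooted):

# chdev -l sys0 -a ghostdev=2
# shutdown -Fr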

For the non-rootvg volume groups this is a little trickier but still possible: the solution is to use the recreatevg command (-d option) to change the PVIDs of all the physical volumes of your volume group. Before rebooting the server, ensure that you:

  • Umount all the filesystems of the volume group whose PVIDs you want to change.
  • Varyoff the volume group.
  • Get the names of the physical volumes composing the volume group.
  • Export the volume group.
  • Recreate the volume group (this action changes the PVIDs).
  • Re-import the volume group.

Here are the shell commands doing the trick:

# vg=appsvg
# lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "fuser -k " $NF }' | sh
# lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "umount " $NF }' | sh
# varyoffvg $vg
# pvs=$(lspv | awk -v my_vg=$vg '$3 == my_vg {print $1}')
# recreatevg -y $vg -d $pvs
# importvg -y $vg $(echo ${pvs} | awk '{print $1}'

We now agree that you want to do this, but as you are a smart person you want to do it automatically using cloud-init and the activation input. There are two ways to do it: the silly way (using shell) and the noble way (using the cloud-init syntax):

PowerVC activation engine (shell way)

Use a short ksh script in the activation input. This is not my recommendation, but you can do it for simplicity:
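
For example, something like the script below should do the job (a minimal sketch reusing the commands shown above, with appsvg as the example volume group; cloud-init executes an activation input starting with #! as a plain script):

#!/usr/bin/ksh
# regenerate the rootvg PVID at the next reboot
chdev -l sys0 -a ghostdev=2
# change the PVID of the data volume group
vg=appsvg
lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "fuser -k " $NF }' | sh
lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "umount " $NF }' | sh
varyoffvg $vg
pvs=$(lspv | awk -v my_vg=$vg '$3 == my_vg {print $1}')
exportvg $vg
recreatevg -y $vg -d $pvs
# reboot so the ghostdev change takes effect
shutdown -Fr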


PowerVC activation engine (cloud-init way)

Here is the cloud-init way. Important note: use the latest version of cloud-init; the first one I used had a problem with cc_power_state_change.py not using the right parameters for AIX:
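
As a sketch, the activation input could look like this in cloud-config syntax (assuming the runcmd and power_state modules are enabled in your cloud-init build; power_state is what cc_power_state_change.py implements):

#cloud-config
runcmd:
 - chdev -l sys0 -a ghostdev=2
power_state:
 mode: reboot
 message: rebooting to regenerate the rootvg PVID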


Working with the REST API

I will not show you here how to work with the PowerVC RESTful API; I prefer to share a couple of scripts on my github account. Nice examples are often better than how-to tutorials, so check the scripts on github if you want a detailed how-to ... the scripts are well commented. Just a couple of things to say before closing this topic: the best way to work with a RESTful API is to code in Python; there are a lot of existing Python libraries for this (httplib2, pycurl, requests). For my own understanding I prefer to use the simple httplib in my scripts. I will put all my command line tools in a github repository called pvcmd (for PowerVC command line). You can download the scripts at this address, or just use git to clone the repo. One more time, it is a community project, feel free to change and share anything: https://github.com/chmod666org/pvcmd
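
To give you an idea of how simple it is, here is a minimal sketch of the authentication step with httplib (the hostname and credentials are examples; PowerVC authenticates through the standard Openstack Keystone v3 API and returns the token in the X-Subject-Token header):

import httplib, json

# example values, adjust to your environment
host = "powervc.lab.chmod666.org"
body = json.dumps({"auth": {
    "identity": {"methods": ["password"],
                 "password": {"user": {"name": "root",
                                       "domain": {"name": "Default"},
                                       "password": "mysecretpassword"}}},
    "scope": {"project": {"name": "ibm-default", "domain": {"name": "Default"}}}})

# Keystone listens on port 5000 (see the powervcrc file later in this post)
conn = httplib.HTTPSConnection(host, 5000)
conn.request("POST", "/v3/auth/tokens", body, {"Content-Type": "application/json"})
resp = conn.getresponse()
print resp.status, resp.reason
token = resp.getheader("X-Subject-Token")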

Growing a data lun

To be totally honest, here is what I do when I’m creating a new machine with PowerVC. My customers always need one additional volume group for applications (we will call it appsvg). I’ve created a multi-volume image with this volume group already in place (with a bunch of filesystems in it). As most customers ask for the volume group to be 100 GB large, the capture was made with this size. Unfortunately for me, we often get requests to create a bigger volume group, let’s say 500 or 600 GB. Instead of creating a new lun and extending the volume group, PowerVC allows you to grow the lun to the desired size. For volume groups other than the boot one you must use the RESTful API to extend the volume. To do this I’ve created a Python script called pvcgrowlun (feel free to check the code on github): https://github.com/chmod666org/pvcmd/blob/master/pvcgrowlun. At each virtual machine creation I check if the customer needs a larger volume group and extend it using the command provided below.

While coding this script I hit a problem using the os-extend parameter in my http request. PowerVC does not use exactly the same parameters as Openstack, so if you want to code this by yourself, be aware of it and check in the PowerVC online documentation whether you are using “extended attributes” (thanks to Christine L Wang for this one); a request sketch follows the list:

  • In the Openstack documentation the attribute is “os-extend”.
  • In the PowerVC documentation the attribute is “ibm-extend”.
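
Concretely, the only difference is the action body sent to the volume endpoint. Here is a minimal curl sketch (the port, tenant id and volume id are examples; vanilla Openstack cinder listens on 8776 and your PowerVC version may differ, and on plain Openstack the body would be {"os-extend": {"new_size": 80}}):

# curl -k -X POST \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"ibm-extend": {"new_size": 80}}' \
    https://powervc.lab.chmod666.org:8776/v2/$TENANT_ID/volumes/$VOLUME_ID/action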

  • Identify the lun you want to grow (the script takes the name of the volume as a parameter) (I have another script, not published, to list all the volumes; tell me if you want it). In my case the volume name is multi-vol-bf697dfa-0000003a-828641A_XXXXXX-data-1, and I want to change its size from 60 to 80. This is not stated in the official PowerVC documentation, but this works for both boot and data luns.
  • Check that the current size of the lun is less than the desired size (screenshot: before_grow).

  • Run the script:
  • # pvcgrowlun -v multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 -s 80 -p localhost -u root -P mysecretpassword
    [info] growing volume multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 with id 840d4a60-2117-4807-a2d8-d9d9f6c7d0bf
    JSON Body: {"ibm-extend": {"new_size": 80}}
    [OK] Call successful
  • Check that the size has changed after the command execution (screenshot: aftergrow_grow).

  • Don’t forget to do the job in the operating system by running a “chvg -g” (compare the TOTAL PPs before and after):
  • # lsvg appsvg
    VOLUME GROUP:       appsvg                   VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      239 (61184 megabytes)
    MAX LVs:            256                      FREE PPs:       239 (61184 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    # chvg -g appsvg
    # lsvg appsvg
    VOLUME GROUP:       appsvg                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      319 (81664 megabytes)
    MAX LVs:            256                      FREE PPs:       319 (81664 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no

My own script to create VMs

I’m creating virtual machines every week, sometimes just a couple and sometimes 10 in a row. We are using different storage connectivity groups, and different storage templates depending on whether the machine is in production, in development, and so on. We also have to choose the primary copy on the SVC side if the machine is in production (I am using a stretched cluster between two distant sites, so I have to choose different storage templates depending on the site where the Virtual Machine is hosted). I make mistakes almost every time I use the PowerVC gui (sometimes I forget to put the machine name, sometimes the connectivity group). I’m a lazy guy, so I decided to code a script using the PowerVC rest api to create new machines based on a template file. We are planning to give the script to our outsourced teams to allow them to create machines without knowing what PowerVC is \o/. The script takes a file as parameter and creates the virtual machine:

  • Create a file like the one below with all the information needed for your new virtual machine creation (name, ip address, vlan, host, image, storage connectivity group, ….):
  • # cat test.vm
    name:test
    vlan:vlan666
    target_host:Default Group
    image:multi-vol
    storage_connectivity_group:npiv
    virtual_processor:1
    entitled_capacity:0.1
    memory:1024
    storage_template:storage1
  • Launch the script, the Virtual Machine will be created:
  • pvcmkvm -f test.vm -p localhost -u root -P mysecretpassword
    name: test
    vlan: vlan666
    target_host: Default Group
    image: multi-vol
    storage_connectivity_group: npiv
    virtual_processor: 1
    entitled_capacity: 0.1
    memory: 1024
    storage_template: storage1
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found network vlan666 with id 5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5
    [info] found host aggregation Default Group with id 1
    [info] found storage template storage1 with id bfb4f8cc-cd68-46a2-b3a2-c715867de706
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found a volume with id b3783a95-822c-4179-8c29-c7db9d060b94
    [info] found a volume with id 9f2fc777-eed3-4c1f-8a02-00c9b7c91176
    JSON Body: {"os:scheduler_hints": {"host_aggregate_id": 1}, "server": {"name": "test", "imageRef": "041d830c-8edf-448b-9892-560056c450d8", "networkRef": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5", "max_count": 1, "flavor": {"OS-FLV-EXT-DATA:ephemeral": 10, "disk": 60, "extra_specs": {"powervm:max_proc_units": 32, "powervm:min_mem": 1024, "powervm:proc_units": 0.1, "powervm:max_vcpu": 32, "powervm:image_volume_type_b3783a95-822c-4179-8c29-c7db9d060b94": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:image_volume_type_9f2fc777-eed3-4c1f-8a02-00c9b7c91176": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:min_proc_units": 0.1, "powervm:storage_connectivity_group": "npiv", "powervm:min_vcpu": 1, "powervm:max_mem": 66560}, "ram": 1024, "vcpus": 1}, "networks": [{"fixed_ip": "", "uuid": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5"}]}}
    {u'server': {u'links': [{u'href': u'https://powervc.lab.chmod666.org:8774/v2/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'self'}, {u'href': u'https://powervc.lab.chmod666.org:8774/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'bookmark'}], u'OS-DCF:diskConfig': u'MANUAL', u'id': u'fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'security_groups': [{u'name': u'default'}], u'adminPass': u'u7rgHXKJXoLz'}}

One of the major advantages of using this script is batching Virtual Machine creation: with a simple loop like the one below you can create one hundred Virtual Machines in a couple of minutes. Awesome!
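
A trivial sketch of such a batch run, assuming one description file per machine in a vms directory:

# for f in vms/*.vm; do pvcmkvm -f $f -p localhost -u root -P mysecretpassword; done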

Working with Openstack commands

PowerVC is based on Openstack, so why not use the Openstack commands to work with PowerVC? It is possible, but I repeat one more time that this is not supported by IBM at all: use this trick at your own risk. I was working with Cloud Manager with Openstack (ICMO), and a script including shell variables is provided to “talk” to the ICMO Openstack. Based on the same file I created one for PowerVC. Before using any Openstack commands, create a powervcrc file that matches your PowerVC environment:

# cat powervcrc
export OS_USERNAME=root
export OS_PASSWORD=mypasswd
export OS_TENANT_NAME=ibm-default
export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
export OS_CACERT=/etc/pki/tls/certs/powervc.crt
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default

Then source the powervcrc file, and you are ready to play with all Openstack commands:

# source powervcrc
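
A quick sanity check that the variables are exported (plain shell, nothing PowerVC-specific):

# env | grep "^OS_"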

You can then play with the Openstack commands; here are a few nice examples:

  • List virtual machines:
  • # nova list
    | ID                                   | Name                  | Status | Task State | Power State | Networks               |
    | dc5c9fce-c839-43af-8af7-e69f823e57ca | ghostdev0clouddev1    | ACTIVE | -          | Running     | vlan666=    |
    | d7d0fd7e-a580-41c8-b3d8-d7aab180d861 | ghostdevto1cloudevto1 | ACTIVE | -          | Running     | vlan666=    |
    | bf697dfa-f69a-476c-8d0f-abb2fdcb44a7 | multi-vol             | ACTIVE | -          | Running     | vlan666=    |
    | 394ab4d4-729e-44c7-a4d0-57bf2c121902 | deckard               | ACTIVE | -          | Running     | vlan666=    |
    | cd53fb69-0530-451b-88de-557e86a2e238 | priss                 | ACTIVE | -          | Running     | vlan666=    |
    | 64a3b1f8-8120-4388-9d64-6243d237aa44 | rachael               | ACTIVE | -          | Running     |                        |
    | 2679e3bd-a2fb-4a43-b817-b56ead26852d | batty                 | ACTIVE | -          | Running     |                        |
    | 5fdfff7c-fea0-431a-b99b-fe20c49e6cfd | tyrel                 | ACTIVE | -          | Running     |                        |
  • Reboot a machine:
  • # nova reboot multi-vol
  • List the hosts:
  • # nova hypervisor-list
    | ID | Hypervisor hostname | State | Status  |
    | 21 | 828641A_XXXXXXX     | up    | enabled |
    | 23 | 828641A_YYYYYYY     | up    | enabled |
  • Migrate a virtual machine (run a live partition mobility operation):
  • # nova live-migration ghostdevto1cloudevto1 828641A_YYYYYYY
  • Put a host in maintenance mode and evacuate it (move all the running partitions to another host):
  • # nova maintenance-enable --migrate active-only --target-host 828641A_XXXXXX 828641A_YYYYYYY
  • Virtual Machine creation (output truncated):
  • # nova boot --image 7100-03-04-cic2-chef --flavor powervm.tiny --nic net-id=5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5,v4-fixed-ip= novacreated
    | Property                            | Value                                                                                                                                            |
    | OS-DCF:diskConfig                   | MANUAL                                                                                                                                           |
    | OS-EXT-AZ:availability_zone         | nova                                                                                                                                             |
    | OS-EXT-SRV-ATTR:host                | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:instance_name       | novacreated-bf704dc6-00000040                                                                                                                    |
    | OS-EXT-STS:power_state              | 0                                                                                                                                                |
    | OS-EXT-STS:task_state               | scheduling                                                                                                                                       |
    | OS-EXT-STS:vm_state                 | building                                                                                                                                         |
    | accessIPv4                          |                                                                                                                                                  |
    | accessIPv6                          |                                                                                                                                                  |
    | adminPass                           | PDWuY2iwwqQZ                                                                                                                                     |
    | avail_priority                      | -                                                                                                                                                |
    | compliance_status                   | [{"status": "compliant", "category": "resource.allocation"}]                                                                                     |
    | cpu_utilization                     | -                                                                                                                                                |
    | cpus                                | 1                                                                                                                                                |
    | created                             | 2015-08-05T15:56:01Z                                                                                                                             |
    | current_compatibility_mode          | -                                                                                                                                                |
    | dedicated_sharing_mode              | -                                                                                                                                                |
    | desired_compatibility_mode          | -                                                                                                                                                |
    | endianness                          | big-endian                                                                                                                                       |
    | ephemeral_gb                        | 0                                                                                                                                                |
    | flavor                              | powervm.tiny (ac01ba9b-1576-450e-a093-92d53d4f5c33)                                                                                              |
    | health_status                       | {"health_value": "PENDING", "id": "bf704dc6-f255-46a6-b81b-d95bed00301e", "value_reason": "PENDING", "updated_at": "2015-08-05T15:56:02.307259"} |
    | hostId                              |                                                                                                                                                  |
    | id                                  | bf704dc6-f255-46a6-b81b-d95bed00301e                                                                                                             |
    | image                               | 7100-03-04-cic2-chef (96f86941-8480-4222-ba51-3f0c1a3b072b)                                                                                      |
    | metadata                            | {}                                                                                                                                               |
    | name                                | novacreated                                                                                                                                      |
    | operating_system                    | -                                                                                                                                                |
    | os_distro                           | aix                                                                                                                                              |
    | progress                            | 0                                                                                                                                                |
    | root_gb                             | 60                                                                                                                                               |
    | security_groups                     | default                                                                                                                                          |
    | status                              | BUILD                                                                                                                                            |
    | storage_connectivity_group_id       | -                                                                                                                                                |
    | tenant_id                           | 1471acf124a0479c8d525aa79b2582d0                                                                                                                 |
    | uncapped                            | -                                                                                                                                                |
    | updated                             | 2015-08-05T15:56:02Z                                                                                                                             |
    | user_id                             | 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                                                                 |

LUN order, remove a boot lun

If you are moving to PowerVC you will probably need to migrate existing machines into your PowerVC environment. One of my customers asked to move machines from old boxes using vscsi to new PowerVC-managed boxes using NPIV. I am doing it with the help of an SVC on the storage side. Instead of creating the Virtual Machine profile on the HMC and then doing the zoning and masking on the Storage Volume Controller and on the SAN switches, I decided to let PowerVC do the job for me. Unfortunately, PowerVC can’t just “carve” an empty Virtual Machine: if you want one you have to build a full Virtual Machine (rootvg included). This is what I am doing. During the migration process I have to replace the PowerVC-created lun by the lun used for the migration …. and finally delete the PowerVC-created boot lun. There is a trick to know if you want to do this:

  • Let’s say the lun created by PowerVC is the one named “volume-clouddev-test….” and the original rootvg is named “good_rootvg”. The Virtual Machine is booted on the “good_rootvg” lun and I want to remove the “volume-clouddev-test….” one (screenshot: root1).

  • You first have to click the “Edit Details” button (screenshot: root2).

  • Then toggle the boot set to “YES” for the “good_rootvg” lun and click move up (the rootvg order must be set to 1; this is mandatory, as the lun at order 1 can’t be deleted) (screenshot: root3).

  • Toggle the boot set to “NO” for the PowerVC-created rootvg (screenshot: root4).

  • If you try to detach the volume in first position you will get an error (screenshot: root5).

  • When the order is ok, you can detach and delete the lun created by PowerVC (screenshot: root6).


There are always good things to learn about PowerVC and related AIX topics. Tell me if these tricks are useful for you and I will continue to write posts like this one. You don’t need to understand all these details to work with PowerVC, and most customers don’t, but I’m sure you prefer to understand what is going on “behind the scenes” instead of just clicking a nice GUI. I hope this helps you to better understand what PowerVC is made of. And don’t be shy, share your tricks with me. Next: more to come about Chef! Up the irons!