What's new in VIOS 2.2.4.10 and PowerVM: Part 2, Shared Processor Pool weighting

First of all, before beginning this blog post, I owe you an explanation for these two months without new posts. These two months were very busy. On the personal side I was forced to move out of my apartment and had to find another one that suited me (and I can assure you that this is not an easy thing in Paris). As I was visiting apartments almost three days a week, the time usually kept for writing blog posts (please remember that I'm doing this in my "after hours" time) was spent on something else :-(. At work things were crazy too: we had to build twelve new E870 boxes (with the provisioning toolkit and SRIOV adapters) and make them work with our current implementation of PowerVC. Then I had to do a huge vscsi to NPIV migration: more than 500 AIX machines to migrate from vscsi to NPIV and then move to P8 boxes in less than three weeks (4000 zones created ...). Thanks to the help of an STG Lab Services consultant (Bonnie LeBarron) this was achieved using a modified version of her script (adapted to our needs for the zoning and mapping part and to the latest HMC releases). I'm back in business now and I have a couple of blog posts planned this month. The first of this series is about Shared Processor Pool weighting on the latest Power8 firmware versions. You'll see that it changes a lot of things compared to P7 boxes.

A short history of Shared Processor Pool weighting

This long story began a few years ago for me (I'd say at least four years ago). I was planning to do a blog post about it back then but decided not to, because the topic was considered "sensitive"; now that we have documentation and an official statement on this, there is no reason to hide it anymore. I was working for a bank using two P795s with a lot of cores activated. We were using Multiple Shared Processor Pools in an unconventional way (as far as I remember, two pools per customer, one for Oracle and one for WAS, and we had more than 5 or 6 customers, so each box had at least 10 MSPPs). As you may already know, I only believe what I can see, so I decided to run tests on my own. By reading the Redbook I realized that there was not enough information about pool and partition weighting. We were, like a lot of today's customers, using different weights for development (32), qualification (64), pre-production (128), production (192) and finally the Virtual I/O Servers (255). As we were using Shared Processor Pools I was expecting that when a Shared Processor Pool is full (contention) the weights would kick in and prioritize the partitions with the higher weight. What a surprise when I realized the weighting was not working inside a Shared Processor Pool but only in the DefaultPool (Pool 0). Remember this statement forever: on Power7, partition weighting only works when the default pool is full. There is no "intelligence" inside a Shared Processor Pool and you have to be very careful with the size of the pool because of that. On Power7, pools are used ONLY for licensing purposes.

I then decided to contact my preferred IBM pre-sales in France to tell him about this incredible discovery. I had no answer for one month, then (as always) he came back with the answer of someone who already knew the truth about this. He introduced me to a performance expert (she was a performance expert at the time and is now specialized in security) and she told me that I was absolutely right about my discovery but that only a few people were aware of it. I decided to say nothing about it … but I was sure that IBM realized there was something to clarify. Then last year, at the IBM Technical Collaboration Council, I saw a PowerPoint slide saying that the latest IBM Power8 firmware would add this long-awaited feature: partition weighting would work inside a Shared Processor Pool. Finally, after waiting for more than four years, I have what I wanted. As I was working on a new project in my current job I had to create a lot of Shared Processor Pools in a mixed Power7 (P770) and Power8 (E870) environment. It was time to check whether this new feature was really working and to compare a Power8 (with the latest firmware) and a Power7 machine (with the latest firmware). The way we implement and monitor Shared Processor Pools on a Power8 will now be very different from the way it was done on a Power7 box. I think this is really important and everybody now needs to understand the differences for their future implementations. But let's first have a look in the Redbooks to check the official statements:

The Redbook covering this is "IBM PowerVM Virtualization Introduction and Configuration"; here is the key paragraph to understand (pages 113 and 114):

redbook_statement

It was super hard to find, but there is a place where IBM talks about this. I'm quoting from this link below: https://www.ibm.com/support/knowledgecenter/9119-MME/p8hat/p8hat_sharedproc.htm

When the firmware is at level 8.3.0, or earlier, uncapped weight is used only when more virtual processors consume unused resources than the available physical processors in the shared processor pool. If no contention exists for processor resources, the virtual processors are immediately distributed across the physical processors, independent of their uncapped weights. This can result in situations where the uncapped weights of the logical partitions do not exactly reflect the amount of unused capacity.

For example, logical partition 2 has one virtual processor and an uncapped weight of 100. Logical partition 3 also has one virtual processor, but an uncapped weight of 200. If logical partitions 2 and 3 both require more processing capacity, and there is not enough physical processor capacity to run both logical partitions, logical partition 3 receives two more processing units for every additional processing unit that logical partition 2 receives. If logical partitions 2 and 3 both require more processing capacity, and there is enough physical processor capacity to run both logical partitions, logical partition 2 and 3 receive an equal amount of unused capacity. In this situation, their uncapped weights are ignored.

When the firmware is at level 8.4.0, or later, if multiple partitions are assigned to a shared processor pool, the uncapped weight is used as an indicator of how the processor resources must be distributed among the partitions in the shared processor pool with respect to the maximum amount of capacity that can be used by the shared processor pool. For example, logical partition 2 has one virtual processor and an uncapped weight of 100. Logical partition 3 also has one virtual processor, but an uncapped weight of 200. If logical partitions 2 and 3 both require more processing capacity, logical partition 3 receives two additional processing units for every additional processing unit that logical partition 2 receives.

The server distributes unused capacity among all of the uncapped shared processor partitions that are configured on the server, regardless of the shared processor pools to which they are assigned. For example, if you configure logical partition 1 to the default shared processor pool and you configure logical partitions 2 and 3 to a different shared processor pool, all three logical partitions compete for the same unused physical processor capacity in the server, even though they belong to different shared processor pools.

Testing methodology

We now need to demonstrate that the weighting behavior is different between a Power7 and a Power8 machine. Here is how we are going to proceed:

  • On a Power8 machine (E870 SC840_056) we create a Shared Processor Pool with a “Maximum Processing unit” set to 1.
  • On a Power7 machine we create a Shared Processor Pool with a “Maximum Processing unit” set to 1.
  • We create two partitions in the P8 pool (1VP, 0.1EC) called mspp1 and mspp2.
  • We create two partitions in the P7 pool (1VP, 0.1EC) called mspp3 and mspp4.
  • Using ncpu, provided with the nstress tools (http://public.dhe.ibm.com/systems/power/community/wikifiles/PerfTools/nstress_AIX6_April_2014.tar), we create a heavy load on each partition. Obviously this load can't exceed 1 processing unit in total (sum of each physc).
  • We then use these testing scenarios (each test has a duration of 15 minutes; we record cpu and pool stats with nmon and lpar2rrd):
    1. First partition with a weight of 128, the second partition with a weight of 128 (test with the same weight).
    2. First partition with a weight of 64, the second partition with a weight of 128 (weight ratio of 1:2).
    3. First partition with a weight of 32, the second partition with a weight of 128 (weight ratio of 1:4).
    4. First partition with a weight of 1, the second partition with a weight of 2 (we try here to prove that the ratio between the two values matters more than the values themselves; values of 1 and 2 should give us the same result as 64 and 128).
    5. First partition with a weight of 1, the second partition with a weight of 255 (a ratio of 1:255) (you'll see here that the result is pretty interesting :-) ).
  • You'll see that it will not be necessary to do all these tests on the P7 box … :-). A sketch of how the partition weights can be changed from the HMC is shown right after this list.
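If you want to reproduce these scenarios, here is a minimal sketch of how the uncapped weight can be changed dynamically from the HMC command line (the managed system and partition names below are made up for the example; the weight can of course also be changed in the profile or from the GUI):

# Hypothetical names: managed system p8-e870, partition mspp1
$ chhwres -r proc -m p8-e870 -o s -p mspp1 -a "uncap_weight=64"
# Check the current value
$ lshwres -r proc -m p8-e870 --level lpar --filter "lpar_names=mspp1" -F lpar_name,curr_uncap_weight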

The Power8 case

Prerequisites

A P8 firmware level of SC840* or SV840* is mandatory to enable weighting inside a Shared Processor Pool when there is no contention for processor resources (no contention in the DefaultPool). This means that P6, P7 and P8 machines with a firmware older than 840 do not have this feature coded in the firmware. My advice is to update all your P8 machines to the latest level to enable this new behavior.
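A quick way to check the firmware level of the box is to run prtconf from any AIX partition running on it and look at the "Platform Firmware level" line (it should report an SC840_* or SV840_* level on an up-to-date P8), or to check it from the HMC as usual:

$ prtconf | grep -i firmware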

Tests

For each test, we verify the weight of each partition using the lparstat command, then we capture an nmon snapshot every 30 seconds and we launch ncpu for a duration of 15 minutes with four CPUs (we are in SMT4) on both the P8 and the P7 box. We will show you here that the weights are taken into account in a Power8 MSPP, but are not taken into account in a Power7 MSPP.

#lparstat -i | grep -iE "Variable Capacity Weight|^Partition"
Partition Name                             : mspp1-23bad3d7-00000898
Partition Number                           : 3
Partition Group-ID                         : 32771
Variable Capacity Weight                   : 255
Desired Variable Capacity Weight           : 255
# /usr/bin/nmon -F /admin/nmon/$(hostname)_weight255.nmon -s30 -c30 -t ; ./ncpu -p 4 -s 900
# lparstat 1 10
  • Both weights at 128: you can check in the picture below that the "physc" values are strictly equal (0.5 for both lpars); the 1:1 ratio between the two weights is respected:
  • weight128

  • One partition at 64 and one partition at 128: you can check in the pictures below (lparstat output and nmon analyser graph) that we now have different physc values (0.36 for the mspp2 lpar and 0.64 for the mspp1 lpar). The mspp1 physc is roughly twice the mspp2 physc, so the weights are respected inside the Shared Processor Pool:
  • weight64_128

nmonx2

This lpar2rrd graph shows you the weighting behavior on a Power8 machine (test one: both weights equal to 128; test two: two different weights of 128 and 64).

graph_p8_128128_12864

  • One partition at 32 and one partition at 128: you can check in the picture below that the 32:128 (1:4) ratio is broadly respected (physc values of 0.26 and 0.74).
  • weight32_128

  • One partition at 1 and one partition at 2. The results here are exactly the same as in the second test (weights of 128 and 64): it proves that what matters is the ratio between the weights and not the values themselves (weights of 1, 2, 3 will give you the exact same results as 2, 4, 6):
  • weight1_2

  • Finally, one partition at 1 and one partition at 255. Be careful here: the ratio is big enough to make the low-weight lpar unresponsive when both partitions are loaded. I do not recommend such extreme ratios because of this (a small sketch computing the expected splits for all these scenarios is shown after the graph below):
  • weight1_255

graph_p8_12832_12_1255
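As a sanity check, here is a small sketch (plain shell and bc, nothing official) computing the theoretical physc split of a fully loaded 1.0-core pool for each pair of weights used above, assuming the contended capacity is shared proportionally to the weights. The observed values (0.5/0.5, 0.36/0.64, 0.26/0.74, ...) are close to these theoretical numbers:

# share_i = weight_i / (weight_1 + weight_2) * pool size (1.0 here)
$ for pair in "128 128" "64 128" "32 128" "1 2" "1 255"; do set -- $pair; echo "weights $1:$2 -> $(echo "scale=2; $1/($1+$2)" | bc) / $(echo "scale=2; $2/($1+$2)" | bc)"; done
weights 128:128 -> .50 / .50
weights 64:128 -> .33 / .66
weights 32:128 -> .20 / .80
weights 1:2 -> .33 / .66
weights 1:255 -> 0 / .99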

The Power7 case

Let's do one test on a Power7 machine with one lpar at a weight of 1 and the other one at a weight of 255 … you'll see a huge difference here … and I think it is clear enough to avoid running all the test scenarios on the Power7 machine.

Tests

You can see here that I'm doing the exact same test, weights of 1 and 255, and now both partitions have an equal physc value (0.5 for both). On a Power7 box the weights are taken into account only if the DefaultPool (pool0) is full (contention). The pictures below show you the reality of Multiple Shared Processor Pools running on a Power7 box: on Power7, MSPPs must be used for licensing purposes only and nothing else.

weight1_255_power7
graph_p7_1255

Conclusion

I hope you now better understand the Multiple Shared Processor Pool differences between Power8 and Power7. Now that you are aware of this, my advice is to use different strategies when implementing MSPP on Power7 and Power8. On Power7, double check and monitor your MSPPs to be sure the pools are never full and that you can get enough capacity to run your load. On a Power8 box, set up your weights wisely on your different environments (backup, production, development). You can then be sure that production will be prioritized whatever happens, even if you reduce your MSPP sizes, and by doing this you'll optimize licensing costs. As always, I hope it helps.
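If you want to keep an eye on a pool from inside one of its partitions, a simple (hedged) approach is to watch the app column of lparstat, which reports the available physical processors of the shared pool as seen by that partition; if it stays close to 0 while your partitions are being capped by the pool, the pool is saturated:

# Requires "Allow performance information collection" to be enabled for the partition,
# otherwise the app column is not displayed.
$ lparstat 5 3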

NovaLink 'HMC Co-Management' and PowerVC 1.3.0.1 Dynamic Resource Optimizer

Everybody knows by now that I'm using PowerVC a lot in my current company. My environment is growing bigger and bigger and we are now managing more than 600 virtual machines with PowerVC (the goal is to reach ~3000 this year). Some of them were built by PowerVC itself and some of them were migrated through a homemade python script calling the PowerVC REST API and moving our old vSCSI machines to the new full NPIV/Live Partition Mobility/PowerVC environment (still struggling with the "old men" to move to SSP, but I'm alone against everybody on this one). I'm happy with that, but (there is always a but) I'm facing a lot of problems. The first one is that we are doing more and more things with PowerVC (virtual machine creation, virtual machine resizing, adding additional disks, moving machines with LPM, and finally using this python script to migrate the old machines to the new environment). I realized that the machine hosting PowerVC was getting slower and slower: the more actions we do, the more "unresponsive" PowerVC becomes. By this I mean that the GUI was slow and creating objects was taking longer and longer. By looking at CPU graphs in lpar2rrd we noticed that the CPU consumption was growing as fast as we were doing things on PowerVC (check the graph below). The second problem was my teams (unfortunately for me, we have different teams doing different kinds of things here and everybody uses the Hardware Management Consoles their own way: some people rename machines, making them unusable with PowerVC, some people change the profiles, disabling the synchronization, and even worse we have some third-party tools used for capacity planning making the Hardware Management Console unusable by PowerVC). The solution to all these problems is to use NovaLink, and especially NovaLink Co-Management. By doing this the Hardware Management Consoles will be restricted to a read-only view and PowerVC will stop querying the HMCs and will directly query the NovaLink partitions on each host instead.

cpu_powervc

What is NovaLink ?

If you are using PowerVC you know that it is based on OpenStack. Until now all the OpenStack services were running on the PowerVC host. If you check your PowerVC host today you can see that there is one nova-compute process per managed host. In the example below I'm managing ten hosts so I have ten different nova-compute processes running:

# ps -ef | grep [n]ova-compute
nova       627     1 14 Jan16 ?        06:24:30 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10D6666.log
nova       649     1 14 Jan16 ?        06:30:25 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_65E6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_65E6666.log
nova       664     1 17 Jan16 ?        07:49:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1086666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1086666.log
nova       675     1 19 Jan16 ?        08:40:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_06D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_06D6666.log
nova       687     1 18 Jan16 ?        08:15:57 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6576666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6576666.log
nova       697     1 21 Jan16 ?        09:35:40 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6556666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6556666.log
nova       712     1 13 Jan16 ?        06:02:23 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10A6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10A6666.log
nova       728     1 17 Jan16 ?        07:49:02 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1016666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1016666.log
nova       752     1 17 Jan16 ?        07:34:45 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1036666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_1036666.log
nova       779     1 13 Jan16 ?        05:54:52 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6596666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_6596666.log
# ps -ef | grep [n]ova-compute | wc -l
10

The goal of NovaLink is to move these processes to a dedicated partition running on each managed host (each Power Systems box). This partition is called the NovaLink partition. It runs an Ubuntu 15.10 Linux OS (little endian) (so it is only available on Power8 hosts) and is in charge of running the OpenStack nova processes. By doing that you distribute the load across all the NovaLink partitions instead of loading a single PowerVC host. Even better, my understanding is that the NovaLink partition is able to communicate directly with the FSP. By using NovaLink you will be able to stop using the Hardware Management Consoles and avoid their slowness. As the NovaLink partition is hosted on the host itself, the RMC connections can now use a direct link (IPv6) through the PowerHypervisor. No more RMC connection problems at all ;-), it's just awesome. NovaLink allows you to choose between two modes of management:

  • Full Nova Management: you install your new host directly with NovaLink on it and you will not need a Hardware Management Console anymore (in this case the NovaLink installation is in charge of deploying the Virtual I/O Servers and the SEAs).
  • Nova Co-Management: your host is already installed and you give the write access (setmaster) to the NovaLink partition; the Hardware Management Console is limited in this mode (you will not be able to create partitions anymore or modify profiles; it is not a pure "read only" mode as you will still be able to start and stop partitions and do a few things with the HMC, but you will be very limited).
  • You can still mix NovaLink and non-NovaLink managed hosts, and have P7/P6 managed by HMCs, P8 managed by HMCs, P8 Nova co-managed and P8 full Nova managed ;-).
  • Nova1

Prerequisites

As always, upgrade your systems to the latest code level if you want to use NovaLink and NovaLink Co-Management:

  • Power8 only, with firmware version 840 (or later).
  • Virtual I/O Server 2.2.4.10 or later.
  • For NovaLink co-management, HMC V8R8.4.0.
  • Obviously, NovaLink installed on each NovaLink managed system (install the latest patch version of NovaLink).
  • PowerVC 1.3.0.1 or later (a quick way to check the VIOS and HMC levels is sketched right after this list).
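For reference, the Virtual I/O Server and HMC levels can be checked with the usual commands:

# On each Virtual I/O Server (as padmin)
$ ioslevel
# On the HMC
$ lshmc -V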

NovaLink installation on an existing system

I'll show you here how to install a NovaLink partition on an existing, already deployed system. Installing a new system from scratch is also possible. My advice is to start by looking at this address: , and to check this YouTube video showing how a system is installed from scratch:

The goal of this post is to show you how to set up a co-managed system on an already existing system with Virtual I/O Servers already deployed on the host. My advice is to be very careful. The first thing you'll need to do is to create a partition (2VP, 0.5EC and 5GB of memory) (I'm calling it nova in the example below) and use the Virtual Optical device to load the NovaLink system on it. In the example below the machine is "SSP" backed. Be very careful when doing that: set up the profile name and all the configuration details before moving to co-managed mode … after that it will be harder for you to change things, as the new pvmctl command will be very new to you:
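The partition creation itself is not detailed here; as a rough sketch, it could look like the command below on the HMC (the profile name and the min/max values are only illustrative, adapt them to your standards; the managed system name reuses the one used later in this post):

# Hypothetical example: create the nova partition (2 VP, 0.5 EC, 5 GB of memory)
$ mksyscfg -r lpar -m br-8286-41A-2166666 -i "name=nova,profile_name=default,lpar_env=aixlinux,min_mem=2048,desired_mem=5120,max_mem=8192,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=128,max_virtual_slots=64,boot_mode=norm"

Once the partition exists, the virtual optical device is created and the ISO loaded as follows: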

# mkvdev -fbo -vadapter vhost0
vtopt0 Available
# lsrep
Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
    3059     1579 rootvg                   102272            73216

Name                                                  File Size Optical         Access
PowerVM_NovaLink_V1.1_122015.iso                           1479 None            rw
vopt_a19a8fbb57184aad8103e2c9ddefe7e7                         1 None            ro
# loadopt -disk PowerVM_NovaLink_V1.1_122015.iso -vtd vtopt0
# lsmap -vadapter vhost0 -fmt :
vhost0:U8286.41A.21AFF8V-V2-C40:0x00000003:nova_b1:Available:0x8100000000000000:nova_b1.7f863bacb45e3b32258864e499433b52: :N/A:vtopt0:Available:0x8200000000000000:/var/vio/VMLibrary/PowerVM_NovaLink_V1.1_122015.iso: :N/A
  • At the grub page, select the first entry:
  • install1

  • Wait for the machine to boot:
  • install2

  • Choose to perform an installation:
  • install3

  • Accept the licenses
  • install4

  • Set up the padmin user:
  • install5

  • Enter your network configuration:
  • install6

  • Confirm the installation of the Ubuntu system:
  • install8

  • You can then modify anything you want in the configuration file (in my case the timezone):
  • install9

    By default NovaLink is (I think, not 100% sure) designed to be installed on SAS disks, so without multipathing. If, like me, you decide to install the NovaLink partition in a "boot-on-SAN" lpar, my advice is to launch the installation without any multipathing enabled (only one vscsi adapter or one virtual fibre channel adapter). After the installation is completed, install the Ubuntu multipath service and configure the second vscsi or virtual fibre channel adapter. If you don't do that you may experience problems at installation time (RAID error). Please remember that you have to do this before enabling the co-management (a minimal sketch of the multipathing part is shown after the screenshot below). Last thing about the installation: it may take a long time to finish, so be patient (especially the preseed step).

install10
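Regarding the multipathing note above, a minimal sketch of what I mean on the Ubuntu side would be something like this (multipath-tools is the standard Ubuntu package; adapt /etc/multipath.conf to your storage):

# Once the installation is finished, install the multipath tools
$ sudo apt-get install multipath-tools
# Then add the second vscsi or virtual fibre channel adapter and check that both paths are seen
$ sudo multipath -ll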

Updating to the latest code level

The ISO file provided in the Entitled Software Support is not updated to the latest available NovaLink code. Make a copy of the official repository available at this address: ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian, and serve its content from your own http server (use the command below to copy it):

# wget --mirror ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian

Modify /etc/apt/sources.list (and sources.list.d) and comment out all the existing deb repositories to keep only your copy:

root@nova:~# grep -v ^# /etc/apt/sources.list
deb http://deckard.lab.chmod666.org/nova/Novalink/debian novalink_1.0.0 non-free
root@nova:/etc/apt/sources.list.d# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  pvm-cli pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
6 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 165 MB of archives.
After this operation, 53.2 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pypowervm all 1.0.0.1-151203-1553 [363 kB]
Get:2 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-cli all 1.0.0.1-151202-864 [63.4 kB]
Get:3 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-core ppc64el 1.0.0.1-151202-1495 [2,080 kB]
Get:4 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-server ppc64el 1.0.0.1-151203-1563 [142 MB]
Get:5 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-app ppc64el 1.0.0.1-151203-1563 [21.1 MB]
Get:6 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-novalink ppc64el 1.0.0.1-151203-408 [1,738 B]
Fetched 165 MB in 7s (20.8 MB/s)
(Reading database ... 72094 files and directories currently installed.)
Preparing to unpack .../pypowervm_1.0.0.1-151203-1553_all.deb ...
Unpacking pypowervm (1.0.0.1-151203-1553) over (1.0.0.0-151110-1481) ...
Preparing to unpack .../pvm-cli_1.0.0.1-151202-864_all.deb ...
Unpacking pvm-cli (1.0.0.1-151202-864) over (1.0.0.0-151110-761) ...
Preparing to unpack .../pvm-core_1.0.0.1-151202-1495_ppc64el.deb ...
Removed symlink /etc/systemd/system/multi-user.target.wants/pvm-core.service.
Unpacking pvm-core (1.0.0.1-151202-1495) over (1.0.0.0-151111-1375) ...
Preparing to unpack .../pvm-rest-server_1.0.0.1-151203-1563_ppc64el.deb ...
Unpacking pvm-rest-server (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
Preparing to unpack .../pvm-rest-app_1.0.0.1-151203-1563_ppc64el.deb ...
Unpacking pvm-rest-app (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
Preparing to unpack .../pvm-novalink_1.0.0.1-151203-408_ppc64el.deb ...
Unpacking pvm-novalink (1.0.0.1-151203-408) over (1.0.0.0-151112-304) ...
Processing triggers for ureadahead (0.100.0-19) ...
ureadahead will be reprofiled on next reboot
Setting up pypowervm (1.0.0.1-151203-1553) ...
Setting up pvm-cli (1.0.0.1-151202-864) ...
Installing bash completion script /etc/bash_completion.d/python-argcomplete.sh
Setting up pvm-core (1.0.0.1-151202-1495) ...
addgroup: The group `pvm_admin' already exists.
Created symlink from /etc/systemd/system/multi-user.target.wants/pvm-core.service to /usr/lib/systemd/system/pvm-core.service.
0513-071 The ctrmc Subsystem has been added.
Adding /usr/lib/systemd/system/ctrmc.service for systemctl ...
0513-059 The ctrmc Subsystem has been started. Subsystem PID is 3096.
Setting up pvm-rest-server (1.0.0.1-151203-1563) ...
The user `wlp' is already a member of `pvm_admin'.
Setting up pvm-rest-app (1.0.0.1-151203-1563) ...
Setting up pvm-novalink (1.0.0.1-151203-408) ...

NovaLink and HMC Co-Management configuration

Before adding the hosts in PowerVC you still need to do the most important thing. After the installation is finished, enable the co-management mode to have a system managed by NovaLink and still connected to a Hardware Management Console:

  • Enable the powervm_mgmt_capable attribute on the nova partition:
  • # chsyscfg -r lpar -m br-8286-41A-2166666 -i "name=nova,powervm_mgmt_capable=1"
    # lssyscfg -r lpar -m br-8286-41A-2166666 -F name,powervm_mgmt_capable --filter "lpar_names=nova"
    nova,1
    
  • Enable co-management (please note here that you have to setmaster (you'll see that the curr_master_name is the HMC) and then relmaster (you'll see that the curr_master_name is now the NovaLink partition; this is the state we want to be in)):
  • # lscomgmt -m br-8286-41A-2166666
    is_master=null
    # chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
    # lscomgmt -m br-8286-41A-2166666
    is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
    # chcomgmt -m br-8286-41A-2166666 -o relmaster
    # lscomgmt -m br-8286-41A-2166666
    is_master=0,curr_master_name=nova,curr_master_mtms=3*8286-41A*2166666,curr_master_type=norm,pend_master_mtms=none
    

Going back to HMC managed system

You can go back to a Hardware Management Console managed system whenever you want (set the master to the HMC, delete the nova partition and release the master from the HMC).

# chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
# lscomgmt -m br-8286-41A-2166666
is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
# chlparstate -o shutdown -m br-8286-41A-2166666 --id 9 --immed
# rmsyscfg -r lpar -m br-8286-41A-2166666 --id 9
# chcomgmt -o relmaster -m br-8286-41A-2166666
# lscomgmt -m br-8286-41A-2166666
is_master=0,curr_master_mtms=none,curr_master_type=none,pend_master_mtms=none

Using NovaLink

After the installation you are now able to log in on the NovaLink partition (you can gain root access with the "sudo su -" command). A new command called pvmctl is available on the NovaLink partition, allowing you to perform any action (stop or start a virtual machine, list the Virtual I/O Servers, and so on). Before trying to add the host, double check that the pvmctl command is working correctly.

padmin@nova:~$ pvmctl lpar list
Logical Partitions
+------+----+---------+-----------+---------------+------+-----+-----+
| Name | ID |  State  |    Env    |    Ref Code   | Mem  | CPU | Ent |
+------+----+---------+-----------+---------------+------+-----+-----+
| nova | 3  | running | AIX/Linux | Linux ppc64le | 8192 |  2  | 0.5 |
+------+----+---------+-----------+---------------+------+-----+-----+

Adding hosts

On the PowerVC side add the NovaLink host by choosing the NovaLink option:

addhostnovalink

Some deb packages (ibmpowervc-powervm) will be installed and configured on the NovaLink machine:

addhostnovalink3
addhostnovalink4

By doing this, on each NovaLink machine you can check that a nova-compute process is running (by adding the host, the deb packages were installed and configured on the NovaLink host):

# ps -ef | grep nova
nova      4392     1  1 10:28 ?        00:00:07 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf --log-file /var/log/nova/nova-compute.log
root      5218  5197  0 10:39 pts/1    00:00:00 grep --color=auto nova
# grep host_display_name /etc/nova/nova.conf
host_display_name = XXXX-8286-41A-XXXX
# tail -1 /var/log/apt/history.log
Start-Date: 2016-01-18  10:27:54
Commandline: /usr/bin/apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold -y install --force-yes --allow-unauthenticated ibmpowervc-powervm
Install: python-keystoneclient:ppc64el (1.6.0-2.ibm.ubuntu1, automatic), python-oslo.reports:ppc64el (0.1.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm:ppc64el (1.3.0.1), python-ceilometer:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), ibmpowervc-powervm-compute:ppc64el (1.3.0.1, automatic), nova-common:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-oslo.service:ppc64el (0.11.0-2.ibm.ubuntu1, automatic), python-oslo.rootwrap:ppc64el (2.0.0-1.ibm.ubuntu1, automatic), python-pycadf:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), python-nova:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-keystonemiddleware:ppc64el (2.4.1-2.ibm.ubuntu1, automatic), python-kafka:ppc64el (0.9.3-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-monitor:ppc64el (1.3.0.1, automatic), ibmpowervc-powervm-oslo:ppc64el (1.3.0.1, automatic), neutron-common:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-os-brick:ppc64el (0.4.0-1.ibm.ubuntu1, automatic), python-tooz:ppc64el (1.22.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-ras:ppc64el (1.3.0.1, automatic), networking-powervm:ppc64el (1.0.0.0-151109-25, automatic), neutron-plugin-ml2:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-ceilometerclient:ppc64el (1.5.0-1.ibm.ubuntu1, automatic), python-neutronclient:ppc64el (2.6.0-1.ibm.ubuntu1, automatic), python-oslo.middleware:ppc64el (2.8.0-1.ibm.ubuntu1, automatic), python-cinderclient:ppc64el (1.3.1-1.ibm.ubuntu1, automatic), python-novaclient:ppc64el (2.30.1-1.ibm.ubuntu1, automatic), python-nova-ibm-ego-resource-optimization:ppc64el (2015.1-201511110358, automatic), python-neutron:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), nova-compute:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), nova-powervm:ppc64el (1.0.0.1-151203-215, automatic), openstack-utils:ppc64el (2015.2.0-201511171223.ibm.ubuntu1.18, automatic), ibmpowervc-powervm-network:ppc64el (1.3.0.1, automatic), python-oslo.policy:ppc64el (0.5.0-1.ibm.ubuntu1, automatic), python-oslo.db:ppc64el (2.4.1-1.ibm.ubuntu1, automatic), python-oslo.versionedobjects:ppc64el (0.9.0-1.ibm.ubuntu1, automatic), python-glanceclient:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), ceilometer-common:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), openstack-i18n:ppc64el (2015.2-3.ibm.ubuntu1, automatic), python-oslo.messaging:ppc64el (2.1.0-2.ibm.ubuntu1, automatic), python-swiftclient:ppc64el (2.4.0-1.ibm.ubuntu1, automatic), ceilometer-powervm:ppc64el (1.0.0.0-151119-44, automatic)
End-Date: 2016-01-18  10:28:00

The command line interface

You can do ALL the things you were doing on the HMC using the pvmctl command. The syntax is pretty simple: pvmctl |OBJECT| |ACTION|, where OBJECT can be vios, vm, vea (virtual ethernet adapter), vswitch, lu (logical unit), or anything you want, and ACTION can be list, delete, create, or update. Here are a few examples:

  • List the Virtual I/O Servers:
  • # pvmctl vios list
    Virtual I/O Servers
    +--------------+----+---------+----------+------+-----+-----+
    |     Name     | ID |  State  | Ref Code | Mem  | CPU | Ent |
    +--------------+----+---------+----------+------+-----+-----+
    | s00ia9940825 | 1  | running |          | 8192 |  2  | 0.2 |
    | s00ia9940826 | 2  | running |          | 8192 |  2  | 0.2 |
    +--------------+----+---------+----------+------+-----+-----+
    
  • List the partitions (note the -d for display-fields, allowing me to print some attributes):
  • # pvmctl vm list
    Logical Partitions
    +----------+----+----------+----------+----------+-------+-----+-----+
    |   Name   | ID |  State   |   Env    | Ref Code |  Mem  | CPU | Ent |
    +----------+----+----------+----------+----------+-------+-----+-----+
    | aix72ca> | 3  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    |   nova   | 4  | running  | AIX/Lin> | Linux p> |  8192 |  2  | 0.5 |
    | s00vl99> | 5  | running  | AIX/Lin> | Linux p> | 10240 |  2  | 0.2 |
    | test-59> | 6  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    +----------+----+----------+----------+----------+-------+-----+-----+
    # pvmctl list vm -d name id 
    [..]
    # pvmctl vm list -i id=4 --display-fields LogicalPartition.name
    name=aix72-1-d3707953-00000090
    # pvmctl vm list  --display-fields LogicalPartition.name LogicalPartition.id LogicalPartition.srr_enabled SharedProcessorConfiguration.desired_virtual SharedProcessorConfiguration.uncapped_weight
    name=aix72capture,id=3,srr_enabled=False,desired_virtual=1,uncapped_weight=64
    name=nova,id=4,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=s00vl9940243,id=5,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=test-5925058d-0000008d,id=6,srr_enabled=False,desired_virtual=1,uncapped_weight=128
    
  • Delete the virtual adapter on the partition named nova (note the --parent-id option to select the partition) with a given uuid, which was found with "pvmctl vea list":
  • # pvmctl vea delete --parent-id name=nova --object-id uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
    
  • Power off the lpar named aix72-2:
  • # pvmctl vm power-off -i name=aix72-2-536bf0f8-00000091
    Powering off partition aix72-2-536bf0f8-00000091, this may take a few minutes.
    Partition aix72-2-536bf0f8-00000091 power-off successful.
    
  • Delete the lpar named aix72-2:
  • # pvmctl vm delete -i name=aix72-2-536bf0f8-00000091
    
  • Delete the vswitch named MGMTVSWITCH:
  • # pvmctl vswitch delete -i name=MGMTVSWITCH
    
  • Open a console:
  • #  mkvterm --id 4
    vterm for partition 4 is active.  Press Control+] to exit.
    |
    Elapsed time since release of system processors: 57014 mins 10 secs
    [..]
    
  • Power on an lpar:
  • # pvmctl vm power-on -i name=aix72capture
    Powering on partition aix72capture, this may take a few minutes.
    Partition aix72capture power-on successful.
    

Is this a dream? No more RMC connectivity problems

I'm 100% sure that you have always had problems with RMC connectivity due to firewall issues, ports not opened, or IDS blocking incoming or outgoing RMC traffic. NovaLink is THE solution that will solve all the RMC problems forever. I'm not joking, it's a major improvement for PowerVM. As the NovaLink partition is installed on each host, it can communicate through a dedicated IPv6 link with all the partitions hosted on that host. A dedicated virtual switch called MGMTSWITCH is used to let the RMC flow transit between all the lpars and the NovaLink partition. Of course this virtual switch must be created and one Virtual Ethernet Adapter must also be created on the NovaLink partition. These are the first two actions to do if you want to implement this solution. Before starting, here are a few things you need to know:

  • For security reasons the MGMTSWITCH must be created in VEPA mode. If you are not aware of what VEPA and VEB modes are, here is a reminder:
  • In VEB mode all the partitions connected to the same vlan can communicate together. We do not want that, as it is a security issue.
  • VEPA mode gives us the ability to isolate lpars that are on the same subnet: lpar to lpar traffic is forced out of the machine. This is what we want.
  • The PVID for this VEPA network is 4094.
  • The adapter in the NovaLink partition must be a trunk adapter.
  • It is mandatory to name the VEPA vswitch MGMTSWITCH.
  • At lpar creation, if the MGMTSWITCH exists, a new Virtual Ethernet Adapter will be automatically created on the deployed lpar.
  • To be correctly configured, the deployed lpar needs the latest level of rsct code (3.2.1.0 for now).
  • The latest cloud-init version must be deployed on the captured lpar used to make the image.
  • You don't need to configure any address on this adapter: on the deployed lpars it is configured with a link-local address (the IPv6 equivalent of the 169.254.0.0/16 addresses used in IPv4; note that any IPv6 adapter must "by design" have a link-local address).

mgmtswitch2

  • Create the virtual switch called MGMTSWITCH in Vepa mode:
  • # pvmctl vswitch create --name MGMTSWITCH --mode=Vepa
    # pvmctl vswitch list  --display-fields VirtualSwitch.name VirtualSwitch.mode 
    name=ETHERNET0,mode=Veb
    name=vdct,mode=Veb
    name=vdcb,mode=Veb
    name=vdca,mode=Veb
    name=MGMTSWITCH,mode=Vepa
    
  • Create a virtual ethernet adapter on the NovaLink partition with PVID 4094 and a trunk priority set to 1 (it's a trunk adapter). Note that we now have two adapters on the NovaLink partition (one in IPv4 (routable) and the other one in IPv6 (non-routable)):
  • # pvmctl vea create --pvid 4094 --vswitch MGMTSWITCH --trunk-pri 1 --parent-id name=nova
    # pvmctl vea list --parent-id name=nova
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=False
      is_trunk=False
      loc_code=U8286.41A.216666-V3-C2
      mac=EE3B84FD1402
      pvid=666
      slot=2
      uuid=05a91ab4-9784-3551-bb4b-9d22c98934e6
      vswitch_id=1
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=True
      is_trunk=True
      loc_code=U8286.41A.216666-V3-C34
      mac=B6F837192E63
      pvid=4094
      slot=34
      trunk_pri=1
      uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
      vswitch_id=4
    

    Configure the link-local IPv6 address in the NovaLink partition:

    # more /etc/network/interfaces
    [..]
    auto eth1
    iface eth1 inet manual
     up /sbin/ifconfig eth1 0.0.0.0
    # ifup eth1
    # ifconfig eth1
    eth1      Link encap:Ethernet  HWaddr b6:f8:37:19:2e:63
              inet6 addr: fe80::b4f8:37ff:fe19:2e63/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:1454 (1.4 KB)
              Interrupt:34
    

Capture an AIX host with the latest version of rsct (3.2.1.0 or later) and the latest version of cloud-init installed. This version of RMC/rsct handles this new feature, so it is mandatory to have it installed on the captured host. When PowerVC deploys a Virtual Machine on a Nova managed host with this version of rsct installed, a new adapter with PVID 4094 in the virtual switch MGMTSWITCH will be created and all the RMC traffic will use this adapter instead of your public IP address (a quick check of the cloud-init side is sketched after the rsct output below):

# lslpp -L rsct*
  Fileset                      Level  State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  rsct.core.auditrm          3.2.1.0    C     F    RSCT Audit Log Resource
                                                   Manager
  rsct.core.errm             3.2.1.0    C     F    RSCT Event Response Resource
                                                   Manager
  rsct.core.fsrm             3.2.1.0    C     F    RSCT File System Resource
                                                   Manager
  rsct.core.gui              3.2.1.0    C     F    RSCT Graphical User Interface
  rsct.core.hostrm           3.2.1.0    C     F    RSCT Host Resource Manager
  rsct.core.lprm             3.2.1.0    C     F    RSCT Least Privilege Resource
                                                   Manager
  rsct.core.microsensor      3.2.1.0    C     F    RSCT MicroSensor Resource
                                                   Manager
  rsct.core.rmc              3.2.1.1    C     F    RSCT Resource Monitoring and
                                                   Control
  rsct.core.sec              3.2.1.0    C     F    RSCT Security
  rsct.core.sensorrm         3.2.1.0    C     F    RSCT Sensor Resource Manager
  rsct.core.sr               3.2.1.0    C     F    RSCT Registry
  rsct.core.utils            3.2.1.1    C     F    RSCT Utilities
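The cloud-init side can be checked too; assuming it was installed as an RPM (as with the packages shipped for PowerVC activation), a quick check looks like this (the exact package name and version depend on your installation):

$ rpm -qa | grep -i cloud-init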

When this image is deployed, a new adapter is created in the MGMTSWITCH virtual switch and an IPv6 link-local address is configured on it. You can check the cloud-init activation log to see that the IPv6 address is configured at activation time:

# pvmctl vea list --parent-id name=aix72-2-0a0de5c5-00000095
--------------------------
| VirtualEthernetAdapter |
--------------------------
  is_tagged_vlan_supported=True
  is_trunk=False
  loc_code=U8286.41A.216666-V5-C32
  mac=FA620F66FF20
  pvid=3331
  slot=32
  uuid=7f1ec0ab-230c-38af-9325-eb16999061e2
  vswitch_id=1
--------------------------
| VirtualEthernetAdapter |
--------------------------
  is_tagged_vlan_supported=True
  is_trunk=False
  loc_code=U8286.41A.216666-V5-C33
  mac=46A066611B09
  pvid=4094
  slot=33
  uuid=560c67cd-733b-3394-80f3-3f2a02d1cb9d
  vswitch_id=4
# ifconfig -a
en0: flags=1e084863,14c0
        inet 10.10.66.66 netmask 0xffffff00 broadcast 10.14.33.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e084863,14c0
        inet6 fe80::c032:52ff:fe34:6e4f/64
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
sit0: flags=8100041
        inet6 ::10.10.66.66/96
[..]

Note that the link-local address is configured at activation time (addresses starting with fe80):

# more /var/log/cloud-init-output.log
[..]
auto eth1

iface eth1 inet6 static
    address fe80::c032:52ff:fe34:6e4f
    hwaddress ether c2:32:52:34:6e:4f
    netmask 64
    pre-up [ $(ifconfig eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}') = "c2:32:52:34:6e:4f" ]
        dns-search fr.net.intra
# entstat -d ent1 | grep -iE "switch|vlan"
Invalid VLAN ID Packets: 0
Port VLAN ID:  4094
VLAN Tag IDs:  None
Switch ID: MGMTSWITCH

To be sure everything is working correctly, here is a proof test. I'm taking down the en0 interface, on which the public IPv4 address is configured. Then I'm launching a tcpdump on en1 (the MGMTSWITCH adapter). Finally I'm resizing the Virtual Machine with PowerVC. AND EVERYTHING IS WORKING GREAT !!!! AWESOME !!! :-) (note the fe80 to fe80 communication):

# ifconfig en0 down detach ; tcpdump -i en1 port 657
tcpdump: WARNING: en1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
22:00:43.224964 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: S 4049792650:4049792650(0) win 65535 
22:00:43.225022 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: S 2055569200:2055569200(0) ack 4049792651 win 28560 
22:00:43.225051 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 1 win 32844 
22:00:43.225547 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 1:209(208) ack 1 win 32844 
22:00:43.225593 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: . ack 209 win 232 
22:00:43.225638 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 1:97(96) ack 209 win 232 
22:00:43.225721 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 209:377(168) ack 97 win 32844 
22:00:43.225835 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 97:193(96) ack 377 win 240 
22:00:43.225910 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 377:457(80) ack 193 win 32844 
22:00:43.226076 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 193:289(96) ack 457 win 240 
22:00:43.226154 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 457:529(72) ack 289 win 32844 
22:00:43.226210 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 289:385(96) ack 529 win 240 
22:00:43.226276 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 529:681(152) ack 385 win 32844 
22:00:43.226335 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 385:481(96) ack 681 win 249 
22:00:43.424049 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 481 win 32844 
22:00:44.725800 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88
22:00:44.726111 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
22:00:50.137605 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 632
22:00:50.137900 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
22:00:50.183108 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
22:00:51.683382 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
22:00:51.683661 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88

To be sure the security requirements are met, from the lpar I'm pinging the NovaLink host (first ping), which answers, and then the second lpar (second ping), which does not work. (And this is what we want !!!)

# ping fe80::d09e:aff:fecf:a868
PING fe80::d09e:aff:fecf:a868 (fe80::d09e:aff:fecf:a868): 56 data bytes
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=0 ttl=64 time=0.203 ms
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=1 ttl=64 time=0.206 ms
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=2 ttl=64 time=0.216 ms
^C
--- fe80::d09e:aff:fecf:a868 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
# ping fe80::44a0:66ff:fe61:1b09
PING fe80::44a0:66ff:fe61:1b09 (fe80::44a0:66ff:fe61:1b09): 56 data bytes
^C
--- fe80::44a0:66ff:fe61:1b09 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

PowerVC 1.3.0.1 Dynamic Resource Optimizer

In addition to the NovaLink part of this blog post I also wanted to talk about the killer app of 2016: Dynamic Resource Optimizer. This feature can be used on any PowerVC 1.3.0.1 managed host (you obviously need at least two hosts). DRO is in charge of re-balancing your Virtual Machines across all the available hosts (in the host group). To sum up, if a host is experiencing a heavy load and reaches a certain amount of CPU consumption over a period of time, DRO will move your virtual machines to re-balance the load across all the available hosts (this is done at a host level). Here are a few details about DRO:

  • The DRO configuration is done at a host level.
  • You set up a threshold (in the capture below) which, when reached, triggers the Live Partition Mobility moves or the mobile core movements (Power Enterprise Pool).
  • droo6
    droo3

  • To be triggered, this threshold must be reached a certain number of times (stabilization) over a period you define (run interval).
  • You can choose to move virtual machines using Live Partition Mobility, or to move "cores" using Power Enterprise Pool (you can do both; moving CPUs will always be preferred over moving partitions).
  • DRO can be run in advise mode (nothing is done; a warning is raised in the new DRO events tab) or in active mode (which does the job and moves things).
    droo2
    droo1

  • Your most critical virtual machines can be excluded from DRO:
  • droo5

How does DRO choose which machines are moved

I have been running DRO in production for one month now and I have had the time to check what is going on behind the scenes. How does DRO choose which machines are moved when a Live Partition Mobility operation must be run to relieve a heavy load on a host? To find out, I decided to launch 3 different cpuhog processes (16 forks, 4VP, SMT4), which eat CPU resources, on three different lpars with 4VP each. On PowerVC I can check that, before launching these processes, the CPU consumption is fine on this host (the three lpars are running on the same host):

droo4

# cat cpuhog.pl
#!/usr/bin/perl
# Simple CPU burner: fork 16 children, each one (and the parent) spins in an infinite loop.

print "eating the CPUs\n";

foreach $i (1..16) {
      $pid = fork();
      last if $pid == 0;      # a child stops forking and falls through to the busy loop
      print "created PID $pid\n";
}

# Busy loop eating CPU cycles
while (1) {
      $x++;
}
# perl cpuhog.pl
eating the CPUs
created PID 47514604
created PID 22675712
created PID 3015584
created PID 21496152
created PID 25166098
created PID 26018068
created PID 11796892
created PID 33424106
created PID 55444462
created PID 65077976
created PID 13369620
created PID 10813734
created PID 56623850
created PID 19333542
created PID 58393312
created PID 3211988

I wait a couple of minutes and I see that the virtual machines on which the cpuhog processes were launched are the ones that get migrated. So we can say that PowerVC moves the machines that are eating CPU (another strategy could have been to move the non-CPU-eating machines, to let the busy ones do their job without going through a mobility operation).

# errpt | head -3
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
A5E6DB96   0118225116 I S pmig           Client Partition Migration Completed
08917DC6   0118225116 I S pmig           Client Partition Migration Started

After the moves, the load is fine again on the host. DRO has done the job for me and moved the lpars to meet the configured threshold ;-)

droo7dro_effect

The images below show a good example of the "power" of PowerVC and DRO. To update my Virtual I/O Servers to the latest version, the PowerVC maintenance mode was used to free up the Virtual I/O Servers. After leaving maintenance mode, DRO did the job of re-balancing the Virtual Machines across all the hosts (the red arrows symbolize the maintenance mode actions and the purple ones the DRO actions). You can also see that some lpars were moved across 4 different hosts during this process. All these pictures are taken from real life experience on my production systems. This is not a lab environment, this is part of my production. So yes, DRO and PowerVC 1.3.0.1 are production ready. Hell yes!

real1
real2
real3
real4
real5

Conclusion

As my environment grows bigger, the next step for me will be to move to NovaLink on my P8 hosts. Please note that the NovaLink Co-Management feature is today a "TechPreview" but should go GA very soon. Talking about DRO, I had been waiting for this for years and it finally happened. I can assure you that it is production ready; to prove this I'll just give you one number: to upgrade my Virtual I/O Servers to the 2.2.4.10 release using PowerVC maintenance mode and DRO, more than 1000 Live Partition Mobility moves were performed without any outage on production servers and during working hours. Nobody in my company was aware of this during the operations. It was a seamless experience for everybody.

A first look at SRIOV vNIC adapters

I have the chance to participate in the current Early Shipment Program (ESP) for Power Systems, especially the software part. One of my tasks is to test a new feature called SRIOV vNIC. For those who do not know anything about SRIOV, this technology is comparable to LHEA except that it is based on an industry standard (and has a couple of other features). By using an SRIOV adapter you can divide a physical port into what we call Virtual Functions (or Logical Ports) and map a Virtual Function to a partition. You can also set "Quality of Service" on these Virtual Functions: at creation time you configure the Virtual Function to take a certain percentage of the physical port. This can be very useful if you want to be sure that your production server will always have a guaranteed bandwidth, instead of using a Shared Ethernet Adapter where all the client partitions compete for the bandwidth. Customers also use SRIOV adapters for performance purposes; as nothing goes through the Virtual I/O Server, the latency added by it is eliminated and CPU cycles are saved on the Virtual I/O Server side (Shared Ethernet Adapters consume a lot of CPU cycles). If you are not aware of what SRIOV is, I encourage you to check the IBM Redbook about it (http://www.redbooks.ibm.com/abstracts/redp5065.html?Open). Unfortunately you can't move a partition with Live Partition Mobility if it has a Virtual Function assigned to it. Using vNICs allows you to use SRIOV through the Virtual I/O Servers and keeps the possibility to move your partition even if you are using an SRIOV logical port. The best of both worlds: performance/QoS and virtualization. Is this the end of the Shared Ethernet Adapter?

SRIOV vNIC, what’s this ?

Before talking about the technical details it is important to understand what vNICs are. When I explain this to newbies I often refer to NPIV: imagine something similar to NPIV but for the network part. By using SRIOV vNIC:

  • A Virtual Function (SRIOV Logical Port) is created and assigned to the Virtual I/O Server.
  • A vNIC adapter is created in the client partition.
  • The Virtual Function and the vNIC adapter are linked (mapped) together.
  • This is a one-to-one relationship between a Virtual Function and a vNIC (just as a vfcs adapter is a one-to-one relationship between the vfcs and the physical fibre channel adapter).

On the image below the vNIC lpars are the "yellow" ones. You can see here that the SRIOV adapter is divided into different Virtual Functions, and some of them are mapped to the Virtual I/O Servers. The relationship between the Virtual Function and the vNIC is achieved by a vnicserver (this is a special Virtual I/O Server device).
vNIC

One of the major advantages of using vNIC is that you eliminate the need for the Virtual I/O Server in the data path:

  • The network data flow is direct between the partition memory and the SRIOV adapter; there is no data copy passing through the Virtual I/O Server, which eliminates the CPU cost and the latency of doing that. This is achieved by LRDMA. Pretty cool!
  • The vNIC inherits the bandwidth allocation of the Virtual Function (QoS). If the VF is configured with a capacity of 2%, the vNIC will also have this capacity.
  • vNIC2

vNIC Configuration

Before checking all the details of how to configure an SRIOV vNIC adapter, you have to check all the prerequisites. As this is a new feature you will need the latest level of … everything. My advice is to stay as up to date as possible.

vNIC Prerequisites

These outputs are taken from the Early Shipment Program; all of this may change at the GA release:

  • Hardware Management Console v840:
  • # lshmc -V
    lshmc -V
    "version= Version: 8
     Release: 8.4.0
     Service Pack: 0
    HMC Build level 20150803.3
    ","base_version=V8R8.4.0
    "
    
  • Power 8 only, firmware 840 at least (both enterprise and scale out systems):
  • firmware

  • AIX 7.1TL4 or AIX 7.2:
  • # oslevel -s
    7200-00-00-0000
    # cat /proc/version
    Oct 20 2015
    06:57:03
    1543A_720
    @(#) _kdb_buildinfo unix_64 Oct 20 2015 06:57:03 1543A_720
    
  • Obviously, at least one SRIOV capable adapter!

Using the HMC GUI

The configuration of a vNIC is done at the partition level and is only available in the enhanced version of the GUI. Select the virtual machine on which you want to add the vNIC; in the Virtual I/O tab you'll see a new Virtual NICs section. Click on "Virtual NICs" and a new panel will open with a button called "Add Virtual NIC"; just click it to add a Virtual NIC:

vnic_n1
vnic_conf2

All the SRIOV capable ports will be displayed on the next screen. Choose the SRIOV port you want (a Virtual Function will be created on it; you don't have to do anything more, the creation of a vNIC automatically creates the Virtual Function, assigns it to the Virtual I/O Server and does the mapping to the vNIC for you). Choose the Virtual I/O Server that will host the vnicserver (don't worry, we will talk about vNIC redundancy later in this post) and the Virtual NIC Capacity, which is the percentage of the physical SRIOV port dedicated to this vNIC. The capacity has to be a multiple of 2, and be careful with it: it can't be changed afterwards, you'll have to delete and recreate the vNIC to change it:

vnic_conf3

The “Advanced Virtual NIC Settings” allow you to choose the Virtual NIC Adapter ID, set a MAC address, and configure the VLAN restrictions and VLAN tagging. In the example below I'm configuring my Virtual NIC in VLAN 310:

vnic_conf4
vnic_conf5
allvnic

Using the HMC Command Line

As always, the configuration can also be achieved using the HMC command line: lshwres to list vNICs and chhwres to create them.

List SRIOV adapters to get the adapter_id needed by the chhwres command:

# lshwres -r sriov --rsubtype adapter -m blade-8286-41A-21AFFFF
adapter_id=1,slot_id=21020014,adapter_max_logical_ports=48,config_state=sriov,functional_state=1,logical_ports=48,phys_loc=U78C9.001.WZS06RN-P1-C12,phys_ports=4,sriov_status=running,alternate_config=0
# lshwres -r virtualio  -m blade-8286-41A-21AFFFF --rsubtype vnic --level lpar --filter "lpar_names=72vm1"
lpar_name=72vm1,lpar_id=9,slot_num=7,desired_mode=ded,curr_mode=ded,port_vlan_id=310,pvid_priority=0,allowed_vlan_ids=all,mac_addr=ee3b8cd87707,allowed_os_mac_addrs=all,desired_capacity=2.0,backing_devices=sriov/vios1/2/1/1/27004008/2.0

Create the vNIC:

# chhwres -r virtualio -m blade-8286-41A-21AFFFF -o a -p 72vm1 --rsubtype vnic -v -a "port_vlan_id=310,backing_devices=sriov/vios2/1/1/1/2"

List the vNIC after create:

# lshwres -r virtualio  -m blade-8286-41A-21AFFFF --rsubtype vnic --level lpar --filter "lpar_names=72vm1"
lpar_name=72vm1,lpar_id=9,slot_num=7,desired_mode=ded,curr_mode=ded,port_vlan_id=310,pvid_priority=0,allowed_vlan_ids=all,mac_addr=ee3b8cd87707,allowed_os_mac_addrs=all,desired_capacity=2.0,backing_devices=sriov/vios1/2/1/1/27004008/2.0
lpar_name=72vm1,lpar_id=9,slot_num=2,desired_mode=ded,curr_mode=ded,port_vlan_id=310,pvid_priority=0,allowed_vlan_ids=all,mac_addr=ee3b8cd87702,allowed_os_mac_addrs=all,desired_capacity=2.0,backing_devices=sriov/vios2/1/1/1/2700400a/2.0
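
A quick word about the backing_devices string used in the chhwres command above. As far as I can tell (this is my own reading, deduced by comparing the chhwres input with the lshwres outputs, so double check it against the HMC man page), the format is sriov / vios name / vios partition id / SRIOV adapter id / physical port id / capacity, and lshwres inserts the SRIOV logical port id just before the capacity. A small sketch building the same command from variables:

# same vNIC creation as above, with the backing_devices fields spelled out
vios_name=vios2 ; vios_id=1 ; adapter_id=1 ; phys_port=1 ; capacity=2
chhwres -r virtualio -m blade-8286-41A-21AFFFF -o a -p 72vm1 --rsubtype vnic \
  -a "port_vlan_id=310,backing_devices=sriov/$vios_name/$vios_id/$adapter_id/$phys_port/$capacity"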

System and Virtual I/O Server Side:

  • On the Virtual I/O Server you can use two commands to check your vNIC configuration. First, use the lsmap command to check the one-to-one relationship between the VF and the vNIC (you can see in the output below that a VF and a vnicserver device are created, and you can also see the name of the vNIC on the client partition side; a small loop to check every vnicserver at once is shown after this list):
  • # lsdev | grep VF
    ent4             Available   PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
    # lsdev | grep vnicserver
    vnicserver0      Available   Virtual NIC Server Device (vnicserver)
    # lsmap -vadapter vnicserver0 -vnic
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver0   U8286.41A.21FFFFF-V2-C32897             6 72nim1         AIX
    
    Backing device:ent4
    Status:Available
    Physloc:U78C9.001.WZS06RN-P1-C12-T4-S16
    Client device name:ent1
    Client device physloc:U8286.41A.21FFFFF-V6-C3
    
  • You can get more details (QoS, vlan tagging, port states) by using the vnicstat command:
  • # vnicstat -b vnicserver0
    [..]
    --------------------------------------------------------------------------------
    VNIC Server Statistics: vnicserver0
    --------------------------------------------------------------------------------
    Device Statistics:
    ------------------
    State: active
    Backing Device Name: ent4
    
    Client Partition ID: 6
    Client Partition Name: 72nim1
    Client Operating System: AIX
    Client Device Name: ent1
    Client Device Location Code: U8286.41A.21FFFFF-V6-C3
    [..]
    Device ID: df1028e214103c04
    Version: 1
    Physical Port Link Status: Up
    Logical Port Link Status: Up
    Physical Port Speed: 1Gbps Full Duplex
    [..]
    Port VLAN (Priority:ID): 0:3331
    [..]
    VF Minimum Bandwidth: 2%
    VF Maximum Bandwidth: 100%
    
  • On the client side you can list your vNICs and, as always, get more details using the entstat command:
  • # lsdev -c adapter -s vdevice -t IBM,vnic
    ent0 Available  Virtual NIC Client Adapter (vnic)
    ent1 Available  Virtual NIC Client Adapter (vnic)
    ent3 Available  Virtual NIC Client Adapter (vnic)
    ent4 Available  Virtual NIC Client Adapter (vnic)
    # entstat -d ent0 | more
    [..]
    ETHERNET STATISTICS (ent0) :
    Device Type: Virtual NIC Client Adapter (vnic)
    [..]
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    ------------------------------------------------------
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    
    Speed Running:  1 Gbps Full Duplex
    
    Jumbo Frames: Disabled
    [..]
    Port VLAN ID Status: Enabled
            Port VLAN ID: 3331
            Port VLAN Priority: 0
    
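If you have a lot of vNICs, checking the vnicservers one by one quickly becomes painful. Here is a minimal sketch (my own loop, only reusing the lsdev, lsmap and vnicstat commands shown above, so it should run as is from the padmin shell) that prints the client mapping and the link states of every vnicserver:

# loop over every vnicserver of this Virtual I/O Server
lsdev | grep vnicserver | while read v rest; do
  echo "===== $v ====="
  lsmap -vadapter $v -vnic -fmt :
  vnicstat -b $v | grep -E "Link Status|Bandwidth"
done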

Redundancy

You will certainly agree that having such a cool new feature without anything fully redundant would be a shame. Fortunately we have a solution here, with the return in great fanfare of the Network Interface Backup (NIB). As I told you before, each time a vNIC is created a vnicserver is created on one of the Virtual I/O Servers (at vNIC creation time you choose on which Virtual I/O Server it will be created). So to be fully redundant and get a failover capability, the only way is to create two vNIC adapters (one backed by the first Virtual I/O Server, the second backed by the second Virtual I/O Server) and to build a Network Interface Backup on top of them, like in the old times :-). Here are a couple of things and best practices to know before doing this.

  • You can't use two VFs coming from the same physical port of the same SRIOV adapter (the NIB creation will succeed, but any configuration on top of this NIB will fail).
  • You can use two VFs coming from the same SRIOV adapter as long as they are on two different physical ports (this is the example I show below).
  • The best practice is to use two VFs coming from two different SRIOV adapters (you can then afford to lose one of the two adapters).

vNIC_nib

Verify on your partition that you have two vNIC adapters and check that their status is OK using the entstat command (a loop to check all of them at once is shown after the outputs below):

  • Both vNIC are available on the client partition:
  • # lsdev -c adapter -s vdevice -t IBM,vnic
    ent0 Available  Virtual NIC Client Adapter (vnic)
    ent1 Available  Virtual NIC Client Adapter (vnic)
    # lsdev -c adapter -s vdevice -t IBM,vnic -F physloc
    U8286.41A.21FFFFF-V6-C2
    U8286.41A.21FFFFF-V6-C3
    
  • Check for the first vNIC (the one backed by the first Virtual I/O Server) that "Current Link State", "Logical Port State" and "Physical Port State" are OK (all of them need to be Up):
  • # entstat -d ent0 | grep -p vnic
    -------------------------------------------------------------
    ETHERNET STATISTICS (ent0) :
    Device Type: Virtual NIC Client Adapter (vnic)
    Hardware Address: ee:3b:86:f6:45:02
    Elapsed Time: 0 days 0 hours 0 minutes 0 seconds
    
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    ------------------------------------------------------
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    
  • Same check for the second vNIC (the one backed by the second Virtual I/O Server):
  • # entstat -d ent1 | grep -p vnic
    -------------------------------------------------------------
    ETHERNET STATISTICS (ent1) :
    Device Type: Virtual NIC Client Adapter (vnic)
    Hardware Address: ee:3b:86:f6:45:03
    Elapsed Time: 0 days 0 hours 0 minutes 0 seconds
    
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    ------------------------------------------------------
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    
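To check all the vNICs of a partition in one shot, a small loop like this one does the job (a sketch only reusing the lsdev and entstat commands above, run directly on the AIX client):

# print the link and port states of every vNIC client adapter
for ent in $(lsdev -c adapter -s vdevice -t IBM,vnic -F name); do
  echo "===== $ent ====="
  entstat -d $ent | grep -E "Link State|Port State"
done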

Verify on both Virtual I/O Servers that the two vNICs are backed by two different SRIOV adapters (for the purpose of this test I'm using two different ports of the same SRIOV adapter, but it works the same way with two different adapters). You can see in the outputs below that on Virtual I/O Server 1 the vNIC is backed by the port in position 3 (T3) and that on Virtual I/O Server 2 the vNIC is backed by the port in position 4 (T4):

  • Once again, use the lsmap command on the first Virtual I/O Server to check this (note that you can also check the client name and the client device):
  • # lsmap -vadapter vnicserver0 -vnic
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver0   U8286.41A.21AFF8V-V1-C32897             6 72nim1         AIX
    
    Backing device:ent4
    Status:Available
    Physloc:U78C9.001.WZS06RN-P1-C12-T3-S13
    Client device name:ent0
    Client device physloc:U8286.41A.21AFF8V-V6-C2
    
  • Same thing on the second Virtual I/O Server:
  • # lsmap -vadapter vnicserver0 -vnic -fmt :
    vnicserver0:U8286.41A.21AFF8V-V2-C32897:6:72nim1:AIX:ent4:Available:U78C9.001.WZS06RN-P1-C12-T4-S14:ent1:U8286.41A.21AFF8V-V6-C3
    

Finally, create the Network Interface Backup and put an IP on top of it:

# mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1
ent2 Available
# mktcpip -h 72nim1 -a 10.44.33.223 -i en2 -g 10.44.33.254 -m 255.255.255.0 -s
en2
72nim1
inet0 changed
en2 changed
inet0 changed
[..]
# echo "vnic" | kdb
+-------------------------------------------------+
|       pACS       | Device | Link |    State     |
|------------------+--------+------+--------------|
| F1000A0032880000 |  ent0  |  Up  |     Open     |
|------------------+--------+------+--------------|
| F1000A00329B0000 |  ent1  |  Up  |     Open     |
+-------------------------------------------------+

Let's now try different things to check that the redundancy is working. First, let's shut down one of the Virtual I/O Servers and ping our machine from another host:

# ping 10.14.33.223
PING 10.14.33.223 (10.14.33.223) 56(84) bytes of data.
64 bytes from 10.14.33.223: icmp_seq=1 ttl=255 time=0.496 ms
64 bytes from 10.14.33.223: icmp_seq=2 ttl=255 time=0.528 ms
64 bytes from 10.14.33.223: icmp_seq=3 ttl=255 time=0.513 ms
[..]
64 bytes from 10.14.33.223: icmp_seq=40 ttl=255 time=0.542 ms
64 bytes from 10.14.33.223: icmp_seq=41 ttl=255 time=0.514 ms
64 bytes from 10.14.33.223: icmp_seq=47 ttl=255 time=0.550 ms
64 bytes from 10.14.33.223: icmp_seq=48 ttl=255 time=0.596 ms
[..]
--- 10.14.33.223 ping statistics ---
50 packets transmitted, 45 received, 10% packet loss, time 49052ms
rtt min/avg/max/mdev = 0.457/0.525/0.596/0.043 ms
# errpt | more
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
59224136   1120200815 P H ent2           ETHERCHANNEL FAILOVER
F655DA07   1120200815 I S ent0           VNIC Link Down
3DEA4C5F   1120200815 T S ent0           VNIC Error CRQ
81453EE1   1120200815 T S vscsi1         Underlying transport error
DE3B8540   1120200815 P H hdisk0         PATH HAS FAILED
# echo "vnic" | kdb
(0)> vnic
+-------------------------------------------------+
|       pACS       | Device | Link |    State     |
|------------------+--------+------+--------------|
| F1000A0032880000 |  ent0  | Down |   Unknown    |
|------------------+--------+------+--------------|
| F1000A00329B0000 |  ent1  |  Up  |     Open     |
+-------------------------------------------------+

Same test, but this time with an "address to ping" configured on the Network Interface Backup; I'm only losing 4 packets:

# ping 10.14.33.223
[..]
64 bytes from 10.14.33.223: icmp_seq=41 ttl=255 time=0.627 ms
64 bytes from 10.14.33.223: icmp_seq=42 ttl=255 time=0.548 ms
64 bytes from 10.14.33.223: icmp_seq=46 ttl=255 time=0.629 ms
64 bytes from 10.14.33.223: icmp_seq=47 ttl=255 time=0.492 ms
[..]
# errpt | more
59224136   1120203215 P H ent2           ETHERCHANNEL FAILOVER
F655DA07   1120203215 I S ent0           VNIC Link Down
3DEA4C5F   1120203215 T S ent0           VNIC Error CRQ

vNIC Live Partition Mobility

You can use Live Partition Mobility with SRIOV vNIC out of the box; it is super simple and fully supported by IBM. As always, I'll show you how to do it using the HMC GUI and the command line:

Using the GUI

First, validate the mobility operation; this will allow you to choose the destination SRIOV adapter/port on which to map each of your current vNICs. You have to choose:

  • The adapter (if you have more than one SRIOV adapter).
  • The Physical port on which the vNIC will be mapped.
  • The Virtual I/O Server on which the vnicserver will be created.

New options are now available in the mobility validation panel:

lpmiov1

Modify each vNIC to match your destination SRIOV adapter and ports (choose the destination Virtual I/O Server here):

lpmiov2
lpmiov3

Then migrate:

lpmiov4

IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
A5E6DB96   1120205915 I S pmig           Client Partition Migration Completed
4FB9389C   1120205915 I S ent1           VNIC Link Up
F655DA07   1120205915 I S ent1           VNIC Link Down
11FDF493   1120205915 I H ent2           ETHERCHANNEL RECOVERY
4FB9389C   1120205915 I S ent1           VNIC Link Up
4FB9389C   1120205915 I S ent0           VNIC Link Up
[..]
59224136   1120205915 P H ent2           ETHERCHANNEL FAILOVER
B50A3F81   1120205915 P H ent2           TOTAL ETHERCHANNEL FAILURE
F655DA07   1120205915 I S ent1           VNIC Link Down
3DEA4C5F   1120205915 T S ent1           VNIC Error CRQ
F655DA07   1120205915 I S ent0           VNIC Link Down
3DEA4C5F   1120205915 T S ent0           VNIC Error CRQ
08917DC6   1120205915 I S pmig           Client Partition Migration Started

The ping test during the LPM shows only 9 pings lost, due to an EtherChannel failover (one of my ports was down on the destination server):

# ping 10.14.33.223
64 bytes from 10.14.33.223: icmp_seq=23 ttl=255 time=0.504 ms
64 bytes from 10.14.33.223: icmp_seq=31 ttl=255 time=0.607 ms

Using the command line

I'm moving the partition back using the HMC command line interface; check the man page for all the details. Here is the format of the vnic_mappings attribute: slot_num/ded/[vios_lpar_name]/[vios_lpar_id]/[adapter_id]/[physical_port_id]/[capacity]:

  • Validate:
  • # migrlpar -o v -m blade-8286-41A-21AFFFF -t  runner-8286-41A-21AEEEE  -p 72nim1 -i 'vnic_mappings="2/ded/vios1/1/1/2/2,3/ded/vios2/2/1/3/2"'
    
    Warnings:
    HSCLA291 The selected partition may have an open virtual terminal session.  The management console will force termination of the partition's open virtual terminal session when the migration has completed.
    
  • Migrate:
  • # migrlpar -o m -m blade-8286-41A-21AFFFF -t  runner-8286-41A-21AEEEE  -p 72nim1 -i 'vnic_mappings="2/ded/vios1/1/1/2/2,3/ded/vios2/2/1/3/2"'
    
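To make the vnic_mappings string a little less headache-inducing, here is how I read the first entry of the validate command above (my own breakdown, following the format given before, to be double checked against the migrlpar man page):

# 2/ded/vios1/1/1/2/2
#  2      -> client slot number of the vNIC to remap
#  ded    -> dedicated mode
#  vios1  -> destination Virtual I/O Server hosting the new vnicserver
#  1      -> destination Virtual I/O Server partition id
#  1      -> destination SRIOV adapter id
#  2      -> destination physical port id
#  2      -> capacity (percent) of the new Virtual Function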

Port Labelling

One very annoying thing when using LPM with vNICs is that you have to redo the mapping of your vNICs every time you move. The default choices are never right: the GUI will always show you the first port or the first adapter and you have to do the job by yourself. Even worse, with the command line the vnic_mappings can give you some headaches :-). Fortunately there is a feature called port labelling. You can put a label on each SRIOV physical port on all your machines. My advice is to tag the ports serving the same network and the same VLAN with the same label on every machine. During the mobility operation, if labels match between the two machines, the adapter/port combination carrying that label is automatically chosen and you have nothing to map on your own. Super useful. The outputs below show you how to label your SRIOV ports:

label1
label2

# chhwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=3,phys_port_label=adapter1port3"
# chhwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=2,phys_port_label=adapter1port2"
# lshwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport --level eth -F adapter_id,phys_port_label
1,adapter1port2
1,adapter1port3
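
Since the whole point is to have identical labels everywhere, you can script the labelling from the HMC command line. The sketch below is my own loop, reusing only the lssyscfg, chhwres and lshwres commands already shown; the adapter and port ids are the ones of my example, and the chhwres will simply fail on machines that do not have such an SRIOV port:

# label adapter 1 / physical port 2 the same way on every managed system
for m in $(lssyscfg -r sys -F name); do
  chhwres -m $m -r sriov --rsubtype physport -o s \
    -a "adapter_id=1,phys_port_id=2,phys_port_label=adapter1port2"
  lshwres -m $m -r sriov --rsubtype physport --level eth -F adapter_id,phys_port_label
done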

At the validation time source and destination ports will automatically be matched:

labelautochoose

What about performance

One of the main reasons I'm looking at SRIOV vNIC adapters is performance. As our whole design is based on being able to move every virtual machine from one host to another, we need a solution providing both mobility and performance. If you have ever tried to run a TSM server in a virtualized environment you probably understand what I mean: in the case of TSM you need a lot of network bandwidth. My current customer and my previous one tried to do that with Shared Ethernet Adapters, and of course it did not work, because a classic Virtual Ethernet Adapter is not able to provide enough bandwidth to a single Virtual I/O client. I'm not an expert on network performance, but the results below are pretty obvious and show the power of vNIC and SRIOV (I know some optimization can be done on the SEA side, but this is just a super simple test).

Methodology

I will compare a classic Virtual Ethernet Adapter with a vNIC in the same configuration; both environments are identical, using the same machines, the same switches and so on:

  • Two machines are used for the test. In the vNIC case both use a single vNIC backed by a 10Gb adapter port; in the Virtual Ethernet Adapter case both are backed by a SEA built on top of a 10Gb adapter.
  • The two machines are running on two different S814s.
  • Entitlement and memory are the same for source and destination machines.
  • In the vNIC case the capacity of the VF is set to 100% and the physical port of the SRIOV adapter is dedicated to the vNIC.
  • In the Virtual Ethernet Adapter case the SEA is dedicated to the test virtual machine.
  • In both cases an MTU of 1500 is used.
  • The tool used for the performance test is iperf (MTU 1500, window size 64K, and 10 TCP threads); see the sketch after this list.
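
For those who want to reproduce the test, the iperf invocations look roughly like the sketch below (64K TCP window and 10 parallel streams as stated above; the address and the 60 seconds duration are just examples, not the exact values of my runs, and iperf has to be installed on both partitions):

# on the partition acting as iperf server
iperf -s -w 64k
# on the partition acting as iperf client: 10 parallel TCP streams for 60 seconds
iperf -c 10.14.33.223 -w 64k -P 10 -t 60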

SEA test for reference only

  • iperf server:
  • seaserver1

  • iperf client:
  • seacli1

vNIC SRIOV test

We are here running the exact same test:

  • iperf server:
  • iperf_vnic_client2

  • iperf client:
  • iperf_vnic_client

With a vNIC I get 300% of the bandwidth I get with a Virtual Ethernet Adapter. Just awesome ;-), and with no tuning at all (out-of-the-box configuration). Nothing more to add; it's pretty obvious that using vNICs for performance will be a must.

Conclusion

Are SRIOV vNICs the end of the SEAs? Maybe, but not yet! For cases needing performance and QoS they will be very useful and quickly adopted (I'm pretty sure I will use them at my current customer to virtualize the TSM servers). But today, in my opinion, SRIOV lacks a real redundancy feature at the adapter level. What I want is a heartbeat communication between the two SRIOV adapters. Having such a feature would finish convincing customers to move from SEA to SRIOV vNIC. I know nothing about the future, but I hope something like that will be available in the next few years. To sum up, SRIOV vNICs are powerful, easy to use and simplify the configuration and management of your Power servers. Please wait for the GA and try this new killer functionality. As always, I hope it helps.

Using the Simplified Remote Restart capability on Power8 Scale Out Servers

A few weeks ago I had to work on Simplified Remote Restart. I'm not lucky enough yet -because of some political decisions in my company- to have access to any E880 or E870; we just have a few scale-out machines to play with (S814). For some critical applications we need, in the future, to be able to restart a virtual machine elsewhere if the system hosting it has failed (hardware problem). We decided a couple of months ago not to use remote restart because it required a reserved storage pool device, which made it too hard to manage. We now have enough P8 boxes to try and understand the new version of remote restart, called Simplified Remote Restart, which does not need any reserved storage pool device. For those who want to understand what remote restart is, I strongly recommend checking my previous blog post about remote restart on two P7 boxes: Configuration of a remote restart partition. For the others, here is what I learned about the simplified version of this awesome feature.

Please keep in mind that the FSP of the source machine must be up to perform a Simplified Remote Restart operation. This means that if, for instance, you lose one of your datacenters or the link between your two datacenters, you cannot use Simplified Remote Restart to restart your partitions on the main/backup site. Simplified Remote Restart only protects you from a hardware failure of your machine. Maybe this will change in the near future, but for the moment it is the most important thing to understand about Simplified Remote Restart.

Updating to the latest version of firmware

I was very surprised when I got my Power8 machines. After deploying these boxes I decided to give Simplified Remote Restart a try, but it was just not possible: since the Power8 Scale Out servers were released they were NOT Simplified Remote Restart capable. The release of the SV830 firmware now enables Simplified Remote Restart on Power8 Scale Out machines. Please note that there is nothing about it in the patch note, so chmod666.org is the only place where you can get this information :-). Here is the patch note: here. One last word: you will read on the internet that you need Power8 to use Simplified Remote Restart. That is true, but only partially true. YOU NEED A P8 MACHINE WITH AT LEAST AN 820 FIRMWARE.

The first thing to do is to update your firmware to the SV830 version (on both systems participating in the simplified remote restart operation):

# updlic -o u -t sys -l latest -m p814-1 -r mountpoint -d /home/hscroot/SV830_048 -v
[..]
# lslic -m p814-1 -F activated_spname,installed_level,ecnumber
FW830.00,48,01SV830
# lslic -m p814-2 -F activated_spname,installed_level,ecnumber
FW830.00,48,01SV830

You can check the firmware version directly from the Hardware Management Console or in the ASMI:

fw1
fw3

After the firmware upgrade, verify that the Simplified Remote Restart capability is now set to true:

fw2

# lssyscfg -r sys -F name,powervm_lpar_simplified_remote_restart_capable
p720-1,0
p814-1,1
p720-2,0
p814-2,1

Prerequisites

These prerequisites apply ONLY to Scale Out systems:

  • To update to firmware SV830_048 you need the latest Hardware Management Console release, which is V8R8.3.0 plus the MH01514 PTF.
  • Obviously, on Scale Out systems SV830_048 is the minimum firmware requirement.
  • The minimum Virtual I/O Server level is 2.2.3.4 (on both source and destination systems).
  • PowerVM Enterprise Edition (to be confirmed).

Enabling simplified remote restart of an existing partition

You will probably want to enable Simplified Remote Restart after an LPM migration/evacuation. After migrating your virtual machine(s) to a Power8 with the Simplified Remote Restart capability, you have to enable this capability on every virtual machine. This can only be done when the machine is shut down, so you first have to stop the virtual machines (after the Live Partition Mobility move) if you want to enable SRR; it can't be done without rebooting the virtual machine:

  • List the partitions running on the system and check which ones are simplified remote restart capable (here only one is):
  • # lssyscfg -r lpar -m p814-1 -F name,simplified_remote_restart_capable
    vios1,0
    vios2,0
    lpar1,1
    lpar2,0
    lpar3,0
    lpar4,0
    lpar5,0
    lpar6,0
    lpar7,0
    
  • For each LPAR that is not simplified remote restart capable, change the simplified_remote_restart_capable attribute using the chsyscfg command. Please note that you can't do this through the Hardware Management Console GUI: in the latest V8R8.3.0, when you try to enable it from the GUI you are told that a reserved storage device is needed, which is true for the classic Remote Restart capability but not for the simplified version. You have to use the command line! (check the screenshots below)
  • You can’t change this attribute while the machine is running:
  • gui_change_to_srr

  • You can't do it with the GUI even after the machine is shut down:
  • gui_change_to_srr2
    gui_change_to_srr3

  • The only way to enable this attribute is to use the Hardware Management Console command line (note in the output below that running LPARs cannot be changed; a version of the loop that only targets stopped partitions is sketched after this list):
  • # for i in lpar2 lpar3 lpar4 lpar5 lpar6 lpar7 ; do chsyscfg -r lpar -m p824-2 -i "name=$i,simplified_remote_restart_capable=1" ; done
    An error occurred while changing the partition named lpar6.
    HSCLA9F8 The remote restart capability of the partition can only be changed when the partition is shutdown.
    An error occurred while changing the partition named lpar7.
    HSCLA9F8 The remote restart capability of the partition can only be changed when the partition is shutdown.
    # lssyscfg -r lpar -m p824-1 -F name,simplified_remote_restart_capable,lpar_env | grep -v vioserver
    lpar1,1,aixlinux
    lpar2,1,aixlinux
    lpar3,1,aixlinux
    lpar4,1,aixlinux
    lpar5,1,aixlinux
    lpar6,0,aixlinux
    lpar7,0,aixlinux
    
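As promised, here is a slightly safer version of the loop, which only touches partitions that are not yet SRR capable and are currently stopped (a sketch, assuming the HMC reports stopped partitions with the "Not Activated" state):

# enable SRR only on stopped, non capable, non VIOS partitions
lssyscfg -r lpar -m p824-2 -F name,state,simplified_remote_restart_capable,lpar_env | \
  grep -v vioserver | grep ",Not Activated,0," | cut -d, -f1 | \
  while read l; do
    chsyscfg -r lpar -m p824-2 -i "name=$l,simplified_remote_restart_capable=1"
  done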

Remote restarting

If you try a Live Partition Mobility operation back to a P7 box, or to a P8 box without the Simplified Remote Restart capability, it will not work: enabling Simplified Remote Restart forces the virtual machine to stay on P8 boxes that have the capability. This is one of the reasons why most customers are not enabling it:

# migrlpar -o v -m p814-1 -t p720-1 -p lpar2
Errors:
HSCLB909 This operation is not allowed because managed system p720-1 does not support PowerVM Simplified Partition Remote Restart.

lpm_not_capable_anymore

On the Hardware Management Console you can see that the virtual machine is simplified remote restart capable by checking its properties:

gui_change_to_srr4

You can now try to remote restart your virtual machines to another server. As always, the state of the source server has to be different from Operating (Power Off, Error, Error - Dump in progress, Initializing), and my advice is to validate before restarting:

# rrstartlpar -o validate -m p824-1 -t p824-2 -p lpar1
# echo $?
0
# rrstartlpar -o restart -m p824-1 -t p824-2 -p lpar1
HSCLA9CE The managed system is not in a valid state to support partition remote restart operations.
# lssyscfg -r sys -F name,state
p824-2,Operating
p824-1,Power Off
# rrstartlpar -o restart -m p824-1 -t p824-2 -p lpar1

When you perform a remote restart operation the machine boots automatically on the destination server. You can check in the errpt that in most cases the partition ID has changed (proving that you are on another machine):

# errpt | more
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
A6DF45AA   0618170615 I O RMCdaemon      The daemon is started.
1BA7DF4E   0618170615 P S SRC            SOFTWARE PROGRAM ERROR
CB4A951F   0618170615 I S SRC            SOFTWARE PROGRAM ERROR
CB4A951F   0618170615 I S SRC            SOFTWARE PROGRAM ERROR
D872C399   0618170615 I O sys0           Partition ID changed and devices recreat

Be very careful with the ghostdev sys0 attribute. Every VM that may be remote restarted needs to have ghostdev set to 0 to avoid an ODM wipe (if you remote restart an LPAR with ghostdev set to 1 you will lose all your ODM customization):

# lsattr -El sys0 -a ghostdev
ghostdev 0 Recreate ODM devices on system change / modify PVID True
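
If one of your machines still has ghostdev set to 1, it can be changed with a simple chdev (a minimal sketch; as always, check the behaviour on your AIX level before mass changing it):

# make sure the ODM is preserved when the partition is remote restarted on another frame
chdev -l sys0 -a ghostdev=0
lsattr -El sys0 -a ghostdev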

When the source machine is up and running again, you have to clean up the old definition of the remote restarted LPAR by launching a cleanup operation. This wipes the old LPAR definition:

# rrstartlpar -o cleanup -m p814-1 -p lpar1

The rrMonitor script (modified version)

There is a script delivered by IBM called rrMonitor. It watches a Power System's state and, if the system is in a particular state, remote restarts one specific virtual machine. As delivered, the script is not really usable, because it has to be executed directly on the HMC (you need a pesh password to put the script on the HMC) and it only checks one particular virtual machine. I modified this script to ssh to the HMC and check every LPAR on the machine, not just one in particular. You can download my modified version here: rrMonitor. Here is what the script does:

  • It checks the state of the source machine.
  • If the state is not "Operating", it searches for every remote restartable LPAR on the machine.
  • It launches remote restart operations to restart all these partitions.
  • It prints the command to run to clean up the old LPAR definitions once the source machine is back in operating state.
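
To give you an idea, the core logic of my modified version boils down to the sketch below (heavily simplified, run from a management host with ssh keys exchanged with the HMC; the hmc, source and destination names are placeholders and the real script handles far more cases):

#!/bin/ksh
hmc=myhmc ; src=p814-1 ; dst=p814-2 ; interval=60
# wait until the source server leaves the Operating state
while true; do
  state=$(ssh hscroot@$hmc "lssyscfg -r sys -F name,state" | grep "^$src," | cut -d, -f2)
  echo "Source server state is $state"
  [ "$state" != "Operating" ] && break
  sleep $interval
done
# remote restart every simplified remote restart capable partition of the source server
ssh hscroot@$hmc "lssyscfg -r lpar -m $src -F name,simplified_remote_restart_capable" | \
  grep ',1$' | cut -d, -f1 | while read lpar; do
    echo "Remote restarting $lpar"
    ssh hscroot@$hmc "rrstartlpar -o restart -m $src -t $dst -p $lpar"
    echo "Cleanup later with: rrstartlpar -m $src -p $lpar -o cleanup"
done

Here is a sample run of the modified script: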
# ./rrMonitor p814-1 p814-2 all 60 myhmc
Getting remote restartable lpars
lpar1 is rr simplified capable
lpar1 rr status is Remote Restartable
lpar2 is rr simplified capable
lpar2 rr status is Remote Restartable
lpar3 is rr simplified capable
lpar3 rr status is Remote Restartable
lpar4 is rr simplified capable
lpar4 rr status is Remote Restartable
Checking for source server state....
Source server state is Operating
Checking for source server state....
Source server state is Operating
Checking for source server state....
Source server state is Power Off In Progress
Checking for source server state....
Source server state is Power Off
It's time to remote restart
Remote restarting lpar1
Remote restarting lpar2
Remote restarting lpar3
Remote restarting lpar4
Thu Jun 18 20:20:40 CEST 2015
Source server p814-1 state is Power Off
Source server has crashed and hence attempting a remote restart of the partition lpar1 in the destination server p814-2
Thu Jun 18 20:23:12 CEST 2015
The remote restart operation was successful
The cleanup operation has to be executed on the source server once the server is back to operating state
The following command can be used to execute the cleanup operation,
rrstartlpar -m p814-1 -p lpar1 -o cleanup
Thu Jun 18 20:23:12 CEST 2015
Source server p814-1 state is Power Off
Source server has crashed and hence attempting a remote restart of the partition lpar2 in the destination server p814-2
Thu Jun 18 20:25:42 CEST 2015
The remote restart operation was successful
The cleanup operation has to be executed on the source server once the server is back to operating state
The following command can be used to execute the cleanup operation,
rrstartlpar -m sp814-1 -p lpar2 -o cleanup
Thu Jun 18 20:25:42 CEST 2015
[..]

Conclusion

As you can see, the simplified version of the remote restart feature is much simpler than the original one. My advice is to create all your LPARs with the simplified remote restart attribute enabled; it's that easy :). If you plan to LPM back to P6 or P7 boxes, don't use Simplified Remote Restart. I think this functionality will become more popular when all the old P7 and P6 machines are replaced by P8. As always I hope it helps.

Here are a couple of links with great documentation about Simplified Remote Restart:

  • Simplified Remote Restart Whitepaper: here
  • Original rrMonitor: here
  • Materials about the latest HMC release and a couple of videos related to Simplified Remote Restart: here