pvcctl : Using python Openstack api to code a PowerVC command line | Automating PowerVC and NovaLink (post)-installation with Ansible

The world is changing fast, especially regarding sysadmin jobs and skills. Everybody has noticed that being a good sysadmin now implies two things. The first one is "being a good dev". Not just someone who knows how to write a ksh or bash script, but someone who is able to write in one of these three languages: python, ruby, go. Most of my team members do not understand that. This is now almost mandatory. I truly believe what I'm saying here. The second one is to have strong skills in automation tools. On this part I'm almost ok, being good at Chef, Saltstack and Ansible. Unfortunately for the world the best tool is never the one that wins, and that's why Ansible is almost winning everywhere in the battle of automation. It is simple to understand why: Ansible is simple to use and to understand and it is based on ssh. My opinion is that Ansible is ok for administration stuff but not ok when scaling. Being based on ssh makes it a "push" model, and in my humble opinion push models are bad and pull models are the future. (One thing to say about that: this is just my opinion. I don't want this to end in never-ending trolls on Twitter. Please write blog posts if you want to express yourself) (I'm saying this because Twitter is becoming a place to troll and no longer a place to share skills and knowledge, like it was before). This is said.

The first part of this blog post will talk about a tool I am coding called pvcctl. This tool is a python tool allowing you to use PowerVC from the command line. It was also for me the opportunity to get better at python and to improve my skills developing in this language. Keep in mind that I'm not a developer, but I'm here going to give you simple tips and tricks to use python to write your own tools to query and interact with OpenStack. I must admit that I've tried everything: httplib, librequest, Chef, Ansible, Saltstack. None of these solutions was ok for me. I finally ended up using the OpenStack python APIs to write this tool. It's not that hard and it now allows me to write my own programs to interact with PowerVC. Once again keep in mind that this tool fits my needs and will probably not fit yours. This is an example of how to write a tool based on the python OpenStack APIs, not an official tool or anything else.

The second part of this blog post will talk about an Ansible playbook I've written to take care of PowerVC installation and NovaLink post-installation. The more machines I deploy, the more PowerVC instances I need (yeah yeah, I was forced to) and the more NovaLink partitions I need too. Instead of doing the same thing over and over again the best solution was to use an automation tool, and as it is now the most common one used on Linux, the one I chose was Ansible.

Using python Openstack api to code a PowerVC command line

This part of the post will show you how to use the python OpenStack APIs to create scripts to query and interact with PowerVC. First of all I know that there are other ways to use the APIs, but (it's my opinion) I think that using the service-specific clients is the simplest way to understand and to work with the API. This part of the blog post will only talk about service-specific clients (ie. novaclient, cinderclient, and so on …). I want to thank Matthew Edmonds from the PowerVC team: he helped me to better understand the API and gave me good advice. So a big shout out to you Matthew :-). Thank you.

Initialize your script

Sessions

Almost all OpenStack tools use "rc" files to load authentication credentials and endpoints. As I wanted my tool to work the same way (ie. sourcing an rc file containing my credentials) I have found that the best way to do this was to use a session. By using a session you don't have to manage or work with any tokens or worry about them: the session takes care of that for you and you have nothing to do. As you can see in the code below, the "OS_*" environment variables are used here. So before running the tool all you have to do is to export these variables. It's as simple as that:

  • An example "rc" file filled with the OS_* values (note that the crt file must be copied from the PowerVC host (/etc/pki/tls/certs/powervc.crt) to the host running the tool):
  • # cat powervcrc
    export OS_AUTH_URL=https://mypowervc:5000/v3/
    export OS_USERNAME=root
    export OS_PASSWORD=root
    export OS_TENANT_NAME=ibm-default
    export OS_REGION_NAME=RegionOne
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_CACERT=~/powervc_labp8.crt
    export OS_IMAGE_ENDPOINT=https://mypowervc:9292/
    export NOVACLIENT_DEBUG=0
    # source powervcrc
    
  • The python piece of code creating a session object:
  • import os
    
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    
    # env holds the OS_* variables exported by the rc file
    env = os.environ
    
    auth = v3.Password(auth_url=env['OS_AUTH_URL'],
                       username=env['OS_USERNAME'],
                       password=env['OS_PASSWORD'],
                       project_name=env['OS_TENANT_NAME'],
                       user_domain_name=env['OS_USER_DOMAIN_NAME'],
                       project_domain_name=env['OS_PROJECT_DOMAIN_NAME'])
    
    sess = session.Session(auth=auth, verify=env['OS_CACERT'])
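  • A quick way to check that the session authenticates correctly is to ask it for a token. This is only a sanity check (the session creates and renews tokens for you), shown here as a minimal sketch:
  • # triggers an authentication request against keystone and prints the issued token
    print sess.get_token()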
    

The logger

Instead of using the print statement each time I need to debug my script, I have found that most of the OpenStack clients can be given a python logger object. By using a logger you'll be able to see all your http calls to the OpenStack APIs (your post, put, get, delete with their json body, their response and their url). It is super useful to debug your scripts and it's super simple to use. The piece of code below creates a logger object writing to my log directory. You'll see later how to use this logger when creating a client object (a nova, cinder, or neutron object):

import logging
import os

# BASE_DIR is the base directory of the tool (here assumed to be the script
# directory; the logs/ subdirectory must exist)
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

logger = logging.getLogger('pvcctl')
hdlr = logging.FileHandler(BASE_DIR + "/logs/pvcctl.log")
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
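If you want every log line to carry a timestamp and a level, you can attach a standard formatter to the handler. This is plain python logging, nothing PowerVC specific:

# prefix each line of pvcctl.log with the date and the log level
hdlr.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))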

Here is an example of the output created by the logger with a novaclient (the novaclient was created specifying a logger object):

REQ: curl -g -i --cacert "/data/tools/ditools/pvcctl/conf/powervc.crt" -X POST https://mypowervc/powervc/openstack/compute/v2.1/51488ae7be7e4ec59759ccab496c8793/servers/a3cea5b8-33b4-432e-88ec-e11e47941846/os-volume_attachments -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.19" -H "X-Auth-Token: {SHA1}9b8becc425d25fdfb98d5e4f055c71498d2e744f" -d '{"volumeAttachment": {"volumeId": "a90f04ce-feb2-4163-9b36-23765777c6a0"}}' 
RESP: [200] Date: Fri, 28 Oct 2016 13:05:24 GMT Server: Apache Content-Length: 194 Content-Type: application/json X-Openstack-Nova-Api-Version: 2.19 Vary: X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-58addd19-7421-4c9f-a712-9c386f46b6cb Cache-control: max-age=0, no-cache, no-store, must-revalidate Pragma: no-cache Keep-Alive: timeout=5, max=93 Connection: Keep-Alive 
RESP BODY: {"volumeAttachment": {"device": "/dev/sdb", "serverId": "a3cea5b8-33b4-432e-88ec-e11e47941846", "id": "a90f04ce-feb2-4163-9b36-23765777c6a0", "volumeId": "a90f04ce-feb2-4163-9b36-23765777c6a0"}} 

The clients

Each OpenStack service (nova, glance, neutron, swift, cinder, …) comes with its own python client. In my tool I'm only using novaclient, cinderclient and neutronclient, but it would be the exact same thing if you wanted to use ceilometer or glance. Before doing anything else you have to install the clients you want to use (using your package manager (yum on my side) or using pip):

# yum install python2-novaclient.noarch
# pip install python-neutronclient

Initializing the clients

Once the clients are installed you can use them in your python scripts: import them and create the objects. Use the previously created session object to create the client objects (session=sess in the example below):

from novaclient import client as client_nova
from neutronclient.v2_0 import client as client_neutron
from cinderclient import client as client_cinder

nova = client_nova.Client(2.19, session=sess)
neutron = client_neutron.Client(session=sess)
cinder = client_cinder.Client(2.0, service_type="volume", session=sess)

If your client can be created using a logger object you can specify this at the time of the object creation. Here is an example with novaclient:

nova = client_nova.Client(2.19, session=sess, http_log_debug=True, logger=logger)

Using the clients

After the objects are created, using them is super simple. Here are a couple of examples:

  • Searching a vm (this will return a server object):
  • server = nova.servers.find(name=machine)
    
  • Renaming a vm:
  • server = nova.servers.find(name=machine)
    server.update(name=new_name)
    
  • Starting a vm:
  • server = nova.servers.find(name=machine)
    server.start()
    
  • Stopping a vm:
  • server = nova.servers.find(name=machine)
    server.stop()
    
  • Listing vms:
  • for server in nova.servers.list():
      name = getattr(server, 'OS-EXT-SRV-ATTR:hostname')
      print name
    
  • Find a vlan:
  • vlan = neutron.list_networks(name=vlan)
    
  • Creating a volume:
  • cinder.volumes.create(name=volume_name, size=size, volume_type=storage_template)
    

And so on. Each client type has its own methods; the best way to find which methods are available for each object is to check the official OpenStack client documentation.
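One thing to keep in mind with the examples above: most of these calls are asynchronous, nova returns as soon as the request is accepted. When a script needs to wait for the result I simply poll the server status. A minimal sketch (the 10 minute timeout and 5 second interval are arbitrary values I use here as an example):

import time

server = nova.servers.find(name=machine)
server.stop()
# poll nova until the server is reported as stopped
for i in range(120):
    server = nova.servers.get(server.id)
    if server.status == 'SHUTOFF':
        break
    time.sleep(5)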

What about PowerVC extensions ? (using get,put,delete …)

If you have already read my blog posts about PowerVC you probably already know that PowerVC adds some extensions to OpenStack. That means that for the PowerVC extensions the methods shipped with the OpenStack clients will not work; to be more specific, the methods needed to query or interact with the PowerVC extensions simply do not exist at all in the clients. The good part of these APIs is that they also expose the common http methods. This means that on each OpenStack client object, let's say nova, you can directly use the put, post, get, delete (and so on) methods. By doing that you can use the same object to call the standard API methods (let's say create or rename a server) and to use the PowerVC extensions. For instance "host-seas" is an extension added by PowerVC. You can simply use a novaclient to query or post something to an extension (the examples below show both a get and a post on PowerVC extensions):

resp, host_seas = nova.client.get("/host-seas?network_id=" + net_id + "&vlan_id=" + vlan_id)
resp, body = nova.client.post("/host-network-mapping", body=mapping_json)
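These methods return a tuple: the raw http response and the body already deserialized into a python dict. When exploring a PowerVC extension I usually just check the status code and dump the body to see what is returned (a small sketch, the content obviously depends on your configuration):

import json

# resp is a standard requests response, host_seas is already a python dict
if resp.status_code == 200:
    print json.dumps(host_seas, indent=2)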

Here is another example for onboarding or unmanaging volumes (which are PowerVC extensions to OpenStack):

resp, body = cinder.client.post("/os-hosts/" + oshost['host_name'] + "/onboard", body=onboard_json)
resp, body = cinder.client.post("/os-hosts/" + oshost['host_name'] + "/unmanage", body=onboard_json)

Working with json

One last tip on how to write your own python code using the OpenStack APIs: you'll probably see that you need to work with json. What is cool with python is that json can be managed as a dict object. It's super simple to use:

  • Importing json:
  • import json
    
  • Loading json:
  • json_load = json.loads('{ "ibm-extend": { "new_size": 0 } }')
    
  • Using the dict:
  • json_load['ibm-extend']['new_size'] = 200
    
  • Use it as a body in a post call (grow a volume):
  • resp, body = cinder.client.post("/volumes/" + volume.id + "/action", body=json_load)
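  • Putting the pieces together, here is a minimal sketch of a volume grow (volume_name and the 200 GB target size are just example values):
  • volume = cinder.volumes.find(name=volume_name)
    # the ibm-extend action is a PowerVC extension, so the body is posted directly
    json_grow = {"ibm-extend": {"new_size": 200}}
    resp, body = cinder.client.post("/volumes/" + volume.id + "/action", body=json_grow)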
    

The pvcctl tool

Now that you have understood this, I can tell you that I've written a tool called pvcctl based on the OpenStack python APIs. This tool is freely available on github. As I said before this tool fits my needs and is an example of what can be done using the OpenStack APIs in python. Keep in mind that I'm not a developer and the code can probably be better. But this tool is used by my whole team on PowerVC so … it will probably be good enough to create shell scripts on top of it or for daily PowerVC administration. The tool can be found at this address: https://github.com/chmod666org/pvcctl. Give it a try and tell me what you think about it. I give you below a couple of examples of how to use the tool. You'll see it's super simple:

  • Create a network:
  • # pvcctl network create name=newvlan id=666 cidr='10.10.20.20/24' dns1='8.8.8.8' dns2='8.8.9.9' gw='10.10.20.254'
    
  • Add description on a vm:
  • # pvcctl vm set_description vm=deckard desc="We call it Voight-Kampff"
    
  • Migrate a vm:
  • # pvcctl vm migrate vm=tyrell host=21AFF8V
    
  • Attach a volume to a vm:
  • # pvcctl vm attach_vol vm=tyrell vol=myvol
    
  • Create a vm
  • # pvcctl vm create ip='10.14.20.240' ec_max=1 ec_min=0.1 ec=0.1 vp=1 vp_min=1 vp_max=4 mem=4096 mem_min=1024 mem_max=8192 weight=240 name=bcubcu disks="death" scg=ssp vlan=vlan-1331 image=kitchen-aix72 aggregate=hg2 user_data=testvm srr=yes
    
  • Create a volume:
  • # pvcctl volume create provider=mystorageprovider name=volume_test size=10
    
  • Grow a volume!
  • # pvcctl volume grow vol=test_volume size=50
    

Automating PowerVC and NovaLink installation and post-installation with ansible

At the same time I released my pvcctl tool, I also had the idea that releasing my PowerVC and NovaLink playbook for Ansible would be a good thing. This playbook is not huge and does not do a lot of things, but I was using it a lot when deploying all my NovaLink hosts (I now have 16 MME managed by NovaLink) and when creating PowerVC servers for different kinds of projects. It's a shame that everybody in my company has not yet understood why having multiple PowerVC instances is just a wrong idea and a waste of time (I'm not surprised that between a good and a bad idea they prefer to choose the bad one :-) , obvious when you never touch production at all but still have the power of deciding things in your hands). Anyway this playbook is used for two things: the first one is preparing my NovaLink hosts (being sure I'm at the latest version of NovaLink, being sure that everything is configured as I want (ntp, dns, rsct)), the second one is installing PowerVC hosts (installing PowerVC is just super boring, you always have to install tons of rpms needed for dependencies and if, like me, you do not have a satellite connection or access to the internet it can be a real pain). The only thing you have to do is to configure the inventories files and the group_vars files located in the playbook directory. The playbook can be found at this address: https://github.com/chmod666org/ansible-powervc-novalink.

  • Put the name of your NovaLink hosts in the hosts.novalink file:
  • # cat inventories/hosts.novalink
    nl1.lab.chmod666.org
    nl2.lab.chmod666.org
    [..]
    
  • Put the name of your PowerVC hosts in the hosts.powervc file:
  • # cat inventories/hosts.powervc
    pvc1.lab.chmod666.org
    pvc2.lab.chmod666.org
    [..]
    
  • Next prepare group_vars files for NovaLink …
  • ntpservers:
      - myntp1
      - myntp2
      - myntp2
    dnsservers:
      - 8.8.8.8
      - 8.8.9.9
    dnssearch:
      - lab.chmod666.org
    vepa_iface: ibmveth6
    repo: novalinkrepo
    
  • and PowerVC:
  • ntpservers:
      - myntp1
      - myntp2
      - myntpd3
    dnsservers:
      - 8.8.8.8
      - 8.8.9.9
    dnssearch:
      - lab.chmod666.org
    repo_rhel: http://myrepo.lab.chmod666.org/rhel72le/
    repo_ibmtools: http://myrepo.lab.chmod666.org/ibmpowertools71le/
    repo_powervc: http://myrepo.lab.chmod666.org/powervc
    powervc_base: PowerVC_V1.3.1_for_Power_Linux_LE_RHEL_7.1_062016.tar.gz
    powervc_upd: powervc-update-ppcle-1.3.1.2.tgz
    powervc_rpm: [ 'python-dns-1.12.0-1.20150617git465785f.el7.noarch.rpm', 'selinux-policy-3.13.1-60.el7.noarch.rpm', 'selinux-policy-targeted-3.13.1-60.el7.noarch.rpm', 'python-fpconst-0.7.3-12.el7.noarch.rpm', 'python-pyasn1-0.1.6-2.el7.noarch.rpm', 'python-pyasn1-modules-0.1.6-2.el7.noarch.rpm', 'python-twisted-web-12.1.0-5.el7_2.ppc64le.rpm', 'sysfsutils-2.1.0-16.el7.ppc64le.rpm', 'SOAPpy-0.11.6-17.el7.noarch.rpm', 'SOAPpy-0.11.6-17.el7.noarch.rpm', 'python-twisted-core-12.2.0-4.el7.ppc64le.rpm', 'python-zope-interface-4.0.5-4.el7.ppc64le.rpm', 'pyserial-2.6-5.el7.noarch.rpm' ]
    powervc_base_version: 1.3.1.0
    powervc_upd_version: 1.3.1.2
    powervc_edition: cloud_powervm
    

You then just have to run the playbook for Novalink and PowerVC hosts to run the installation and post-installation:

  • Novalink post-install:
  • # ansible-playbook -i inventories/hosts.novalink site.yml
    
  • PowerVC install:
  • # ansible-playbook -i inventories/hosts.powervc site.yml
    


Just to give you an example of one of the tasks of this playbook, here is the task in charge of installing PowerVC. Pretty simple :-) :

## install powervc
- name: check previous installation
  command: bash -c "rpm -qa | grep ibmpowervc-"
  register: check_base
  ignore_errors: True
- debug: var=check_base

- name: install powervc binaries
  command: chdir=/tmp/powervc-{{ powervc_base_version }} /tmp/powervc-{{ powervc_base_version }}/install -s cloud_powervm
  environment:
    HOST_INTERFACE: "{{ ansible_default_ipv4.interface }}"
    EGO_ENABLE_SUPPORT_IPV6: N
    PATH: $PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.ppc64le/jre/bin/:/usr/sbin:/usr/bin
    ERL_EPMD_ADDRESS: "::ffff:127.0.1.1"
  when: check_base.rc == 1

- name: check previous update
  command: rpm -q ibmpowervc-{{ powervc_upd_version }}-1.noarch
  register: check_upd
  ignore_errors: True
- debug: var=check_upd
  
- name: updating powervc
  command: chdir=/tmp/powervc-{{ powervc_upd_version }} /tmp/powervc-{{ powervc_upd_version }}/update -s 
  when: check_upd.rc == 1

The goal here is not to explain how Ansible works but to show you a simple example of what I'm doing with Ansible on my Linux boxes (all of this related to Power). If you want to go further, have a look at the playbook itself on github :-)

Conclusion

This blog post is just a way to show you my work on both pvcctl and the Ansible playbook for NovaLink and PowerVC. It's not a detailed blog post about deep technical stuff. I hope you'll give the tools a try and tell me what can be improved or changed. As always … I hope it helps.

Putting NovaLink in Production & more PowerVC (1.3.1.2) tips and tricks

I've been quite busy and writing the blog is getting more and more difficult with the amount of work I have, but I try to stick to my thing as writing these blog posts is almost the only thing I can do properly in my whole life. So why do without? As my place is one of the craziest places I have ever worked in (for the good … and the bad; I'll not talk here about how things are organized or about the recognition of your work, but be sure it is probably one of the main reasons I'll leave this place one day or another), the PowerSystems growth is crazy and the number of AIX partitions we are managing with PowerVC never stops increasing, and I think that we are one of the biggest PowerVC customers in the whole world (I don't know if it is a good thing or not). Just to give you a couple of examples: we have here the biggest Power Enterprise Pool I have ever seen (384 Power8 mobile cores), the number of partitions managed by PowerVC is around 2600, and we have a PowerVC managing almost 30 hosts. You have understood well … these numbers are huge. It seems to be very funny, but it's not; this growth is a problem, a technical problem, and we are facing problems that most of you will never hit. I'm speaking about density and scalability. Fortunately for us the "vertical" design of PowerVC can now be replaced by what I call a "horizontal" design. Instead of putting all the nova instances on one single machine, we now have the possibility to spread the load on each host by using NovaLink. As we needed to solve these density and scalability problems we decided to move all the P8 hosts to NovaLink (this process is still ongoing but most of the engineering stuff is already done). As you now know we are not deploying a host every year but generally a couple per month, and that's why we needed to find a solution to automate this. So this blog post will talk about all the things and the best practices I have learned using and implementing NovaLink in a huge production environment (automated installation, tips and tricks, post-install, migration and so on). But we will not stop here: I'll also talk about the new things I have learned about PowerVC (1.3.1.2 and 1.3.0.1) and give more tips and tricks to use the product at its best. Before going any further I first want to say a big thank you to the whole PowerVC team for their kindness and the precious time they gave us to advise and educate the OpenStack noob I am. (A special thanks to Drew Thorstensen for the long discussions we had about OpenStack and PowerVC. He is probably one of the most passionate guys I have ever met at IBM).

Novalink Automated installation

I'll not write a big introduction; let's work and start with NovaLink and how to automate the NovaLink installation process. Copy the content of the installation cdrom to a directory that can be served by an http server on your NIM server (I'm using my NIM server for the bootp and tftp part). Note that I'm doing this with a tar command because there are symbolic links in the iso and a simple cp will end up with a full filesystem.

# loopmount -i ESD_-_PowerVM_NovaLink_V1.0.0.3_062016.iso -o "-V cdrfs -o ro" -m /mnt
# tar cvf iso.tar /mnt/*
# tar xvf iso.tar -C /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
# ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
total 320
dr-xr-xr-x    2 root     system          256 Jul 28 17:54 .disk
-r--r--r--    1 root     system          243 Apr 20 21:27 README.diskdefines
-r--r--r--    1 root     system         3053 May 25 22:25 TRANS.TBL
dr-xr-xr-x    3 root     system          256 Apr 20 11:59 boot
dr-xr-xr-x    3 root     system          256 Apr 20 21:27 dists
dr-xr-xr-x    3 root     system          256 Apr 20 21:27 doc
dr-xr-xr-x    2 root     system         4096 Aug 09 15:59 install
-r--r--r--    1 root     system       145981 Apr 20 21:34 md5sum.txt
dr-xr-xr-x    2 root     system         4096 Apr 20 21:27 pics
dr-xr-xr-x    3 root     system          256 Apr 20 21:27 pool
dr-xr-xr-x    3 root     system          256 Apr 20 11:59 ppc
dr-xr-xr-x    2 root     system          256 Apr 20 21:27 preseed
dr-xr-xr-x    4 root     system          256 May 25 22:25 pvm
lrwxrwxrwx    1 root     system            1 Aug 29 14:55 ubuntu -> .
dr-xr-xr-x    3 root     system          256 May 25 22:25 vios

Prepare the PowerVM NovaLink repository. The content of the repository can be found in the NovaLink iso image in pvm/repo/pvmrepo.tgz:

# ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/
total 720192
-r--r--r--    1 root     system          223 May 25 22:25 TRANS.TBL
-rw-r--r--    1 root     system         2106 Sep 05 15:56 pvm-install.cfg
-r--r--r--    1 root     system    368722592 May 25 22:25 pvmrepo.tgz

Extract the content of this tgz file in a directory that can be served by the http server:

# mkdir /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
# cp /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/pvmrepo.tgz
# cd /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
# gunzip pvmrepo.tgz
# tar xvf pvmrepo.tar
[..]
x ./pool/non-free/p/pvm-core/pvm-core-dbg_1.0.0.3-160525-2192_ppc64el.deb, 54686380 bytes, 106810 media blocks.
x ./pool/non-free/p/pvm-core/pvm-core_1.0.0.3-160525-2192_ppc64el.deb, 2244784 bytes, 4385 media blocks.
x ./pool/non-free/p/pvm-core/pvm-core-dev_1.0.0.3-160525-2192_ppc64el.deb, 618378 bytes, 1208 media blocks.
x ./pool/non-free/p/pvm-pkg-tools/pvm-pkg-tools_1.0.0.3-160525-492_ppc64el.deb, 170700 bytes, 334 media blocks.
x ./pool/non-free/p/pvm-rest-server/pvm-rest-server_1.0.0.3-160524-2229_ppc64el.deb, 263084432 bytes, 513837 media blocks.
# rm pvmrepo.tar 
# ls -l 
total 16
drwxr-xr-x    2 root     system          256 Sep 11 13:26 conf
drwxr-xr-x    2 root     system          256 Sep 11 13:26 db
-rw-r--r--    1 root     system          203 May 26 02:19 distributions
drwxr-xr-x    3 root     system          256 Sep 11 13:26 dists
-rw-r--r--    1 root     system         3132 May 24 20:25 novalink-gpg-pub.key
drwxr-xr-x    4 root     system          256 Sep 11 13:26 pool

Copy the NovaLink boot files in a directory that can be served by your tftp server (I’m using /var/lib/tftpboot):

# mkdir /var/lib/tftpboot
# cp -r /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm /var/lib/tftpboot
# ls -l /var/lib/tftpboot
total 1016
-r--r--r--    1 root     system         1120 Jul 26 20:53 TRANS.TBL
-r--r--r--    1 root     system       494072 Jul 26 20:53 core.elf
-r--r--r--    1 root     system          856 Jul 26 21:18 grub.cfg
-r--r--r--    1 root     system        12147 Jul 26 20:53 pvm-install-config.template
dr-xr-xr-x    2 root     system          256 Jul 26 20:53 repo
dr-xr-xr-x    2 root     system          256 Jul 26 20:53 rootfs
-r--r--r--    1 root     system         2040 Jul 26 20:53 sample_grub.cfg

I still don't know why this is the case on AIX, but the tftp server searches for the grub.cfg in the root directory of your AIX system. It's not the case for my RedHat Enterprise Linux installations but it is for the NovaLink/Ubuntu installation. Copy the sample_grub.cfg to /grub.cfg and modify the content of the file:

  • As the gateway, netmask and nameserver will be provided by the pvm-install.cfg file (the configuration file of the NovaLink installer, we will talk about it later), comment those three lines.
  • The hostname will still be needed.
  • Modify the linux line and point to the vmlinux file provided in the NovaLink iso image.
  • Modify the live-installer to point to the filesystem.squashfs provided in the NovaLink iso image.
  • Modify the pvm-repo line to point to the pvm-repository directory we created before.
  • Modify the pvm-installer line to point to the NovaLink install configuration file (we will modify this one after).
  • Don't do anything with the pvm-vios line as we are installing NovaLink on a system already having Virtual I/O Servers installed (I'm not installing Scale Out systems but high end models only).
  • I’ll talk later about the pvm-disk line (this line is not by default in the pvm-install-config.template provided in the NovaLink iso image).
# cp /var/lib/tftpboot/sample_grub.cfg /grub.cfg
# cat /grub.cfg
# Sample GRUB configuration for NovaLink network installation
set default=0
set timeout=10

menuentry 'PowerVM NovaLink Install/Repair' {
 insmod http
 insmod tftp
 regexp -s 1:mac_pos1 -s 2:mac_pos2 -s 3:mac_pos3 -s 4:mac_pos4 -s 5:mac_pos5 -s 6:mac_pos6 '(..):(..):(..):(..):(..):(..)' ${net_default_mac}
 set bootif=01-${mac_pos1}-${mac_pos2}-${mac_pos3}-${mac_pos4}-${mac_pos5}-${mac_pos6}
 regexp -s 1:prefix '(.*)\.(\.*)' ${net_default_ip}
# Setup variables with values from Grub's default variables
 set ip=${net_default_ip}
 set serveraddress=${net_default_server}
 set domain=${net_ofnet_network_domain}
# If tftp is desired, replace http with tftp in the line below
 set root=http,${serveraddress}
# Remove comment after providing the values below for
# GATEWAY_ADDRESS, NETWORK_MASK, NAME_SERVER_IP_ADDRESS
# set gateway=10.10.10.1
# set netmask=255.255.255.0
# set namserver=10.20.2.22
  set hostname=nova0696010
# In this sample file, the directory novalink is assumed to exist on the
# BOOTP server and has the NovaLink ISO content
 linux /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/vmlinux \
 live-installer/net-image=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/filesystem.squashfs \
 pkgsel/language-pack-patterns= \
 pkgsel/install-language-support=false \
 netcfg/disable_dhcp=true \
 netcfg/choose_interface=auto \
 netcfg/get_ipaddress=${ip} \
 netcfg/get_netmask=${netmask} \
 netcfg/get_gateway=${gateway} \
 netcfg/get_nameservers=${nameserver} \
 netcfg/get_hostname=${hostname} \
 netcfg/get_domain=${domain} \
 debian-installer/locale=en_US.UTF-8 \
 debian-installer/country=US \
# The directory novalink-repo on the BOOTP server contains the content
# of the pvmrepo.tgz file obtained from the pvm/repo directory on the
# NovaLink ISO file.
# The directory novalink-vios on the BOOTP server contains the files
# needed to perform a NIM install of VIOS server(s)
#  pvmdebug=1
 pvm-repo=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/novalink-repo/ \
 pvm-installer-config=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg \
 pvm-viosdir=http://${serveraddress}/novalink-vios \
 pvmdisk=/dev/mapper/mpatha \
 initrd /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/install/netboot_initrd.gz
}

Modify the pvm-install.cfg, it's the NovaLink installer configuration file. We just need to modify here the [SystemConfig], [NovaLinkGeneralSettings], [NovaLinkNetworkSettings], [NovaLinkAPTRepoConfig] and [NovaLinkAdminCredentials] sections. My advice is to configure one NovaLink by hand (by doing an installation directly with the iso image): after the installation your configuration file is saved in /var/log/pvm-install/novalink-install.cfg. You can copy this one as your template on your installation server (this file is filled with the answers you gave during the NovaLink installation).

# more /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
[SystemConfig]
serialnumber = XXXXXXXX
lmbsize = 256

[NovaLinkGeneralSettings]
ntpenabled = True
ntpserver = timeserver1
timezone = Europe/Paris

[NovaLinkNetworkSettings]
dhcpip = DISABLED
ipaddress = YYYYYYYY
gateway = ZZZZZZZZ
netmask = 255.255.255.0
dns1 = 8.8.8.8
dns2 = 8.8.9.9
hostname = WWWWWWWW
domain = lab.chmod666.org

[NovaLinkAPTRepoConfig]
downloadprotocol = http
mirrorhostname = nimserver
mirrordirectory = /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/
mirrorproxy =

[VIOSNIMServerConfig]
novalink_private_ip = 192.168.128.1
vios1_private_ip = 192.168.128.2
vios2_private_ip = 192.168.128.3
novalink_netmask = 255.255.128.0
viosinstallprompt = False

[NovaLinkAdminCredentials]
username = padmin
password = $6$N1hP6cJ32p17VMpQ$sdThvaGaR8Rj12SRtJsTSRyEUEhwPaVtCTvbdocW8cRzSQDglSbpS.jgKJpmz9L5SAv8qptgzUrHDCz5ureCS.
userdescription = NovaLink System Administrator

Finally modify the /etc/bootptab file and add a line matching your installation:

# tail -1 /etc/bootptab
nova0696010:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.16:ht=ethernet:sa=10.255.228.37:gw=10.20.65.1:sm=255.255.255.0:

Don't forget to set up an http server serving all the needed files. I know this configuration is super insecure. But honestly I don't care: my NIM server is in a super secured network only accessible by the VIOS and NovaLink partitions. So I'm good :-) :

# cd /opt/freeware/etc/httpd/ 
# grep -Ei "^Listen|^DocumentRoot" conf/httpd.conf
Listen 80
DocumentRoot "/"


Instead of doing this over and over at every NovaLink installation, I have written a custom script preparing my NovaLink installation files. What I do in this script is:

  • Preparing the pvm-install.cfg file.
  • Modifying the grub.cfg file.
  • Adding a line to the /etc/bootptab file.
#  ./custnovainstall.ksh nova0696010 10.20.65.16 10.20.65.1 255.255.255.0
#!/usr/bin/ksh

novalinkname=$1
novalinkip=$2
novalinkgw=$3
novalinknm=$4
cfgfile=/export/nim/lpp_source/powervc/novalink/novalink-install.cfg
desfile=/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
grubcfg=/export/nim/lpp_source/powervc/novalink/grub.cfg
grubdes=/grub.cfg

echo "+--------------------------------------+"
echo "NovaLink name: ${novalinkname}"
echo "NovaLink IP: ${novalinkip}"
echo "NovaLink GW: ${novalinkgw}"
echo "NovaLink NM: ${novalinknm}"
echo "+--------------------------------------+"
echo "Cfg ref: ${cfgfile}"
echo "Cfg file: ${cfgfile}.${novalinkname}"
echo "+--------------------------------------+"

typeset -u serialnumber
serialnumber=$(echo ${novalinkname} | sed 's/nova//g')

echo "SerNum: ${serialnumber}"

cat ${cfgfile} | sed "s/serialnumber = XXXXXXXX/serialnumber = ${serialnumber}/g" | sed "s/ipaddress = YYYYYYYY/ipaddress = ${novalinkip}/g" | sed "s/gateway = ZZZZZZZZ/gateway = ${novalinkgw}/g" | sed "s/netmask = 255.255.255.0/netmask = ${novalinknm}/g" | sed "s/hostname = WWWWWWWW/hostname = ${novalinkname}/g" > ${cfgfile}.${novalinkname}
cp ${cfgfile}.${novalinkname} ${desfile}
cat ${grubcfg} | sed "s/  set hostname=WWWWWWWW/  set hostname=${novalinkname}/g" > ${grubcfg}.${novalinkname}
cp ${grubcfg}.${novalinkname} ${grubdes}
# nova1009425:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.15:ht=ethernet:sa=10.255.248.37:gw=10.20.65.1:sm=255.255.255.0:
echo "${novalinkname}:bf=/var/lib/tftpboot/core.elf:ip=${novalinkip}:ht=ethernet:sa=10.255.248.37:gw=${novalinkgw}:sm=${novalinknm}:" >> /etc/bootptab

Novalink installation: vSCSI or NPIV ?

NovaLink is not designed to be installed on top of NPIV, it's a fact. As it is designed to be installed on a totally new system without any Virtual I/O Servers configured, the NovaLink installation by default creates the Virtual I/O Servers, and using these VIOS the installation process creates backing devices on top of logical volumes created in the default VIOS storage pool. The NovaLink partition is then installed on top of these two logical volumes and mirrored at the end. This is the way NovaLink does it for Scale Out systems.

For High End systems NovaLink assumes you are going to install the NovaLink partition on top of vSCSI (I have personally tried with hdisk-backed and SSP Logical Unit-backed devices and both are working ok). For those like me who want to install NovaLink on top of NPIV (I know this is not a good choice, but once again I was forced to do it) there still is a possibility to do it. (In my humble opinion the NPIV design is made for high performance, and the NovaLink partition is not going to be an I/O intensive partition. Even worse, our whole new design is based on NPIV for LPARs …. it's a shame as NPIV is not a solution designed for high density and high scalability. Every PowerVM system administrator should remember this. NPIV IS NOT A GOOD CHOICE FOR DENSITY AND SCALABILITY, USE IT FOR PERFORMANCE ONLY !!!. The story behind this is funny. I'm 100% sure that SSP is ten times a better choice to achieve density and scalability. I decided to open a poll on twitter asking this question: "Will you choose SSP or NPIV to design a scalable AIX cloud based on PowerVC ?". I was 100% sure SSP would win and made a bet with a friend (I owe him beers now) that I'd be right. What was my surprise when seeing the results: 90% of people voted for NPIV. I'm sorry to say that guys, but there are two possibilities: 1/ You don't really know what scalability and density mean because you never faced them, so that's why you made the wrong choice. 2/ You know it and you're just wrong :-) . This little story is another proof that IBM is not responsible for the dying of AIX and PowerVM … but unfortunately you are responsible for it, by not understanding that the only way to survive is to embrace highly scalable solutions like Linux is doing with OpenStack and Ceph. It's a fact. Period.)

This said … if you try to install NovaLink on top of NPIV you'll get an error. A workaround to this problem is to add the following line to the grub.cfg file:

 pvmdisk=/dev/mapper/mpatha \

If you do that you'll be able to install NovaLink on your NPIV disk, but you will still get an error the first time you install it, at the "grub-install" step. Just re-run the installation a second time and the grub-install command will work ok :-) (I'll explain how to avoid this second issue later).

One work-around to this second issue is to recreate the initrd by adding a line in the debian-installer config file.

Fully automated installation by example

  • Here the core.elf file is downloaded by tftp. You can see in the capture below that the grub.cfg file is searched in / :
  • (screenshots)

  • The installer is starting:
  • (screenshot)

  • The vmlinux is downloaded (http):
  • (screenshot)

  • The root.squashfs is downloaded (http):
  • (screenshot)

  • The pvm-install.cfg configuration file is downloaded (http):
  • (screenshot)

  • pvm services are started. At this time if you are running in co-management mode you’ll see the Red lock in the HMC Server status:
  • (screenshot)

  • The Linux and NovaLink installation is ongoing:
  • (screenshots)

  • System is ready:
  • (screenshot)

Novalink code auto update

When adding a NovaLink host to PowerVC, the powervc packages coming from the PowerVC management host are installed on the NovaLink partition. You can check this during the installation. Here is what's going on when adding the NovaLink host to PowerVC:


# cat /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
################################################################################
Starting the IBM PowerVC Novalink Installation on:
2016-09-11T16:42:05+02:00
################################################################################

LOG file is /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log

2016-09-11T16:42:05.18+02:00 Installation directory is /opt/ibm/powervc
2016-09-11T16:42:05.18+02:00 Installation source location is /tmp/powervc_img_temp_1473611916_1627713/powervc-1.3.1.2
[..]
Setting up python-neutron (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
Setting up neutron-common (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
Setting up neutron-plugin-ml2 (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
Setting up ibmpowervc-powervm-network (1.3.1.2) ...
Setting up ibmpowervc-powervm-oslo (1.3.1.2) ...
Setting up ibmpowervc-powervm-ras (1.3.1.2) ...
Setting up ibmpowervc-powervm (1.3.1.2) ...
W: --force-yes is deprecated, use one of the options starting with --allow instead.

***************************************************************************
IBM PowerVC Novalink installation
 successfully completed at 2016-09-11T17:02:30+02:00.
 Refer to
 /opt/ibm/powervc/log/powervc_install_2016-09-11-165617.log
 for more details.
***************************************************************************


Installing the missing deb packages if NovaLink host was added before PowerVC upgrade

If the NovaLink host was added in PowerVC 1.3.1.1 and you updated to PowerVC 1.3.1.2, you have to update the packages by hand because there is a little bug during the update of some packages:

  • From the PowerVC management host copy the latest packages to the NovaLink host:
  • # scp /opt/ibm/powervc/images/powervm/powervc-powervm-compute-1.3.1.2.tgz padmin@nova0696010:~
    padmin@nova0696010's password:
    powervc-powervm-compute-1.3.1.2.tgz
    
  • Update the packages on the NovaLink host
  • # tar xvzf powervc-powervm-compute-1.3.1.2.tgz
    # cd powervc-1.3.1.2/packages/powervm
    # dpkg -i nova-powervm_2.0.3-160816-48_all.deb
    # dpkg -i networking-powervm_2.0.1-160816-6_all.deb
    # dpkg -i ceilometer-powervm_2.0.1-160816-17_all.deb
    # /opt/ibm/powervc/bin/powervc-services restart
    

rsct and pvm deb update

Never forget to install the latest rsct and pvm packages after the installation. You can clone the official IBM repositories for the pvm and rsct files (you can check my previous post about NovaLink for more details about cloning the repositories). Then create two files in /etc/apt/sources.list.d, one for pvm, the other for rsct:

# vi /etc/apt/sources.list.d/pvm.list
deb http://nimserver/export/nim/lpp_source/powervc/novalink/nova/debian novalink_1.0.0 non-free
# vi /etc/apt/sources.list.d/rsct.list
deb http://nimserver/export/nim/lpp_source/powervc/novalink/rsct/ubuntu xenial main
# dpkg -l | grep -i rsct
ii  rsct.basic                                3.2.1.0-15300                           ppc64el      Reliable Scalable Cluster Technology - Basic
ii  rsct.core                                 3.2.1.3-16106-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Core
ii  rsct.core.utils                           3.2.1.3-16106-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Utilities
# dpkg -l | grep -i pvm
ii  pvm-cli                                   1.0.0.3-160516-1488                     all          Power VM Command Line Interface
ii  pvm-core                                  1.0.0.3-160525-2192                     ppc64el      PVM core runtime package
ii  pvm-novalink                              1.0.0.3-160525-1000                     ppc64el      Meta package for all PowerVM Novalink packages
ii  pvm-rest-app                              1.0.0.3-160524-2229                     ppc64el      The PowerVM NovaLink REST API Application
ii  pvm-rest-server                           1.0.0.3-160524-2229                     ppc64el      Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink 
# apt-get install rsct.core rsct.basic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  docutils-common libpaper-utils libpaper1 python-docutils python-roman
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
  rsct.core.utils src
The following packages will be upgraded:
  rsct.core rsct.core.utils src
3 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
Need to get 9,356 kB of archives.
After this operation, 548 kB disk space will be freed.
[..]
# apt-get install pvm-novalink
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  docutils-common libpaper-utils libpaper1 python-docutils python-roman
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
  pvm-core pvm-rest-app pvm-rest-server pypowervm
The following packages will be upgraded:
  pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
5 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 287 MB of archives.
After this operation, 203 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
[..]

After the installation, here is what you should have if everything was updated properly:

# dpkg -l | grep rsct
ii  rsct.basic                                3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Basic
ii  rsct.core                                 3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Core
ii  rsct.core.utils                           3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Utilities
# dpkg -l | grep pvm
ii  pvm-cli                                   1.0.0.3-160516-1488                     all          Power VM Command Line Interface
ii  pvm-core                                  1.0.0.3.1-160713-2441                   ppc64el      PVM core runtime package
ii  pvm-novalink                              1.0.0.3.1-160714-1152                   ppc64el      Meta package for all PowerVM Novalink packages
ii  pvm-rest-app                              1.0.0.3.1-160713-2417                   ppc64el      The PowerVM NovaLink REST API Application
ii  pvm-rest-server                           1.0.0.3.1-160713-2417                   ppc64el      Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink

Novalink post-installation (my ansible way to do that)

You all know by now that I'm not very fond of doing the same things over and over again, that's why I have created an Ansible post-install playbook especially for the NovaLink post-installation. You can download it here: nova_ansible. Then install Ansible on a host that has ssh access to all your NovaLink partitions and run the Ansible playbook:

  • Untar the ansible playbook:
  • # mkdir /srv/ansible
    # cd /srv/ansible
    # tar xvf novalink_ansible.tar 
    
  • Modify the group_vars/novalink.yml to fit your environment:
  • # cat group_vars/novalink.yml
    ntpservers:
      - ntpserver1
      - ntpserver2
    dnsservers:
      - 8.8.8.8
      - 8.8.9.9
    dnssearch:
      - lab.chmod666.org
    vepa_iface: ibmveth6
    repo: nimserver
    
  • Share the root ssh key with the NovaLink hosts (be careful: by default NovaLink does not allow root login, you have to modify the sshd configuration file).
  • Put all your Novalink hosts into the inventory file:
  • #cat inventories/hosts.novalink
    [novalink]
    nova65a0cab
    nova65ff4cd
    nova10094ef
    nova06960ab
    
  • Run ansible-playbook and you’re done:
  • # ansible-playbook -i inventories/hosts.novalink site.yml
    


More details about NovaLink

MGMTSWITCH vswitch automatic creation

Do not try to create the MGMTSWITCH by yourself: the NovaLink installer does it for you. As my Virtual I/O Servers are installed using the IBM Provisioning Toolkit for PowerVM … I was creating the MGMTSWITCH at that time, but I was wrong. You can see this in the file /var/log/pvm-install/pvminstall.log on the NovaLink partition:

# cat /var/log/pvm-install/pvminstall.log
Fri Aug 12 17:26:07 UTC 2016: PVMDebug = 0
Fri Aug 12 17:26:07 UTC 2016: Running initEnv
[..]
Fri Aug 12 17:27:08 UTC 2016: Using user provided pvm-install configuration file
Fri Aug 12 17:27:08 UTC 2016: Auto Install set
[..]
Fri Aug 12 17:27:44 UTC 2016: Auto Install = 1
Fri Aug 12 17:27:44 UTC 2016: Validating configuration file
Fri Aug 12 17:27:44 UTC 2016: Initializing private network configuration
Fri Aug 12 17:27:45 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o c
Fri Aug 12 17:27:46 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o n -i 3 -n MGMTSWITCH -p 4094 -t 1
Fri Aug 12 17:27:49 UTC 2016: Start setupinstalldisk operation for /dev/mapper/mpatha
Fri Aug 12 17:27:49 UTC 2016: Running updatedebconf
Fri Aug 12 17:56:06 UTC 2016: Pre-seeding disk recipe

NPIV lpar creation problem !

As you know my environment is crazy. Every lpar we create has 4 virtual fibre channel adapters: obviously two on fabric A and two on fabric B, and obviously again each fabric must be present on each Virtual I/O Server. So to sum up, an lpar must have access to fabric A and B using VIOS1 and to fabric A and B using VIOS2. Unfortunately there was a little bug in the current NovaLink (1.0.0.3) code and all the lpars were created with only two adapters. The PowerVC team gave me a patch handling this particular issue by patching the npiv.py file. This patch needs to be installed on the NovaLink partition itself:

# cd /usr/lib/python2.7/dist-packages/powervc_nova/virt/ibmpowervm/pvm/volume
# sdiff npiv.py.back npiv.bck


I'm intentionally not giving you the solution here (just by copying/pasting code) because the issue is addressed: an APAR (IT16534) has been opened for it and it is resolved in the 1.3.1.2 version.

From NovaLink to HMC …. and the opposite

One of the challenges for me was to be sure everything was working ok regarding LPM and NovaLink. So I decided to test different cases:

  • From NovaLink host to NovaLink host (didn't have any trouble) :-)
  • From NovaLink host to HMC host (didn't have any trouble) :-)
  • From HMC host to NovaLink host (had a trouble) :-(

Once again this issue preventing HMC to NovaLink LPM from working correctly is related to storage. A patch is ongoing, but let me explain the issue a little bit (only if you absolutely have to move an LPAR from HMC to NovaLink and you are in the same case as I am):

PowerVC is not doing the mapping to the destination Virtual I/O Servers correctly and tries to map fabric A twice on VIOS1 and fabric B twice on VIOS2. Fortunately for us you can do the migration by hand:

  • Do the LPM operation from PowerVC and check on the HMC side how PowerVC is doing the mapping (log on the HMC to check this):
  • #  lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
    time=08/31/2016 18:53:27,"text=HSCE2124 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios1/2//fcs2,3/vios2/1//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command failed."
    
  • One interesting point you can see here is that the NovaLink user used for LPM is not padmin but wlp. Have a look on the NovaLink machine if you are a little bit curious:
  • (screenshot)

  • If you double check the mapping you'll see that PowerVC is mixing up the VIOS. Just rerun the command with the mappings in the right order and you'll be able to do HMC to NovaLink LPM (by the way, PowerVC automatically detects that the host has changed for this lpar (moved outside of PowerVC)):
  • # migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i '"virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"',shared_proc_pool_id=0 -o m
    # lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
    time=08/31/2016 19:13:00,"text=HSCE2123 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command was executed successfully."
    

One more time, don't worry about this issue, a patch is on the way. But I thought it was interesting to talk about it just to show you how PowerVC handles this (user, key sharing, check on the HMC).

Deep dive into the initrd

I am curious and there is no way to change that. As I wanted to know how the NovaLink installer works, I had to look into the netboot_initrd.gz file. There is a lot of interesting stuff to check in this initrd. Run the commands below on a Linux partition if you also want to have a look:

# scp nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz .
# gunzip netboot_initrd
# cpio -i < netboot_initrd
185892 blocks

The installer is located in opt/ibm/pvm-install:

# ls opt/ibm/pvm-install/data/
40mirror.pvm  debpkgs.txt  license.txt  nimclient.info  pvm-install-config.template  pvm-install-preseed.cfg  rsct-gpg-pub.key  vios_diagram.txt
# ls opt/ibm/pvm-install/bin
assignio.py        envsetup        installpvm                    monitor        postProcessing    pvmwizardmain.py  restore.py        switchnetworkcfg  vios
cfgviosnetwork.py  functions       installPVMPartitionWizard.py  network        procmem           recovery          setupinstalldisk  updatedebconf     vioscfg
chviospasswd       getnetworkinfo  ioadapter                     networkbridge  pvmconfigdata.py  removemem         setupviosinstall  updatenimsetup    welcome.py
editpvmconfig      initEnv         mirror                        nimscript      pvmtime           resetsystem       summary.py        user              wizpkg

You can for instance check what the installer is exactly doing. Let's take again the example of the MGMTSWITCH creation: you can see in the output below that I was right about it:

(screenshot)

Remember I told you before that I had a problem with the installation on NPIV. You can avoid installing NovaLink twice by modifying the debian installer directly in the initrd, adding a line in the debian installer file opt/ibm/pvm-install/data/pvm-install-preseed.cfg (you have to rebuild the initrd after doing this):

# grep bootdev opt/ibm/pvm-install/data/pvm-install-preseed.cfg
d-i grub-installer/bootdev string /dev/mapper/mpatha
# find | cpio -H newc -o > ../new_initrd_file
# gzip -9 ../new_initrd_file
# scp ../new_initrd_file.gz nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz

You can also find good examples of pvmctl commands here:

# grep -R pvmctl *
pvmctl lv create --size $LV_SIZE --name $LV_NAME -p id=$vid
pvmctl scsi create --type lv --vg name=rootvg --lpar id=1 -p id=$vid --stor-id name=$LV_NAME

Troubleshooting

NovaLink is not PowerVC, so here is a little reminder of what I do to troubleshoot NovaLink:

  • Installation troubleshooting:
  • # cat /var/log/pvm-install/pvminstall.log
    
  • Neutron Agent log (always double check this one):
  • # cat /var/log/neutron/neutron-powervc-pvm-sea-agent.log
    
  • Nova logs for this host are not accessible on the PowerVC management host anymore, so check it on the NovaLink partition if needed:
  • # cat /var/log/nova/nova-compute.log
    
  • pvmctl logs:
  • # cat /var/log/pvm/pvmctl.log
    

One last thing to add about NovaLink. One thing I like a lot is that NovaLink does hourly and daily backups of the system and of the VIOS configuration. These backups are stored in /var/backups/pvm:

# crontab -l
# VIOS hourly backups - at 15 past every hour except for midnight
15 1-23 * * * /usr/sbin/pvm-backup --type vios --frequency hourly
# Hypervisor hourly backups - at 15 past every hour except for midnight
15 1-23 * * * /usr/sbin/pvm-backup --type system --frequency hourly
# VIOS daily backups - at 15 past midnight
15 0    * * * /usr/sbin/pvm-backup --type vios --frequency daily
# Hypervisor daily backups - at 15 past midnight
15 0    * * * /usr/sbin/pvm-backup --type system --frequency daily
# ls -l /var/backups/pvm
total 4
drwxr-xr-x 2 root pvm_admin 4096 Sep  9 00:15 9119-MME*0265FF47B

More PowerVC tips and tricks

Let's finish this blog post with more PowerVC tips and tricks. Before giving you the tricks I have to warn you: none of these tricks are supported by PowerVC, use them at your own risk OR contact your support before doing anything. You may break and destroy everything if you are not aware of what you are doing. So please be very careful using all these tricks. YOU HAVE BEEN WARNED !!!!!!

Accessing and querying the database

This first trick is funny and will allow you to query and modify the PowerVC database. Once again, do this at your own risk. One of the issues I had was strange. I do not remember how it happened exactly, but some of my luns that were not attached to any host were still showing an attachment count equal to 1 and I didn't have the possibility to remove it. Even worse, someone had deleted these luns on the SVC side. So these luns were what I call "ghost luns": non-existing but non-deletable luns. (I also had to remove the storage provider related to these luns). The only way to change this was to set the state to detached directly in the cinder database. Be careful, this trick only works with MariaDB.

First get the database password: take the encrypted password from the /opt/ibm/powervc/data/powervc-db.conf file and decode it to obtain the clear password:

# grep ^db_password /opt/ibm/powervc/data/powervc-db.conf
db_password = aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==
# python -c "from powervc_keystone.encrypthandler import EncryptHandler; print EncryptHandler().decode('aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==')"
OhnhBBS_gvbCcqHVfx2N
# mysql -u root -p cinder
Enter password:
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| backups                    |
| cgsnapshots                |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
[..]

Then get the lun uuid on the PowerVC gui for the lun you want to change, and follow the commands below:


MariaDB [cinder]> select * from volume_attachment where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
*************************** 1. row ***************************
   created_at: 2016-05-26 08:52:51
   updated_at: 2016-05-26 08:54:23
   deleted_at: 2016-05-26 08:54:23
      deleted: 1
           id: ce4238b5-ea39-4ce1-9ae7-6e305dd506b1
    volume_id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
attached_host: NULL
instance_uuid: 44c7a72c-610c-4af1-a3ed-9476746841ab
   mountpoint: /dev/sdb
  attach_time: 2016-05-26 08:52:51
  detach_time: 2016-05-26 08:54:23
  attach_mode: rw
attach_status: attached
1 row in set (0.01 sec)
MariaDB [cinder]> select * from volumes where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
*************************** 1. row ***************************
                 created_at: 2016-05-26 08:51:57
                 updated_at: 2016-05-26 08:54:23
                 deleted_at: NULL
                    deleted: 0
                         id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
                     ec2_id: NULL
                    user_id: 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9
                 project_id: 1471acf124a0479c8d525aa79b2582d0
                       host: pb01_mn_svc_qual
                       size: 1
          availability_zone: nova
                     status: available
              attach_status: attached
               scheduled_at: 2016-05-26 08:51:57
                launched_at: 2016-05-26 08:51:59
              terminated_at: NULL
               display_name: dummy
        display_description: NULL
          provider_location: NULL
              provider_auth: NULL
                snapshot_id: NULL
             volume_type_id: e49e9cc3-efc3-4e7e-bcb9-0291ad28df42
               source_volid: NULL
                   bootable: 0
          provider_geometry: NULL
                   _name_id: NULL
          encryption_key_id: NULL
           migration_status: NULL
         replication_status: disabled
replication_extended_status: NULL
    replication_driver_data: NULL
        consistencygroup_id: NULL
                provider_id: NULL
                multiattach: 0
            previous_status: NULL
1 row in set (0.00 sec)
MariaDB [cinder]> update volume_attachment set attach_status='detached' where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
MariaDB [cinder]> update volumes set attach_status='detached' where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
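
If you have several ghost luns to clean up, the same fix can be scripted with python. Below is a minimal sketch, assuming the pymysql module is available on the PowerVC host and that the volume ids to detach are passed on the command line; the decode call is the same one shown above and the table/column names come from the queries above:

#!/usr/bin/env python
# sketch: mark a list of cinder volumes as detached directly in MariaDB
import sys
import pymysql  # assumption: pymysql is installed on the PowerVC host
from powervc_keystone.encrypthandler import EncryptHandler

# encrypted string taken from db_password in /opt/ibm/powervc/data/powervc-db.conf
encrypted = 'aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA=='
password = EncryptHandler().decode(encrypted)

conn = pymysql.connect(host='localhost', user='root', password=password, db='cinder')
try:
    with conn.cursor() as cur:
        for volume_id in sys.argv[1:]:
            cur.execute("UPDATE volume_attachment SET attach_status='detached' "
                        "WHERE volume_id=%s", (volume_id,))
            cur.execute("UPDATE volumes SET attach_status='detached' "
                        "WHERE id=%s", (volume_id,))
    conn.commit()
finally:
    conn.close()

Run it with the volume uuids as arguments (the script name is up to you).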

The second issue I had was about having some machines in a deleted state when the reality was that the HMC had just rebooted and, for an unknown reason, these machines were seen as 'deleted' .. but they were not. Using this trick I was able to force a re-evaluation of each machine in this case:

#  mysql -u root -p nova
Enter password:
MariaDB [nova]> select * from instance_health_status where health_state='WARNING';
+---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
| created_at          | updated_at          | deleted_at | deleted | id                                   | health_state | reason                                                                                                                                                                                                                | unknown_reason_details |
+---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
| 2016-07-11 08:58:37 | NULL                | NULL       |       0 | 1af1805c-bb59-4bc9-8b6d-adeaeb4250f3 | WARNING      | [{"resource_local": "server", "display_name": "p00ww6754398", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "1af1805c-bb59-4bc9-8b6d-adeaeb4250f3"}]                |                        |
| 2015-07-31 16:53:50 | 2015-07-31 18:49:50 | NULL       |       0 | 2668e808-10a1-425f-a272-6b052584557d | WARNING      | [{"resource_local": "server", "display_name": "multi-vol", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "2668e808-10a1-425f-a272-6b052584557d"}]                         |                        |
| 2015-08-03 11:22:38 | 2015-08-03 15:47:41 | NULL       |       0 | 2934fb36-5d91-48cd-96de-8c16459c50f3 | WARNING      | [{"resource_local": "server", "display_name": "clouddev-test-754df319-00000038", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "2934fb36-5d91-48cd-96de-8c16459c50f3"}] |                        |
| 2016-07-11 09:03:59 | NULL                | NULL       |       0 | 3fc42502-856b-46a5-9c36-3d0864d6aa4c | WARNING      | [{"resource_local": "server", "display_name": "p00ww3254401", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "3fc42502-856b-46a5-9c36-3d0864d6aa4c"}]                |                        |
| 2015-07-08 20:11:48 | 2015-07-08 20:14:09 | NULL       |       0 | 54d02c60-bd0e-4f34-9cb6-9c0a0b366873 | WARNING      | [{"resource_local": "server", "display_name": "p00wb3740870", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "54d02c60-bd0e-4f34-9cb6-9c0a0b366873"}]                    |                        |
| 2015-07-31 17:44:16 | 2015-07-31 18:49:50 | NULL       |       0 | d5ec2a9c-221b-44c0-8573-d8e3695a8dd7 | WARNING      | [{"resource_local": "server", "display_name": "multi-vol-sp5", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "d5ec2a9c-221b-44c0-8573-d8e3695a8dd7"}]                     |                        |
+---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
6 rows in set (0.00 sec)
MariaDB [nova]> update instance_health_status set health_state='PENDING',reason='' where health_state='WARNING';
Query OK, 6 rows affected (0.00 sec)
Rows matched: 6  Changed: 6  Warnings: 0

pending

The ceilometer issue

When updating from PowerVC 1.3.0.1 to 1.3.1.1 PowerVC changes the database backend from DB2 to MariaDB. This is a good thing but the way the update is done is by exporting all the data in flat files and then re-inserting it in the MariaDB database record by record. I had a huge problem because of this, simply because my ceilodb2 base was huge due to the number of machines I have and the number of operations we have run on PowerVC since it went into production. The DB insert took more than 3 days and never finished. If you don't need the ceilometer data my advice is to change the retention from the default of 270 days to 2 hours:

# powervc-config metering event_ttl --set 2 --unit hr 
# ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf

If this is not enough and you are still experiencing problems with the update, the best way is to flush the entire ceilometer database before the update:

# /opt/ibm/powervc/bin/powervc-services stop
# /opt/ibm/powervc/bin/powervc-services db2 start
# /bin/su - pwrvcdb -c "db2 drop database ceilodb2"
# /bin/su - pwrvcdb -c "db2 CREATE DATABASE ceilodb2 AUTOMATIC STORAGE YES ON /home/pwrvcdb DBPATH ON /home/pwrvcdb USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM PAGESIZE 16384 RESTRICTIVE"
# /bin/su - pwrvcdb -c "db2 connect to ceilodb2 ; db2 grant dbadm on database to user ceilometer"
# /opt/ibm/powervc/bin/powervc-dbsync ceilometer
# /bin/su - pwrvcdb -c "db2 connect TO ceilodb2; db2 CALL GET_DBSIZE_INFO '(?, ?, ?, 0)' > /tmp/ceilodb2_db_size.out; db2 terminate" > /dev/null

Multi tenancy ... how to deal with a huge environment

As my environment is growing bigger and bigger I faced a couple of people trying to force me to multiply the number of PowerVC machines we have. As Openstack is a solution designed to handle both density and scalability I said that doing this is just nonsense. Seriously, people who still believe in this have not understood anything about the cloud, Openstack and PowerVC. Fortunately we found a solution acceptable to everybody. As we are creating what we call "building blocks" we had to find a way to isolate one "block" from another. The solution for host isolation is called multi-tenancy isolation. For the storage side we are just going to play with quotas. By doing this a user will be able to manage a couple of hosts and the associated storage (storage template) without having the right to do anything on the others:

multitenancyisolation

Before doing anything create the tenant (or project) and a user associated with it:

# cat /opt/ibm/powervc/version.properties | grep cloud_enabled
cloud_enabled = yes
# cat ~/powervcrc
export OS_USERNAME=root
export OS_PASSWORD=root
export OS_TENANT_NAME=ibm-default
export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_CACERT=/etc/pki/tls/certs/powervc.crt
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_COMPUTE_API_VERSION=2.25
export OS_NETWORK_API_VERSION=2.0
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
# source powervcrc
# openstack project create hb01
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 90d064b4abea4339acd32a8b6a8b1fdf |
| is_domain   | False                            |
| name        | hb01                             |
| parent_id   | default                          |
+-------------+----------------------------------+
# openstack role list
+----------------------------------+---------------------+
| ID                               | Name                |
+----------------------------------+---------------------+
| 1a76014f12594214a50c36e6a8e3722c | deployer            |
| 54616a8b136742098dd81eede8fd5aa8 | vm_manager          |
| 7bd6de32c14d46f2bd5300530492d4a4 | storage_manager     |
| 8260b7c3a4c24a38ba6bee8e13ced040 | deployer_restricted |
| 9b69a55c6b9346e2b317d0806a225621 | image_manager       |
| bc455ed006154d56ad53cca3a50fa7bd | admin               |
| c19a43973db148608eb71eb3d86d4735 | service             |
| cb130e4fa4dc4f41b7bb4f1fdcf79fc2 | self_service        |
| f1a0c1f9041d4962838ec10671befe33 | vm_user             |
| f8cf9127468045e891d5867ce8825d30 | viewer              |
+----------------------------------+---------------------+
# useradd hb01_admin
# openstack role add --project hb01 --user hb01_admin admin
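
If you prefer doing this from python rather than with the openstack client, here is a minimal sketch using the keystone service client; the credentials and certificate path mirror the powervcrc file above, and it assumes the hb01_admin user is already visible to keystone:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='https://powervc.lab.chmod666.org:5000/v3/',
                   username='root', password='root', project_name='ibm-default',
                   user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth, verify='/etc/pki/tls/certs/powervc.crt')
keystone = client.Client(session=sess)

# create the hb01 project and give the admin role to hb01_admin on it
project = keystone.projects.create(name='hb01', domain='default')
user = keystone.users.find(name='hb01_admin')
admin_role = keystone.roles.find(name='admin')
keystone.roles.grant(admin_role, user=user, project=project)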

Then associate each host group (aggregates in Openstack terms) allowed for this tenant using the filter_tenant_id metadata (you have to put your allowed hosts in a host group to enable this feature). For each allowed host group add this field to the metadata of the host group (first find the tenant id):

# openstack project list
+----------------------------------+-------------+
| ID                               | Name        |
+----------------------------------+-------------+
| 1471acf124a0479c8d525aa79b2582d0 | ibm-default |
| 90d064b4abea4339acd32a8b6a8b1fdf | hb01        |
| b79b694c70734a80bc561e84a95b313d | powervm     |
| c8c42d45ef9e4a97b3b55d7451d72591 | service     |
| f371d1f29c774f2a97f4043932b94080 | project1    |
+----------------------------------+-------------+
# openstack aggregate list
+----+---------------+-------------------+
| ID | Name          | Availability Zone |
+----+---------------+-------------------+
|  1 | Default Group | None              |
| 21 | aggregate2    | None              |
| 41 | hg2           | None              |
| 43 | hb01_mn       | None              |
| 44 | hb01_me       | None              |
+----+---------------+-------------------+
# nova aggregate-set-metadata hb01_mn filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf 
Metadata has been successfully updated for aggregate 43.
| Id | Name    | Availability Zone | Hosts             | Metadata                                                                                                                                   
| 43 | hb01_mn | -                 | '9119MME_1009425' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=4', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70' |
# nova aggregate-set-metadata hb01_me filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf 
Metadata has been successfully updated for aggregate 44.
| Id | Name    | Availability Zone | Hosts             | Metadata                                                                                                                                   
| 44 | hb01_me | -                 | '9119MME_0696010' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=2', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70' |
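
The same metadata can be set with the python novaclient if you want to script the whole building block setup. A small sketch below, reusing the session object (sess) built in the keystone sketch above (the aggregate names and the tenant id are the ones from the listing above):

from novaclient import client as nova_client

# 2.25 matches the OS_COMPUTE_API_VERSION used in the powervcrc file
nova = nova_client.Client('2.25', session=sess)

tenant_id = '90d064b4abea4339acd32a8b6a8b1fdf'   # id of the hb01 project
for aggregate in nova.aggregates.list():
    if aggregate.name in ('hb01_mn', 'hb01_me'):
        nova.aggregates.set_metadata(aggregate, {'filter_tenant_id': tenant_id})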

To make this work add AggregateMultiTenancyIsolation to scheduler_default_filters in the nova.conf file and restart the nova services:

# grep scheduler_default_filter /etc/nova/nova.conf
scheduler_default_filters = RamFilter,CoreFilter,ComputeFilter,RetryFilter,AvailabilityZoneFilter,ImagePropertiesFilter,ComputeCapabilitiesFilter,MaintenanceFilter,PowerVCServerGroupAffinityFilter,PowerVCServerGroupAntiAffinityFilter,PowerVCHostAggregateFilter,PowerVMNetworkFilter,PowerVMProcCompatModeFilter,PowerLMBSizeFilter,PowerMigrationLicenseFilter,PowerVMMigrationCountFilter,PowerVMStorageFilter,PowerVMIBMiMobilityFilter,PowerVMRemoteRestartFilter,PowerVMRemoteRestartSameHMCFilter,PowerVMEndianFilter,PowerVMGuestCapableFilter,PowerVMSharedProcPoolFilter,PowerVCResizeSameHostFilter,PowerVCDROFilter,PowerVMActiveMemoryExpansionFilter,PowerVMNovaLinkMobilityFilter,AggregateMultiTenancyIsolation
# powervc-services restart

We are done regarding the hosts.

Enabling quotas

To allow one user/tenant to create volumes only on one storage provider we first need to check that quotas are enabled (have a look at the cinder policy file):

# grep quota /opt/ibm/powervc/policy/cinder/policy.json
    "volume_extension:quotas:show": "",
    "volume_extension:quotas:update": "rule:admin_only",
    "volume_extension:quotas:delete": "rule:admin_only",
    "volume_extension:quota_classes": "rule:admin_only",
    "volume_extension:quota_classes:validate_setup_for_nested_quota_use": "rule:admin_only",

Then set the volume quota to 0 for all the non-allowed storage templates for this tenant and leave the only one you want at its default value. Easy:

# cinder --service-type volume type-list
+--------------------------------------+---------------------------------------------+-------------+-----------+
|                  ID                  |                     Name                    | Description | Is_Public |
+--------------------------------------+---------------------------------------------+-------------+-----------+
| 53434872-a0d2-49ea-9683-15c7940b30e5 |               svc2 base template            |      -      |    True   |
| e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 |               svc1 base template            |      -      |    True   |
| f45469d5-df66-44cf-8b60-b226425eee4f |                     svc3                    |      -      |    True   |
+--------------------------------------+---------------------------------------------+-------------+-----------+
# cinder --service-type volume quota-update --volumes 0 --volume-type "svc2" 90d064b4abea4339acd32a8b6a8b1fdf
# cinder --service-type volume quota-update --volumes 0 --volume-type "svc3" 90d064b4abea4339acd32a8b6a8b1fdf
+-------------------------------------------------------+----------+
|                        Property                       |  Value   |
+-------------------------------------------------------+----------+
|                    backup_gigabytes                   |   1000   |
|                        backups                        |    10    |
|                       gigabytes                       | 1000000  |
|              gigabytes_svc2 base template             | 10000000 |
|              gigabytes_svc1 base template             | 10000000 |
|                     gigabytes_svc3                    |    -1    |
|                  per_volume_gigabytes                 |    -1    |
|                       snapshots                       |  100000  |
|             snapshots_svc2 base template              |  100000  |
|             snapshots_svc1 base template              |  100000  |
|                     snapshots_svc3                    |    -1    |
|                        volumes                        |  100000  |
|            volumes_svc2 base template                 |  100000  |
|            volumes_svc1 base template                 |    0     |
|                      volumes_svc3                     |    0     |
+-------------------------------------------------------+----------+
# powervc-services stop
# powervc-services start

By doing this you have enabled the isolation between the two tenants. Then use the appropriate user to do the appropriate task.
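
For the record, the same quota update can also be done with the python cinderclient (a sketch reusing the session object from the keystone example above; per volume type quota keys follow the volumes_<type name> pattern):

from cinderclient import client as cinder_client

cinder = cinder_client.Client('2', session=sess, service_type='volume')

tenant_id = '90d064b4abea4339acd32a8b6a8b1fdf'   # id of the hb01 project
# forbid volume creation on the svc3 storage template for this tenant
cinder.quotas.update(tenant_id, **{'volumes_svc3': 0})
print(cinder.quotas.get(tenant_id))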

PowerVC cinder above the Petabyte

Now that quotas are enabled, use this command if you want to be able to have more than one petabyte of data managed by PowerVC:

# cinder --service-type volume quota-class-update --gigabytes -1 default
# powervc-services stop
# powervc-services start

PowerVC cinder above 10000 luns

Change osapi_max_limit in cinder.conf if you want to go above the 10000 lun limit (check every cinder configuration file; the one in cinder.conf is for the global number of volumes):

# grep ^osapi_max_limit cinder.conf
osapi_max_limit = 15000
# powervc-services stop
# powervc-services start

Snapshot and consistency group

There is a new cool feature available with the latest version of PowerVC (1.3.1.2). This feature allows you to create snapshots of volumes (only on SVC and Storwize for the moment). You now have the possibility to create consistency groups (groups of volumes) and to create snapshots of these consistency groups (allowing you for instance to make a backup of a volume group directly from OpenStack). I'm doing the example below using the command line because I think it is easier to understand with these commands rather than showing you the same thing with the rest api:

First create a consistency group:

# cinder --service-type volume type-list
+--------------------------------------+---------------------------------------------+-------------+-----------+
|                  ID                  |                     Name                    | Description | Is_Public |
+--------------------------------------+---------------------------------------------+-------------+-----------+
| 53434872-a0d2-49ea-9683-15c7940b30e5 |              svc2 base template             |      -      |    True   |
| 862b0a8e-cab4-400c-afeb-99247838f889 |             p8_ssp base template            |      -      |    True   |
| e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 |               svc1 base template            |      -      |    True   |
| f45469d5-df66-44cf-8b60-b226425eee4f |                     svc3                    |      -      |    True   |
+--------------------------------------+---------------------------------------------+-------------+-----------+
# cinder --service-type volume consisgroup-create --name foovg_cg "svc1 base template"
+-------------------+-------------------------------------------+
|      Property     |                   Value                   |
+-------------------+-------------------------------------------+
| availability_zone |                    nova                   |
|     created_at    |         2016-09-11T21:10:58.000000        |
|    description    |                    None                   |
|         id        |    950a5193-827b-49ab-9511-41ba120c9ebd   |
|        name       |                  foovg_cg                 |
|       status      |                  creating                 |
|    volume_types   | [u'e49e9cc3-efc3-4e7e-bcb9-0291ad28df42'] |
+-------------------+-------------------------------------------+
# cinder --service-type volume consisgroup-list
+--------------------------------------+-----------+----------+
|                  ID                  |   Status  |   Name   |
+--------------------------------------+-----------+----------+
| 950a5193-827b-49ab-9511-41ba120c9ebd | available | foovg_cg |
+--------------------------------------+-----------+----------+

Create volumes in this consistency group:

# cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol1 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
# cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol2 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
+------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
|           Property           |                                                                          Value                                                                           |
+------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
|         attachments          |                                                                            []                                                                            |
|      availability_zone       |                                                                           nova                                                                           |
|           bootable           |                                                                          false                                                                           |
|     consistencygroup_id      |                                                           950a5193-827b-49ab-9511-41ba120c9ebd                                                           |
|          created_at          |                                                                2016-09-11T21:23:02.000000                                                                |
|         description          |                                                                           None                                                                           |
|          encrypted           |                                                                          False                                                                           |
|        health_status         | {u'health_value': u'PENDING', u'id': u'8d078772-00b5-45fc-89c8-82c63e2c48ed', u'value_reason': u'PENDING', u'updated_at': u'2016-09-11T21:23:02.669372'} |
|              id              |                                                           8d078772-00b5-45fc-89c8-82c63e2c48ed                                                           |
|           metadata           |                                                                            {}                                                                            |
|       migration_status       |                                                                           None                                                                           |
|         multiattach          |                                                                          False                                                                           |
|             name             |                                                                        foovg_vol2                                                                        |
|    os-vol-host-attr:host     |                                                                           None                                                                           |
| os-vol-tenant-attr:tenant_id |                                                             1471acf124a0479c8d525aa79b2582d0                                                             |
|      replication_status      |                                                                         disabled                                                                         |
|             size             |                                                                           200                                                                            |
|         snapshot_id          |                                                                           None                                                                           |
|         source_volid         |                                                                           None                                                                           |
|            status            |                                                                         creating                                                                         |
|          updated_at          |                                                                           None                                                                           |
|           user_id            |                                             0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                             |
|         volume_type          |                                                                   svc1 base template                                                                     |
+------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+

You're now able to attach these two volumes to a machine from the PowerVC GUI:

consist

# lsmpio -q
Device           Vendor Id  Product Id       Size    Volume Name
------------------------------------------------------------------------------
hdisk0           IBM        2145                 64G volume-aix72-44c7a72c-000000e0-
hdisk1           IBM        2145                100G volume-snap1-dab0e2d1-130a
hdisk2           IBM        2145                100G volume-snap2-5e863fdb-ab8c
hdisk3           IBM        2145                200G volume-foovg_vol1-3ba0ff59-acd8
hdisk4           IBM        2145                200G volume-foovg_vol2-8d078772-00b5
# cfgmgr
# lspv
hdisk0          00c8b2add70d7db0                    rootvg          active
hdisk1          00f9c9f51afe960e                    None
hdisk2          00f9c9f51afe9698                    None
hdisk3          none                                None
hdisk4          none                                None

Then you can create a snapshot of these two volumes. It's that easy :-) :

# cinder --service-type volume cgsnapshot-create 950a5193-827b-49ab-9511-41ba120c9ebd
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
| consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
|      created_at     |      2016-09-11T21:31:12.000000      |
|     description     |                 None                 |
|          id         | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
|         name        |                 None                 |
|        status       |               creating               |
+---------------------+--------------------------------------+
# cinder --service-type volume cgsnapshot-list
+--------------------------------------+-----------+------+
|                  ID                  |   Status  | Name |
+--------------------------------------+-----------+------+
| 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f | available |  -   |
+--------------------------------------+-----------+------+
# cinder --service-type volume cgsnapshot-show 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
| consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
|      created_at     |      2016-09-11T21:31:12.000000      |
|     description     |                 None                 |
|          id         | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
|         name        |                 None                 |
|        status       |              available               |
+---------------------+--------------------------------------+
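
The same sequence (consistency group, volumes, cgsnapshot) can be driven from python. Here is a sketch, assuming the consistencygroups and cgsnapshots managers of the python cinderclient v2 API (names may differ slightly depending on your client version) and the same session object as in the previous sketches:

from cinderclient import client as cinder_client

cinder = cinder_client.Client('2', session=sess, service_type='volume')

# consistency group on the "svc1 base template" storage template
cg = cinder.consistencygroups.create('svc1 base template', name='foovg_cg')

# two 200 GB volumes created inside the consistency group
for name in ('foovg_vol1', 'foovg_vol2'):
    cinder.volumes.create(200, name=name, volume_type='svc1 base template',
                          consistencygroup_id=cg.id)

# snapshot the whole consistency group in one shot
snap = cinder.cgsnapshots.create(cg.id, name='foovg_cg_snap')
print(snap.status)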

cgsnap

Conclusion

Please keep in mind that the content of this blog post comes from real life and production examples. I hope you will better understand that scalability, density, fast deployment, snapshots and multi-tenancy are features that are absolutely needed in the AIX world. As you can see the PowerVC team is moving fast. Probably faster than every customer I have ever seen. I must admit they are right. Doing this is the only way to face the Linux x86 offering. And I must confess it is damn fun to work on those things. I'm so happy to have the best of two worlds: AIX/Power Systems and Openstack. This is the only direction we have to take if we want AIX to survive. So please stop being scared of or unconvinced by these solutions; they are damn good and production ready. Please face and embrace the future and stop looking at the past. As always I hope it helps.

Continuous integration for your Chef AIX cookbooks (using PowerVC, Jenkins, test-kitchen and gitlab)

My journey to integrate Chef on AIX is still going on and I'm working more than ever on these topics. I know that using such tools is not something widely adopted by AIX customers. But what I also know is that whatever happens you will, in a near -or distant- future, use an automation tool. These tools are so widely used in the Linux world that you just can't ignore them. The way you were managing your AIX ten years ago is not the same as what you are doing today, and what you do today will not be what you'll do in the future. The AIX world needs a facelift to survive; a huge step has already been done (and is still ongoing) with PowerVC thanks to a fantastic team composed of very smart people at IBM (@amarteyp; @drewthorst, @jwcroppe, and all the other persons in this team!). The AIX world is now compatible with Openstack and with this other things are coming … such as automation. When all of these things are ready, AIX will be able to offer something comparable to Linux. Openstack and automation are the first bricks of what we call today “devops” (to be more specific it's the ops part of the devops word).

I will today focus on how to manage your AIX machines using Chef. By using the word “how” I mean what are the best practices and infrastructures to build to start using Chef on AIX. If you remember my session about Chef on AIX at the IBM Technical University in Cannes, I was saying that by using Chef your infrastructure will be testable, repeatable, and versionable. We will focus in this blog post on how to do that. To test your AIX Chef cookbooks you will need to understand what the test kitchen is (we will use the test kitchen to drive PowerVC to build virtual machines on the fly and run the chef recipes on them). To repeat this over and over, to be sure everything is working (code review, be sure that your cookbook is converging) without having to do anything by hand, we will use Jenkins to automate these tests. Then, to version your cookbook development, we will use gitlab.

To better understand why I'm doing such a thing there is nothing better than a concrete example. My goal is to do all my AIX post-installation tasks using Chef (motd configuration, dns, device attributes, fileset installation, enabling services … everything that you are doing today using korn shell scripts). Who has never experienced someone changing one of these scripts (most of the time without warning the other members of the team), resulting in a syntax error and then in an outage for all your new builds? Doing this is possible if you are in a little team creating one machine per month but it is inconceivable in an environment driven by PowerVC where sysadmins are not doing anything “by hand”. In such an environment, if someone makes this kind of error all the new builds fail … even worse, you'll probably not be aware of it until someone connecting to the machine says there is an error (most of the time the final customer). By using continuous integration your AIX build will be tested at every change, all these changes will be stored in a git repository and, even better, you will not be able to put a change in production without passing all these tests. Even if this is in my opinion mandatory for people using PowerVC today, people who are not using it can still do the same thing. By doing that you'll have a clean and proper AIX build (post-install) and no errors will be possible anymore, so I highly encourage you to do this even if you are not adopting the Openstack way or even if today you don't see the benefits. In the future this effort will pay off. Trust me.

The test-kitchen

What is the kitchen

The test-kitchen is a tool that allows you to run your AIX Chef cookbooks and recipes in a quick way without having to do manual tasks. During the development of your recipes, if you don't use the test kitchen you'll have many tasks to do manually: build a virtual machine, install the chef client, copy the cookbook and the recipes, run them, check everything is in the state that you want. Imagine doing that on different AIX versions (6.1, 7.1, 7.2) every time you change something in your post-installation recipes (I was doing that before and I can assure you that creating and destroying machines over and over and over is just a waste of time). The test kitchen is here to do the job for you. It will build the machine for you (using the PowerVC kitchen driver), install the chef-client (using an omnibus server), copy the content of your cookbook (the files), run a bunch of recipes (described in what we call suites) and then test it (using bats, or serverspec). You can configure your kitchen to test different kinds of images (6.1, 7.1, 7.2) and different suites (cookbooks, recipes) depending on the environment you want to test. By default the test kitchen uses a Linux tool called Vagrant to build your VMs. Obviously Vagrant is not able to build an AIX machine, that's why we will use a modified version of the kitchen-openstack driver (modified by myself) called kitchen-powervc to build the virtual machines:

Installing the kitchen and the PowerVC driver

If you have access to an enterprise proxy you can directly download and install the gem files from your host (in my case this is a Linux on Power box … so Linux on Power is working great for this).

  • Install the test kitchen :
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 test-kitchen
    Successfully installed test-kitchen-1.7.2
    Parsing documentation for test-kitchen-1.7.2
    1 gem installed
    
  • Install kitchen-powervc :
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 kitchen-powervc
    Successfully installed kitchen-powervc-0.1.0
    Parsing documentation for kitchen-powervc-0.1.0
    1 gem installed
    
  • Install kitchen-openstack :
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 kitchen-openstack
    Successfully installed kitchen-openstack-3.0.0
    Fetching: fog-core-1.38.0.gem (100%)
    Successfully installed fog-core-1.38.0
    Fetching: fuzzyurl-0.8.0.gem (100%)
    Successfully installed fuzzyurl-0.8.0
    Parsing documentation for kitchen-openstack-3.0.0
    Installing ri documentation for kitchen-openstack-3.0.0
    Parsing documentation for fog-core-1.38.0
    Installing ri documentation for fog-core-1.38.0
    Parsing documentation for fuzzyurl-0.8.0
    Installing ri documentation for fuzzyurl-0.8.0
    3 gems installed
    

If you don't have access to an enterprise proxy you can still download the gems from home and install them on your work machine:

# gem install test-kitchen kitchen-powervc kitchen-openstack -i repo --no-ri --no-rdoc
# # copy the files (repo directory) on your destination machine
# gem install *.gem

Setup the kitchen (.kitchen.yml file)

The kitchen configuration file is .kitchen.yml; when you run the kitchen command, the kitchen will look at this file. You have to put it in the chef-repo (where the cookbook directory is; the kitchen will copy the files from the cookbook to the test machine, that's why it is important to put this file at the root of the chef-repo). This file is separated into different sections:

  • The driver section. In this section you will configure how to create virtual machines, in our case how to connect to PowerVC (credentials, region). You'll also tell in this section which image you want to use (PowerVC images), which flavor (PowerVC template) and which network will be used at VM creation (please note that you can put some driver_config in the platform section, to tell which image or which ip you want to use for each specific platform):
    • name: the name of the driver (here powervc).
    • openstack*: the PowerVC url, user, password, region, domain.
    • image_ref: the name of the image (we will put this in driver_config in the platform section).
    • flavor_ref: the name of the PowerVC template used at the VM creation.
    • fixed_ip: the ip_address used for the virtual machine creation.
    • server_name_prefix: each vm created by the kitchen will be prefixed by this parameter.
    • network_ref: the name of the PowerVC vlan to be used at the machine creation.
    • public_key_path: The kitchen needs to connect to the machine with ssh, you need to provide the public key used.
    • private_key_path: Same but for the private key.
    • username: The ssh username (we will use root, but you can use another user and then tell the kitchen to use sudo)
    • user_data: The activation input used by cloud-init; in this one we will put the public key to be sure you can access the machine without a password (it's the PowerVC activation input, the content of userdata.txt is shown after the driver example below).
    • driver:
        name: powervc
        server_wait: 100
        openstack_username: "root"
        openstack_api_key: "root"
        openstack_auth_url: "https://mypowervc:5000/v3/auth/tokens"
        openstack_region: "RegionOne"
        openstack_project_domain: "Default"
        openstack_user_domain: "Default"
        openstack_project_name: "ibm-default"
        flavor_ref: "mytemplate"
        server_name_prefix: "chefkitchen"
        network_ref: "vlan666"
        public_key_path: "/home/chef/.ssh/id_dsa.pub"
        private_key_path: "/home/chef/.ssh/id_dsa"
        username: "root"
        user_data: userdata.txt
      
      #cloud-config
      ssh_authorized_keys:
        - ssh-dss AAAAB3NzaC1kc3MAAACBAIVZx6Pic+FyUisoNrm6Znxd48DQ/YGNRgsed+fc+yL1BVESyTU5kqnupS8GXG2I0VPMWN7ZiPnbT1Fe2D[..]
      
  • The provisioner section: This section can be used to specify whether you want to use chef-zero or chef-solo as the provisioner. You can also specify an omnibus url (used to download and install the chef-client at machine creation time). In my case the omnibus url is a link to an http server “serving” a script (install.sh) installing the chef client fileset for AIX (more details later in the blog post). I'm also setting “sudo” to false as I'll connect with the root user:
  • provisioner:
      name: chef_solo
      chef_omnibus_url: "http://myomnibusserver:8080/chefclient/install.sh"
      sudo: false
    
  • The platform section: The platform section describes each platform that the test-kitchen can create (I'm putting here the image_ref and the fixed_ip for each platform (AIX 6.1, AIX 7.1, AIX 7.2)):
  • platforms:
      - name: aix72
        driver_config:
          image_ref: "kitchen-aix72"
          fixed_ip: "10.66.33.234"
      - name: aix71
        driver_config:
          image_ref: "kitchen-aix71"
          fixed_ip: "10.66.33.235"
      - name: aix61
        driver_config:
          image_ref: "kitchen-aix61"
          fixed_ip: "10.66.33.236"
    
  • The suite section: this section describes which cookbook and which recipes you want to run on the machines created by the test-kitchen. For the simplicity of this example I'm just running two recipes: the first one called root_authorized_keys (creating the /root directory, changing the home directory of root and then putting a public key in the .ssh directory) and the second one called gem_source (we will check later in the post why I'm also calling this recipe):
  • suites:
      - name: aixcookbook
        run_list:
        - recipe[aix::root_authorized_keys]
        - recipe[aix::gem_source]
        attributes: { gem_source: { add_urls: [ "http://10.14.66.100:8808" ], delete_urls: [ "https://rubygems.org/" ] } }
    
  • The busser section: this section describes how to run your tests (more details later in the post ;-) ):
  • busser:
      sudo: false
    

After configuring the kitchen you can check that the yml file is ok by listing what is configured in the kitchen:

# kitchen list
Instance           Driver   Provisioner  Verifier  Transport  Last Action
aixcookbook-aix72  Powervc  ChefSolo     Busser    Ssh        
aixcookbook-aix71  Powervc  ChefSolo     Busser    Ssh        
aixcookbook-aix61  Powervc  ChefSolo     Busser    Ssh        

kitchen1
kitchen2

Anatomy of a kitchen run

A kitchen run is divided into five steps. First we create a virtual machine (the create action), then we install the chef-client (using an omnibus url) and run some recipes (converge), then we install the testing tools on the virtual machine (in my case serverspec) (setup) and we run the tests (verify). Finally, if everything was ok, we delete the virtual machine (destroy). Instead of running all these steps one by one you can use the “test” option. This one will do destroy, create, converge, setup, verify, destroy in one single “pass”. Let's check each step in detail:

kitchen1

  • Create: This will create the virtual machine using PowerVC. If you choose to use the “fixed_ip” option in the .kitchen.yml file this ip will be chosen at machine creation time. If you prefer to pick an ip from the network (in the pool) don't set the “fixed_ip”. You'll see the details in the picture below. You can at the end test the connectivity (transport) (ssh) to the machine using “kitchen login”. The ssh public key was automatically added using the userdata.txt file used by cloud-init at machine creation time. After the machine is created you can use the “kitchen list” command to check the machine was successfully created:
# kitchen create

kitchencreate3
kitchencreate1
kitchencreate2
kitchenlistcreate1

  • Converge: This will converge the kitchen (one more time: converge = chef-client installation and running chef-solo with the suite configuration describing which recipes will be launched). The converge action will download the chef client, install it on the machine (using the omnibus url) and run the recipes specified in the suite stanza of the .kitchen.yml file. Here is the script I use for the omnibus installation; this script is “served” by an http server:
  • # cat install.sh
    #!/usr/bin/ksh
    echo "[omnibus] [start] starting omnibus install"
    echo "[omnibus] downloading chef client http://chefomnibus:8080/chefclient/lastest"
    perl -le 'use LWP::Simple;getstore("http://chefomnibus:8080/chefclient/latest", "/tmp/chef.bff")'
    echo "[omnibus] installing chef client"
    installp -aXYgd /tmp/ chef
    echo "[omnibus] [end] ending omnibus install"
    
  • The http server is serving this install.sh file. Here is the httpd.conf configuration file for the omnibus installation on AIX:
  • # ls -l /apps/chef/chefclient
    total 647896
    -rw-r--r--    1 apache   apache     87033856 Dec 16 17:15 chef-12.1.2-1.powerpc.bff
    -rwxr-xr-x    1 apache   apache     91922944 Nov 25 00:24 chef-12.5.1-1.powerpc.bff
    -rw-------    2 apache   apache     76375040 Jan  6 11:23 chef-12.6.0-1.powerpc.bff
    -rwxr-xr-x    1 apache   apache          364 Apr 15 10:23 install.sh
    -rw-------    2 apache   apache     76375040 Jan  6 11:23 latest
    # cat httpd.conf
    [..]
     Alias /chefclient/ "/apps/chef/chefclient/"
     <Directory "/apps/chef/chefclient/">
       Options Indexes FollowSymlinks MultiViews
       AllowOverride None
       Require all granted
     </Directory>
    
# kitchen converge

kitchenconverge1
kitchenconverge2b
kitchenlistconverge1

  • Setup and verify: these actions will run a bunch of tests to verify the machine is in the state you want. The tests I am writing check that the root home directory was created and that the key was successfully created in the .ssh directory. In a few words you need to write tests checking that your recipes are working well (in chef words: “check that the machine is in the correct state”). In my case I'm using serverspec to describe my tests (there are different tools used for testing, you can also use bats). To describe the test suite just create serverspec files (describing the tests) in the chef-repo directory (in ~/test/integration/<suite_name>/serverspec, in my case ~/test/integration/aixcookbook/serverspec). All the serverspec test files are suffixed by _spec:
  • # ls test/integration/aixcookbook/serverspec/
    root_authorized_keys_spec.rb  spec_helper.rb
    
  • The “_spec” file describes the tests that will be run by the kitchen. In my very simple tests here I'm just checking that my files exist and that the content of the authorized_keys file is the same as my public key (the key created by cloud-init on AIX is located in ~/.ssh and my test recipe here is changing the root home directory and putting the key in the right place). By looking at the file you can see that the serverspec language is very simple to understand:
  • # ls test/integration/aixcookbook/serverspec/
    root_authorized_keys_spec.rb  spec_helper.rb
    
    # cat spec_helper.rb
    require 'serverspec'
    set :backend, :exec
    # cat root_authorized_keys_spec.rb
    require 'spec_helper'
    
    describe file('/root/.ssh') do
      it { should exist }
      it { should be_directory }
      it { should be_owned_by 'root' }
    end
    
    describe file('/root/.ssh/authorized_keys') do
      it { should exist }
      it { should be_owned_by 'root' }
      it { should contain 'from="1[..]" ssh-rsa AAAAB3NzaC1[..]' }
    end
    
  • The kitchen will try to install the ruby gems needed for serverspec (serverspec needs to be installed on the server to run the automated tests). As my server has no connectivity to the internet I need to run my own gem server. I'm lucky: all the needed gems are installed on my chef workstation (if you have no internet access from the workstation use the tip described at the beginning of this blog post). I just need to run a local gem server by running “gem server” on the chef workstation. The server is listening on port 8808 and will serve all the needed gems:
  • # gem list | grep -E "busser|serverspec"
    busser (0.7.1)
    busser-bats (0.3.0)
    busser-serverspec (0.5.9)
    serverspec (2.31.1)
    # gem server
    Server started at http://0.0.0.0:8808
    
  • If you look at the output above you can see that the gem_source recipe was executed. This recipe changes the gem source on the virtual machine (from https://rubygems.org to my own local server). In the .kitchen.yml file the urls to add to and remove from the gem sources are specified in the suite attributes:
  • # cat gem_source.rb
    ruby_block 'Changing gem source' do
      block do
        node['gem_source']['add_urls'].each do |url|
          current_sources = Mixlib::ShellOut.new('/opt/chef/embedded/bin/gem source')
          current_sources.run_command
          next if current_sources.stdout.include?(url)
          add = Mixlib::ShellOut.new("/opt/chef/embedded/bin/gem source --add #{url}")
          add.run_command
          Chef::Application.fatal!("Adding gem source #{url} failed #{add.status}") unless add.status == 0
          Chef::Log.info("Add gem source #{url}")
        end
    
        node['gem_source']['delete_urls'].each do |url|
          current_sources = Mixlib::ShellOut.new('/opt/chef/embedded/bin/gem source')
          current_sources.run_command
          next unless current_sources.stdout.include?(url)
          del = Mixlib::ShellOut.new("/opt/chef/embedded/bin/gem source --remove #{url}")
          del.run_command
          Chef::Application.fatal!("Removing gem source #{url} failed #{del.status}") unless del.status == 0
          Chef::Log.info("Remove gem source #{url}")
        end
      end
      action :run
    end
    
# kitchen setup
# kitchen verify

kitchensetupeverify1
kitchenlistverfied1

  • Destroy: This will destroy the virtual machine on PowerVC.
# kitchen destroy

kitchendestroy1
kitchendestroy2
kitchenlistdestroy1

Now that you understand how the kitchen is working and that you are able to run it to create and test AIX machines, you are ready to use the kitchen to develop and create the chef cookbook that will fit your infrastructure. To run all the steps “create,converge,setup,verify,destroy”, just use the “kitchen test” command:

# kitchen test

As you are going to change a lot of things in your cookbook you'll need to version the code you are creating; for this we will use a gitlab server.

Gitlab: version your AIX cookbook

Unfortunately for you and for me I didn't have the time to run gitlab on a Linux on Power machine. I'm sure it is possible (if you find a way to do this please mail me). Anyway, my version of gitlab is running on an x86 box. The goal here is to allow the chef workstation user (in my environment this user is “chef”) to push all the new developments (providers, recipes) to the git development branch; for this we will:

  • Allow the chef user to push its source to the git server through ssh (we are creating a chefworkstation user and adding the key to authorize this user to push the changes to the git repository with ssh).
  • gitlabchefworkst

  • Create a new repository called aix-cookbook.
  • createrepo

  • Push your current work to the master branch. The master branch will be the production branch.
  • # git config --global user.name "chefworkstation"
    # git config --global user.email "chef@myworkstation.chmod666.org"
    # git init
    # git add -A .
    # git commit -m "first commit"
    # git remote add origin git@gitlabserver:chefworkstation/aix-cookbook.git
    # git push origin master
    

    masterbranch

  • Create a development branch (you'll need to push all your new developments to this branch, and you'll never have to do anything else on the master branch as Jenkins is going to do the job for us).
  • # git checkout -b dev
    # git commit -a
    # git push origin dev
    

    devbranch

The git server is ready: we have a repository accessible by the chef user. Two branches were created: the dev one (the one we are working on, used for all our developments) and the master branch, used for production, which will never be touched by us and will only be updated (by Jenkins) if all the tests (foodcritic, rubocop and the test-kitchen) are ok.

Automating the continuous integration with Jenkins

What is Jenkins

The goal of Jenkins is to automate all the tests and run them over and over again every time a change is applied to the cookbook you are developing. By using Jenkins you will be sure that every change will be tested and you will never push something into your production environment that is not working or not passing the tests you have defined. To be sure the cookbook is working as desired we will use three different tools. foodcritic will check your chef cookbook for common problems against rules that are defined within the tool (these rules verify that everything is ok for the chef execution, so you will be sure that there is no syntax error and that all the coding conventions are respected), rubocop will check the ruby syntax, and then we will run a kitchen test to be sure that the development branch is working with the kitchen and that all our serverspec tests are ok. Jenkins will automate the following steps (a rough sketch of the whole chain, wrapped in a small script, follows the list):

  1. Pull the dev branch from git server (gitlab) if anything has changed on this branch.
  2. Run foodcritic on the code.
  3. If foodcritic tests are ok this will trigger the next step.
  4. Pull the dev branch again
  5. Run rubocop on the code.
  6. If rubocop tests are ok this will trigger the next step.
  7. Run the test-kitchen
  8. This will build a new machine on PowerVC and test the cookbook against it (kitchen test).
  9. If the test kitchen is ok push the dev branch to the master branch.
  10. You are ready for production :-)
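
To make the chain clearer, here is a rough sketch of the commands the three chained Jenkins jobs end up running, wrapped in a small python script for illustration (the commands and the dev to master push are the ones used in this post; this is not what Jenkins executes verbatim, just the logical sequence):

import subprocess

def run(cmd):
    # run one step of the chain and stop on the first failure
    print('running: %s' % ' '.join(cmd))
    subprocess.check_call(cmd)

run(['git', 'pull', 'origin', 'dev'])                      # get the latest dev branch
run(['foodcritic', '-f', 'correctness', './cookbooks/'])   # chef lint rules
run(['rubocop', '.'])                                      # ruby style checks
run(['kitchen', 'test'])                                   # build, converge and verify on PowerVC
run(['git', 'push', 'origin', 'dev:master'])               # promote dev to master (production)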

kitchen2

First: Foodcritic

The first test we are running is foodcritic. Rather than trying to do my own explanation of this with my weird English I prefer to quote the chef website:

Foodcritic is a static linting tool that analyzes all of the Ruby code that is authored in a cookbook against a number of rules, and then returns a list of violations. Because Foodcritic is a static linting tool, using it is fast. The code in a cookbook is read, broken down, and then compared to Foodcritic rules. The code is not run (a chef-client run does not occur). Foodcritic does not validate the intention of a recipe, rather it evaluates the structure of the code, and helps enforce specific behavior, detect portability of recipes, identify potential run-time failures, and spot common anti-patterns.

# foodcritic -f correctness ./cookbooks/
FC014: Consider extracting long ruby_block to library: ./cookbooks/aix/recipes/gem_source.rb:1

In Jenkins here are the steps to create a foodcritic test:

  • Pull dev branch from gitlab:
  • food1

  • Check for changes (the Jenkins test will be triggered only if there was a change in the git repository):
  • food2

  • Run foodcritic
  • food3

  • After the build parse the code (to archive and record the evolution of the foodcritic errors) and run the rubocop project if the build is stable (passed without any errors):
  • food4

  • To configure the parser go in the Jenkins configuration and add the foodcritic compiler warnings:
  • food5

Second: Rubocop

The second test we are running is rubocop; it's a Ruby static code analyzer, based on the community Ruby style guide. Here is an example below:

# rubocop .
Inspecting 71 files
..CCCCWWCWC.WC..CC........C.....CC.........C.C.....C..................C

Offenses:

cookbooks/aix/providers/fixes.rb:31:1: C: Assignment Branch Condition size for load_current_resource is too high. [20.15/15]
def load_current_resource
^^^
cookbooks/aix/providers/fixes.rb:31:1: C: Method has too many lines. [19/10]
def load_current_resource ...
^^^^^^^^^^^^^^^^^^^^^^^^^
cookbooks/aix/providers/sysdump.rb:11:1: C: Assignment Branch Condition size for load_current_resource is too high. [25.16/15]
def load_current_resource

In Jenkins here are the steps to create a rubocop test:

  • Do the same thing as foodcritic except for the build and post-build action steps:
  • Run rubocop:
  • rubo1

  • After the build parse the code and run the test-kitchen project even if the build fails (rubocop will generate tons of things to correct … once you are ok with rubocop change this to “trigger only if the build is stable”):
  • rubo2

Third: test-kitchen

I don't have to explain again what the test-kitchen is ;-) . It is the third test we are creating with Jenkins and if this one is ok we are pushing the changes to production:

  • Do the same thing as foodcritic except for the build and post-build action steps:
  • Run the test-kitchen:
  • kitchen1

  • If the test kitchen is ok push dev branch to master branch (dev to production):
  • kitchen3

More about Jenkins

The three tests are now linked together. On the Jenkins home page you can check the current state of your tests. Here are a couple of screenshots:

meteo
timeline

Conclusion

I know that for most of you working this way is something totally new. As AIX sysadmins we are used to our ksh and bash scripts and we like the way it is today. But as the world is changing and as you are going to manage more and more machines with fewer and fewer admins, you will understand how powerful it is to use automation and how powerful it is to work in a “continuous integration” way. Even if you don't like this “concept” or this new work habit … give it a try and you'll see that working this way is worth the effort. First for you … you'll discover a lot of new interesting things; second for your boss, who will discover that working this way is safer and more productive. Trust me, AIX needs to face Linux today and we are not going anywhere without having a proper fight versus the Linux guys :-) (yep it's a joke).

NovaLink ‘HMC Co-Management’ and PowerVC 1.3.0.1 Dynamic Resource Optimizer

Everybody now knows that I’m using PowerVC a lot in my current company. My environment is growing bigger and bigger and we are now managing more than 600 virtual machines with PowerVC (the goal is to reach ~ 3000 this year). Some of them were built by PowerVC itself and some of them were migrated through a homemade python script calling the PowerVC rest api, moving our old vSCSI machines to the new full NPIV/Live Partition Mobility/PowerVC environment (still struggling with the “old men” to move to SSP, but I’m alone versus everybody on this one). I’m happy with that but (there is always a but) I’m facing a lot of problems. The first one is that we are doing more and more things with PowerVC (virtual machine creation, virtual machine resizing, adding additional disks, moving machines with LPM, and finally using this python script to migrate the old machines to the new environment). I realized that the machine hosting PowerVC was getting slower and slower: the more actions we performed, the more “unresponsive” PowerVC became. By this I mean that the GUI was slow and creating objects took longer and longer. By looking at the CPU graphs in lpar2rrd we noticed that the CPU consumption was growing as fast as the activity on PowerVC (check the graph below). The second problem was my teams (unfortunately for me, we have different teams doing different sorts of things here and everybody is using the Hardware Management Consoles their own way: some people are renaming machines, making them unusable with PowerVC; some people are changing the profiles, disabling the synchronization; even worse, we have some third party tools used for capacity planning making the Hardware Management Console unusable by PowerVC). The solution to all these problems is to use NovaLink and especially the NovaLink Co-Management. By doing this the Hardware Management Consoles will be restricted to a read-only view and PowerVC will stop querying the HMCs: it will directly query the NovaLink partition on each host instead.

cpu_powervc

What is NovaLink ?

If you are using PowerVC you know that it is based on OpenStack. Until now all the OpenStack services were running on the PowerVC host. If you check on the PowerVC host today you can see that there is one nova-compute process per managed host. In the example below I’m managing ten hosts so I have ten different nova-compute processes running:

# ps -ef | grep [n]ova-compute
nova       627     1 14 Jan16 ?        06:24:30 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10D6666.log
nova       649     1 14 Jan16 ?        06:30:25 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_65E6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_65E6666.log
nova       664     1 17 Jan16 ?        07:49:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1086666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1086666.log
nova       675     1 19 Jan16 ?        08:40:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_06D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_06D6666.log
nova       687     1 18 Jan16 ?        08:15:57 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6576666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6576666.log
nova       697     1 21 Jan16 ?        09:35:40 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6556666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6556666.log
nova       712     1 13 Jan16 ?        06:02:23 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10A6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10A6666.log
nova       728     1 17 Jan16 ?        07:49:02 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1016666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1016666.log
nova       752     1 17 Jan16 ?        07:34:45 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1036666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_1036666.log
nova       779     1 13 Jan16 ?        05:54:52 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6596666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_6596666.log
# ps -ef | grep [n]ova-compute | wc -l
10

The goal of NovaLink is to move these processes to a dedicated partition running on each managed host (each Power Systems server). This partition is called the NovaLink partition. It runs an Ubuntu 15.10 Linux OS (little endian) (so it is only available on POWER8 hosts) and is in charge of running the OpenStack nova processes. By doing that you distribute the load across all the NovaLink partitions instead of loading a single PowerVC host. Even better, my understanding is that the NovaLink partition is able to communicate directly with the FSP. By using NovaLink you will be able to stop using the Hardware Management Consoles and avoid their slowness. As the NovaLink partition is hosted on the host itself, the RMC connections can now use a direct link (IPv6) through the PowerVM hypervisor. No more RMC connection problems at all ;-), it’s just awesome. NovaLink allows you to choose between two modes of management:

  • Full Nova Management: you install your new host directly with NovaLink on it and you will not need a Hardware Management Console anymore (in this case the NovaLink installation is in charge of deploying the Virtual I/O Servers and the SEAs).
  • Nova Co-Management: your host is already installed and you give write access (setmaster) to the NovaLink partition; the Hardware Management Console is limited in this mode (you will not be able to create partitions or modify profiles anymore; it’s not a pure “read only” mode as you will still be able to start and stop the partitions and do a few things from the HMC, but you will be very limited).
  • You can still mix NovaLink and non-NovaLink managed hosts, and still have P7/P6 managed by HMCs, P8 managed by HMCs, P8 Nova Co-Managed and P8 full Nova Managed ;-).
  • Nova1

Prerequisites

As always, upgrade your systems to the latest code level if you want to use NovaLink and NovaLink Co-Management:

  • POWER8 only, with firmware version 840 (or later)
  • Virtual I/O Server 2.2.4.10 or later
  • For NovaLink co-management, HMC V8R8.4.0
  • Obviously install NovaLink on each NovaLink managed system (install the latest patch version of NovaLink)
  • PowerVC 1.3.0.1 or later

NovaLink installation on an existing system

I’ll show you here how to install a NovaLink partition on an existing, already deployed system. Installing a new system from scratch is also possible. My advice is to start by looking at this address: , and to check this youtube video showing how a system is installed from scratch:

The goal of this post is to show you how to set up a co-managed system on an already existing system with Virtual I/O Servers already deployed on the host. My advice is to be very careful. The first thing you’ll need to do is to create a partition (2 VP, 0.5 EC and 5 GB of memory) (I’m calling it nova in the example below) and use the virtual optical device to load the NovaLink system on it. In the example below the machine is “SSP” backed. Be very careful when doing that: set up the profile name and all the configuration before moving to co-managed mode … after that it will be harder for you to change things as the new pvmctl command will be very new to you:

# mkvdev -fbo -vadapter vhost0
vtopt0 Available
# lsrep
Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
    3059     1579 rootvg                   102272            73216

Name                                                  File Size Optical         Access
PowerVM_NovaLink_V1.1_122015.iso                           1479 None            rw
vopt_a19a8fbb57184aad8103e2c9ddefe7e7                         1 None            ro
# loadopt -disk PowerVM_NovaLink_V1.1_122015.iso -vtd vtopt0
# lsmap -vadapter vhost0 -fmt :
vhost0:U8286.41A.21AFF8V-V2-C40:0x00000003:nova_b1:Available:0x8100000000000000:nova_b1.7f863bacb45e3b32258864e499433b52: :N/A:vtopt0:Available:0x8200000000000000:/var/vio/VMLibrary/PowerVM_NovaLink_V1.1_122015.iso: :N/A
  • At the grub page select the first entry:
  • install1

  • Wait for the machine to boot:
  • install2

  • Choose to perform an installation:
  • install3

  • Accept the licenses
  • install4

  • Configure the padmin user:
  • install5

  • Enter your network configuration:
  • install6

  • Accept to install the Ubuntu system:
  • install8

  • You can then modify anything you want in the configuration file (in my case the timezone):
  • install9

    By default NovaLink (I think, not 100% sure) is designed to be installed on SAS disks, so without multipathing. If like me you decide to install the NovaLink partition in a “boot-on-san” lpar, my advice is to launch the installation without any multipathing enabled (only one vscsi adapter or one virtual fibre channel adapter). After the installation is completed, install the Ubuntu multipathd service and configure the second vscsi or virtual fibre channel adapter. If you don’t do that you may experience problems at installation time (RAID error). Please remember that you have to do that before enabling the co-management. Last thing about the installation: it may take a lot of time to finish, so be patient (especially during the preseed step).

install10

Updating to the latest code level

The iso file provided in the Entitled Software Support is not updated to the latest available NovaLink code. Make a copy of the official repository available at this address: ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian and serve its content with your own http server (use the command below to copy it):

# wget --mirror ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian
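
Any web server will do to serve this local mirror (apache, nginx, …). If you just want something quick and dirty, here is a minimal sketch using the Python standard library; the document root and the port are assumptions, the only requirement is that the Novalink/debian tree ends up reachable under the URL you will later put in sources.list:

# serve_mirror.py - quick and dirty http server for the local NovaLink mirror
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

DOCUMENT_ROOT = "/var/www/nova"  # hypothetical directory containing the wget mirror
PORT = 80                        # binding to port 80 requires root; 8080 works too

os.chdir(DOCUMENT_ROOT)          # SimpleHTTPRequestHandler serves the current directory
HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler).serve_forever()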

Modify the /etc/apt/sources.list (and sources.list.d) and comment out all the available deb repositories to keep only your copy:

root@nova:~# grep -v ^# /etc/apt/sources.list
deb http://deckard.lab.chmod666.org/nova/Novalink/debian novalink_1.0.0 non-free
root@nova:/etc/apt/sources.list.d# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  pvm-cli pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
6 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 165 MB of archives.
After this operation, 53.2 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pypowervm all 1.0.0.1-151203-1553 [363 kB]
Get:2 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-cli all 1.0.0.1-151202-864 [63.4 kB]
Get:3 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-core ppc64el 1.0.0.1-151202-1495 [2,080 kB]
Get:4 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-server ppc64el 1.0.0.1-151203-1563 [142 MB]
Get:5 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-app ppc64el 1.0.0.1-151203-1563 [21.1 MB]
Get:6 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-novalink ppc64el 1.0.0.1-151203-408 [1,738 B]
Fetched 165 MB in 7s (20.8 MB/s)
(Reading database ... 72094 files and directories currently installed.)
Preparing to unpack .../pypowervm_1.0.0.1-151203-1553_all.deb ...
Unpacking pypowervm (1.0.0.1-151203-1553) over (1.0.0.0-151110-1481) ...
Preparing to unpack .../pvm-cli_1.0.0.1-151202-864_all.deb ...
Unpacking pvm-cli (1.0.0.1-151202-864) over (1.0.0.0-151110-761) ...
Preparing to unpack .../pvm-core_1.0.0.1-151202-1495_ppc64el.deb ...
Removed symlink /etc/systemd/system/multi-user.target.wants/pvm-core.service.
Unpacking pvm-core (1.0.0.1-151202-1495) over (1.0.0.0-151111-1375) ...
Preparing to unpack .../pvm-rest-server_1.0.0.1-151203-1563_ppc64el.deb ...
Unpacking pvm-rest-server (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
Preparing to unpack .../pvm-rest-app_1.0.0.1-151203-1563_ppc64el.deb ...
Unpacking pvm-rest-app (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
Preparing to unpack .../pvm-novalink_1.0.0.1-151203-408_ppc64el.deb ...
Unpacking pvm-novalink (1.0.0.1-151203-408) over (1.0.0.0-151112-304) ...
Processing triggers for ureadahead (0.100.0-19) ...
ureadahead will be reprofiled on next reboot
Setting up pypowervm (1.0.0.1-151203-1553) ...
Setting up pvm-cli (1.0.0.1-151202-864) ...
Installing bash completion script /etc/bash_completion.d/python-argcomplete.sh
Setting up pvm-core (1.0.0.1-151202-1495) ...
addgroup: The group `pvm_admin' already exists.
Created symlink from /etc/systemd/system/multi-user.target.wants/pvm-core.service to /usr/lib/systemd/system/pvm-core.service.
0513-071 The ctrmc Subsystem has been added.
Adding /usr/lib/systemd/system/ctrmc.service for systemctl ...
0513-059 The ctrmc Subsystem has been started. Subsystem PID is 3096.
Setting up pvm-rest-server (1.0.0.1-151203-1563) ...
The user `wlp' is already a member of `pvm_admin'.
Setting up pvm-rest-app (1.0.0.1-151203-1563) ...
Setting up pvm-novalink (1.0.0.1-151203-408) ...

NovaLink and HMC Co-Management configuration

Before adding the hosts to PowerVC you still need to do the most important thing. After the installation is finished, enable the co-management mode to have a system managed by NovaLink while still connected to a Hardware Management Console:

  • Enable the powervm_mgmt_capable attribute on the nova partition:
  • # chsyscfg -r lpar -m br-8286-41A-2166666 -i "name=nova,powervm_mgmt_capable=1"
    # lssyscfg -r lpar -m br-8286-41A-2166666 -F name,powervm_mgmt_capable --filter "lpar_names=nova"
    nova,1
    
  • Enable co-management (please note here that you have to setmaster first (you’ll see that the curr_master_name is the HMC) and then relmaster (you’ll see that the curr_master_name is the NovaLink partition; this is the state we want to be in)):
  • # lscomgmt -m br-8286-41A-2166666
    is_master=null
    # chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
    # lscomgmt -m br-8286-41A-2166666
    is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
    # chcomgmt -m br-8286-41A-2166666 -o relmaster
    # lscomgmt -m br-8286-41A-2166666
    is_master=0,curr_master_name=nova,curr_master_mtms=3*8286-41A*2166666,curr_master_type=norm,pend_master_mtms=none
    

Going back to HMC managed system

You can go back to a Hardware Management Console managed system whenever you want (set the master to the HMC, delete the nova partition and release the master from the HMC).

# chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
# lscomgmt -m br-8286-41A-2166666
is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
# chlparstate -o shutdown -m br-8286-41A-2166666 --id 9 --immed
# rmsyscfg -r lpar -m br-8286-41A-2166666 --id 9
# chcomgmt -o relmaster -m br-8286-41A-2166666
# lscomgmt -m br-8286-41A-2166666
is_master=0,curr_master_mtms=none,curr_master_type=none,pend_master_mtms=none

Using NovaLink

After the installation you are now able to log in to the NovaLink partition (you can gain root access with the “sudo su -” command). A new command called pvmctl is available on the NovaLink partition allowing you to perform any action (stop and start virtual machines, list Virtual I/O Servers, ….). Before trying to add the host, double check that the pvmctl command is working ok.

padmin@nova:~$ pvmctl lpar list
Logical Partitions
+------+----+---------+-----------+---------------+------+-----+-----+
| Name | ID |  State  |    Env    |    Ref Code   | Mem  | CPU | Ent |
+------+----+---------+-----------+---------------+------+-----+-----+
| nova | 3  | running | AIX/Linux | Linux ppc64le | 8192 |  2  | 0.5 |
+------+----+---------+-----------+---------------+------+-----+-----+

Adding hosts

On the PowerVC side add the NovaLink host by choosing the NovaLink option:

addhostnovalink

Some deb packages (ibmpowervc-powervm) will be installed and configured on the NovaLink machine:

addhostnovalink3
addhostnovalink4

By doing this, you can check on each NovaLink machine that a nova-compute process is running (by adding the host, the deb packages were installed and configured on the NovaLink host):

# ps -ef | grep nova
nova      4392     1  1 10:28 ?        00:00:07 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf --log-file /var/log/nova/nova-compute.log
root      5218  5197  0 10:39 pts/1    00:00:00 grep --color=auto nova
# grep host_display_name /etc/nova/nova.conf
host_display_name = XXXX-8286-41A-XXXX
# tail -1 /var/log/apt/history.log
Start-Date: 2016-01-18  10:27:54
Commandline: /usr/bin/apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold -y install --force-yes --allow-unauthenticated ibmpowervc-powervm
Install: python-keystoneclient:ppc64el (1.6.0-2.ibm.ubuntu1, automatic), python-oslo.reports:ppc64el (0.1.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm:ppc64el (1.3.0.1), python-ceilometer:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), ibmpowervc-powervm-compute:ppc64el (1.3.0.1, automatic), nova-common:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-oslo.service:ppc64el (0.11.0-2.ibm.ubuntu1, automatic), python-oslo.rootwrap:ppc64el (2.0.0-1.ibm.ubuntu1, automatic), python-pycadf:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), python-nova:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-keystonemiddleware:ppc64el (2.4.1-2.ibm.ubuntu1, automatic), python-kafka:ppc64el (0.9.3-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-monitor:ppc64el (1.3.0.1, automatic), ibmpowervc-powervm-oslo:ppc64el (1.3.0.1, automatic), neutron-common:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-os-brick:ppc64el (0.4.0-1.ibm.ubuntu1, automatic), python-tooz:ppc64el (1.22.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-ras:ppc64el (1.3.0.1, automatic), networking-powervm:ppc64el (1.0.0.0-151109-25, automatic), neutron-plugin-ml2:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-ceilometerclient:ppc64el (1.5.0-1.ibm.ubuntu1, automatic), python-neutronclient:ppc64el (2.6.0-1.ibm.ubuntu1, automatic), python-oslo.middleware:ppc64el (2.8.0-1.ibm.ubuntu1, automatic), python-cinderclient:ppc64el (1.3.1-1.ibm.ubuntu1, automatic), python-novaclient:ppc64el (2.30.1-1.ibm.ubuntu1, automatic), python-nova-ibm-ego-resource-optimization:ppc64el (2015.1-201511110358, automatic), python-neutron:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), nova-compute:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), nova-powervm:ppc64el (1.0.0.1-151203-215, automatic), openstack-utils:ppc64el (2015.2.0-201511171223.ibm.ubuntu1.18, automatic), ibmpowervc-powervm-network:ppc64el (1.3.0.1, automatic), python-oslo.policy:ppc64el (0.5.0-1.ibm.ubuntu1, automatic), python-oslo.db:ppc64el (2.4.1-1.ibm.ubuntu1, automatic), python-oslo.versionedobjects:ppc64el (0.9.0-1.ibm.ubuntu1, automatic), python-glanceclient:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), ceilometer-common:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), openstack-i18n:ppc64el (2015.2-3.ibm.ubuntu1, automatic), python-oslo.messaging:ppc64el (2.1.0-2.ibm.ubuntu1, automatic), python-swiftclient:ppc64el (2.4.0-1.ibm.ubuntu1, automatic), ceilometer-powervm:ppc64el (1.0.0.0-151119-44, automatic)
End-Date: 2016-01-18  10:28:00

The command line interface

You can do ALL the stuff you were doing on the HMC using the pvmctl command. The syntax is pretty simple: pvmctl |OBJECT| |ACTION| where the OBJECT can be vios, vm, vea (virtual ethernet adapter), vswitch, lu (logical unit), or anything you want, and the ACTION can be list, delete, create, update. Here are a few examples (a small scripting sketch follows them):

  • List the Virtual I/O Servers:
  • # pvmctl vios list
    Virtual I/O Servers
    +--------------+----+---------+----------+------+-----+-----+
    |     Name     | ID |  State  | Ref Code | Mem  | CPU | Ent |
    +--------------+----+---------+----------+------+-----+-----+
    | s00ia9940825 | 1  | running |          | 8192 |  2  | 0.2 |
    | s00ia9940826 | 2  | running |          | 8192 |  2  | 0.2 |
    +--------------+----+---------+----------+------+-----+-----+
    
  • List the partitions (note the -d for display-fields allowing me to print some attributes):
  • # pvmctl vm list
    Logical Partitions
    +----------+----+----------+----------+----------+-------+-----+-----+
    |   Name   | ID |  State   |   Env    | Ref Code |  Mem  | CPU | Ent |
    +----------+----+----------+----------+----------+-------+-----+-----+
    | aix72ca> | 3  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    |   nova   | 4  | running  | AIX/Lin> | Linux p> |  8192 |  2  | 0.5 |
    | s00vl99> | 5  | running  | AIX/Lin> | Linux p> | 10240 |  2  | 0.2 |
    | test-59> | 6  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    +----------+----+----------+----------+----------+-------+-----+-----+
    # pvmctl vm list -d name id
    [..]
    # pvmctl vm list -i id=4 --display-fields LogicalPartition.name
    name=aix72-1-d3707953-00000090
    # pvmctl vm list  --display-fields LogicalPartition.name LogicalPartition.id LogicalPartition.srr_enabled SharedProcessorConfiguration.desired_virtual SharedProcessorConfiguration.uncapped_weight
    name=aix72capture,id=3,srr_enabled=False,desired_virtual=1,uncapped_weight=64
    name=nova,id=4,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=s00vl9940243,id=5,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=test-5925058d-0000008d,id=6,srr_enabled=False,desired_virtual=1,uncapped_weight=128
    
  • Delete the virtual adapter on the partition named nova (note the --parent-id option to select the partition) with a certain uuid which was found with “pvmctl vea list”:
  • # pvmctl vea delete --parent-id name=nova --object-id uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
    
  • Power off the lpar named aix72-2:
  • # pvmctl vm power-off -i name=aix72-2-536bf0f8-00000091
    Powering off partition aix72-2-536bf0f8-00000091, this may take a few minutes.
    Partition aix72-2-536bf0f8-00000091 power-off successful.
    
  • Delete the lpar named aix72-2:
  • # pvmctl vm delete -i name=aix72-2-536bf0f8-00000091
    
  • Delete the vswitch named MGMTVSWITCH:
  • # pvmctl vswitch delete -i name=MGMTVSWITCH
    
  • Open a console:
  • #  mkvterm --id 4
    vterm for partition 4 is active.  Press Control+] to exit.
    |
    Elapsed time since release of system processors: 57014 mins 10 secs
    [..]
    
  • Power on an lpar:
  • # pvmctl vm power-on -i name=aix72capture
    Powering on partition aix72capture, this may take a few minutes.
    Partition aix72capture power-on successful.
    
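
Last thing about the command line: as pvmctl is just a command, it is easy to drive from your own scripts while you get familiar with pypowervm (the python library shipped with NovaLink). Purely as an illustration, here is a minimal sketch wrapping two of the commands shown above with subprocess; the function names are mine, not an official API:

# pvm.py - tiny wrapper around the pvmctl commands shown above (a sketch only)
import subprocess

def pvmctl(*args):
    # Run a pvmctl command and return its standard output (raises on failure).
    return subprocess.run(
        ["pvmctl"] + list(args), check=True, capture_output=True, text=True
    ).stdout

def list_vm_names():
    # Same as: pvmctl vm list --display-fields LogicalPartition.name
    out = pvmctl("vm", "list", "--display-fields", "LogicalPartition.name")
    return [line.split("=", 1)[1] for line in out.splitlines() if line.startswith("name=")]

def power_off(name):
    # Same as: pvmctl vm power-off -i name=<lpar name>
    return pvmctl("vm", "power-off", "-i", "name=%s" % name)

if __name__ == "__main__":
    print(list_vm_names())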

Is this a dream? No more RMC connectivity problems

I’m 100% sure that you always have problems with RMC connectivity due to firewall issues, ports not opened, and IDS blocking incoming or outgoing RMC traffic. NovaLink is THE solution that will solve all the RMC problems forever. I’m not joking, it’s a major improvement for PowerVM. As the NovaLink partition is installed on each host, it can communicate through a dedicated IPv6 link with all the partitions hosted on the host. A dedicated virtual switch called MGMTSWITCH is used to allow the RMC flow to transit between all the lpars and the NovaLink partition. Of course this virtual switch must be created and one Virtual Ethernet Adapter must also be created on the NovaLink partition. These are the first two actions to do if you want to implement this solution. Before starting, here are a few things you need to know:

  • For security reasons the MGMTSWITCH must be created in VEPA mode. If you are not aware of what the VEPA and VEB modes are, here is a reminder:
  • In VEB mode all the partitions connected to the same vlan can communicate together. We do not want that as it is a security issue.
  • The VEPA mode gives us the ability to isolate lpars that are on the same subnet. lpar to lpar traffic is forced out of the machine. This is what we want.
  • The PVID for this VEPA network is 4094
  • The adapter in the NovaLink partition must be a trunk adapter.
  • It is mandatory to name the VEPA vswitch MGMTSWITCH.
  • At the lpar creation if the MGMTSWITCH exists a new Virtual Ethernet Adapter will be automatically created on the deployed lpar.
  • To be correctly configured the deployed lpar needs the latest level of rsct code (3.2.1.0 for now).
  • The latest cloud-init version must be deployed on the captured lpar used to make the image.
  • You don’t need to configure any address on this adapter: on the deployed lpars the adapter is configured with a link-local address (the IPv6 equivalent of the 169.254.0.0/16 addresses used in IPv4). Please note that any IPv6 interface has, by design, a link-local address.

mgmtswitch2

  • Create the virtual switch called MGMTSWITCH in Vepa mode:
  • # pvmctl vswitch create --name MGMTSWITCH --mode=Vepa
    # pvmctl vswitch list  --display-fields VirtualSwitch.name VirtualSwitch.mode 
    name=ETHERNET0,mode=Veb
    name=vdct,mode=Veb
    name=vdcb,mode=Veb
    name=vdca,mode=Veb
    name=MGMTSWITCH,mode=Vepa
    
  • Create a virtual ethernet adapter on the NovaLink partition with the PVID 4094 and a trunk priority set to 1 (it’s a trunk adapter). Note that we now have two adapters on the NovaLink partition (one in IPv4 (routable) and the other one in IPv6 (non-routable)):
  • # pvmctl vea create --pvid 4094 --vswitch MGMTSWITCH --trunk-pri 1 --parent-id name=nova
    # pvmctl vea list --parent-id name=nova
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=False
      is_trunk=False
      loc_code=U8286.41A.216666-V3-C2
      mac=EE3B84FD1402
      pvid=666
      slot=2
      uuid=05a91ab4-9784-3551-bb4b-9d22c98934e6
      vswitch_id=1
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=True
      is_trunk=True
      loc_code=U8286.41A.216666-V3-C34
      mac=B6F837192E63
      pvid=4094
      slot=34
      trunk_pri=1
      uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
      vswitch_id=4
    

    Configure the link-local IPv6 address in the NovaLink partition:

    # more /etc/network/interfaces
    [..]
    auto eth1
    iface eth1 inet manual
     up /sbin/ifconfig eth1 0.0.0.0
    # ifup eth1
    # ifconfig eth1
    eth1      Link encap:Ethernet  HWaddr b6:f8:37:19:2e:63
              inet6 addr: fe80::b4f8:37ff:fe19:2e63/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:1454 (1.4 KB)
              Interrupt:34
    

Capture an AIX host with the latest version of rsct (3.2.1.0 or later) and the latest version of cloud-init installed. This version of RMC/rsct handles this new feature, so it is mandatory to have it installed on the captured host. When PowerVC deploys a Virtual Machine on a NovaLink managed host with this version of rsct installed, a new adapter with PVID 4094 in the MGMTSWITCH virtual switch will be created and all the RMC traffic will use this adapter instead of your public IP address:

# lslpp -L rsct*
  Fileset                      Level  State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  rsct.core.auditrm          3.2.1.0    C     F    RSCT Audit Log Resource
                                                   Manager
  rsct.core.errm             3.2.1.0    C     F    RSCT Event Response Resource
                                                   Manager
  rsct.core.fsrm             3.2.1.0    C     F    RSCT File System Resource
                                                   Manager
  rsct.core.gui              3.2.1.0    C     F    RSCT Graphical User Interface
  rsct.core.hostrm           3.2.1.0    C     F    RSCT Host Resource Manager
  rsct.core.lprm             3.2.1.0    C     F    RSCT Least Privilege Resource
                                                   Manager
  rsct.core.microsensor      3.2.1.0    C     F    RSCT MicroSensor Resource
                                                   Manager
  rsct.core.rmc              3.2.1.1    C     F    RSCT Resource Monitoring and
                                                   Control
  rsct.core.sec              3.2.1.0    C     F    RSCT Security
  rsct.core.sensorrm         3.2.1.0    C     F    RSCT Sensor Resource Manager
  rsct.core.sr               3.2.1.0    C     F    RSCT Registry
  rsct.core.utils            3.2.1.1    C     F    RSCT Utilities

When this image is deployed, a new adapter is created in the MGMTSWITCH virtual switch and an IPv6 link-local address is configured on it. You can check the cloud-init activation output to see that the IPv6 address is configured at activation time:

# pvmctl vea list --parent-id name=aix72-2-0a0de5c5-00000095
--------------------------
| VirtualEthernetAdapter |
--------------------------
  is_tagged_vlan_supported=True
  is_trunk=False
  loc_code=U8286.41A.216666-V5-C32
  mac=FA620F66FF20
  pvid=3331
  slot=32
  uuid=7f1ec0ab-230c-38af-9325-eb16999061e2
  vswitch_id=1
--------------------------
| VirtualEthernetAdapter |
--------------------------
  is_tagged_vlan_supported=True
  is_trunk=False
  loc_code=U8286.41A.216666-V5-C33
  mac=46A066611B09
  pvid=4094
  slot=33
  uuid=560c67cd-733b-3394-80f3-3f2a02d1cb9d
  vswitch_id=4
# ifconfig -a
en0: flags=1e084863,14c0
        inet 10.10.66.66 netmask 0xffffff00 broadcast 10.14.33.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e084863,14c0
        inet6 fe80::c032:52ff:fe34:6e4f/64
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
sit0: flags=8100041
        inet6 ::10.10.66.66/96
[..]

Note that the link-local address is configured at activation time (addresses starting with fe80):

# more /var/log/cloud-init-output.log
[..]
auto eth1

iface eth1 inet6 static
    address fe80::c032:52ff:fe34:6e4f
    hwaddress ether c2:32:52:34:6e:4f
    netmask 64
    pre-up [ $(ifconfig eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}') = "c2:32:52:34:6e:4f" ]
        dns-search fr.net.intra
# entstat -d ent1 | grep -iE "switch|vlan"
Invalid VLAN ID Packets: 0
Port VLAN ID:  4094
VLAN Tag IDs:  None
Switch ID: MGMTSWITCH

To be sure everything is working correctly, here is a proof test. I’m taking down the en0 interface, on which the public IPv4 address is configured. Then I’m launching a tcpdump on en1 (the MGMTSWITCH adapter). Finally I’m resizing the Virtual Machine with PowerVC. AND EVERYTHING IS WORKING GREAT !!!! AWESOME !!! :-) (note the fe80 to fe80 communication):

# ifconfig en0 down detach ; tcpdump -i en1 port 657
tcpdump: WARNING: en1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
22:00:43.224964 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: S 4049792650:4049792650(0) win 65535 
22:00:43.225022 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: S 2055569200:2055569200(0) ack 4049792651 win 28560 
22:00:43.225051 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 1 win 32844 
22:00:43.225547 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 1:209(208) ack 1 win 32844 
22:00:43.225593 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: . ack 209 win 232 
22:00:43.225638 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 1:97(96) ack 209 win 232 
22:00:43.225721 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 209:377(168) ack 97 win 32844 
22:00:43.225835 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 97:193(96) ack 377 win 240 
22:00:43.225910 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 377:457(80) ack 193 win 32844 
22:00:43.226076 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 193:289(96) ack 457 win 240 
22:00:43.226154 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 457:529(72) ack 289 win 32844 
22:00:43.226210 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 289:385(96) ack 529 win 240 
22:00:43.226276 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 529:681(152) ack 385 win 32844 
22:00:43.226335 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 385:481(96) ack 681 win 249 
22:00:43.424049 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 481 win 32844 
22:00:44.725800 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88
22:00:44.726111 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
22:00:50.137605 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 632
22:00:50.137900 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
22:00:50.183108 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
22:00:51.683382 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
22:00:51.683661 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88

To be sure the security requirements are met, from the lpar I’m pinging the NovaLink host (the first ping), which answers, and then I’m pinging the second lpar (the second ping), which does not answer (and this is what we want !!!).

# ping fe80::d09e:aff:fecf:a868
PING fe80::d09e:aff:fecf:a868 (fe80::d09e:aff:fecf:a868): 56 data bytes
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=0 ttl=64 time=0.203 ms
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=1 ttl=64 time=0.206 ms
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=2 ttl=64 time=0.216 ms
^C
--- fe80::d09e:aff:fecf:a868 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
# ping fe80::44a0:66ff:fe61:1b09
PING fe80::44a0:66ff:fe61:1b09 (fe80::44a0:66ff:fe61:1b09): 56 data bytes
^C
--- fe80::44a0:66ff:fe61:1b09 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

PowerVC 1.3.0.1 Dynamic Resource Optimizer

In addition to the NovaLink part of this blog post I also wanted to talk about the killer app of 2016: Dynamic Resource Optimizer. This feature can be used on any PowerVC 1.3.0.1 managed hosts (you obviously need at least two hosts). DRO is in charge of re-balancing your Virtual Machines across all the available hosts (in the host-group). To sum up, if a host is experiencing a heavy load and reaching a certain amount of CPU consumption over a period of time, DRO will move your virtual machines to re-balance the load across all the available hosts (this is done at a host level). Here are a few details about DRO:

  • The DRO configuration is done at a host level.
  • You set up a threshold (in the capture below) that has to be reached to trigger the Live Partition Mobility or mobile cores movements (Power Enterprise Pool).
  • droo6
    droo3

  • To be triggered, this threshold must be reached a certain number of times (stabilization) over a period you define (run interval). Roughly speaking, with a run interval of 5 minutes and a stabilization of 3, the host has to stay above the threshold for three consecutive checks before DRO acts.
  • You can choose to move virtual machines using Live Partition Mobility, or to move “cores” using Power Enterprise Pools (you can do both; moving CPU will always be preferred over moving partitions).
  • DRO can be run in advise mode (nothing is done, a warning is raised in the new DRO events tab) or in active mode (which does the job and moves things).
    droo2
    droo1

  • Your most critical virtual machines can be excluded from DRO:
  • droo5

How does DRO choose which machines are moved

I have been running DRO in production for one month now and I had time to check what is going on behind the scenes. How does DRO choose which machines are moved when a Live Partition Mobility operation must be run to relieve a heavy load on a host? To find out I decided to launch 3 different cpuhog processes (16 forks, 4 VP, SMT4), which eat CPU resources, on three different lpars with 4 VP each. On PowerVC I can check that before launching these processes the CPU consumption is ok on this host (the three lpars are running on the same host):

droo4

# cat cpuhog.pl
#!/usr/bin/perl

print "eating the CPUs\n";

foreach $i (1..16) {
      $pid = fork();
      last if $pid == 0;
      print "created PID $pid\n";
}

while (1) {
      $x++;
}
# perl cpuhog.pl
eating the CPUs
created PID 47514604
created PID 22675712
created PID 3015584
created PID 21496152
created PID 25166098
created PID 26018068
created PID 11796892
created PID 33424106
created PID 55444462
created PID 65077976
created PID 13369620
created PID 10813734
created PID 56623850
created PID 19333542
created PID 58393312
created PID 3211988
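
For the record, if Perl is not your thing, the same CPU burner can be written in a few lines of Python (same idea: 16 forks plus the parent process, all spinning in a busy loop):

# cpuhog.py - python equivalent of the perl script above
import os

print("eating the CPUs")

for _ in range(16):
    pid = os.fork()
    if pid == 0:   # child: stop forking and go burn CPU in the loop below
        break
    print("created PID %d" % pid)

while True:        # parent and children all spin here
    pass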

After waiting a couple of minutes I realize that the virtual machines on which the cpuhog processes were launched are the ones that are migrated. So we can say that PowerVC moves the machines that are eating CPU (another strategy could have been to move the machines that are not eating CPU, letting the busy ones do their job without a mobility operation).

# errpt | head -3
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
A5E6DB96   0118225116 I S pmig           Client Partition Migration Completed
08917DC6   0118225116 I S pmig           Client Partition Migration Started

After the moves are done I can see that the load is now ok on the host. DRO has done the job for me and moved the lpars to meet the configured threshold ;-)

droo7dro_effect

The images below show a good example of the “power” of PowerVC and DRO. To update my Virtual I/O Servers to the latest version, the PowerVC maintenance mode was used to free up the Virtual I/O Servers. After leaving the maintenance mode, DRO did the job of re-balancing the Virtual Machines across all the hosts (the red arrows symbolize the maintenance mode actions and the purple ones the DRO actions). You can also see that some lpars were moved across 4 different hosts during this process. All these pictures are taken from real life experience on my production systems. This is not a lab environment, this is one part of my production. So yes, DRO and PowerVC 1.3.0.1 are production ready. Hell yes!

real1
real2
real3
real4
real5

Conclusion

As my environment is growing bigger, the next step for me will be to move to NovaLink on my P8 hosts. Please note that the NovaLink Co-Management feature is today a “TechPreview” but should be released GA very soon. Talking about DRO, I had been waiting for that for years and it finally happened. I can assure you that it is production ready; to prove it I’ll just give you this number: to upgrade my Virtual I/O Servers to the 2.2.4.10 release using PowerVC maintenance mode and DRO, more than 1000 Live Partition Mobility moves were performed without any outage, on production servers and during working hours. Nobody in my company was aware of this during the operations. It was a seamless experience for everybody.