Continuous integration for your Chef AIX cookbooks (using PowerVC, Jenkins, test-kitchen and gitlab)

My journey to integrate Chef on AIX is still going on and I'm working more than ever on these topics. I know that using such tools is not something widely adopted by AIX customers. But what I also know is that whatever happens you will, in a near -or distant- future, use an automation tool. These tools are so widely used in the Linux world that you just can't ignore them. The way you were managing your AIX ten years ago is not the same as what you are doing today, and what you do today will not be what you'll do in the future. The AIX world needs a facelift to survive. A huge step has already been done (and is still ongoing) with PowerVC, thanks to a fantastic team composed of very smart people at IBM (@amarteyp, @drewthorst, @jwcroppe, and all the other persons in this team!). The AIX world is now compatible with Openstack and with this other things are coming, such as automation. When all of these things are ready, AIX will be able to offer something comparable to Linux. Openstack and automation are the first bricks of what we call today "devops" (to be more specific it's the ops part of the devops word).

I will today focus on how to manage your AIX machines using Chef. By using the word "how" I mean what are the best practices and what infrastructure to build to start using Chef on AIX. If you remember my session about Chef on AIX at the IBM Technical University in Cannes, I was saying that by using Chef your infrastructure will be testable, repeatable, and versionable. We will focus in this blog post on how to do that. To test your AIX Chef cookbooks you will need to understand what the test kitchen is (we will use the test kitchen to drive PowerVC to build virtual machines on the fly and run the Chef recipes on them). To repeat this over and over, to be sure everything is working (code review, cookbook convergence) without having to do anything by hand, we will use Jenkins to automate these tests. Then, to version your cookbook development, we will use gitlab.

To better understand why I'm doing such a thing there is nothing better than a concrete example. My goal is to do all my AIX post-installation tasks using Chef (motd configuration, dns, devices attributes, fileset installation, enabling services … everything that you are today doing using korn shell scripts). Who has never experienced someone changing one of these scripts (most of the time without warning the other members of the team), resulting in a syntax error and then in an outage for all your new builds? Doing this is possible if you are in a little team creating one machine per month, but it is inconceivable in an environment driven by PowerVC where sysadmins are not doing anything "by hand". In such an environment, if someone makes this kind of error all the new builds fail … even worse, you'll probably not be aware of it until someone connecting to the machine reports the error (most of the time the final customer). By using continuous integration your AIX build will be tested at every change, all these changes will be stored in a git repository and, even better, you will not be able to put a change in production without passing all these tests. While this approach is close to mandatory for people using PowerVC today, people who are not can still do the same thing. By doing that you'll have a clean and proper AIX build (post-install) and such errors will not be possible anymore, so I highly encourage you to do this even if you are not adopting the Openstack way or even if today you don't see the benefits. In the future this effort will pay off. Trust me.

The test-kitchen

What is the kitchen

The test-kitchen is a tool that allows you to run your AIX Chef cookbooks and recipes in a quick way without having to do manual tasks. During the development of your recipes, if you don't use the test kitchen you'll have many tasks to do manually: build a virtual machine, install the chef client, copy the cookbook and the recipes, run them, check that everything is in the state that you want. Imagine doing that on different AIX versions (6.1, 7.1, 7.2) every time you change something in your post-installation recipes (I was doing that before and I can assure you that creating and destroying machines over and over and over is just a waste of time). The test kitchen is here to do the job for you. It will build the machine for you (using the PowerVC kitchen driver), install the chef-client (using an omnibus server), copy the content of your cookbook (the files), run a bunch of recipes (described in what we call suites) and then test it (using bats, or serverspec). You can configure your kitchen to test different kinds of images (6.1, 7.1, 7.2) and different suites (cookbooks, recipes) depending on the environment you want to test. By default the test kitchen uses a tool called Vagrant to build your VMs. Obviously Vagrant is not able to build an AIX machine; that's why we will use a modified version of the kitchen-openstack driver (modified by myself) called kitchen-powervc to build the virtual machines.

Installing the kitchen and the PowerVC driver

If you have access to an enterprise proxy you can directly download and install the gem files from your host (in my case this is a Linux on Power box … so Linux on Power is working great for this).

  • Install the test kitchen :
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 test-kitchen
    Successfully installed test-kitchen-1.7.2
    Parsing documentation for test-kitchen-1.7.2
    1 gem installed
    
  • Install kitchen-powervc :
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 kitchen-powervc
    Successfully installed kitchen-powervc-0.1.0
    Parsing documentation for kitchen-powervc-0.1.0
    1 gem installed
    
  • Install kitchen-openstack :
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 kitchen-openstack
    Successfully installed kitchen-openstack-3.0.0
    Fetching: fog-core-1.38.0.gem (100%)
    Successfully installed fog-core-1.38.0
    Fetching: fuzzyurl-0.8.0.gem (100%)
    Successfully installed fuzzyurl-0.8.0
    Parsing documentation for kitchen-openstack-3.0.0
    Installing ri documentation for kitchen-openstack-3.0.0
    Parsing documentation for fog-core-1.38.0
    Installing ri documentation for fog-core-1.38.0
    Parsing documentation for fuzzyurl-0.8.0
    Installing ri documentation for fuzzyurl-0.8.0
    3 gems installed
    

If you don't have access to an enterprise proxy you can still download the gems from home and install them on your work machine:

# gem install test-kitchen kitchen-powervc kitchen-openstack -i repo --no-ri --no-rdoc
# # copy the files (repo directory) on your destination machine
# gem install *.gem
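
Whichever method you used, a quick sanity check never hurts before going further (this just lists what is installed, nothing more):

# kitchen --version
# gem list | grep -E "test-kitchen|kitchen-powervc|kitchen-openstack"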

Setup the kitchen (.kitchen.yml file)

The kitchen configuration file is .kitchen.yml; when you run the kitchen command, kitchen will look for this file. You have to put it in the chef-repo (where the cookbook directory is; the kitchen copies the files from the cookbook to the test machine, that's why it's important to put this file at the root of the chef-repo). This file is split into different sections:

  • The driver section. In this section you will configure how to create the virtual machines, in our case how to connect to PowerVC (credentials, region). You'll also tell in this section which image you want to use (PowerVC images), which flavor (PowerVC template) and which network will be used at VM creation (please note that you can put some driver_config in the platform section, to tell which image or which ip you want to use for each specific platform):
    • name: the name of the driver (here powervc).
    • openstack*: the PowerVC url, user, password, region, domain.
    • image_ref: the name of the image (we will put this in driver_config in the platform section).
    • flavor_ref: the name of the PowerVC template used at the VM creation.
    • fixed_ip: the ip_address used for the virtual machine creation.
    • server_name_prefix: each vm created by the kitchen will be prefixed by this parameter.
    • network_ref: the name of the PowerVC vlan to be used at the machine creation.
    • public_key_path: The kitchen needs to connect to the machine with ssh, you need to provide the public key used.
    • private_key_path: Same but for the private key.
    • username: The ssh username (we will use root, but you can use another user and then tell the kitchen to use sudo)
    • user_data: The activation input used by cloud-init; in this one we put the public key to be sure we can access the machine without a password (it's the PowerVC activation input).
    • driver:
        name: powervc
        server_wait: 100
        openstack_username: "root"
        openstack_api_key: "root"
        openstack_auth_url: "https://mypowervc:5000/v3/auth/tokens"
        openstack_region: "RegionOne"
        openstack_project_domain: "Default"
        openstack_user_domain: "Default"
        openstack_project_name: "ibm-default"
        flavor_ref: "mytemplate"
        server_name_prefix: "chefkitchen"
        network_ref: "vlan666"
        public_key_path: "/home/chef/.ssh/id_dsa.pub"
        private_key_path: "/home/chef/.ssh/id_dsa"
        username: "root"
        user_data: userdata.txt
      
      #cloud-config
      ssh_authorized_keys:
        - ssh-dss AAAAB3NzaC1kc3MAAACBAIVZx6Pic+FyUisoNrm6Znxd48DQ/YGNRgsed+fc+yL1BVESyTU5kqnupS8GXG2I0VPMWN7ZiPnbT1Fe2D[..]
      
  • The provisioner section: This section can be used to specify whether you want to use chef-zero or chef-solo as a provisioner. You can also specify an omnibus url (used to download and install the chef-client at machine creation time). In my case the omnibus url is a link to an http server "serving" a script (install.sh) that installs the chef client fileset for AIX (more details later in the blog post). I'm also putting "sudo" to false as I'll connect with the root user:
  • provisioner:
      name: chef_solo
      chef_omnibus_url: "http://myomnibusserver:8080/chefclient/install.sh"
      sudo: false
    
  • The platform section: The platform section describes each platform that the test-kitchen can create (I'm putting here the image_ref and the fixed_ip for each platform: AIX 6.1, AIX 7.1, AIX 7.2):
  • platforms:
      - name: aix72
        driver_config:
          image_ref: "kitchen-aix72"
          fixed_ip: "10.66.33.234"
      - name: aix71
        driver_config:
          image_ref: "kitchen-aix71"
          fixed_ip: "10.66.33.235"
      - name: aix61
        driver_config:
          image_ref: "kitchen-aix61"
          fixed_ip: "10.66.33.236"
    
  • The suite section: this section describes which cookbook and which recipes you want to run on the machines created by the test-kitchen. For the simplicity of this example I'm just running two recipes, the first one called root_authorized_keys (creating the /root directory, changing the home directory of root and putting a public key in the .ssh directory) and the second one called gem_source (we will see later in the post why I'm also calling this recipe):
  • suites:
      - name: aixcookbook
        run_list:
        - recipe[aix::root_authorized_keys]
        - recipe[aix::gem_source]
        attributes: { gem_source: { add_urls: [ "http://10.14.66.100:8808" ], delete_urls: [ "https://rubygems.org/" ] } }
    
  • The busser section: this section describes how to run your tests (more details later in the post ;-) ):
  • busser:
      sudo: false
    

After configuring the kitchen you can check that the yml file is ok by listing what is configured in the kitchen:

# kitchen list
Instance           Driver   Provisioner  Verifier  Transport  Last Action
aixcookbook-aix72  Powervc  ChefSolo     Busser    Ssh        
aixcookbook-aix71  Powervc  ChefSolo     Busser    Ssh        
aixcookbook-aix61  Powervc  ChefSolo     Busser    Ssh        
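
kitchen list only shows the driver, provisioner and last action of each instance. If you want to see how all the sections of the .kitchen.yml are merged together for each instance, kitchen can also dump the full computed configuration (it is quite verbose):

# kitchen diagnose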

kitchen1
kitchen2

Anatomy of a kitchen run

A kitchen run is divided into five steps. First we create a virtual machine (the create action), then we install the chef-client (using an omnibus url) and run some recipes (converge), then we install the testing tools on the virtual machine (in my case serverspec) (setup) and we run the tests (verify). Finally, if everything was ok, we delete the virtual machine (destroy). Instead of running all these steps one by one you can use the "test" option. This one will do destroy, create, converge, setup, verify, destroy in one single pass. Let's check each step in detail (the sketch just below summarizes the corresponding CLI commands):
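
These are the standard kitchen subcommands; the instance name is built from the suite and platform names defined in the .kitchen.yml above (aixcookbook-aix72 in my case), and running the commands without an instance name acts on all instances:

# kitchen create aixcookbook-aix72     # build the VM through PowerVC
# kitchen converge aixcookbook-aix72   # install the chef-client and run the suite recipes
# kitchen setup aixcookbook-aix72      # install the test harness (busser/serverspec)
# kitchen verify aixcookbook-aix72     # run the serverspec tests
# kitchen destroy aixcookbook-aix72    # delete the VM
# kitchen test aixcookbook-aix72       # destroy, create, converge, setup, verify, destroy in one pass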

kitchen1

  • Create: This will create the virtual machine using PowerVC. If you choose to use the "fixed_ip" option in the .kitchen.yml file this ip will be chosen at machine creation time. If you prefer to pick an ip from the network (in the pool) don't set the "fixed_ip". You'll see the details in the picture below. You can at the end test the connectivity (transport) (ssh) to the machine using "kitchen login". The ssh public key was automatically added using the userdata.txt file used by cloud-init at machine creation time. After the machine is created you can use the "kitchen list" command to check that the machine was successfully created:
# kitchen create

kitchencreate3
kitchencreate1
kitchencreate2
kitchenlistcreate1

  • Converge: This will converge the kitchen (one more time: converge = chef-client installation and running chef-solo with the suite configuration describing which recipes will be launched). The converge action will download the chef client and install it on the machine (using the omnibus url) and run the recipes specified in the suite stanza of the .kitchen.yml file. Here is the script I use for the omnibus installation; this script is "served" by an http server:
  • # cat install.sh
    #!/usr/bin/ksh
    echo "[omnibus] [start] starting omnibus install"
    echo "[omnibus] downloading chef client http://chefomnibus:8080/chefclient/lastest"
    perl -le 'use LWP::Simple;getstore("http://chefomnibus:8080/chefclient/latest", "/tmp/chef.bff")'
    echo "[omnibus] installing chef client"
    installp -aXYgd /tmp/ chef
    echo "[omnibus] [end] ending omnibus install"
    
  • The http server is serving this install.sh file. Here is the httpd.conf configuration file for the omnibus installation on AIX:
  • # ls -l /apps/chef/chefclient
    total 647896
    -rw-r--r--    1 apache   apache     87033856 Dec 16 17:15 chef-12.1.2-1.powerpc.bff
    -rwxr-xr-x    1 apache   apache     91922944 Nov 25 00:24 chef-12.5.1-1.powerpc.bff
    -rw-------    2 apache   apache     76375040 Jan  6 11:23 chef-12.6.0-1.powerpc.bff
    -rwxr-xr-x    1 apache   apache          364 Apr 15 10:23 install.sh
    -rw-------    2 apache   apache     76375040 Jan  6 11:23 latest
    # cat httpd.conf
    [..]
     Alias /chefclient/ "/apps/chef/chefclient/"
     <Directory "/apps/chef/chefclient/">
         Options Indexes FollowSymlinks MultiViews
         AllowOverride None
         Require all granted
     </Directory>
         
    
# kitchen converge

kitchenconverge1
kitchenconverge2b
kitchenlistconverge1

  • Setup and verify: these actions will run a bunch of tests to verify the machine is in the state you want. The tests I am writing check that the root home directory was created and that the key was successfully put in the .ssh directory. In a few words you need to write tests checking that your recipes are working well (in Chef words: "check that the machine is in the correct state"). In my case I'm using serverspec to describe my tests (there are different tools used for testing, you can also use bats). To describe the test suite just create serverspec files (describing the tests) in the chef-repo directory (in ~/test/integration/<suite_name>/serverspec, in my case ~/test/integration/aixcookbook/serverspec). All the serverspec test files are suffixed by _spec:
  • # ls test/integration/aixcookbook/serverspec/
    root_authorized_keys_spec.rb  spec_helper.rb
    
  • The "_spec" files describe the tests that will be run by the kitchen. In my very simple tests here I'm just checking that my files exist and that the content of the authorized_keys file is the same as my public key (the key created by cloud-init on AIX is located in ~/.ssh and my test recipe here is changing the root home directory and putting the key in the right place). By looking at the file you can see that the serverspec language is very simple to understand:
  • # ls test/integration/aixcookbook/serverspec/
    root_authorized_keys_spec.rb  spec_helper.rb
    
    # cat spec_helper.rb
    require 'serverspec'
    set :backend, :exec
    # cat root_authorized_keys_spec.rb
    require 'spec_helper'
    
    describe file('/root/.ssh') do
      it { should exist }
      it { should be_directory }
      it { should be_owned_by 'root' }
    end
    
    describe file('/root/.ssh/authorized_keys') do
      it { should exist }
      it { should be_owned_by 'root' }
      it { should contain 'from="1[..]" ssh-rsa AAAAB3NzaC1[..]' }
    end
    
  • The kitchen will try to install the ruby gems needed for serverspec (serverspec needs to be installed on the server to run the automated tests). As my server has no connectivity to the internet I need to run my own gem server. I'm lucky: all the needed gems are installed on my chef workstation (if you have no internet access from the workstation use the tip described at the beginning of this blog post). I just need to run a local gem server by running "gem server" on the chef workstation. The server is listening on port 8808 and will serve all the needed gems:
  • # gem list | grep -E "busser|serverspec"
    busser (0.7.1)
    busser-bats (0.3.0)
    busser-serverspec (0.5.9)
    serverspec (2.31.1)
    # gem server
    Server started at http://0.0.0.0:8808
    
  • If you look at the output above you can see that the recipe gem_source was executed. This recipe changes the gem sources on the virtual machine (from https://rubygems.org to my own local server). In the .kitchen.yml file the urls to add to and remove from the gem sources are specified in the suite attributes:
  • # cat gem_source.rb
    ruby_block 'Changing gem source' do
      block do
        node['gem_source']['add_urls'].each do |url|
          current_sources = Mixlib::ShellOut.new('/opt/chef/embedded/bin/gem source')
          current_sources.run_command
          next if current_sources.stdout.include?(url)
          add = Mixlib::ShellOut.new("/opt/chef/embedded/bin/gem source --add #{url}")
          add.run_command
          Chef::Application.fatal!("Adding gem source #{url} failed #{add.status}") unless add.status == 0
          Chef::Log.info("Add gem source #{url}")
        end
    
        node['gem_source']['delete_urls'].each do |url|
          current_sources = Mixlib::ShellOut.new('/opt/chef/embedded/bin/gem source')
          current_sources.run_command
          next unless current_sources.stdout.include?(url)
          del = Mixlib::ShellOut.new("/opt/chef/embedded/bin/gem source --remove #{url}")
          del.run_command
          Chef::Application.fatal!("Removing gem source #{url} failed #{del.status}") unless del.status == 0
          Chef::Log.info("Remove gem source #{url}")
        end
      end
      action :run
    end
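
If you want to check by hand that this recipe did its job, you can log into a kitchen instance and list the gem sources seen by the chef embedded ruby (the same binary the recipe uses):

# kitchen login aixcookbook-aix72
# /opt/chef/embedded/bin/gem source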
    
# kitchen setup
# kitchen verify

kitchensetupeverify1
kitchenlistverfied1

  • Destroy: This will destroy the virtual machine on PowerVC.
# kitchen destroy

kitchendestroy1
kitchendestroy2
kitchenlistdestroy1

Now that you understand how the kitchen works and are able to run it to create and test AIX machines, you are ready to use the kitchen to develop and create the Chef cookbook that will fit your infrastructure. To run all the steps (create, converge, setup, verify, destroy), just use the "kitchen test" command:

# kitchen test

As you are going to change a lot of things in your cookbook you'll need to version the code you are creating; for this we will use a gitlab server.

Gitlab: version your AIX cookbook

Unfortunately for you and for me I didn't have the time to run gitlab on a Linux on Power machine. I'm sure it is possible (if you find a way to do this please mail me). Anyway my version of gitlab is running on an x86 box. The goal here is to allow the chef workstation user (in my environment this user is "chef") to push all the new developments (providers, recipes) to the git development branch. For this we will:

  • Allow the chef user to push its sources to the git server through ssh (we are creating a chefworkstation user and adding the key to authorize this user to push the changes to the git repository with ssh).
  • gitlabchefworkst

  • Create a new repository called aix-cookbook.
  • createrepo

  • Push your current work to the master branch. The master branch will be the production branch.
  • # git config --global user.name "chefworkstation"
    # git config --global user.email "chef@myworkstation.chmod666.org"
    # git init
    # git add -A .
    # git commit -m "first commit"
    # git remote add origin git@gitlabserver:chefworkstation/aix-cookbook.git
    # git push origin master
    

    masterbranch

  • Create a development branch (you'll need to push all your new developments to this branch, and you'll never have to do anything else on the master branch as Jenkins is going to do the job for us; a typical development loop is sketched right after this list):
  • # git checkout -b dev
    # git commit -a
    # git push origin dev
    

    devbranch
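
From now on a typical development loop looks like this (just a sketch; the recipe name is one of the recipes used earlier in this post, and Jenkins, configured in the next section, takes over as soon as the push reaches the dev branch):

# vi cookbooks/aix/recipes/root_authorized_keys.rb
# kitchen test
# git add -A .
# git commit -m "change root_authorized_keys"
# git push origin dev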

The git server is ready: we have a repository accessible by the chef user. Two branches were created: the dev one (the one we are working on, used for all our development) and the master branch, used for production, which will never be touched by us and will only be updated (by Jenkins) if all the tests (foodcritic, rubocop and the test-kitchen) are ok.

Automating the continuous integration with Jenkins

What is Jenkins

The goal of Jenkins is to automate all tests and run them over and over again every time a change is applied to the cookbook you are developing. By using Jenkins you will be sure that every change is tested and you will never push something into your production environment that is not working or not passing the tests you have defined. To be sure the cookbook is working as desired we will use three different tools. foodcritic will check your Chef cookbook for common problems using rules that are defined within the tool (these rules check that everything is ok for the Chef execution, so you will be sure that there is no syntax error and that the coding conventions are respected), rubocop will check the ruby syntax, and then we will run a kitchen test to be sure that the development branch is working with the kitchen and that all our serverspec tests are ok. Jenkins will automate the following steps (a shell sketch of the commands behind these jobs follows the list):

  1. Pull the dev branch from git server (gitlab) if anything has changed on this branch.
  2. Run foodcritic on the code.
  3. If foodcritic tests are ok this will trigger the next step.
  4. Pull the dev branch again
  5. Run rubocop on the code.
  6. If rubocop tests are ok this will trigger the next step.
  7. Run the test-kitchen
  8. This will build a new machine on PowerVC and test the cookbook against it (kitchen test).
  9. If the test kitchen is ok push the dev branch to the master branch.
  10. You are ready for production :-)
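
Stripped of the Jenkins plumbing, the three chained jobs boil down to a few shell commands run from the chef-repo workspace. This is only a sketch (I'm assuming freestyle jobs with a shell build step; the screenshots below show the real configuration): the first job runs foodcritic, the second rubocop, the third the test-kitchen and, if it is green, pushes dev to master:

# git checkout dev && git pull origin dev
# foodcritic -f correctness ./cookbooks/
# rubocop .
# kitchen test
# git push origin dev:master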

kitchen2

First: Foodcritic

The first test we are running is foodcritic. Rather than trying to write my own explanation of this with my weird English, I prefer to quote the Chef website:

Foodcritic is a static linting tool that analyzes all of the Ruby code that is authored in a cookbook against a number of rules, and then returns a list of violations. Because Foodcritic is a static linting tool, using it is fast. The code in a cookbook is read, broken down, and then compared to Foodcritic rules. The code is not run (a chef-client run does not occur). Foodcritic does not validate the intention of a recipe, rather it evaluates the structure of the code, and helps enforce specific behavior, detect portability of recipes, identify potential run-time failures, and spot common anti-patterns.

# foodcritic -f correctness ./cookbooks/
FC014: Consider extracting long ruby_block to library: ./cookbooks/aix/recipes/gem_source.rb:1

In Jenkins here are the steps to create a foodcritic test:

  • Pull dev branch from gitlab:
  • food1

  • Check for changes (the Jenkins test will be triggered only if there was a change in the git repository):
  • food2

  • Run foodcritic
  • food3

  • After the build parse the code (to archive and record the evolution of the foodcritic errors) and run the rubocop project if the build is stable (passed without any errors):
  • food4

  • To configure the parser go in the Jenkins configuration and add the foodcritic compiler warnings:
  • food5

Second: Rubocop

The second test we are running is rubocop. It's a Ruby static code analyzer, based on the community Ruby style guide. Here is an example below:

# rubocop .
Inspecting 71 files
..CCCCWWCWC.WC..CC........C.....CC.........C.C.....C..................C

Offenses:

cookbooks/aix/providers/fixes.rb:31:1: C: Assignment Branch Condition size for load_current_resource is too high. [20.15/15]
def load_current_resource
^^^
cookbooks/aix/providers/fixes.rb:31:1: C: Method has too many lines. [19/10]
def load_current_resource ...
^^^^^^^^^^^^^^^^^^^^^^^^^
cookbooks/aix/providers/sysdump.rb:11:1: C: Assignment Branch Condition size for load_current_resource is too high. [25.16/15]
def load_current_resource

In Jenkins here are the steps to create a rubocop test:

  • Do the same thing as foodcritic except for the build and post-build action steps:
  • Run rubocop:
  • rubo1

  • After the build, parse the code and run the test-kitchen project even if the build fails (rubocop will generate tons of things to correct … once you are ok with rubocop change this to "trigger only if the build is stable"; see the sketch after this list for a way to relax the noisiest rules):
  • rubo2
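
One assumption on my side (the cop names are deduced from the offense messages above, double check them against your rubocop version): a .rubocop.yml at the root of the chef-repo is the usual way to relax the noisiest rules while you progressively clean up the code:

# cat > .rubocop.yml << 'EOF'
Metrics/MethodLength:
  Max: 25
Metrics/AbcSize:
  Max: 30
EOF
# rubocop .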

Third: test-kitchen

I don't have to explain again what the test-kitchen is ;-). It is the third test we are creating with Jenkins, and if this one is ok we push the changes to production:

  • Do the same thing as foodcritic except for the build and post-build action steps:
  • Run the test-kitchen:
  • kitchen1

  • If the test kitchen is ok push dev branch to master branch (dev to production):
  • kitchen3

More about Jenkins

The three tests are now linked together. On the Jenkins home page you can check the current state of your tests. Here are a couple of screenshots:

meteo
timeline

Conclusion

I know that for most of you working this way is something totally new. As AIX sysadmins we are used to our ksh and bash scripts and we like the way it is today. But the world is changing, and as you are going to manage more and more machines with fewer and fewer admins you will understand how powerful it is to use automation and to work in a "continuous integration" way. Even if you don't like this "concept" or this new work habit … give it a try and you'll see that working this way is worth the effort. First for you … you'll discover a lot of new interesting things; second for your boss, who will discover that working this way is safer and more productive. Trust me, AIX needs to face Linux today and we are not going anywhere without a proper fight versus the Linux guys :-) (yep, it's a joke).

NovaLink ‘HMC Co-Management’ and PowerVC 1.3.0.1 Dynamic Resource Optimizer

Everybody now knows that I'm using PowerVC a lot in my current company. My environment is growing bigger and bigger and we are now managing more than 600 virtual machines with PowerVC (the goal is to reach ~3000 this year). Some of them were built by PowerVC itself and some of them were migrated through a homemade python script calling the PowerVC rest api and moving our old vSCSI machines to the new full NPIV/Live Partition Mobility/PowerVC environment (still struggling with the "old men" to move to SSP, but I'm alone versus everybody on this one). I'm happy with that but (there is always a but) I'm facing a lot of problems. The first one is that we are doing more and more things with PowerVC (virtual machine creation, virtual machine resizing, adding additional disks, moving machines with LPM, and finally using this python script to migrate the old machines to the new environment). I realized that the machine hosting PowerVC was getting slower and slower: the more actions we do, the more "unresponsive" PowerVC becomes. By this I mean that the GUI was slow and creating objects was slower and slower. By looking at CPU graphs in lpar2rrd we noticed that the CPU consumption was growing as fast as we were doing things on PowerVC (check the graph below). The second problem is my teams (unfortunately for me, we have different teams doing different sorts of things here and everybody is using the Hardware Management Consoles its own way: some people are renaming the machines, making them unusable with PowerVC, some people are changing the profiles, disabling the synchronization, and even worse we have some third party tools used for capacity planning making the Hardware Management Console unusable by PowerVC). The solution to all these problems is to use NovaLink and especially the NovaLink Co-Management. By doing this the Hardware Management Consoles will be restricted to a read-only view and PowerVC will stop querying the HMCs and will directly query the NovaLink partition on each host instead.

cpu_powervc

What is NovaLink ?

If you are using PowerVC you know that it is based on OpenStack. Until now all the Openstack services were running on the PowerVC host. If you check on the PowerVC host today you can see that there is one nova-compute process per managed host. In the example below I'm managing ten hosts so I have ten different nova-compute processes running:

# ps -ef | grep [n]ova-compute
nova       627     1 14 Jan16 ?        06:24:30 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10D6666.log
nova       649     1 14 Jan16 ?        06:30:25 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_65E6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_65E6666.log
nova       664     1 17 Jan16 ?        07:49:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1086666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1086666.log
nova       675     1 19 Jan16 ?        08:40:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_06D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_06D6666.log
nova       687     1 18 Jan16 ?        08:15:57 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6576666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6576666.log
nova       697     1 21 Jan16 ?        09:35:40 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6556666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6556666.log
nova       712     1 13 Jan16 ?        06:02:23 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10A6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10A6666.log
nova       728     1 17 Jan16 ?        07:49:02 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1016666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1016666.log
nova       752     1 17 Jan16 ?        07:34:45 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1036666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_1036666.log
nova       779     1 13 Jan16 ?        05:54:52 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6596666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_6596666.log
# ps -ef | grep [n]ova-compute | wc -l
10

The goal of NovaLink is to move these processes to a dedicated partition running on each managed host (each Power Systems server). This partition is called the NovaLink partition. It runs an Ubuntu 15.10 Linux OS (little endian) (so it is only available on Power8 hosts) and is in charge of running the Openstack nova processes. By doing that you distribute the load across all the NovaLink partitions instead of loading a single PowerVC host. Even better, my understanding is that the NovaLink partition is able to communicate directly with the FSP. By using NovaLink you will be able to stop using the Hardware Management Consoles and avoid their slowness. As the NovaLink partition is hosted on the host itself, the RMC connections can now use a direct link (ipv6) through the PowerHypervisor. No more RMC connection problems at all ;-), it's just awesome. NovaLink allows you to choose between two modes of management:

  • Full Nova Management: You install your new host directly with NovaLink on it and you will not need a Hardware Management Console anymore (in this case the NovaLink installation is in charge of deploying the Virtual I/O Servers and the SEAs).
  • Nova Co-Management: Your host is already installed and you give the write access (setmaster) to the NovaLink partition. The Hardware Management Console will be limited in this mode (you will not be able to create partitions or modify profiles anymore; it's not a strict "read only" mode as you will still be able to start and stop the partitions and do a few other things with the HMC, but you will be very limited).
  • You can still mix NovaLink and non-NovaLink managed hosts, and still have P7/P6 managed by HMCs, P8 managed by HMCs, P8 Nova co-managed and P8 full Nova managed ;-).
  • Nova1

Prerequisites

As always, upgrade your systems to the latest code level if you want to use NovaLink and NovaLink Co-Management:

  • Power 8 only, with firmware version 840 or later
  • Virtual I/O Server 2.2.4.10 or later
  • For NovaLink co-management HMC V8R8.4.0
  • Obviously install NovaLink on each NovaLink managed system (install the latest patch version of NovaLink)
  • PowerVC 1.3.0.1 or later

NovaLink installation on an existing system

I'll show you here how to install a NovaLink partition on an existing deployed system. Installing a new system from scratch is also possible. My advice is to start by looking at the official NovaLink documentation, and to check the youtube video showing how a system is installed from scratch.

The goal of this post is to show you how to set up a co-managed system on an already existing system with Virtual I/O Servers already deployed on the host. My advice is to be very careful. The first thing you'll need to do is to create a partition (2 VP, 0.5 EC and 5 GB of memory) (I'm calling it nova in the example below) and use a Virtual Optical device to load the NovaLink system on it. In the example below the machine is "SSP" backed. Be very careful when doing this: set up the profile name and all the configuration details before moving to co-managed mode … after that it will be harder for you to change things as the new pvmctl command will be very new to you:

# mkvdev -fbo -vadapter vhost0
vtopt0 Available
# lsrep
Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
    3059     1579 rootvg                   102272            73216

Name                                                  File Size Optical         Access
PowerVM_NovaLink_V1.1_122015.iso                           1479 None            rw
vopt_a19a8fbb57184aad8103e2c9ddefe7e7                         1 None            ro
# loadopt -disk PowerVM_NovaLink_V1.1_122015.iso -vtd vtopt0
# lsmap -vadapter vhost0 -fmt :
vhost0:U8286.41A.21AFF8V-V2-C40:0x00000003:nova_b1:Available:0x8100000000000000:nova_b1.7f863bacb45e3b32258864e499433b52: :N/A:vtopt0:Available:0x8200000000000000:/var/vio/VMLibrary/PowerVM_NovaLink_V1.1_122015.iso: :N/A
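
Once the installation below is finished you can unload the ISO and remove the virtual optical device from the Virtual I/O Server (a small cleanup sketch, reusing the vtopt0 device created above):

# unloadopt -vtd vtopt0
# rmvdev -vtd vtopt0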
  • At the grub page select the first entry:
  • install1

  • Wait for the machine to boot:
  • install2

  • Choose to perform an installation:
  • install3

  • Accept the licenses
  • install4

  • Set the padmin user:
    install5

  • Enter your network configuration:
  • install6

  • Accept to install the Ubuntu system:
  • install8

  • You can then modify anything you want in the configuration file (in my case the timezone):
  • install9

    By default NovaLink (I think, not 100% sure) is designed to be installed on SAS disks, so without multipathing. If like me you decide to install the NovaLink partition in a "boot-on-san" lpar, my advice is to launch the installation without any multipathing enabled (only one vscsi adapter or one virtual fibre channel adapter). After the installation is completed, install the Ubuntu multipathd service and configure the second vscsi or virtual fibre channel adapter (a sketch follows the screenshot below). If you don't do that you may experience problems at installation time (RAID error). Please remember that you have to do this before enabling the co-management. Last thing about the installation: it may take a lot of time to finish, so be patient (especially the preseed step).

install10
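
Here is a minimal sketch of the multipathing part on the Ubuntu side, assuming the second vscsi or virtual fibre channel adapter has already been added to the partition and that the stock Ubuntu multipath-tools package is good enough for you:

# apt-get install multipath-tools
# systemctl enable multipathd
# systemctl start multipathd
# multipath -ll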

Updating to the latest code level

The iso file provided in the Entitled Software Support is not updated to the latest available NovaLink code. Make a copy of the official repository available at this address: ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian. Serve the content of this ftp server on your own http server (use the command below to copy it):

# wget --mirror ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian

Modify the /etc/apt/sources.list (and sources.list.d) and comment out all the available deb repositories to keep only your copy:

root@nova:~# grep -v ^# /etc/apt/sources.list
deb http://deckard.lab.chmod666.org/nova/Novalink/debian novalink_1.0.0 non-free
root@nova:/etc/apt/sources.list.d# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  pvm-cli pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
6 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 165 MB of archives.
After this operation, 53.2 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pypowervm all 1.0.0.1-151203-1553 [363 kB]
Get:2 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-cli all 1.0.0.1-151202-864 [63.4 kB]
Get:3 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-core ppc64el 1.0.0.1-151202-1495 [2,080 kB]
Get:4 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-server ppc64el 1.0.0.1-151203-1563 [142 MB]
Get:5 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-app ppc64el 1.0.0.1-151203-1563 [21.1 MB]
Get:6 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-novalink ppc64el 1.0.0.1-151203-408 [1,738 B]
Fetched 165 MB in 7s (20.8 MB/s)
(Reading database ... 72094 files and directories currently installed.)
Preparing to unpack .../pypowervm_1.0.0.1-151203-1553_all.deb ...
Unpacking pypowervm (1.0.0.1-151203-1553) over (1.0.0.0-151110-1481) ...
Preparing to unpack .../pvm-cli_1.0.0.1-151202-864_all.deb ...
Unpacking pvm-cli (1.0.0.1-151202-864) over (1.0.0.0-151110-761) ...
Preparing to unpack .../pvm-core_1.0.0.1-151202-1495_ppc64el.deb ...
Removed symlink /etc/systemd/system/multi-user.target.wants/pvm-core.service.
Unpacking pvm-core (1.0.0.1-151202-1495) over (1.0.0.0-151111-1375) ...
Preparing to unpack .../pvm-rest-server_1.0.0.1-151203-1563_ppc64el.deb ...
Unpacking pvm-rest-server (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
Preparing to unpack .../pvm-rest-app_1.0.0.1-151203-1563_ppc64el.deb ...
Unpacking pvm-rest-app (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
Preparing to unpack .../pvm-novalink_1.0.0.1-151203-408_ppc64el.deb ...
Unpacking pvm-novalink (1.0.0.1-151203-408) over (1.0.0.0-151112-304) ...
Processing triggers for ureadahead (0.100.0-19) ...
ureadahead will be reprofiled on next reboot
Setting up pypowervm (1.0.0.1-151203-1553) ...
Setting up pvm-cli (1.0.0.1-151202-864) ...
Installing bash completion script /etc/bash_completion.d/python-argcomplete.sh
Setting up pvm-core (1.0.0.1-151202-1495) ...
addgroup: The group `pvm_admin' already exists.
Created symlink from /etc/systemd/system/multi-user.target.wants/pvm-core.service to /usr/lib/systemd/system/pvm-core.service.
0513-071 The ctrmc Subsystem has been added.
Adding /usr/lib/systemd/system/ctrmc.service for systemctl ...
0513-059 The ctrmc Subsystem has been started. Subsystem PID is 3096.
Setting up pvm-rest-server (1.0.0.1-151203-1563) ...
The user `wlp' is already a member of `pvm_admin'.
Setting up pvm-rest-app (1.0.0.1-151203-1563) ...
Setting up pvm-novalink (1.0.0.1-151203-408) ...
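
A quick check of the installed NovaLink package levels once the upgrade is done (just a dpkg query):

# dpkg -l | grep -E "pvm-|pypowervm"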

NovaLink and HMC Co-Management configuration

Before adding the hosts to PowerVC you still need to do the most important thing. After the installation is finished, enable the co-management mode to be able to have a system managed by NovaLink and still connected to a Hardware Management Console:

  • Enable the powervm_mgmt_capable attribute on the nova partition:
  • # chsyscfg -r lpar -m br-8286-41A-2166666 -i "name=nova,powervm_mgmt_capable=1"
    # lssyscfg -r lpar -m br-8286-41A-2166666 -F name,powervm_mgmt_capable --filter "lpar_names=nova"
    nova,1
    
  • Enable co-management (please note here that you have to setmaster (you'll see that the curr_master_name is the HMC) and then relmaster (you'll see that the curr_master_name is the NovaLink partition; this is the state we want to be in)):
  • # lscomgmt -m br-8286-41A-2166666
    is_master=null
    # chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
    # lscomgmt -m br-8286-41A-2166666
    is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
    # chcomgmt -m br-8286-41A-2166666 -o relmaster
    # lscomgmt -m br-8286-41A-2166666
    is_master=0,curr_master_name=nova,curr_master_mtms=3*8286-41A*2166666,curr_master_type=norm,pend_master_mtms=none
    

Going back to HMC managed system

You can go back to a Hardware Management Console managed system whenever you want (set the master to the HMC, delete the nova partition and release the master from the HMC).

# chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
# lscomgmt -m br-8286-41A-2166666
is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
# chlparstate -o shutdown -m br-8286-41A-2166666 --id 9 --immed
# rmsyscfg -r lpar -m br-8286-41A-2166666 --id 9
# chcomgmt -o relmaster -m br-8286-41A-2166666
# lscomgmt -m br-8286-41A-2166666
is_master=0,curr_master_mtms=none,curr_master_type=none,pend_master_mtms=none

Using NovaLink

After the installation you are now able to log in on the NovaLink partition (you can gain root access with the "sudo su -" command). A new command called pvmctl is available on the NovaLink partition allowing you to perform any action (stop and start virtual machines, list Virtual I/O Servers, …). Before trying to add the host, double check that the pvmctl command is working ok.

padmin@nova:~$ pvmctl lpar list
Logical Partitions
+------+----+---------+-----------+---------------+------+-----+-----+
| Name | ID |  State  |    Env    |    Ref Code   | Mem  | CPU | Ent |
+------+----+---------+-----------+---------------+------+-----+-----+
| nova | 3  | running | AIX/Linux | Linux ppc64le | 8192 |  2  | 0.5 |
+------+----+---------+-----------+---------------+------+-----+-----+

Adding hosts

On the PowerVC side add the NovaLink host by choosing the NovaLink option:

addhostnovalink

Some deb packages (ibmpowervc-powervm) will be installed and configured on the NovaLink machine:

addhostnovalink3
addhostnovalink4

By doing this, on each NovaLink machine you can check that a nova-compute process is running (by adding the host, the deb packages were installed and configured on the NovaLink host):

# ps -ef | grep nova
nova      4392     1  1 10:28 ?        00:00:07 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf --log-file /var/log/nova/nova-compute.log
root      5218  5197  0 10:39 pts/1    00:00:00 grep --color=auto nova
# grep host_display_name /etc/nova/nova.conf
host_display_name = XXXX-8286-41A-XXXX
# tail -1 /var/log/apt/history.log
Start-Date: 2016-01-18  10:27:54
Commandline: /usr/bin/apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold -y install --force-yes --allow-unauthenticated ibmpowervc-powervm
Install: python-keystoneclient:ppc64el (1.6.0-2.ibm.ubuntu1, automatic), python-oslo.reports:ppc64el (0.1.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm:ppc64el (1.3.0.1), python-ceilometer:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), ibmpowervc-powervm-compute:ppc64el (1.3.0.1, automatic), nova-common:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-oslo.service:ppc64el (0.11.0-2.ibm.ubuntu1, automatic), python-oslo.rootwrap:ppc64el (2.0.0-1.ibm.ubuntu1, automatic), python-pycadf:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), python-nova:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-keystonemiddleware:ppc64el (2.4.1-2.ibm.ubuntu1, automatic), python-kafka:ppc64el (0.9.3-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-monitor:ppc64el (1.3.0.1, automatic), ibmpowervc-powervm-oslo:ppc64el (1.3.0.1, automatic), neutron-common:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-os-brick:ppc64el (0.4.0-1.ibm.ubuntu1, automatic), python-tooz:ppc64el (1.22.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-ras:ppc64el (1.3.0.1, automatic), networking-powervm:ppc64el (1.0.0.0-151109-25, automatic), neutron-plugin-ml2:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-ceilometerclient:ppc64el (1.5.0-1.ibm.ubuntu1, automatic), python-neutronclient:ppc64el (2.6.0-1.ibm.ubuntu1, automatic), python-oslo.middleware:ppc64el (2.8.0-1.ibm.ubuntu1, automatic), python-cinderclient:ppc64el (1.3.1-1.ibm.ubuntu1, automatic), python-novaclient:ppc64el (2.30.1-1.ibm.ubuntu1, automatic), python-nova-ibm-ego-resource-optimization:ppc64el (2015.1-201511110358, automatic), python-neutron:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), nova-compute:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), nova-powervm:ppc64el (1.0.0.1-151203-215, automatic), openstack-utils:ppc64el (2015.2.0-201511171223.ibm.ubuntu1.18, automatic), ibmpowervc-powervm-network:ppc64el (1.3.0.1, automatic), python-oslo.policy:ppc64el (0.5.0-1.ibm.ubuntu1, automatic), python-oslo.db:ppc64el (2.4.1-1.ibm.ubuntu1, automatic), python-oslo.versionedobjects:ppc64el (0.9.0-1.ibm.ubuntu1, automatic), python-glanceclient:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), ceilometer-common:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), openstack-i18n:ppc64el (2015.2-3.ibm.ubuntu1, automatic), python-oslo.messaging:ppc64el (2.1.0-2.ibm.ubuntu1, automatic), python-swiftclient:ppc64el (2.4.0-1.ibm.ubuntu1, automatic), ceilometer-powervm:ppc64el (1.0.0.0-151119-44, automatic)
End-Date: 2016-01-18  10:28:00

The command line interface

You can do ALL the things you were doing on the HMC using the pvmctl command. The syntax is pretty simple: pvmctl OBJECT ACTION, where OBJECT can be vios, vm, vea (virtual ethernet adapter), vswitch, lu (logical unit), or anything you want, and ACTION can be list, delete, create or update. Here are a few examples:

  • List the Virtual I/O Servers:
  • # pvmctl vios list
    Virtual I/O Servers
    +--------------+----+---------+----------+------+-----+-----+
    |     Name     | ID |  State  | Ref Code | Mem  | CPU | Ent |
    +--------------+----+---------+----------+------+-----+-----+
    | s00ia9940825 | 1  | running |          | 8192 |  2  | 0.2 |
    | s00ia9940826 | 2  | running |          | 8192 |  2  | 0.2 |
    +--------------+----+---------+----------+------+-----+-----+
    
  • List the partitions (note the -d for display-fields allowing me to print some attributes):
  • # pvmctl vm list
    Logical Partitions
    +----------+----+----------+----------+----------+-------+-----+-----+
    |   Name   | ID |  State   |   Env    | Ref Code |  Mem  | CPU | Ent |
    +----------+----+----------+----------+----------+-------+-----+-----+
    | aix72ca> | 3  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    |   nova   | 4  | running  | AIX/Lin> | Linux p> |  8192 |  2  | 0.5 |
    | s00vl99> | 5  | running  | AIX/Lin> | Linux p> | 10240 |  2  | 0.2 |
    | test-59> | 6  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    +----------+----+----------+----------+----------+-------+-----+-----+
    # pvmctl list vm -d name id 
    [..]
    # pvmctl vm list -i id=4 --display-fields LogicalPartition.name
    name=aix72-1-d3707953-00000090
    # pvmctl vm list  --display-fields LogicalPartition.name LogicalPartition.id LogicalPartition.srr_enabled SharedProcessorConfiguration.desired_virtual SharedProcessorConfiguration.uncapped_weight
    name=aix72capture,id=3,srr_enabled=False,desired_virtual=1,uncapped_weight=64
    name=nova,id=4,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=s00vl9940243,id=5,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=test-5925058d-0000008d,id=6,srr_enabled=False,desired_virtual=1,uncapped_weight=128
    
  • Delete the virtual adapter on the partition named nova (note the --parent-id to select the partition) with a certain uuid which was found with pvmctl vea list:
  • # pvmctl vea delete --parent-id name=nova --object-id uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
    
  • Power off the lpar named aix72-2:
  • # pvmctl vm power-off -i name=aix72-2-536bf0f8-00000091
    Powering off partition aix72-2-536bf0f8-00000091, this may take a few minutes.
    Partition aix72-2-536bf0f8-00000091 power-off successful.
    
  • Delete the lpar named aix72-2:
  • # pvmctl vm delete -i name=aix72-2-536bf0f8-00000091
    
  • Delete the vswitch named MGMTVSWITCH:
  • # pvmctl vswitch delete -i name=MGMTVSWITCH
    
  • Open a console:
  • #  mkvterm --id 4
    vterm for partition 4 is active.  Press Control+] to exit.
    |
    Elapsed time since release of system processors: 57014 mins 10 secs
    [..]
    
  • Power on an lpar:
  • # pvmctl vm power-on -i name=aix72capture
    Powering on partition aix72capture, this may take a few minutes.
    Partition aix72capture power-on successful.
    

Is this a dream? No more RMC connectivity problems

I'm 100% sure that you have always had problems with RMC connectivity due to firewall issues, ports not opened, or IDS blocking RMC incoming or outgoing traffic. NovaLink is THE solution that will solve all the RMC problems forever. I'm not joking, it's a major improvement for PowerVM. As the NovaLink partition is installed on each host, it can communicate through a dedicated IPv6 link with all the partitions hosted on the host. A dedicated virtual switch called MGMTSWITCH is used to allow the RMC flow to transit between all the lpars and the NovaLink partition. Of course this virtual switch must be created and one Virtual Ethernet Adapter must also be created on the NovaLink partition. These are the first two actions to do if you want to implement this solution. Before starting here are a few things you need to know:

  • For security reasons the MGMTSWITCH must be created in Vepa mode. If you are not aware of what VEPA and VEB modes are, here is a reminder:
  • In VEB mode all the partitions connected to the same vlan can communicate together. We do not want that as it is a security issue.
  • The VEPA mode gives us the ability to isolate lpars that are on the same subnet. lpar to lpar traffic is forced out of the machine. This is what we want.
  • The PVID for this VEPA network is 4094
  • The adapter in the NovaLink partition must be a trunk adapter.
  • It is mandatory to name the VEPA vswitch MGMTSWITCH.
  • At the lpar creation if the MGMTSWITCH exists a new Virtual Ethernet Adapter will be automatically created on the deployed lpar.
  • To be correctly configured the deployed lpar needs the latest level of rsct code (3.2.1.0 for now).
  • The latest cloud-init version must be deployed on the captured lpar used to make the image.
  • You don't need to configure any addresses on this adapter. On the deployed lpars the adapter is configured with a link-local address (it's the same thing as the 169.254.0.0/16 addresses used in IPv4, but for IPv6); please note that any IPv6 adapter must "by design" have a link-local address.

mgmtswitch2

  • Create the virtual switch called MGMTSWITCH in Vepa mode:
  • # pvmctl vswitch create --name MGMTSWITCH --mode=Vepa
    # pvmctl vswitch list  --display-fields VirtualSwitch.name VirtualSwitch.mode 
    name=ETHERNET0,mode=Veb
    name=vdct,mode=Veb
    name=vdcb,mode=Veb
    name=vdca,mode=Veb
    name=MGMTSWITCH,mode=Vepa
    
  • Create a virtual ethernet adapter on the NovaLink partition with the PVID 4094 and a trunk priority set to 1 (it's a trunk adapter). Note that we now have two adapters on the NovaLink partition (one in IPv4 (routable) and the other one in IPv6 (non-routable)):
  • # pvmctl vea create --pvid 4094 --vswitch MGMTSWITCH --trunk-pri 1 --parent-id name=nova
    # pvmctl vea list --parent-id name=nova
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=False
      is_trunk=False
      loc_code=U8286.41A.216666-V3-C2
      mac=EE3B84FD1402
      pvid=666
      slot=2
      uuid=05a91ab4-9784-3551-bb4b-9d22c98934e6
      vswitch_id=1
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=True
      is_trunk=True
      loc_code=U8286.41A.216666-V3-C34
      mac=B6F837192E63
      pvid=4094
      slot=34
      trunk_pri=1
      uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
      vswitch_id=4
    

    Configure the link-local IPv6 address in the NovaLink partition:

    # more /etc/network/interfaces
    [..]
    auto eth1
    iface eth1 inet manual
     up /sbin/ifconfig eth1 0.0.0.0
    # ifup eth1
    # ifconfig eth1
    eth1      Link encap:Ethernet  HWaddr b6:f8:37:19:2e:63
              inet6 addr: fe80::b4f8:37ff:fe19:2e63/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:1454 (1.4 KB)
              Interrupt:34
    

Capture an AIX host with the latest version of rsct (3.2.1.0 or later) and the latest version of cloud-init installed. This version of RMC/rsct handles this new feature, so it is mandatory to have it installed on the captured host. When PowerVC deploys a virtual machine on a Nova managed host with this version of rsct installed, a new adapter with the PVID 4094 in the virtual switch MGMTSWITCH will be created, and all the RMC traffic will use this adapter instead of your public IP address:

# lslpp -L rsct*
  Fileset                      Level  State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  rsct.core.auditrm          3.2.1.0    C     F    RSCT Audit Log Resource
                                                   Manager
  rsct.core.errm             3.2.1.0    C     F    RSCT Event Response Resource
                                                   Manager
  rsct.core.fsrm             3.2.1.0    C     F    RSCT File System Resource
                                                   Manager
  rsct.core.gui              3.2.1.0    C     F    RSCT Graphical User Interface
  rsct.core.hostrm           3.2.1.0    C     F    RSCT Host Resource Manager
  rsct.core.lprm             3.2.1.0    C     F    RSCT Least Privilege Resource
                                                   Manager
  rsct.core.microsensor      3.2.1.0    C     F    RSCT MicroSensor Resource
                                                   Manager
  rsct.core.rmc              3.2.1.1    C     F    RSCT Resource Monitoring and
                                                   Control
  rsct.core.sec              3.2.1.0    C     F    RSCT Security
  rsct.core.sensorrm         3.2.1.0    C     F    RSCT Sensor Resource Manager
  rsct.core.sr               3.2.1.0    C     F    RSCT Registry
  rsct.core.utils            3.2.1.1    C     F    RSCT Utilities

When this image is deployed a new adapter is created in the MGMTSWITCH virtual switch and an IPv6 link-local address is configured on it. You can check the cloud-init activation log to verify that the IPv6 address is configured at activation time:

# pvmctl vea list --parent-id name=aix72-2-0a0de5c5-00000095
--------------------------
| VirtualEthernetAdapter |
--------------------------
  is_tagged_vlan_supported=True
  is_trunk=False
  loc_code=U8286.41A.216666-V5-C32
  mac=FA620F66FF20
  pvid=3331
  slot=32
  uuid=7f1ec0ab-230c-38af-9325-eb16999061e2
  vswitch_id=1
--------------------------
| VirtualEthernetAdapter |
--------------------------
  is_tagged_vlan_supported=True
  is_trunk=False
  loc_code=U8286.41A.216666-V5-C33
  mac=46A066611B09
  pvid=4094
  slot=33
  uuid=560c67cd-733b-3394-80f3-3f2a02d1cb9d
  vswitch_id=4
# ifconfig -a
en0: flags=1e084863,14c0
        inet 10.10.66.66 netmask 0xffffff00 broadcast 10.14.33.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e084863,14c0
        inet6 fe80::c032:52ff:fe34:6e4f/64
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
sit0: flags=8100041
        inet6 ::10.10.66.66/96
[..]

Note that the link-local address is configured at activation time (addresses starting with fe80):

# more /var/log/cloud-init-output.log
[..]
auto eth1

iface eth1 inet6 static
    address fe80::c032:52ff:fe34:6e4f
    hwaddress ether c2:32:52:34:6e:4f
    netmask 64
    pre-up [ $(ifconfig eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}') = "c2:32:52:34:6e:4f" ]
        dns-search fr.net.intra
# entstat -d ent1 | grep -iE "switch|vlan"
Invalid VLAN ID Packets: 0
Port VLAN ID:  4094
VLAN Tag IDs:  None
Switch ID: MGMTSWITCH

To be sure all is working correctly here is a proof test. I take down the en0 interface on which the public IPv4 address is configured, then I launch a tcpdump on en1 (the adapter on the MGMTSWITCH virtual switch). Finally I resize the Virtual Machine with PowerVC. AND EVERYTHING IS WORKING GREAT !!!! AWESOME !!! :-) (note the fe80 to fe80 communication):

# ifconfig en0 down detach ; tcpdump -i en1 port 657
tcpdump: WARNING: en1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
22:00:43.224964 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: S 4049792650:4049792650(0) win 65535 
22:00:43.225022 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: S 2055569200:2055569200(0) ack 4049792651 win 28560 
22:00:43.225051 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 1 win 32844 
22:00:43.225547 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 1:209(208) ack 1 win 32844 
22:00:43.225593 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: . ack 209 win 232 
22:00:43.225638 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 1:97(96) ack 209 win 232 
22:00:43.225721 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 209:377(168) ack 97 win 32844 
22:00:43.225835 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 97:193(96) ack 377 win 240 
22:00:43.225910 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 377:457(80) ack 193 win 32844 
22:00:43.226076 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 193:289(96) ack 457 win 240 
22:00:43.226154 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 457:529(72) ack 289 win 32844 
22:00:43.226210 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 289:385(96) ack 529 win 240 
22:00:43.226276 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 529:681(152) ack 385 win 32844 
22:00:43.226335 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 385:481(96) ack 681 win 249 
22:00:43.424049 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 481 win 32844 
22:00:44.725800 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88
22:00:44.726111 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
22:00:50.137605 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 632
22:00:50.137900 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
22:00:50.183108 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
22:00:51.683382 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
22:00:51.683661 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88

To be sure the security requirements are met, from the lpar I ping the NovaLink partition (the first ping), which answers, and then I ping the second lpar (the second ping), which does not answer. (And this is what we want !!!).

# ping fe80::d09e:aff:fecf:a868
PING fe80::d09e:aff:fecf:a868 (fe80::d09e:aff:fecf:a868): 56 data bytes
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=0 ttl=64 time=0.203 ms
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=1 ttl=64 time=0.206 ms
64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=2 ttl=64 time=0.216 ms
^C
--- fe80::d09e:aff:fecf:a868 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
# ping fe80::44a0:66ff:fe61:1b09
PING fe80::44a0:66ff:fe61:1b09 (fe80::44a0:66ff:fe61:1b09): 56 data bytes
^C
--- fe80::44a0:66ff:fe61:1b09 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

PowerVC 1.3.0.1 Dynamic Resource Optimizer

In addition to the NovaLink part of this blog post I also wanted to talk about the killer app of 2016: Dynamic Resource Optimizer. This feature can be used on any PowerVC 1.3.0.1 managed host (you obviously need at least two hosts). DRO is in charge of re-balancing your Virtual Machines across all the available hosts (in the host group). To sum up, if a host is experiencing a heavy load and reaching a certain amount of CPU consumption over a period of time, DRO will move your virtual machines to re-balance the load across all the available hosts (this is done at the host level). Here are a few details about DRO:

  • The DRO configuration is done at the host level.
  • You set up a threshold (in the capture below) that must be reached to trigger Live Partition Mobility moves or mobile core movements (Power Enterprise Pool).
  • droo6
    droo3

  • To be triggered, this threshold must be reached a certain number of times (stabilization) over a period you define (run interval).
  • You can choose to move virtual machines using Live Partition Mobility, or to move “cores” using Power Enterprise Pool (you can do both; moving CPU will always be preferred over moving partitions).
  • DRO can be run in advise mode (nothing is done, a warning is raised in the new DRO events tab) or in active mode (which does the job and moves things).
    droo2
    droo1

  • Your most critical virtual machines can be excluded from DRO:
  • droo5

How does DRO choose which machines are moved?

I have been running DRO in production for one month now and I had the time to check what is going on behind the scenes. How does DRO choose which machines are moved when a Live Partition Mobility operation must be run to handle a heavy load on a host? To find out I decided to launch 3 different cpuhog processes (16 forks, 4VP, SMT4), which eat CPU resources, on three different lpars with 4VP each. On the PowerVC side I can check that before launching these processes the CPU consumption is ok on this host (the three lpars are running on the same host):

droo4

# cat cpuhog.pl
#!/usr/bin/perl

print "eating the CPUs\n";

# fork 16 children; each child leaves the loop right away and falls through
# to the busy loop below (the parent reaches it too once the 16 forks are done)
foreach $i (1..16) {
      $pid = fork();
      last if $pid == 0;
      print "created PID $pid\n";
}

# infinite busy loop burning CPU in every process
while (1) {
      $x++;
}
# perl cpuhog.pl
eating the CPUs
created PID 47514604
created PID 22675712
created PID 3015584
created PID 21496152
created PID 25166098
created PID 26018068
created PID 11796892
created PID 33424106
created PID 55444462
created PID 65077976
created PID 13369620
created PID 10813734
created PID 56623850
created PID 19333542
created PID 58393312
created PID 3211988

After waiting a couple of minutes I realize that the virtual machines on which the cpuhog processes were launched are the ones which are migrated. So we can say that PowerVC is moving the machines that are eating CPU (another strategy could be to move all the machines that are not eating CPU, to let the working ones do their job without a mobility operation).

# errpt | head -3
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
A5E6DB96   0118225116 I S pmig           Client Partition Migration Completed
08917DC6   0118225116 I S pmig           Client Partition Migration Started

After the moves are completed I can see that the load is now ok on the host. DRO has done the job for me and moved the lpars to meet the configured threshold ;-)

droo7
dro_effect

The images below show you a good example of the “power” of PowerVC and DRO. To update my Virtual I/O Servers to the latest version the PowerVC maintenance mode was used to free up the Virtual I/O Servers. After leaving the maintenance mode DRO did the job of re-balancing the Virtual Machines across all the hosts (the red arrows symbolize the maintenance mode actions and the purple ones the DRO actions). You can also see that some lpars were moved across 4 different hosts during this process. All these pictures are taken from real life experience on my production systems. This is not a lab environment, this is one part of my production. So yes, DRO and PowerVC 1.3.0.1 are production ready. Hell yes!

real1
real2
real3
real4
real5

Conclusion

As my environment is growing bigger the next step for me will be to move to NovaLink on my P8 hosts. Please note that the NovaLink Co-Management feature is today a “TechPreview” but should go GA very soon. Talking about DRO, I had been waiting for this for years and it finally happened. I can assure you that it is production ready; to prove this I’ll just give you one number. To upgrade my Virtual I/O Servers to the 2.2.4.10 release using PowerVC maintenance mode and DRO, more than 1000 Live Partition Mobility moves were performed without any outage on production servers and during working hours. Nobody in my company was aware of this during the operations. It was a seamless experience for everybody.

Tips and tricks for PowerVC 1.2.3 (PVID, ghostdev, clouddev, rest API, growing volumes, deleting boot volume) | PowerVC 1.2.3 Redbook

Writing a Redbook was one of my main goals. After working days and nights for more than 6 years on Power Systems, IBM gave me the opportunity to write a Redbook. I had been looking at the Redbook residencies page for a very very long time to find the right one. As there was nothing new on AIX and PowerVM (which are my favorite topics) I decided to give a try to the latest PowerVC Redbook (this Redbook is an update, but a huge one; PowerVC is moving fast). I have been a Redbook reader since I started working on AIX. Almost all Redbooks are good, and most of them are the best source of information for AIX and Power administrators. I’m sure that like me, you saw that part about becoming an author every time you read a Redbook. I can now say THAT IT IS POSSIBLE (for everyone). I’m now one of these guys and you can also become one. Just find the Redbook that fits you and apply on the Redbook webpage (http://www.redbooks.ibm.com/residents.nsf/ResIndex). I wanted to say a BIG thank you to all the people who gave me the opportunity to do that, especially Philippe Hermes, Jay Kruemcke, Eddie Shvartsman, Scott Vetter, Thomas R Bosthworth. In addition to these people I also wanted to thank my teammates on this Redbook: Guillermo Corti, Marco Barboni and Liang Xu; they are all true professionals, very skilled and open … this was a great team! One more time, thank you guys. Last, I take the opportunity here to thank the people who believed in me since the very beginning of my AIX career: Julien Gabel, Christophe Rousseau, and JL Guyot. Thank you guys! You deserve it, stay as you are. I’m not an anonymous guy anymore.

redbook

You can download the Redbook at this address: http://www.redbooks.ibm.com/redpieces/pdfs/sg248199.pdf. I’ve learned something during the writing of the Redbook and by talking to the members of the team. Redbooks are not there to tell and explain what’s “behind the scenes”. A Redbook cannot be too long and needs to be written in about 3 weeks, so there is no place for everything. Some topics are better integrated in a blog post than in a Redbook, and Scott told me that a couple of times during the writing session. I totally agree with him. So here is this long awaited blog post. These are advanced topics about PowerVC, so read the Redbook before reading this post.

One last thank you to IBM (and just IBM) for believing in me :-). THANK YOU SO MUCH.

ghostdev, clouddev and cloud-init (ODM wipe if using inactive live partition mobility or remote restart)

Everybody who is using cloud-init should be aware of this. Cloud-init is only supported with AIX versions that have the clouddev attribute available on sys0. To be totally clear, at the time of writing this blog post you will be supported by IBM only if you use AIX 7.1 TL3 SP5 or AIX 6.1 TL9 SP5. All other versions are not supported by IBM. Let me explain why, and how you can still use cloud-init on older versions with a little trick. But let’s first explain what the problem is:

Let’s say you have different machines, some of them using AIX 7100-03-05 and some of them using 7100-03-04, and both use cloud-init for the activation. By looking at the cloud-init code at this address here we can say that:

  • After the cloud-init installation cloud-init is:
  • Changing clouddev to 1 if sys0 has a clouddev attribute:
  • # oslevel -s
    7100-03-05-1524
    # lsattr -El sys0 -a ghostdev
    ghostdev 0 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    clouddev 1 N/A True
    
  • Changing ghostdev to 1 if sys0 doesn’t have a clouddev attribute:
  • # oslevel -s
    7100-03-04-1441
    # lsattr -El sys0 -a ghostdev
    ghostdev 1 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    lsattr: 0514-528 The "clouddev" attribute does not exist in the predefined
            device configuration database.
    

This behavior can directly be observed in the cloud-init code:

ghostdev_clouddev_cloudinit
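
In short the logic shown in the capture above boils down to something like this (a shell sketch of what cloud-init does at installation time, not the actual Python code):

# sketch: if sys0 exposes a clouddev attribute use it, otherwise fall back to ghostdev
if lsattr -El sys0 -a clouddev >/dev/null 2>&1; then
  chdev -l sys0 -a clouddev=1
else
  chdev -l sys0 -a ghostdev=1
fi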

Now that we are aware of that, let’s run a remote restart test between two P8 boxes. I take the opportunity here to present you one of the coolest features of PowerVC 1.2.3. You can now remote restart your virtual machines directly from the PowerVC GUI if one of your hosts is in a failure state. I highly encourage you to check my latest post about this subject if you don’t know how to set up remote restartable partitions http://chmod666.org/index.php/using-the-simplified-remote-restart-capability-on-power8-scale-out-servers/:

  • Only the simplified remote restart capability can be managed by PowerVC 1.2.3; the “normal” version of remote restart is not handled by PowerVC 1.2.3.
  • In the compute template configuration there is now a checkbox allowing you to create remote restartable partitions. Be careful: you can’t go back to a P7 box without having to reboot the machine, so be sure your Virtual Machines will stay on a P8 box if you check this option.
  • remote_restart_compute_template

  • When the machine is shut down or there is a problem with it you can click the “Remotely Restart Virtual Machines” button:
  • rr1

  • Select the machines you want to remote restart:
  • rr2
    rr3

  • While the Virtual Machines are remote restarting, you can check the states of the VM and the state of the host:
  • rr4
    rr5

  • After the evacuation the host is in “Remote Restart Evacuated State”:

rr6

Let’s now check the state of our two Virtual Machines:

  • The ghostdev one (the sys0 message in the errpt indicates that the partition ID has changed AND THE DEVICES ARE RECREATED (ODM wipe)) (no more ip address set on en0):
  • # errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A6DF45AA   0803171115 I O RMCdaemon      The daemon is started.
    1BA7DF4E   0803171015 P S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    D872C399   0803171015 I O sys0           Partition ID changed and devices recreat
    # ifconfig -a
    lo0: flags=e08084b,c0
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
    
  • The clouddev one (the sys0 message in the errpt indicates that the partition ID has changed) (note that the errpt message does not indicate that the devices were recreated):
  • # errpt |more
    60AFC9E5   0803232015 I O sys0           Partition ID changed since last boot.
    # ifconfig -a
    en0: flags=1e084863,480
            inet 10.10.10.20 netmask 0xffffff00 broadcast 10.244.248.63
             tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
    lo0: flags=e08084b,c0
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
    

VSAE is designed to manage ghostdev-only OSes; on the other hand cloud-init is designed to manage clouddev OSes. To be perfectly clear, here is how ghostdev and clouddev work. But we first need to answer a question: why do we need to set clouddev or ghostdev to 1? The answer is pretty obvious: one of these attributes needs to be set to 1 before capturing the Virtual Machine. When you deploy a new Virtual Machine this flag is needed to wipe the ODM before reconfiguring the virtual machine with the parameters set in the PowerVC GUI (ip, hostname). In both the clouddev and ghostdev cases it is obvious that we need to wipe the ODM at machine build/deploy time. Then VSAE or cloud-init (using the config drive datasource) sets the hostname and ip address that were wiped thanks to the clouddev or ghostdev attribute. This works well for a new deployment because we need to wipe the ODM in all cases, but what about an inactive live partition mobility or a remote restart operation? The Virtual Machine has moved (not on the same host, and not with the same lpar ID) and we need to keep the ODM as it is. How does it work:

  • If you are using VSAE, it manages the ghostdev attribute for you. At capture time ghostdev is set to 1 by VSAE (when you run the pre-capture script). When deploying a new VM, at activation time, VSAE sets ghostdev back to 0. Inactive live partition mobility and remote restart operations will work fine with ghostdev set to 0.
  • If you are using cloud-init on a supported system, clouddev is set to 1 at the installation of cloud-init. As cloud-init does nothing with either attribute at activation time, IBM needed to find a way to avoid wiping the ODM after a remote restart operation. The clouddev device was introduced for this: it writes a flag in the NVRAM, so when a new VM is built there is no flag in the NVRAM for it and the ODM is wiped. When an already existing VM is remote restarted, the flag exists in the NVRAM and the ODM is not wiped. By using clouddev there is no post-deploy action needed.
  • If you are using cloud-init on an unsupported system, ghostdev is set to 1 at the installation of cloud-init. As cloud-init does nothing at post-deploy time, ghostdev will remain set to 1 in all cases and the ODM will always be wiped.

cloudghost

There is a way to use cloud-init on unsupported systems. Keep in mind that in this case you will not be supported by IBM, so do this at your own risk. To be totally honest I’m using this method in production to use the same activation engine for all my AIX versions:

  1. Pre-capture, set ghostdev to 1. Whatever happens THIS IS MANDATORY (see the sketch after this list).
  2. Post-capture, reboot the captured VM and set ghostdev back to 0.
  3. Post-deploy, set ghostdev back to 0 on every Virtual Machine. You can put this in the activation input to do the job:
  4. #cloud-config
    runcmd:
     - chdev -l sys0 -a ghostdev=0
    
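
For reference, here are the corresponding chdev commands for steps 1 and 2 (a sketch; run the first one on the Virtual Machine you are about to capture, and the second one on the captured VM after its reboot):

# step 1, before the capture:
chdev -l sys0 -a ghostdev=1
# step 2, after the capture, on the rebooted captured VM:
chdev -l sys0 -a ghostdev=0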

The PVID problem

I realized I had this problem after using PowerVC for a while. As PowerVC images for the rootvg and other volume groups are created using Storage Volume Controller flashcopy (in the case of an SVC configuration, but there are similar mechanisms for other storage providers), the PVID for both the rootvg and the additional volume groups will always be the same for each new virtual machine (all new virtual machines will have the same PVID for their rootvg, and the same PVID for each captured volume group). I contacted IBM about this and the PowerVC team told me that this behavior is totally normal and has been observed since the release of VMControl. They didn’t have any issues related to this, so if you don’t care about it, just do nothing and keep this behavior as it is. I recommend doing nothing about this!

It’s a shame but most AIX administrators like to keep things as they are and don’t want any changes. (In my humble opinion this is one of the reasons AIX is so outdated compared to Linux; we need a community, not narrow-minded people keeping their knowledge for themselves, just to stay in their daily job routine without having anything to learn.) If you are in this case, facing angry colleagues about this particular point, you can use the solution proposed below to calm the passions of the few ones who do not want to change! :-). This is my rant: CHANGE!

By default if you build two virtual machines and check the PVID of each one, you will notice that the PVIDs are the same:

  • Machine A:
  • root@machinea:/root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00c7102d00d14660                    appsvg          active
    
  • Machine B:
  • root@machineb:root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00c7102d00d14660                    appsvg         active
    

For the rootvg the PVID is always set to 00c7102d2534adac and for the appsvg the PVID is always set to 00c7102d00d14660.

For the rootvg the solution is to change the ghostdev attribute (only the ghostdev) to 2, and to reboot the machine. Setting ghostdev to 2 will change the PVID of the rootvg at reboot time (after the PVID is changed, ghostdev is automatically set back to 0):

# lsattr -El sys0 -a ghostdev
ghostdev 2 Recreate ODM devices on system change / modify PVID True
# lsattr -l sys0 -R -a ghostdev
0...3 (+1)
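
In practice this is just one chdev followed by a reboot (a sketch; after the reboot the rootvg gets a new PVID and ghostdev goes back to 0 by itself):

# chdev -l sys0 -a ghostdev=2
# shutdown -Fr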

For the non-rootvg volume groups this is a little bit trickier but still possible: the solution is to use the recreatevg command (-d option) to change the PVID of all the physical volumes of your volume group. Before rebooting the server:

  • Unmount all the filesystems in the volume group on which you want to change the PVID.
  • Varyoff the volume group.
  • Get the names of the physical volumes composing the volume group.
  • Export the volume group.
  • Recreate the volume group with recreatevg (this action will change the PVIDs).
  • Re-import the volume group.

Here are the shell commands doing the trick:

# vg=appsvg
# lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "fuser -k " $NF }' | sh
# lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "umount " $NF }' | sh
# varyoffvg $vg
# pvs=$(lspv | awk -v my_vg=$vg '$3 == my_vg {print $1}')
# exportvg $vg
# recreatevg -y $vg -d $pvs
# importvg -y $vg $(echo ${pvs} | awk '{print $1}')

We now agree that you want to do this, but as you are a smart person you want to do it automatically using cloud-init and the activation input. There are two ways to do it: the silly way (using shell) and the noble way (using the cloud-init syntax):

PowerVC activation engine (shell way)

Use this short ksh script in the activation input; this is not my recommendation, but you can do it for simplicity:

activation_input_shell
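
The screenshot is not reproduced as text here, but a shell activation input doing the job would look like something like this (a sketch, assuming you only want to regenerate the rootvg PVID):

#!/usr/bin/ksh
# regenerate the rootvg PVID at the next boot (ghostdev goes back to 0 automatically)
chdev -l sys0 -a ghostdev=2
# reboot to apply the change
shutdown -Fr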

PowerVC activation engine (cloudinit way)

Here is the cloud-init way. Important note: use the latest version of cloud-init, the first one I used had a problem with cc_power_state_change.py not using the right parameters for AIX:

activation_input_ci
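
Again the screenshot is not reproduced as text here; a cloud-config equivalent would look like this (a sketch; the power_state part is handled by the cc_power_state_change.py module mentioned above):

#cloud-config
runcmd:
 - chdev -l sys0 -a ghostdev=2
power_state:
 mode: reboot
 message: rebooting to regenerate the rootvg PVID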

Working with the REST API

I will not show you here how to work with the PowerVC RESTful API in detail. I prefer to share a couple of scripts on my github account; nice examples are often better than how-to tutorials. So check the scripts on github if you want a detailed how-to … the scripts are well commented. Just a couple of things to say before closing this topic: the best way to work with a RESTful API is to code in python, there are a lot of existing python libs to work with RESTful APIs (httplib2, pycurl, requests). For my own understanding I prefer to use the simple httplib in my scripts. I will put all my command line tools in a github repository called pvcmd (for PowerVC command line). You can download the scripts at this address, or just use git to clone the repo. One more time it is a community project, feel free to change and share anything: https://github.com/chmod666org/pvcmd:
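
If you just want to poke at the API without writing python, getting a token is a single call to the keystone service (a sketch with curl; the hostname, user, password and project are the example values used elsewhere in this post, adapt them to your environment):

# request a token from keystone (v3); the token is returned in the X-Subject-Token header
curl -sk -D - -o /dev/null https://powervc.lab.chmod666.org:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "root", "domain": {"name": "Default"}, "password": "mysecretpassword"}}}, "scope": {"project": {"name": "ibm-default", "domain": {"name": "Default"}}}}' \
  | grep -i x-subject-token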

Growing data lun

To be totally honest here is what I do when I’m creating a new machine with PowerVC. My customers always need one additional volume group for applications (we will call it appsvg). I’ve created a multi-volume image with this volume group already in place (with a bunch of filesystems in it). As most customers ask for the volume group to be 100 GB large, the capture was made with this size. Unfortunately for me we often get requests to create bigger volume groups, let’s say 500 or 600 GB. Instead of creating a new lun and extending the volume group, PowerVC allows you to grow the lun to the desired size. For volume groups other than the boot one you must use the RESTful API to extend the volume. To do this I’ve created a python script called pvcgrowlun (feel free to check the code on github) https://github.com/chmod666org/pvcmd/blob/master/pvcgrowlun. At each virtual machine creation I check if the customer needs a larger volume group and extend it using the command provided below.

While coding this script I had a problem using the os-extend parameter in my http request. PowerVC is not using exactly the same parameters as Openstack, so if you want to code by yourself be aware of this and check in the PowerVC online documentation whether you are using “extended attributes” (thanks to Christine L Wang for this one):
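
To give you an idea of what the script sends, here is the raw call with curl (a sketch; TOKEN and VOLUME_ID are placeholders you have to retrieve first, VOLUME_ENDPOINT is the block storage endpoint from the keystone catalog, which already contains the tenant id, and the JSON body is the one shown in the script output below):

# grow a data volume to 80 GB through the PowerVC block storage API (sketch)
curl -sk -X POST "$VOLUME_ENDPOINT/volumes/$VOLUME_ID/action" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"ibm-extend": {"new_size": 80}}'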

  • In the Openstack documentation the attribute is “os-extend” link here:
  • os-extend

  • In the PowerVC documentation the attribute is “ibm-extend” link here:
  • ibm-extend

  • Identify the lun you want to grow (the script takes the name of the volume as a parameter) (I have another script, not published, to list all the volumes; tell me if you want it). In my case the volume name is multi-vol-bf697dfa-0000003a-828641A_XXXXXX-data-1, and I want to change its size from 60 to 80. This is not stated in the official PowerVC documentation but this will work for both boot and data luns.
  • Check that the current size of the lun is smaller than the desired size:
  • before_grow

  • Run the script:
  • # pvcgrowlun -v multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 -s 80 -p localhost -u root -P mysecretpassword
    [info] growing volume multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 with id 840d4a60-2117-4807-a2d8-d9d9f6c7d0bf
    JSON Body: {"ibm-extend": {"new_size": 80}}
    [OK] Call successful
    None
    
  • Check that the size has changed after the command execution:
  • aftergrow_grow

  • Don’t forget to do the job in the operating system by running a “chvg -g” (check the total PPs here):
  • # lsvg vg_apps
    VOLUME GROUP:       vg_apps                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      239 (61184 megabytes)
    MAX LVs:            256                      FREE PPs:       239 (61184 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    MIRROR POOL STRICT: off
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    # chvg -g appsvg
    # lsvg appsvg
    VOLUME GROUP:       appsvg                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      319 (81664 megabytes)
    MAX LVs:            256                      FREE PPs:       319 (81664 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    MIRROR POOL STRICT: off
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    

My own script to create VMs

I’m creating Virtual Machines every week, sometimes just a couple and sometimes I’ve got 10 Virtual Machines to create in a row. We are using different storage connectivity groups, and different storage templates depending on whether the machine is in production, in development, and so on. We also have to choose the primary copy on the SVC side if the machine is in production (I am using a stretched cluster between two distant sites, so I have to choose different storage templates depending on the site where the Virtual Machine is hosted). I make mistakes almost every time I use the PowerVC gui (sometimes I forget to put the machine name, sometimes the connectivity group). I’m a lazy guy so I decided to code a script using the PowerVC rest api to create new machines based on a template file. We are planning to give the script to our outsourced teams to allow them to create machines without knowing what PowerVC is \o/. The script takes a file as a parameter and creates the virtual machine:

  • Create a file like the one below with all the information needed for your new virtual machine creation (name, ip address, vlan, host, image, storage connectivity group, ….):
  • # cat test.vm
    name:test
    ip_address:10.16.66.20
    vlan:vlan666
    target_host:Default Group
    image:multi-vol
    storage_connectivity_group:npiv
    virtual_processor:1
    entitled_capacity:0.1
    memory:1024
    storage_template:storage1
    
  • Launch the script, the Virtual Machine will be created:
  • pvcmkvm -f test.vm -p localhost -u root -P mysecretpassword
    name: test
    ip_address: 10.16.66.20
    vlan: vlan666
    target_host: Default Group
    image: multi-vol
    storage_connectivity_group: npiv
    virtual_processor: 1
    entitled_capacity: 0.1
    memory: 1024
    storage_template: storage1
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found network vlan666 with id 5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5
    [info] found host aggregation Default Group with id 1
    [info] found storage template storage1 with id bfb4f8cc-cd68-46a2-b3a2-c715867de706
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found a volume with id b3783a95-822c-4179-8c29-c7db9d060b94
    [info] found a volume with id 9f2fc777-eed3-4c1f-8a02-00c9b7c91176
    JSON Body: {"os:scheduler_hints": {"host_aggregate_id": 1}, "server": {"name": "test", "imageRef": "041d830c-8edf-448b-9892-560056c450d8", "networkRef": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5", "max_count": 1, "flavor": {"OS-FLV-EXT-DATA:ephemeral": 10, "disk": 60, "extra_specs": {"powervm:max_proc_units": 32, "powervm:min_mem": 1024, "powervm:proc_units": 0.1, "powervm:max_vcpu": 32, "powervm:image_volume_type_b3783a95-822c-4179-8c29-c7db9d060b94": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:image_volume_type_9f2fc777-eed3-4c1f-8a02-00c9b7c91176": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:min_proc_units": 0.1, "powervm:storage_connectivity_group": "npiv", "powervm:min_vcpu": 1, "powervm:max_mem": 66560}, "ram": 1024, "vcpus": 1}, "networks": [{"fixed_ip": "10.244.248.53", "uuid": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5"}]}}
    {u'server': {u'links': [{u'href': u'https://powervc.lab.chmod666.org:8774/v2/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'self'}, {u'href': u'https://powervc.lab.chmod666.org:8774/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'bookmark'}], u'OS-DCF:diskConfig': u'MANUAL', u'id': u'fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'security_groups': [{u'name': u'default'}], u'adminPass': u'u7rgHXKJXoLz'}}
    

One of the major advantages of using this is batching Virtual Machine creation. By using the script you can create one hundred Virtual Machines in a couple of minutes. Awesome !

Working with Openstack commands

PowerVC is based on Openstack, so why not use the Openstack commands to work with PowerVC? It is possible, but I repeat one more time that this is not supported by IBM at all; use this trick at your own risk. I was working with IBM Cloud Manager with Openstack (ICMO), which provides a script with shell variables to “talk” to the ICMO Openstack. Based on the same file I created the same one for PowerVC. Before using any Openstack commands create a powervcrc file that matches your PowerVC environment:

# cat powervcrc
export OS_USERNAME=root
export OS_PASSWORD=mypasswd
export OS_TENANT_NAME=ibm-default
export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_CACERT=/etc/pki/tls/certs/powervc.crt
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default

Then source the powervcrc file, and you are ready to play with all Openstack commands:

# source powervcrc

You can then play with Openstack commands, here are a few nice example:

  • List virtual machines:
  • # nova list
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    | ID                                   | Name                  | Status | Task State | Power State | Networks               |
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    | dc5c9fce-c839-43af-8af7-e69f823e57ca | ghostdev0clouddev1    | ACTIVE | -          | Running     | vlan666=10.16.66.56    |
    | d7d0fd7e-a580-41c8-b3d8-d7aab180d861 | ghostdevto1cloudevto1 | ACTIVE | -          | Running     | vlan666=10.16.66.57    |
    | bf697dfa-f69a-476c-8d0f-abb2fdcb44a7 | multi-vol             | ACTIVE | -          | Running     | vlan666=10.16.66.59    |
    | 394ab4d4-729e-44c7-a4d0-57bf2c121902 | deckard               | ACTIVE | -          | Running     | vlan666=10.16.66.60    |
    | cd53fb69-0530-451b-88de-557e86a2e238 | priss                 | ACTIVE | -          | Running     | vlan666=10.16.66.61    |
    | 64a3b1f8-8120-4388-9d64-6243d237aa44 | rachael               | ACTIVE | -          | Running     |                        |
    | 2679e3bd-a2fb-4a43-b817-b56ead26852d | batty                 | ACTIVE | -          | Running     |                        |
    | 5fdfff7c-fea0-431a-b99b-fe20c49e6cfd | tyrel                 | ACTIVE | -          | Running     |                        |
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    
  • Reboot a machine:
  • # nova reboot multi-vol
    
  • List the hosts:
  • # nova hypervisor-list
    +----+---------------------+-------+---------+
    | ID | Hypervisor hostname | State | Status  |
    +----+---------------------+-------+---------+
    | 21 | 828641A_XXXXXXX     | up    | enabled |
    | 23 | 828641A_YYYYYYY     | up    | enabled |
    +----+---------------------+-------+---------+
    
  • Migrate a virtual machine (run a live partition mobility operation):
  • # nova live-migration ghostdevto1cloudevto1 828641A_YYYYYYY
    
  • Evacuate and set a server in maintenance mode and move all the partitions to another host:
  • # nova maintenance-enable --migrate active-only --target-host 828641A_XXXXXX 828641A_YYYYYYY
    
  • Virtual Machine creation (output truncated):
  • # nova boot --image 7100-03-04-cic2-chef --flavor powervm.tiny --nic net-id=5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5,v4-fixed-ip=10.16.66.51 novacreated
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    | Property                            | Value                                                                                                                                            |
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    | OS-DCF:diskConfig                   | MANUAL                                                                                                                                           |
    | OS-EXT-AZ:availability_zone         | nova                                                                                                                                             |
    | OS-EXT-SRV-ATTR:host                | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:instance_name       | novacreated-bf704dc6-00000040                                                                                                                    |
    | OS-EXT-STS:power_state              | 0                                                                                                                                                |
    | OS-EXT-STS:task_state               | scheduling                                                                                                                                       |
    | OS-EXT-STS:vm_state                 | building                                                                                                                                         |
    | accessIPv4                          |                                                                                                                                                  |
    | accessIPv6                          |                                                                                                                                                  |
    | adminPass                           | PDWuY2iwwqQZ                                                                                                                                     |
    | avail_priority                      | -                                                                                                                                                |
    | compliance_status                   | [{"status": "compliant", "category": "resource.allocation"}]                                                                                     |
    | cpu_utilization                     | -                                                                                                                                                |
    | cpus                                | 1                                                                                                                                                |
    | created                             | 2015-08-05T15:56:01Z                                                                                                                             |
    | current_compatibility_mode          | -                                                                                                                                                |
    | dedicated_sharing_mode              | -                                                                                                                                                |
    | desired_compatibility_mode          | -                                                                                                                                                |
    | endianness                          | big-endian                                                                                                                                       |
    | ephemeral_gb                        | 0                                                                                                                                                |
    | flavor                              | powervm.tiny (ac01ba9b-1576-450e-a093-92d53d4f5c33)                                                                                              |
    | health_status                       | {"health_value": "PENDING", "id": "bf704dc6-f255-46a6-b81b-d95bed00301e", "value_reason": "PENDING", "updated_at": "2015-08-05T15:56:02.307259"} |
    | hostId                              |                                                                                                                                                  |
    | id                                  | bf704dc6-f255-46a6-b81b-d95bed00301e                                                                                                             |
    | image                               | 7100-03-04-cic2-chef (96f86941-8480-4222-ba51-3f0c1a3b072b)                                                                                      |
    | metadata                            | {}                                                                                                                                               |
    | name                                | novacreated                                                                                                                                      |
    | operating_system                    | -                                                                                                                                                |
    | os_distro                           | aix                                                                                                                                              |
    | progress                            | 0                                                                                                                                                |
    | root_gb                             | 60                                                                                                                                               |
    | security_groups                     | default                                                                                                                                          |
    | status                              | BUILD                                                                                                                                            |
    | storage_connectivity_group_id       | -                                                                                                                                                |
    | tenant_id                           | 1471acf124a0479c8d525aa79b2582d0                                                                                                                 |
    | uncapped                            | -                                                                                                                                                |
    | updated                             | 2015-08-05T15:56:02Z                                                                                                                             |
    | user_id                             | 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                                                                 |
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    
    

LUN order, remove a boot lun

If you are moving to PowerVC you will probably need to migrate existing machines to your PowerVC environment. One of my customers is asking to move their machines from old boxes using vscsi to new PowerVC managed boxes using NPIV. I am doing it with the help of an SVC for the storage side. Instead of creating the Virtual Machine profile on the HMC, and then doing the zoning and masking on the Storage Volume Controller and on the SAN switches, I decided to let PowerVC do the job for me. Unfortunately PowerVC can’t just “carve” an empty Virtual Machine; if you want one you have to build a full Virtual Machine (rootvg included). This is what I am doing. During the migration process I have to replace the lun created by PowerVC with the lun used for the migration …. and finally delete the PowerVC created boot lun. There is a trick to know if you want to do this:

  • Let’s say the lun created by PowerVC is the one named “volume-clouddev-test….” and the original rootvg is named “good_rootvg”. The Virtual Machine is booted on the “good_rootvg” lun and I want to remove the “volume-clouddev-test….” one:
  • root1

  • You first have to click the “Edit Details” button:
  • root2

  • Then toggle the boot set to “YES” for the “good_rootvg” lun and click move up (the rootvg order must be set to 1, this is mandatory; the lun at order 1 can’t be deleted):
  • root3

  • Toggle the boot set to “NO” for the PowerVC created rootvg:
  • root4

  • If you try to detach the volume in first position you will get an error:
  • root5

  • When the order is ok, you can detach and delete the lun created by PowerVC:
  • root6
    root7

Conclusion

There are always good things to learn about PowerVC and related AIX topics. Tell me if these tricks are useful for you and I will continue to write posts like this one. You don’t need to understand all these details to work with PowerVC, most customers don’t. But I’m sure you prefer to understand what is going on “behind the scenes” instead of just clicking a nice GUI. I hope this helps you to better understand what PowerVC is made of. And don’t be shy, share your tricks with me. Next: more to come about Chef ! Up the irons !

Using Chef and cloud-init with PowerVC 1.2.2.2 | What’s new in version 1.2.2.2

I’ve been busy, very busy, and I apologize for that … almost two months since the last update on the blog, but I’m still alive and I love AIX more than ever ;-). There is no blog post about it but I’ve developed a tool called “lsseas” which can be useful to all PowerVM administrators (you can find the script on github at this address https://github.com/chmod666org/lsseas). I’ll not talk too much about it but I thought sharing the information with all my readers who are not following me on twitter was the best way to promote the tool. Have a look at it, submit your own changes on github, code and share !

That said, we can talk about this new blog post. PowerVC 1.2.2.2 was released a few months ago and there are a few things I wanted to talk about. The new version includes new features making the product more powerful than ever (export/import images, activation input, vscsi lun management). PowerVC only builds “empty” machines; it’s a good start but we can do better. The activation engine can customize the virtual machines but it is limited and in my humble opinion not really usable for post-installation tasks. With the recent release of cloud-init and Chef for AIX, PowerVC can be utilized to build your machines from nothing … and finally get your application running in minutes. Using cloud-init and Chef can help you make your infrastructure repeatable, “versionable” and testable; this is what we call infrastructure as code and it is damn powerful.

A big thank you to Jay Kruemcke (@chromeaix), Philippe Hermes (@phhermes) and S.Tran (https://github.com/transt), they gave me very useful help about the cloud-init support on AIX. Follow them on twitter !

PowerVC 1.2.2.1 mandatory fixes

Before starting, please note that I strongly recommend having the latest ifixes installed on your Virtual I/O Servers. These ones are mandatory for PowerVC, install these ifixes no matter what:

  • On Virtual I/O Servers install IV66758m4c, rsctvios2:
  • # emgr -X -e /mnt/VIOS_2.2.3.4_IV66758m4c.150112.epkg.Z
    # emgr -l
    [..]
    ID  STATE LABEL      INSTALL TIME      UPDATED BY ABSTRACT
    === ===== ========== ================= ========== ======================================
    1    S    rsctvios2  03/03/15 12:13:42            RSCT fixes for VIOS
    2    S    IV66758m4c 03/03/15 12:16:04            Multiple PowerVC fixes VIOS 2.2.3.4
    3    S    IV67568s4a 03/03/15 14:12:45            man fails in VIOS shell
    [..]
    
  • Check that you have the latest version of the Hardware Management Console (I strongly recommend V8R8.2.0 Service Pack 1):
  • hscroot@myhmc:~> lshmc -V
    "version= Version: 8
     Release: 8.2.0
     Service Pack: 1
    HMC Build level 20150216.1
    ","base_version=V8R8.2.0
    "
    

Exporting and importing images from another PowerVC

The latest PowerVC version allows you to export and import images. It’s a good thing ! Let’s say that like me you have a few PowerVC hosts, on different SAN networks with different storage arrays; you probably do not want to create your images on each one and you prefer to be sure to use the same image for each PowerVC. Just create one image and use the export/import feature to copy/move this image to a different storage array or PowerVC host:

  • To do so map your current image disk on the PowerVC itself (in my case by using the SVC). You can’t attach a volume used as an image volume directly from PowerVC, so you have to do it on the storage side by hand:
  • maptohost
    maptohost2

  • On the PowerVC host, rescan the volumes and copy the whole newly discovered lun with a dd:
  • powervc_source# rescan-scsi-bus.sh
    [..]
    powervc_source# multipath -ll
    mpathe (3600507680c810010f800000000000097) dm-10 IBM,2145
    [..]
    powervc_source# dd if=/dev/mapper/mpathe of=/data/download/aix7100-03-04-cloudinit-chef-ohai bs=4M
    16384+0 records in
    16384+0 records out
    68719476736 bytes (69 GB) copied, 314.429 s, 219 MB/s                                         
    
  • Map a new volume to the new PowerVC server and upload the newly created file to the new PowerVC server, then dd the file back to the new volume:
  • mapnewlun

    powervc_dest# scp /data/download/aix7100-03-04-cloudinit-chef-ohai new_powervc:/data/download
    aix7100-03-04-cloudinit-chef-ohai          100%   64GB  25.7MB/s   42:28.
    powervc_dest# dd if=/data/download/aix7100-03-04-cloudinit-chef-ohai of=/dev/mapper/mpathc bs=4M
    16384+0 records in
    16384+0 records out
    68719476736 bytes (69 GB) copied, 159.028 s, 432 MB/s
    
  • Unmap the volume from the new PowerVC after the dd operation, and import it with the PowerVC graphical interface.
  • Manage the existing volume you just created (note that the current PowerVC code does not allow you to choose cloud-init as an activation engine even if it is working great):
  • manage_ex1
    manage_ex2

  • Import the image:
  • import1
    import2
    import3
    import4

You can also use the command powervc-volume-image-import to import the new volume by using the command line instead of the graphical user interface. Here is an example with a Red Hat Enterprise Linux 6.4 image:

powervc_source# dd if=/dev/hdisk4 of=/apps/images/rhel-6.4.raw bs=4M
15360+0 records in
15360+0 records out
powervc_dest# scp 10.255.248.38:/apps/images/rhel-6.4.raw .
powervc_dest# dd if=/home/rhel-6.4.raw of=/dev/mapper/mpathe
30720+0 records in
30720+0 records out
64424509440 bytes (64 GB) copied, 124.799 s, 516 MB/s
powervc_dest# powervc-volume-image-import --name rhel64 --os rhel --volume volume_capture2 --activation-type ae
Password:
Image creation complete for image id: e3a4ece1-c0cd-4d44-b197-4bbbc2984a34

Activation input (cloud-init and ae)

Instead of doing post-installation tasks by hand after the deployment of the machine you can now use the activation input recently added to PowerVC. The activation input can be utilized to run any scripts you want, or even better things (such as the cloud-config syntax) if you are using cloud-init instead of the old activation engine. You have to remember that cloud-init is not yet officially supported by PowerVC; for this reason I think most customers will still use the old activation engine. The latest activation engine version also works with the activation input. In the examples below I’m of course using cloud-init :-). Don’t worry, I’ll detail later in this post how to install and use cloud-init on AIX:

  • If you are using the activation engine please be sure to use the latest version. The current version of the activation engine in PowerVC 1.2.2.* is vmc-vsae-ext-2.4.5-1; the only way to be sure you are using this version is to check the size of /opt/ibm/ae/AS/vmc-sys-net/activate.py. The size of this file is 21127 bytes for the latest version. Check this before trying to do anything with the activation input. More information can be found here: Activation input documentation.
  • A simple shebang script can be used; in the example below it is just writing a file, but it can be anything you want:
  • ai1

    # cat /tmp/activation_input
    Activation input was used on this server
    
  • If you are using cloud-init you can directly put a cloud-config “script” in the activation input. The first line is always mandatory to tell cloud-init the format of the activation input. If you forget to put this first line the format cannot be determined and the script will not be executed (see the sketch after this list for another cloud-config example). Check the next point for more information about the activation input:
  • ai2

    # cat /tmp/activation_input
    cloud-config activation input
    
  • There are additional fields called “server meta data key/value pairs”; just do not use them. They are used by images provided by IBM with customization of the activation engine. Forget about this, it is useless; use this field only if IBM tells you to do so.
  • Valid cloud-init activation input formats can be found here: http://cloudinit.readthedocs.org/en/latest/topics/format.html. As you can see in the two examples above, shell scripts and the cloud-config format can be utilized, but you can also upload a gzip archive, or use a part handler format. Go to the url above for more information.
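
As a tiny illustration of the cloud-config format used as an activation input, an input like the following would produce the /tmp/activation_input file shown above (a sketch using the cloud-init write_files module):

#cloud-config
write_files:
 - path: /tmp/activation_input
   content: |
     cloud-config activation input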

vscsi and mixed NPIV/vscsi machine creation

This is one of the major enhancements: PowerVC is now able to create and map vscsi disks, and even better, you can create mixed NPIV/vscsi machines. To do so, create storage connectivity groups for each technology you want to use. You can choose a different way to create disks for boot volumes and for data volumes. Here are three examples: full NPIV, full vscsi, and mixed vscsi (boot) and NPIV (data):

connectivitygroup1
connectivitygroup2
connectivitygroup3

What is really cool about this new feature is that PowerVC can use luns already mapped on the Virtual I/O Server. Please note that PowerVC will only use SAN-backed devices and cannot use iSCSI or local disks (local disks can be used in the Express version). You obviously have to do the zoning of your Virtual I/O Server yourself. Here is an example where I have 69 devices mapped to my Virtual I/O Server; you can see that PowerVC is using one of the existing devices for its deployment. This can be very useful if you have different teams working on the SAN side and the system side: the storage guys do not have to change their habits and can still map a bunch of luns on the Virtual I/O Server. This can be used as a transition if you did not succeed in convincing your storage team:

$ lspv | wc -l
      69

connectivitygroup_deploy1

$ lspv | wc -l
      69
$ lsmap -all -fmt :
vhost1:U8202.E4D.845B2DV-V2-C28:0x00000009:vtopt0:Available:0x8100000000000000:/var/vio/VMLibrary/vopt_c1309be1ed244a5c91829e1a5dfd281c: :N/A:vtscsi1:Available:0x8200000000000000:hdisk66:U78AA.001.WZSKM6P-P1-C3-T1-W500507680C11021F-L41000000000000:false

Please note that you still need to add fabrics and storage on PowerVC even if you have pre-mapped luns on your Virtual I/O Servers. This is mandatory for PowerVC image management and creation.

Maintenance Mode

This last feature is probably the one I like the most. You can now put a host in maintenance mode: when you do, all the virtual machines hosted on it are migrated away with Live Partition Mobility (remember the migrlpar --all option; I'm pretty sure this is what PowerVC uses for maintenance mode, see the sketch just below). A host in maintenance mode is no longer available for new machine deployments or for mobility operations, so it can then be shut down, for instance for a firmware upgrade.
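
I did not check exactly which command PowerVC runs under the cover, but a manual equivalent on the HMC would be something like the command below. Treat it as a sketch: the system names are placeholders and the exact flags may differ depending on your HMC level:

# evacuate every migratable partition from a source system (run on the HMC)
migrlpar -o m -m source_system -t target_system --all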

  • Select a host and click the "Enter maintenance mode" button:
  • maintenance1

  • Choose where you want to move the virtual machines, or let PowerVC decide for you (packing or striping placement policy):
  • maintenance2

  • The host is entering maintenance mode:
  • maintenance3

  • Once the host is in maintenance mode it is ready to be shut down:
  • maintenance4

  • Leave the maintenance mode when you are ready:
  • maintenance5

An overview of Chef and cloud-init

With PowerVC you are now able to deploy new AIX virtual machines in a few minutes, but there is still some work to do. What about post-installation tasks? I'm sure most of you are using NIM post-install scripts for these tasks. PowerVC does not use NIM, and even if you can run your own shell scripts after a PowerVC deployment, the goal of this tool is to automate the full installation... post-install included.

While the activation engine does the job of changing the hostname and ip address of the machine, it is pretty hard to customize it to do other tasks. Documentation is hard to find and I can assure you that it is not easy at all to customize and maintain. PowerVC Linux users are probably already aware of cloud-init. cloud-init is a tool (like the activation engine) in charge of reconfiguring your machine after its deployment: like the activation engine does today, cloud-init changes the hostname and the ip address of the machine, but it can do way more than that (create users, add ssh keys, mount filesystems, ...). The good news is that cloud-init has been available on AIX for a few days now, and you can use it with PowerVC. Awesome \o/.

While cloud-init can do part of this job, it can't do all of it and is not designed for that! It is not a configuration management tool: configurations are not centralized on a server, there is no way to create cookbooks or runbooks (or whatever you call them), you can't pull product sources from a git server; a lot of things are missing. cloud-init is a light tool designed for a simple job. I recently (at work and in my spare time) played a lot with configuration management tools. I'm a huge fan of Saltstack, but unfortunately salt-minion (the Saltstack client) is not available on AIX... so I had to find another tool. A few months ago Chef (by Opscode) announced support for AIX and a release of chef-client for AIX. I decided to give it a try and I can assure you that this is damn powerful; let me explain further.

Instead of creating shell scripts to do your post-installation, Chef allows you to create cookbooks. Cookbooks are composed of recipes, and each recipe does one task: for instance install an Oracle client, create the home directory of the root user and its profile file, or enable or disable a service on the system. Recipes are written in the Chef DSL, and you can directly put Ruby code inside a recipe. Chef recipes are idempotent: if something has already been done, it will not be done again. The advantage of a solution like this is that you don't have to maintain shell scripts, which are difficult to change or rewrite. Your infrastructure is repeatable and changeable in minutes (once Chef is installed you can, for instance, tell it to change /etc/resolv.conf on all your Websphere servers). This is called "infrastructure as code" (a minimal recipe sketch follows this paragraph). Give it a try and you'll see that the first thing you'll think will be "waaaaaaaaaaaaaooooooooooo".
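
To make this more concrete, here is a minimal, purely illustrative recipe sketch (these are not the recipes used later in this post, just two core Chef resources showing the desired-state, idempotent style):

# each resource describes a desired state; Chef only acts if the system does not already match it
directory "/home/oracle" do
  owner  "root"
  group  "system"
  mode   "0755"
  action :create        # created only if it does not already exist with these attributes
end

service "sshd" do
  action :start         # started only if it is not already running
end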

Trying to explain how PowerVC, cloud-init and Chef can work together is not really easy; a nice diagram is probably better than a long text:

chef

  1. You have built an AIX virtual machine. On this machine cloud-init and Chef client 12 are installed. cloud-init is configured to register the chef-client with the chef-server and to run a cookbook for a specific role. This machine has been captured with PowerVC and is now ready to be deployed.
  2. Virtual machines are created with PowerVC.
  3. When a machine is built, cloud-init runs on first boot. The ip address and hostname of the machine are changed to the values provided by PowerVC. cloud-init creates the chef-client configuration (client.rb, validation.pem). Finally chef-client is called.
  4. chef-client registers with the chef-server. The machine is now known by the chef-server.
  5. chef-client resolves and downloads the cookbooks for a specific role. Cookbooks and recipes are executed on the machine. After cookbook execution the machine is ready and configured.
  6. The administrator creates and uploads cookbooks and recipes from his knife workstation (knife is the tool used to interact with the chef-server; it can be hosted anywhere you want: your laptop, a server, ...).

In a few steps, here is what you need to do to use PowerVC, cloud-init, and Chef together:

  1. Create a virtual machine with PowerVC.
  2. Download cloud-init, and install cloud-init in this virtual machine.
  3. Download chef-client, and install chef-client in this virtual machine.
  4. Configure cloud-init by modifying /opt/freeware/etc/cloud/cloud.cfg. In this file put the Chef configuration of the cc_chef cloud-init module.
  5. Create the mandatory directories: create /etc/chef, and put your ohai plugins in the /etc/chef/ohai_plugins directory.
  6. Stop the virtual machine.
  7. Capture the virtual machine with PowerVC.
  8. Obviously as prerequisites a chef-server is up and running, cookbooks, recipes, roles, environments are ok in this chef-server.

cloud-init installation

cloud-init is now available on AIX, but you have to build the rpm yourself. Sources can be found on github at this address: https://github.com/transt/cloud-init-0.7.5. There are a lot of prerequisites; most of them can be found on the github page, some of them on the famous perzl site. Downloading and installing these prerequisites is mandatory (links to the prerequisites are on the github page; the zip file containing cloud-init can be downloaded here: https://github.com/transt/cloud-init-0.7.5/archive/master.zip):

# rpm -ivh --nodeps gettext-0.17-8.aix6.1.ppc.rpm
[..]
gettext                     ##################################################
# for rpm in bzip2-1.0.6-2.aix6.1.ppc.rpm db-4.8.24-4.aix6.1.ppc.rpm expat-2.1.0-1.aix6.1.ppc.rpm gmp-5.1.3-1.aix6.1.ppc.rpm libffi-3.0.11-1.aix6.1.ppc.rpm openssl-1.0.1g-1.aix6.1.ppc.rpm zlib-1.2.5-6.aix6.1.ppc.rpm gdbm-1.10-1.aix6.1.ppc.rpm libiconv-1.14-1.aix6.1.ppc.rpm bash-4.2-9.aix6.1.ppc.rpm info-5.0-2.aix6.1.ppc.rpm readline-6.2-3.aix6.1.ppc.rpm ncurses-5.9-3.aix6.1.ppc.rpm sqlite-3.7.15.2-2.aix6.1.ppc.rpm python-2.7.6-1.aix6.1.ppc.rpm python-2.7.6-1.aix6.1.ppc.rpm python-devel-2.7.6-1.aix6.1.ppc.rpm python-xml-0.8.4-1.aix6.1.ppc.rpm python-boto-2.34.0-1.aix6.1.noarch.rpm python-argparse-1.2.1-1.aix6.1.noarch.rpm python-cheetah-2.4.4-2.aix6.1.ppc.rpm python-configobj-5.0.5-1.aix6.1.noarch.rpm python-jsonpointer-1.0.c1ec3df-1.aix6.1.noarch.rpm python-jsonpatch-1.8-1.aix6.1.noarch.rpm python-oauth-1.0.1-1.aix6.1.noarch.rpm python-pyserial-2.7-1.aix6.1.ppc.rpm python-prettytable-0.7.2-1.aix6.1.noarch.rpm python-requests-2.4.3-1.aix6.1.noarch.rpm libyaml-0.1.4-1.aix6.1.ppc.rpm python-setuptools-0.9.8-2.aix6.1.noarch.rpm fdupes-1.51-1.aix5.1.ppc.rpm ; do rpm -ivh $rpm ;done
[..]
python-oauth                ##################################################
python-pyserial             ##################################################
python-prettytable          ##################################################
python-requests             ##################################################
libyaml                     ##################################################

Build the rpm by following the commands below. You can reuse this rpm on every AIX machine on which you want to install the cloud-init package:

# jar -xvf cloud-init-0.7.5-master.zip
inflated: cloud-init-0.7.5-master/upstart/cloud-log-shutdown.conf
# mv cloud-init-0.7.5-master  cloud-init-0.7.5
# chmod -Rf +x cloud-init-0.7.5/bin
# chmod -Rf +x cloud-init-0.7.5/tools
# cp cloud-init-0.7.5/packages/aix/cloud-init.spec.in /opt/freeware/src/packages/SPECS/cloud-init.spec
# tar -cvf cloud-init-0.7.5.tar cloud-init-0.7.5
[..]
a cloud-init-0.7.5/upstart/cloud-init.conf 1 blocks
a cloud-init-0.7.5/upstart/cloud-log-shutdown.conf 2 blocks
# gzip cloud-init-0.7.5.tar
# cp cloud-init-0.7.5.tar.gz /opt/freeware/src/packages/SOURCES/cloud-init-0.7.5.tar.gz
# rpm -v -bb /opt/freeware/src/packages/SPECS/cloud-init.spec
[..]
Requires: cloud-init = 0.7.5
Wrote: /opt/freeware/src/packages/RPMS/ppc/cloud-init-0.7.5-4.1.aix7.1.ppc.rpm
Wrote: /opt/freeware/src/packages/RPMS/ppc/cloud-init-doc-0.7.5-4.1.aix7.1.ppc.rpm
Wrote: /opt/freeware/src/packages/RPMS/ppc/cloud-init-test-0.7.5-4.1.aix7.1.ppc.rpm

Finally install the rpm:

# rpm -ivh /opt/freeware/src/packages/RPMS/ppc/cloud-init-0.7.5-4.1.aix7.1.ppc.rpm
cloud-init                  ##################################################
# rpm -qa | grep cloud-init
cloud-init-0.7.5-4.1

cloud-init configuration

Installing the cloud-init package on AIX adds some entries to /etc/rc.d/rc2.d:

ls -l /etc/rc.d/rc2.d | grep cloud
lrwxrwxrwx    1 root     system           33 Apr 26 15:13 S01cloud-init-local -> /etc/rc.d/init.d/cloud-init-local
lrwxrwxrwx    1 root     system           27 Apr 26 15:13 S02cloud-init -> /etc/rc.d/init.d/cloud-init
lrwxrwxrwx    1 root     system           29 Apr 26 15:13 S03cloud-config -> /etc/rc.d/init.d/cloud-config
lrwxrwxrwx    1 root     system           28 Apr 26 15:13 S04cloud-final -> /etc/rc.d/init.d/cloud-final

The default configuration file is located in /opt/freeware/etc/cloud/cloud.cfg. This configuration file is split into three parts. The first one, cloud_init_modules, tells cloud-init to run specific modules when the cloud-init script is started at boot time: for instance set the hostname of the machine (set_hostname), reset the rmc (reset_rmc) and so on. In our case this part will automatically change the hostname and the ip address of the machine to the values provided by PowerVC at deployment time. This cloud_init_modules part is split in two, the local one and the normal one. The local one uses information provided by the cdrom built by PowerVC at deployment time; this cdrom provides the ip and hostname of the machine, the activation input script, and the nameserver information. The datasource_list stanza tells cloud-init to use the "ConfigDrive" (in our case the virtual cdrom) to get the ip and hostname needed by some cloud_init_modules. The second part, cloud_config_modules, tells cloud-init to run specific modules when the cloud-config script is called; at this stage the minimal requirements have already been configured by the previous cloud_init_modules stage (dns, ip address and hostname are ok). We will configure and set up the chef-client in this stage. The last part, cloud_final_modules, tells cloud-init to run specific modules when the cloud-final script is called. At this step you can print a final message, reboot the host and so on (in my case a host reboot is needed by my install_sddpcm Chef recipe). Here is an overview of the cloud.cfg configuration file:

cloud-init

  • The datasource_list stanza tells cloud-init to use the virtual cdrom as a source of information:
  • datasource_list: ['ConfigDrive']
    
  • cloud_init_module:
  • cloud_init_modules:
    [..]
     - set-multipath-hcheck-interval
     - update-bootlist
     - reset-rmc
     - set_hostname
     - update_hostname
     - update_etc_host
    
  • cloud_config_module:
  • cloud_config_modules:
    [..]
      - mounts
      - chef
      - runcmd
    
  • cloud_final_module:
  • cloud_final_modules:
      [..]
      - final-message
    

If you do not want to use Chef at all you can modify the cloud.cfg file to fit your needs (running homemade scripts, mounting filesystems, ...), but my goal here is to do the job with Chef. We will try to do the minimal job with cloud-init, so the goal here is to configure cloud-init to configure the chef-client. Anyway, I also wanted to play with cloud-init and see its capabilities. The full documentation of cloud-init can be found here: https://cloudinit.readthedocs.org/en/latest/. Here are a few things I added (the Chef part will be detailed later), but keep in mind you can use cloud-init without Chef if you want (set up your ssh keys, mount or create filesystems, create files and so on):

write_files:
  - path: /tmp/cloud-init-started
    content: |
      cloud-init was started on this server
    permissions: '0755'
  - path: /var/log/cloud-init-sub.log
    content: |
      starting chef logging
    permissions: '0755'

final_message: "The system is up, cloud-init is finished"

EDIT: The IBM developer of cloud-init for AIX sent me a mail yesterday about the new support of cc_power_state. As I need to reboot my host at the end of the build, I can now use the power_state stanza with the latest version of cloud-init for AIX. I use poweroff here as an example; use reboot ... for a reboot:

power_state:
 delay: "+5"
 mode: poweroff
 message: cloud-init mandatory reboot for sddpcm
 timeout: 5

power_state1

Rerun cloud-init for testing purposes

You probably want to test your cloud-init configuration before or after capturing the machine. When cloud-init is launched by the startup scripts, a check is performed to be sure that cloud-init has not already been run. Some "semaphore" files are created in /opt/freeware/var/lib/cloud/instance/sem to record that modules have already been executed. If you want to re-run cloud-init by hand without having to rebuild a machine, just remove the files in this directory:

# rm -rf /opt/freeware/var/lib/cloud/instance/sem

Let’s say we just want to re-run the Chef part:

# rm /opt/freeware/var/lib/cloud/instance/sem/config_chef

To sum up, here is what I want to do with cloud-init:

  1. Use the cdrom as datasource.
  2. Set the hostname and ip.
  3. Setup my chef-client.
  4. Print a final message.
  5. Do a mandatory reboot at the end of the installation.

chef-client installation and configuration

Before modifying the cloud.cfg file to tell cloud-init to set up the Chef client, we first have to download and install the chef-client on the AIX host we will capture later. Download the Chef client bff file at this address: https://opscode-omnibus-packages.s3.amazonaws.com/aix/6.1/powerpc/chef-12.1.2-1.powerpc.bff and install it:

# installp -aXYgd . chef
[..]
+-----------------------------------------------------------------------------+
                         Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
        chef 12.1.2.1
[..]
Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
chef                        12.1.2.1        USR         APPLY       SUCCESS
chef                        12.1.2.1        ROOT        APPLY       SUCCESS
# lslpp -l | grep -i chef
  chef                      12.1.2.1    C     F    The full stack of chef
# which chef-client
/usr/bin/chef-client

The chef-client configuration files created by cloud-init will be placed in the /etc/chef directory. By default the /etc/chef directory does not exist, so you'll have to create it:

# mkdir -p /etc/chef
# mkdir -p /etc/chef/ohai_plugins

If -like me- you are using custom ohai plugins, you have two things to do. cloud-init uses template files to build the configuration files needed by Chef. These template files are located in /opt/freeware/etc/cloud/templates. Modify the chef_client.rb.tmpl file to add a configuration line for the ohai plugin_path, and copy your ohai plugins to /etc/chef/ohai_plugins:

# tail -1 /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl
Ohai::Config[:plugin_path] << '/etc/chef/ohai_plugins'
# ls /etc/chef/ohai_plugins
aixcustom.rb

Add the chef stanza to /opt/freeware/etc/cloud/cloud.cfg. After this step the image is ready to be captured (check the ohai plugin section first if you need custom plugins); the chef-client is already installed. Set the force_install stanza to false, set the server_url and the validation_name of your Chef server and organization, and finally put the validation RSA private key provided by your Chef server (in the example below the key has been truncated on purpose; server_url and validation_name have also been replaced). As you can see below, I tell Chef here to run the aix7 role, which contains all the recipes defined in the aix7 cookbook; we'll see later how to create a cookbook and recipes:

chef:
  force_install: false
  server_url: "https://chefserver.lab.chmod666.org/organizations/chmod666"
  validation_name: "chmod666-validator"
  validation_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEApj/Qqb+zppWZP+G3e/OA/2FXukNXskV8Z7ygEI9027XC3Jg8
    [..]
    XCEHzpaBXQbQyLshS4wAIVGxnPtyqXkdDIN5bJwIgLaMTLRSTtjH/WY=
    -----END RSA PRIVATE KEY-----
  run_list:
    - "role[aix7]"

runcmd:
  - /usr/bin/chef-client

EDIT: With the latest build of cloud-init for AIX there is no need to run chef-client with the runcmd stanza. Just add exec: 1 in the chef stanza.

To sum up: cloud-init is installed and configured to run a few actions at boot time, mainly to configure the chef-client and run it with a specific role. The chef-client is installed. The machine can now be shut down and is ready to be captured and deployed. At deployment time cloud-init will do the job of changing the ip address and hostname and configuring Chef, and Chef will retrieve the cookbooks and recipes and run them on the machine.

If you want to use custom ohai plugins, read the ohai part below before capturing your machine.

capture
capture2

Use chef-solo for testing

You will have to create your own recipes. My advice is to use chef-solo to debug them. The chef-solo binary is provided with the chef-client package and can be used without a Chef server to run and execute Chef recipes:

  • Create a test recipe:
  • # mkdir -p ~/chef/cookbooks/testing/recipes
    # cat  ~/chef/cookbooks/testing/recipes/test.rb
    file "/tmp/helloworld.txt" do
      owner "root"
      group "system"
      mode "0755"
      action :create
      content "Hello world !"
    end
    
  • Create a run_list with your test recipe:
  • # cat ~/chef/node.json
    {
      "run_list": [ "recipe[testing::test]" ]
    }
    
  • Create the solo.rb configuration file for the chef-solo run:
  • # cat  ~/chef/solo.rb
    file_cache_path "/root/chef"
    cookbook_path "/root/chef/cookbooks"
    json_attribs "/root/chef/node.json"
    
  • Run chef-solo:
  • # chef-solo -c /root/chef/solo.rb
    

chef-solo

cookbook and recipe examples on AIX

Let's say you have written all your recipes using chef-solo on a test server. On the Chef server side you now want to put all these recipes in a cookbook. From the knife workstation, create a cookbook:

# knife cookbook create aix7
** Creating cookbook aix7 in /home/kadmin/.chef/cookbooks
** Creating README for cookbook: aix7
** Creating CHANGELOG for cookbook: aix7
** Creating metadata for cookbook: aix7

In the .chef directory you can now find a directory for the aix7 cookbook. In it you will find a directory for each type of Chef object: recipes, templates, files, and so on. This place is called the chef-repo. I strongly recommend turning this place into a git repository (by doing this you will keep track of every modification of any object in the cookbook); a minimal example is shown below.
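
For example, something like this is enough to start versioning the chef-repo (the paths are the ones used above, adapt them to your own setup):

# cd /home/kadmin/.chef/cookbooks
# git init
# git add aix7
# git commit -m "initial import of the aix7 cookbook"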

# ls /home/kadmin/.chef/cookbooks/aix7/recipes
create_fs_rootvg.rb  create_profile_root.rb  create_user_group.rb  delete_group.rb  delete_user.rb  dns.rb  install_sddpcm.rb  install_ssh.rb  ntp.rb  ohai_custom.rb  test_ohai.rb
# ls /home/kadmin/.chef/cookbooks/aix7/templates/default
aixcustom.rb.erb  ntp.conf.erb  ohai_test.erb  resolv.conf.erb

Recipes

Here are a few examples of my own recipes:

  • install_ssh: the recipe mounts an nfs filesystem (from the nim server). nim_server is an attribute coming from the role's default attributes (we will check that later), and oslevel is an ohai attribute coming from a custom ohai plugin (we will check that later too). The openssh.license and openssh.base filesets are installed, the filesystem is unmounted, and finally the ssh service is started:
  • # creating temporary directory
    directory "/var/mnttmp" do
      action :create
    end
    # mouting nim server
    mount "/var/mnttmp" do
      device "#{node[:nim_server]}:/export/nim/lppsource/#{node['aixcustom']['oslevel']}"
      fstype "nfs"
      action :mount
    end
    # installing ssh packages (openssh.license, openssh.base)
    bff_package "openssh.license" do
      source "/var/mnttmp"
      action :install
    end
    bff_package "openssh.base" do
      source "/var/mnttmp"
      action :install
    end
    # umount the /var/mnttmp directory
    mount "/var/mnttmp" do
      fstype "nfs"
      action :umount
    end
    # deleting temporary directory
    directory "/var/mnttmp" do
      action :delete
    end
    # start and enable ssh service
    service "sshd" do
      action :start
    end
    
  • install_sddpcm: the recipe mounts an nfs filesystem (from the nim server). nim_server is an attribute coming from the role's default attributes (we will check that later), and platform_version comes from ohai. The devices.fcp.disk.ibm.mpio and devices.sddpcm.71.rte filesets are installed, and the filesystem is unmounted:
  • # creating temporary directory
    directory "/var/mnttmp" do
      action :create
    end
    # mouting nim server
    mount "/var/mnttmp" do
      device "#{node[:nim_server]}:/export/nim/lpp_source/#{node['platform_version']}/sddpcm-71-2660"
      fstype "nfs"
      action :mount
    end
    # installing sddpcm packages (devices.fcp.disk.ibm.mpio, devices.sddpcm.71.rte)
    bff_package "devices.fcp.disk.ibm.mpio" do
      source "/var/mnttmp"
      action :install
    end
    bff_package "devices.sddpcm.71.rte" do
      source "/var/mnttmp"
      action :install
    end
    # umount the /var/mnttmp directory
    mount "/var/mnttmp" do
      fstype "nfs"
      action :umount
    end
    # deleting temporary directory
    directory "/var/mnttmp" do
      action :delete
    end
    
  • create_fs_rootvg: some filesystems are extended, and an /apps filesystem is created and mounted. Please note that there is no cookbook for AIX lvm at the moment, so you have to use the execute resource here, which is the only one that is not idempotent by itself (hence the not_if guard below):
  • execute "hd3" do
      command "chfs -a size=1024M /tmp"
    end
    execute "hd9var" do
      command "chfs -a size=512M /var"
    end
    execute "/apps" do
      command "crfs -v jfs2 -g rootvg -m /apps -Ay -a size=1M ; chlv -n appslv fslv00"
      not_if { ::File.exists?("/dev/appslv")}
    end
    mount "/apps" do
      device "/dev/appslv"
      fstype "jfs2"
    end
    
  • ntp: the ntp.conf.erb file, located in the templates directory, is rendered to /etc/ntp.conf:
  • template "/etc/ntp.conf" do
      source "ntp.conf.erb"
    end
    
  • dns: the resolv.conf.erb file, located in the templates directory, is rendered to /etc/resolv.conf:
  • template "/etc/resolv.conf" do
      source "resolv.conf.erb"
    end
    
  • create_user_group: a user for TADDM is created:
  • user "taddmux" do
      gid 'sys'
      uid 421
      home '/home/taddmux'
      comment 'user TADDM connect SSH'
    end
    

Templates

In the recipes above, templates are used for the ntp and dns configuration. Template files are files in which some strings are replaced by Chef attributes found in roles, environments, ohai, or even directly in recipes. Here are the two files I use for dns and ntp:

  • ntp.conf.erb: the ntpserver1,2,3 attributes are found in environments (let's say I have siteA and siteB and the ntp servers differ for each site; I can define one environment for siteA and one for siteB):
  • [..]
    server <%= node['ntpserver1'] %>
    server <%= node['ntpserver2'] %>
    server <%= node['ntpserver3'] %>
    driftfile /etc/ntp.drift
    tracefile /etc/ntp.trace
    
  • resolv.conf.erb, nameserver1,2,3 and namesearch are found in environments:
  • search  <%= node['namesearch'] %>
    nameserver      <%= node['nameserver1'] %>
    nameserver      <%= node['nameserver2'] %>
    nameserver      <%= node['nameserver3'] %>
    

role assignment

Chef roles can be used to run different Chef recipes depending on the type of server you want to post-install. You can for instance create a role for web servers in which the Websphere recipe will be executed, and a role for database servers in which the Oracle recipe will be executed. In my case, and for the simplicity of this example, I just create one role called aix7:

# knife role create aix7
Created role[aix7]
# knife role edit aix7
{
  "name": "aix7",
  "description": "",
  "json_class": "Chef::Role",
  "default_attributes": {
    "nim_server": "nimsrv01"
  },
  "override_attributes": {

  },
  "chef_type": "role",
  "run_list": [
    "recipe[aix7::ohai_custom]",
    "recipe[aix7::create_fs_rootvg]",
    "recipe[aix7::create_profile_root]",
    "recipe[aix7::test_ohai]",
    "recipe[aix7::install_ssh]",
    "recipe[aix7::install_sddpcm]",
    "recipe[aix7::ntp]",
    "recipe[aix7::dns]"
  ],
  "env_run_lists": {

  }
}

We can see two important things here. We created an attribute specific to this role called nim_server: in all recipes and templates, "node['nim_server']" will be replaced by nimsrv01 (remember the recipes above, and remember we told chef-client to run the aix7 role). We also created a run_list stating that the recipes coming from the aix7 cookbook (install_ssh, install_sddpcm, ...) should be executed on a server calling chef-client with the aix7 role.

environments

Chef environments can be used to separate your environments, for instance production, development, backup, or in my example, sites. In my company, depending on the site on which you are building a machine, the nameservers and ntp servers differ. Remember that we are using template files for the resolv.conf and ntp.conf files:

knife environment show siteA
chef_type:           environment
cookbook_versions:
default_attributes:
  namesearch:  lab.chmod666.org chmod666.org
  nameserver1: 10.10.10.10
  nameserver2: 10.10.10.11
  nameserver3: 10.10.10.12
  ntpserver1:  11.10.10.10
  ntpserver2:  11.10.10.11
  ntpserver3:  11.10.10.12
description:         production site
json_class:          Chef::Environment
name:                siteA
override_attributes:

When chef-client is called with the -E siteA option, node['namesearch'] will be replaced by "lab.chmod666.org chmod666.org" in all recipes and template files, for example:
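
For example, you can run the client by hand against this environment:

# chef-client -E siteA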

A Chef run

When you are ok with your cookbook, upload it to the Chef server:

# knife cookbook upload aix7
Uploading aix7           [0.1.0]
Uploaded 1 cookbook.

When chef-client is not executed by cloud-init you can run it by hand. I thought it would be interesting to put the output of a chef-client run here; you can see that files are modified, packages installed and so on ;-) :

chef-clientrun1
chef-clientrun2

Ohai

ohai is a command delivered with chef-client. Its purpose is to gather information about the machine on which chef-client is executed. Each time chef-client runs, a call to ohai is made. By default ohai gathers a lot of information such as the ip address of the machine, the lpar id, the lpar name, and so on. A call to ohai returns a json tree, and each element of this tree can be accessed in Chef recipes or templates. For instance, to get the lpar name you can use node['virtualization']['lpar_name']. Here is an example of a single call to ohai:

# ohai | more
  "ipaddress": "10.244.248.56",
  "macaddress": "FA:A3:6A:5C:82:20",
  "os": "aix",
  "os_version": "1",
  "platform": "aix",
  "platform_version": "7.1",
  "platform_family": "aix",
  "uptime_seconds": 14165,
  "uptime": "3 hours 56 minutes 05 seconds",
  "virtualization": {
    "lpar_no": "7",
    "lpar_name": "s00va9940866-ada56a6e-0000004d"
  },
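
As mentioned above, each of these attributes can be used directly in a recipe or a template. For instance, a purely illustrative resource logging the lpar name during a Chef run could look like this:

log "lpar_information" do
  message "chef-client is running on lpar #{node['virtualization']['lpar_name']}"
  level :info
end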

At the time of writing this blog post there are -in my humble opinion- some attributes missing in ohai. For instance, if you want to install a specific package from an lpp_source you first need to know your current oslevel (I mean the output of oslevel -s). Fortunately ohai can be extended with custom plugins, and you can add whatever attributes you want.

  • In ohai 7 (the one shipped with chef-client 12) an option needs to be added to the Chef client.rb configuration to tell ohai where the plugins are located. Remember that the chef-client is configured by cloud-init, so you need to modify the template used by cloud-init to build the client.rb file. This one is located in /opt/freeware/etc/cloud/templates:
  • # tail -1 /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl
    Ohai::Config[:plugin_path] << '/etc/chef/ohai_plugins'
    # mkdir -p /etc/chef/ohai_plugins
    
  • After this modification the machine is ready to be captured.
  • You want your custom ohai plugins to be uploaded to the chef-client machine at chef-client execution time. Here is an example of a custom ohai plugin shipped as a template. This one gathers the oslevel (oslevel -s), the node name, the partition name and the memory mode of the machine. These attributes are gathered with the lparstat command:
  • Ohai.plugin(:Aixcustom) do
      provides "aixcustom"
    
      collect_data(:aix) do
        aixcustom Mash.new
    
        oslevel = shell_out("oslevel -s").stdout.split($/)[0]
        nodename = shell_out("lparstat -i | awk -F ':' '$1 ~ \"Node Name\" {print $2}'").stdout.split($/)[0]
        partitionname = shell_out("lparstat -i | awk -F ':' '$1 ~ \"Partition Name\" {print $2}'").stdout.split($/)[0]
        memorymode = shell_out("lparstat -i | awk -F ':' '$1 ~ \"Memory Mode\" {print $2}'").stdout.split($/)[0]
    
        aixcustom[:oslevel] = oslevel
        aixcustom[:nodename] = nodename
        aixcustom[:partitionname] = partitionname
        aixcustom[:memorymode] = memorymode
      end
    end
    
  • The custom ohai plugin is written. Remember that you want it to be uploaded to the machine at chef-client execution time, and the new attributes created by this plugin need to be registered in ohai. Here is a recipe uploading the custom ohai plugin; when the plugin is uploaded, ohai is reloaded and the new attributes can be used in any further templates (for recipes you have no other choice than putting the custom ohai plugin in the directory before the capture):
  • cat ~/.chef/cookbooks/aix7/recipes/ohai_custom.rb
    ohai "reload" do
      action :reload
    end
    
    template "/etc/chef/ohai_plugins/aixcustom.rb" do
      notifies :reload, "ohai[reload]", :immediately
    end
    

chef-server, chef workstation, knife

I will not detail here how to set up a Chef server or how to configure your Chef workstation (knife); there are plenty of good tutorials about that on the internet. Just note that you need to use Chef server 12 if you are using Chef client 12.

I had some difficulties during the configuration; here are a few tricks to know (and a minimal knife.rb sketch just after them):

  • The CA certificate (cacert) can be found here: /opt/opscode/embedded/ssl/cert/cacert.pem
  • The Chef validation key can be found in /etc/chef/chef-validator.pem
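
For reference, a minimal knife.rb could look like the sketch below. The server url, organization, validator name and the two paths above are the ones used in this post; everything else (user name, key paths) is just an example to adapt to your own setup:

# minimal knife.rb sketch for a kadmin workstation user (values are examples)
log_level                :info
node_name                "kadmin"
client_key               "/home/kadmin/.chef/kadmin.pem"
validation_client_name   "chmod666-validator"
validation_key           "/etc/chef/chef-validator.pem"
chef_server_url          "https://chefserver.lab.chmod666.org/organizations/chmod666"
cookbook_path            ["/home/kadmin/.chef/cookbooks"]
ssl_ca_file              "/opt/opscode/embedded/ssl/cert/cacert.pem"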

Building the machine, checking the logs

  • The write_files part was executed; the file is present in the /tmp filesystem:
  • # cat /tmp/cloud-init-started
    cloud-init was started on this server
    
  • The chef-client was configured: files are present in the /etc/chef directory, and looking at the log file we can see these files were created by cloud-init:
  • # ls -l /etc/chef
    total 32
    -rw-------    1 root     system         1679 Apr 26 23:46 client.pem
    -rw-r--r--    1 root     system          646 Apr 26 23:46 client.rb
    -rw-r--r--    1 root     system           38 Apr 26 23:46 firstboot.json
    -rw-r--r--    1 root     system         1679 Apr 26 23:46 validation.pem
    
    # grep chef /var/log/cloud-init-output.log
    2015-04-26 23:46:22,463 - importer.py[DEBUG]: Found cc_chef with attributes ['handle'] in ['cloudinit.config.cc_chef']
    2015-04-26 23:46:22,879 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instances/a8b8fe0d-34c1-4bdb-821c-777fca1c391f/sem/config_chef - wb: [420] 23 bytes
    2015-04-26 23:46:22,882 - helpers.py[DEBUG]: Running config-chef using lock ()
    2015-04-26 23:46:22,884 - util.py[DEBUG]: Writing to /etc/chef/validation.pem - wb: [420] 1679 bytes
    2015-04-26 23:46:22,887 - util.py[DEBUG]: Reading from /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl (quiet=False)
    2015-04-26 23:46:22,889 - util.py[DEBUG]: Read 892 bytes from /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl
    2015-04-26 23:46:22,954 - util.py[DEBUG]: Writing to /etc/chef/client.rb - wb: [420] 646 bytes
    2015-04-26 23:46:22,958 - util.py[DEBUG]: Writing to /etc/chef/firstboot.json - wb: [420] 38 bytes
    
  • The runcmd part was executed:
  • # cat /opt/freeware/var/lib/cloud/instance/scripts/runcmd
    #!/bin/sh
    /usr/bin/chef-client
    
    2015-04-26 23:46:22,488 - importer.py[DEBUG]: Found cc_runcmd with attributes ['handle'] in ['cloudinit.config.cc_runcmd']
    2015-04-26 23:46:22,983 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instances/a8b8fe0d-34c1-4bdb-821c-777fca1c391f/sem/config_runcmd - wb: [420] 23 bytes
    2015-04-26 23:46:22,986 - helpers.py[DEBUG]: Running config-runcmd using lock ()
    2015-04-26 23:46:22,987 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instances/a8b8fe0d-34c1-4bdb-821c-777fca1c391f/scripts/runcmd - wb: [448] 31 bytes
    2015-04-26 23:46:25,868 - util.py[DEBUG]: Running command ['/opt/freeware/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=False, capture=False)
    
  • The final message was printed in the cloud-init log file:
  • 2015-04-26 23:06:01,203 - helpers.py[DEBUG]: Running config-final-message using lock ()
    The system is up, cloud-init is finished
    2015-04-26 23:06:01,240 - util.py[DEBUG]: The system is up, cloud-init is finished
    2015-04-26 23:06:01,242 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instance/boot-finished - wb: [420] 57 bytes
    

On the Chef server you can check that the client registered itself and get details about it:

# knife node list | grep a8b8fe0d-34c1-4bdb-821c-777fca1c391f
a8b8fe0d-34c1-4bdb-821c-777fca1c391f
# knife node show a8b8fe0d-34c1-4bdb-821c-777fca1c391f
Node Name:   a8b8fe0d-34c1-4bdb-821c-777fca1c391f
Environment: _default
FQDN:
IP:          10.10.208.61
Run List:    role[aix7]
Roles:       france_testing
Recipes:     aix7::create_fs_rootvg, aix7::create_profile_root
Platform:    aix 7.1
Tags:

What's next?

If you have a look at the Chef supermarket (the place where you can download Chef cookbooks written by the community and validated by Opscode) you'll see that there are not a lot of cookbooks for AIX. I'm currently writing my own cookbook for the AIX logical volume manager and filesystem creation, but there is still a lot of work to do on cookbook creation for AIX. Here is a list of cookbooks that need to be written by the community: chdev, multibos, mksysb, nim client, wpar, update_all, ldap_client .... I could go on, but I'm sure you have plenty of ideas of your own. Last word: learn Ruby and write cookbooks, they will be used by the community and we will finally have a good configuration management tool on AIX. With PowerVC, cloud-init and Chef support, AIX will have a full "DevOps" stack and can finally fight against Linux. As always, I hope this blog post helps you to understand PowerVC, cloud-init and Chef!