PowerVC Express using local storage : Overview, tips and tricks, and lessons learned from experience

Everybody has been talking about PowerVC since the October 8th announcement, but after watching a few videos and reading a few articles about it, I didn’t find anything explaining what the product really has in its guts. I had the chance to deploy and test the PowerVC Express version (using local storage), faced a lot of problems, and found some interesting things to share with you. Rather than boiling the ocean :-) and asking for new features (oh, everybody wants new features !), here is a practical how-to, some tips and tricks, and the lessons I’ve learned. After a few weeks of work I can say that PowerVC is really good and pretty simple to deploy and use. Here we go :

Preparing the PowerVC express host

Setting SELinux from enforcing to permissive

cool1

Please refer to my previous post about installing Linux on Power if you have any doubt about this. Before trying to install PowerVC Express edition you first have to disable SELinux, or at least set the policy from enforcing to permissive. Please note that a reboot is mandatory after this modification :

# sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
# grep ^SELINUX /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted

Reboot the PowerVC express host :

[root@powervc-standard ~]# shutdown -fh now
Broadcast message from root@powervc-standard
(/dev/pts/0) at 15:43 ...
The system is going down for halt NOW!

Yum repository

cool2

Before running the installer you have to configure your yum repository, because the installer needs to install RPMs shipped with the Red Hat Enterprise Linux installation CD-ROM. I chose to use the CD-ROM as the repository, but it can also be served through HTTP without any problem :

# mkdir /mnt/cdrom ; mount -o loop /dev/cdrom /mnt/cdrom
# cat /etc/yum.repos.d/rhel-cdrom.repo
[rhel-cdrom]
name=RHEL Cdrom
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1
# yum update
# yum upgrade
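
To create the repo file in one shot, a heredoc does the job. Here is a minimal sketch; the function wrapper is my own addition so the target path can be overridden (on the PowerVC host, pass /etc/yum.repos.d/rhel-cdrom.repo) :

```shell
# Sketch: generate the yum repo file shown above in one command.
# The target path is an argument so the function can be tested on a
# scratch file before touching /etc/yum.repos.d.
write_cdrom_repo() {
  cat > "$1" <<'EOF'
[rhel-cdrom]
name=RHEL Cdrom
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1
EOF
}
```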

If using x86 version : noop scheduler

If you are using the x86 version of PowerVC Express you may experience some slowness while installing the product. In my case I had to change the I/O scheduler from cfq to noop. My advice is to enable it only temporarily. Before changing the I/O scheduler to noop, my installation of PowerVC Express took hours (no joke, almost 5 hours); enabling this option reduced the time to half an hour (in my case) :

# cat /sys/block/vda/queue/scheduler
noop anticipatory deadline [cfq]
# echo "noop" > /sys/block/vda/queue/scheduler
# cat /sys/block/vda/queue/scheduler
[noop] anticipatory deadline cfq
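
Sketched as a small function; SYSFS_ROOT is my own parameter so the logic can be exercised against a mock tree (on the real host it defaults to /sys) :

```shell
# Sketch: temporarily switch a block device's I/O scheduler to noop,
# as done above for vda. SYSFS_ROOT is an assumption (for testing only);
# on a real host the default /sys is used and the change does not
# survive a reboot, which is exactly what we want here.
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"
set_noop_scheduler() {
  local f="$SYSFS_ROOT/block/$1/queue/scheduler"
  [ -f "$f" ] || return 1   # unknown device: fail instead of creating files
  echo noop > "$f"
}
```

On the real host : set_noop_scheduler vda.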

PATH modification

Add /opt/ibm/powervc/bin to your PATH so you can run PowerVC commands such as powervc-console-term, powervc-services, powervc-get-token, and so on :

# more /root/.bash_profile
PATH=$PATH:$HOME/bin:/opt/ibm/powervc/bin

I’ll not detail the installation here : just run the installer and answer its questions :

# ./install
Select the offering type to install:
   1 - Express  (IVM support)
   2 - Standard (HMC support)
   9 - Exit
1
Extracting license content
International Program License Agreement
[..]

Preparing the Virtual I/O Server and the IVM

Before doing anything you have to configure the Virtual I/O Server and the IVM. Check that all the points below are OK before registering the host :

  • You need at least one Shared Ethernet Adapter to use PowerVC Express; on an IVM you can have up to four Shared Ethernet Adapters.
  • A virtual media repository created with at least 40 GB free.
  • # lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
       40795    29317 rootvg                   419328           342272
    
  • A PowerVM enterprise edition or PowerVM for IBM PowerLinux key.
  • The maximum number of virtual adapters correctly configured (in my case 256).
  • The maximum number of ssh sessions opened on the Virtual I/O Server has to be at least 20.
  • # grep MaxSessions /etc/ssh/sshd_config
    MaxSessions 20
    # stopsrc -s sshd
    # startsrc -s sshd
    
  • FTP transfers are allowed and the FTP ports are open between the PowerVC host and the IVM.
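
The checks above can be scripted before registering the host. Here is a minimal sketch for the MaxSessions prerequisite (the function name is mine; the config path is an argument so the check can run against any file) :

```shell
# Sketch: verify the sshd MaxSessions prerequisite (at least 20),
# one of the checks listed above. Pass /etc/ssh/sshd_config on a real host.
check_max_sessions() {
  local cfg="${1:-/etc/ssh/sshd_config}"
  local n
  n=$(awk '$1 == "MaxSessions" {print $2}' "$cfg")
  # fail if the directive is absent or below the required minimum
  [ -n "$n" ] && [ "$n" -ge 20 ]
}
```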

PowerVC usage

Host Registering

Registering the host is very easy, but it is one of the most important steps in this configuration. Just set your IVM hostname, user and password. The tricky part is to check the box to use local storage; you then have to choose the directory where images will be stored. Be careful when choosing this directory : it can’t be changed on the fly, and you have to remove and re-register the host if you want to change it. My advice is not to choose the default /home/padmin directory, but to create a dedicated logical volume for this.

deckard

If the host registration fails, check all the Virtual I/O Server prerequisites, then retry. If it fails again, check /var/log/nova/api.log and /var/log/nova/compute_xxx.log.

host1

Manage existing Virtual Machines

Unlike VMControl, PowerVC allows you to manage existing machines, so if your IVM is correctly configured you’ll have no trouble importing existing machines and managing them with PowerVC. This is one of the strengths of PowerVC : it assures backward compatibility for your existing Virtual I/O clients. And it’s simple to use (look at the images below) :

manage_existing
manage_existing1

Network Definition

Create a network using one of your Shared Ethernet Adapters to be able to deploy machines :

network_1

First installation with ISO image

Importing ISO images

For the first installation (if you do not have any virtual machines already installed on your system) you first need to import an ISO into PowerVC. Read the next steps very carefully, because I had a lot of space problems with this. Importing images is managed by glance, so if you have any problem, checking the files in /var/log/glance can be useful (as can enabling verbose mode in /etc/glance/glance.conf). Just use the powervc-iso-import command to do so :

# powervc-iso-import --name aix-7100-02-02 --os-distro aix --location /root/AIX_7.1_Base_Operating_System_TL_7100-02-02_DVD_1_of_2_32013.iso 
Password: 
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| Property 'architecture'    | ppc64                                |
| Property 'hypervisor_type' | powervm                              |
| Property 'os_distro'       | aix                                  |
| checksum                   | df548a0cc24dbec196d0d3ead92feaca     |
| container_format           | bare                                 |
| created_at                 | 2014-02-04T19:45:29.125109           |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | iso                                  |
| id                         | ee0a6544-c065-4ab7-aec8-7d6ee4248672 |
| is_public                  | True                                 |
| min_disk                   | 0                                    |
| min_ram                    | 0                                    |
| name                       | aix-7100-02-02                       |
| owner                      | 437b161186414e2bb0d4778cbd6fa14c     |
| protected                  | False                                |
| size                       | 3835723776                           |
| status                     | active                               |
| updated_at                 | 2014-02-04T19:49:29.031481           |
+----------------------------+--------------------------------------+

importing_iso

The output of the command above does not tell you anything about what the command really does.

Images are stored permanently in /var/lib/glance/images, where they are copied by powervc-iso-import; this is the place where you need free space. Don’t forget to remove your source image from the PowerVC host, or you’ll need even more space (in fact double the space :-)). Watching /var/lib/glance/images while powervc-iso-import is running shows that the image is being copied :

# ls -lh /var/lib/glance/images
total 3.3G
-rw-r-----. 1 glance glance 3.3G Feb  4 22:08 3b95401b-85b4-4682-a7a5-332ea9e48348
# ls -lh /var/lib/glance/images
total 3.4G
-rw-r-----. 1 glance glance 3.4G Feb  4 22:09 3b95401b-85b4-4682-a7a5-332ea9e48348

image_import2
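
As a precaution, the free-space concern above can be checked before launching powervc-iso-import. A minimal sketch (the function name is mine, and both paths are parameters so it can be pointed anywhere) :

```shell
# Sketch: check that the destination filesystem (e.g. /var/lib/glance/images)
# has at least as much free space as the ISO to import. Remember the doubled
# space requirement above if the source ISO sits on the same filesystem.
enough_space_for_iso() {
  local iso="$1" dest_dir="$2"
  local need_kb avail_kb
  need_kb=$(( ($(stat -c %s "$iso") + 1023) / 1024 ))   # ISO size in KB
  avail_kb=$(df -Pk "$dest_dir" | awk 'NR==2 {print $4}')  # free KB on dest
  [ "$avail_kb" -ge "$need_kb" ]
}
```

For example : enough_space_for_iso /root/your.iso /var/lib/glance/images || echo "not enough space".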

Deploying a Virtual Machine with an ISO image :

Be careful when deploying images to have enough space in the /home/padmin directory of the Virtual I/O Server : images are first copied to this directory before being made available in the Virtual I/O Server media repository in /var/vio/VMLibrary (they are -apparently- removed later). On the PowerVC host itself, be careful to have enough space in /var/lib/nova/images and /var/lib/glance/images. On the PowerVC host images are stored by glance, so DON’T DELETE IMAGES in /var/lib/glance/images ! My understanding is that images are copied on the fly from glance (/var/lib/glance/images), where they are stored by powervc-iso-import, to nova (/var/lib/nova/images), from where they are sent to the Virtual I/O Server and added to its media repository. PowerVC uses FTP to copy files to the Virtual I/O Server, so be sure to have the ports open between the PowerVC host and the Virtual I/O Server.

  • Here is an example of an ISO file present in /home/padmin on the Virtual I/O Server when deploying a server with an image; below we can see that the image was copied to /var/lib/nova/images before being copied to the Virtual I/O Server :
  • padmin@deckard# ls
    config                                                  rhel-server-6.4-beta-ppc64-dvd.iso                      smit.script
    89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso  rhel-server-ppc-6.4-boot.iso                            smit.transaction
    ioscli.log                                              smit.log                                                tivoli
    [root@powervc-express ~]# ls -l /var/lib/nova/images/
    total 4579236
    -rw-r--r--. 1 nova nova 4689133568 Feb  4 22:31 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso
    
  • Once images are copied from PowerVC they are imported to the Virtual I/O Server repository :
  • padmin@deckard# ps -ef | grep mkvopt
      padmin  6422716  8519802   0 05:41:42      -  0:00 ioscli mkvopt -name 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e -file /home/padmin/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso -ro
      padmin  8519802 10485798   0 05:41:42      -  0:00 rksh -c ioscli mkvopt -name 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e -file /home/padmin/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso -ro;echo $?
      padmin 10158232  9699504   2 05:42:30  pts/0  0:00 grep mkvopt
    
  • The Virtual Optical Device is then used to load the CD-ROM into the partition :
  • padmin@deckard# lsmap -all
    SVSA            Physloc                                      Client Partition ID
    --------------- -------------------------------------------- ------------------
    vhost0          U8203.E4A.06E7E53-V1-C11                     0x00000002
    
    VTD                   vtopt0
    Status                Available
    LUN                   0x8200000000000000
    Backing device        /var/vio/VMLibrary/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e
    Physloc               
    Mirrored              N/A
    
    VTD                   vtscsi0
    Status                Available
    LUN                   0x8100000000000000
    Backing device        lv00
    Physloc               
    Mirrored              N/A
    
    padmin@deckard# lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
       40795    29317 rootvg                   419328           342272
    
    Name                                                  File Size Optical         Access 
    89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e                       4472 vtopt0          ro     
    fa9b3cf0-a649-4bf0-b309-5f2bab6379ea                       3659 None            ro     
    rhel-server-ppc-6.4-boot.iso                                227 None            ro     
    rhel-server-ppc-6.4.iso                                    3120 None            ro     
    
    

When deploying an ISO image, after all the steps above are finished the newly created Virtual Machine will be in shutoff state :

shutoff_before_start

Run the console term before starting the Virtual Machine, then start the Virtual Machine (from PowerVC) :

# powervc-console-term tyler61
Password: 
Starting terminal.
  • When deploying multiple hosts with the same image it is possible that some virtual machines will have the same name; in this case powervc-console-term will warn you :
  • # powervc-console-term --f mary
    Password: 
    Multiple servers were found with the same name. Specify the server ID.
    089ecbc5-5bed-4d06-8659-bf7c57529c95 mary
    231ad074-7557-42b5-82b9-82ae2483fccd mary
    powervc-console-term --f 089ecbc5-5bed-4d06-8659-bf7c57529c95
    
    padmin@deckard# ps -ef | grep -i mkvt
      padmin  2556048  8323232   0 06:39:15  pts/1  0:00 rksh -c ioscli rmvt -id 2 && ioscli mkvt -id 2 && exit
    

    starting_first

    Then follow the instructions on the screen to finish this first installation (as if you were installing AIX from the CD-ROM) :

    IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM 
    IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM 
    -
    Elapsed time since release of system processors: 151 mins 48 secs
    /
    Elapsed time since release of system processors: 151 mins 57 secs
    -------------------------------------------------------------------------------
                                    Welcome to AIX.
                       boot image timestamp: 18:20:10 02/04/2013
                     The current time and date: 05:00:31 02/05/2014
            processor count: 1;  memory size: 2048MB;  kernel size: 29153194
    boot device: /vdevice/v-scsi@30000002/disk@8200000000000000:\ppc\chrp\bootfile.exe
                           kernel debugger setting: enabled
    -------------------------------------------------------------------------------
    
    AIX Version 6.1
    
    

    Preparing the capture of the first installed Virtual Machine

    The Activation Engine

    Before capturing the Virtual Machine, run the Activation Engine. This script allows the machine to be captured and the captured image to be automatically reconfigured on the fly at first boot. Be careful when running the Activation Engine : the Virtual Machine will be shut off just by running this script.

    # scp 192.168.0.98:/opt/ibm/powervc/activation-engine/vmc.vsae.tar .
    # tar xvf vmc.vsae.tar
    x activation-engine-2.2-106.aix5.3.noarch.rpm, 1240014 bytes, 2422 media blocks.
    [..]
    x aix-install.sh, 2681 bytes, 6 media blocks.
    # rm /opt/ibm/ae/AP/*
    # cp /opt/ibm/ae/AS/vmc-network-restore/resetenv /opt/ibm/ae/AP/ovf-env.xml
    # export JAVA_HOME=/usr/java5/jre
    # ./aix-install.sh
    Install VSAE and VMC extensions
    package activation-engine-jython-2.2-106 is already installed
    package activation-engine-2.2-106 is already installed
    package vmc-vsae-ext-2.4.4-1 is already installed
    # /opt/ibm/ae/AE.sh --reset
    JAVA_HOME=/usr/java5/jre
    [..]
    [2014-02-04 23:49:51,980] INFO: OS: AIX Version: 6
    [..]
    [2014-02-04 23:51:20,095] INFO: Cleaning AR and AP directories
    [2014-02-04 23:51:20,125] INFO: Shutting down the system
    
    SHUTDOWN PROGRAM
    Tue Feb  4 23:51:21 CST 2014
    
    
    Broadcast message from root@tyler (tty) at 23:51:21 ... 
    
    shutdown: PLEASE LOG OFF NOW !!!
    System maintenance is in progress.
    All processes will be killed now. 
    
    Broadcast message from root@tyler (tty) at 23:51:21 ... 
    
    shutdown: THE SYSTEM IS BEING SHUT DOWN NOW
    
    [..]
    
    Wait for '....Halt completed....' before stopping. 
    Error reporting has stopped.
    

    Capturing the host

    Just select the virtual machine you want to capture, and click capture ;-) :

    powervc_capture_ted
    snapshot1

    Here are the steps performed by PowerVC when running a capture (so be careful to have enough space on the PowerVC host and the Virtual I/O Server before running it) :

    • Looking at the Virtual I/O Server itself, the main capture process is a simple dd command capturing the logical volume (or physical volume) used as the rootvg backing device; once the dd is finished, the result is gzipped (in /home/padmin) :
    • padmin@deckard# ps -ef | grep dd      
          root  5832754  7078058   9 06:54:09      -  0:01 dd if=/dev/lv00 bs=1024k
          root  7078058  9043976  82 06:54:09      -  0:14 dd if=/dev/lv00 bs=1024k
        padmin  8388674  9699504   2 06:59:20  pts/0  0:00 grep dd
      padmin@deckard# ls -l /home/padmin/5154d176-6c3b-4eda-aa20-998deb207ca8.gz
      -rw-r--r--    1 root     staff    6102452605 Feb 05 07:13 5154d176-6c3b-4eda-aa20-998deb207ca8.gz
      
    • The captured image is then transferred to nova with FTP (an ftpd process is spawned on the Virtual I/O Server) :
    • padmin@deckard# ps -ef | grep ftp
        padmin  7012516  9699504   1 07:20:47  pts/0  0:00 grep ftp
        padmin  7078072  4587660  47 07:14:18      -  0:11 ftpd
      [root@powervc-express ~]#  ls -l /var/lib/nova/images
      total 4666504
      -rw-r--r--. 1 nova nova 4778496000 Feb  5 00:22 5154d176-6c3b-4eda-aa20-998deb207ca8.gz
      
    • The image is then unzipped into glance :
    • # ls -l /var/lib/glance/images
      total 6453096
      -rw-r-----. 1 glance glance 1918828544 Feb  5 00:30 5154d176-6c3b-4eda-aa20-998deb207ca8
      -rw-r-----. 1 glance glance 4689133568 Feb  4 22:26 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e
      
    • During all these steps you can see that the Virtual Machine is in snapshot mode :
    • snapshot2

    • After the capture completes, you can have a look at the details :
    • image_3
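
The capture steps above (dd of the backing device, then gzip) can be sketched as a one-liner. This is an illustration of the observed behaviour, not the actual PowerVC code; the function name and parameters are mine :

```shell
# Sketch of what the capture boils down to, per the ps output above:
# dd the backing device and gzip the stream. On the VIOS the source is
# /dev/lv00 and the output lands in /home/padmin as <image-id>.gz.
capture_lv() {
  local device="$1" out="$2"
  dd if="$device" bs=1024k 2>/dev/null | gzip > "$out"
}
```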

    Deploying

    Deploying a host is very easy, just follow the instructions :

    • Here is an example of a deploy screen (I like visual things when reading documents :-)) :
    • deploy1

    • Choose on which host you want to deploy the machine. At this step you can select the number of instances to deploy (you have to have a DHCP network configured for multiple instances), and select the size of the machine (by default a few are pre-defined, but you can define your own templates) :
    • deploy_1
      choose

    • PowerVC is smart enough to show you a prediction of your machine usage : it will show you in yellow the overall usage of your Power server after the machine deployment (practical and visual, love it !) :
    • deploy_3

    • Then just wait for the deployment to finish; the steps are the same as for an ISO deployment, but the Activation Engine will be started at first boot to reconfigure the virtual machine :
    • deploy2

    Here is an image to sum up the capture, the ISO deployment and the deployment of a Virtual Machine; I think it’ll be easier to understand with a picture :

    powervc-deploy-capture

    Tips and tricks

    Using PowerVC express on Power 6 machines

    There are a few things not written in the manual. By looking at the source code you can find hidden options to add to the /etc/nova/nova.conf file. One very interesting option for PowerVC Express allows you to try it on a Power6 server : just add ivm_power6_enabled = true to /etc/nova/nova.conf, and restart the PowerVC services before adding any Power6 server. The piece of code can be found in the /usr/lib/python2.6/site-packages/powervc_discovery/registration/compute/ivm_powervm_registrar.py file :

    LOG.info("ivm_power6_enabled set to TRUE in nova.conf, "
             "so POWER6 will be allowed for testing")
    

    If you want to do so, just add it to the [DEFAULT] section of the /etc/nova/nova.conf file :

    # grep power6 /etc/nova/nova.conf
    ivm_power6_enabled = true
    

    Just for the story : I was sure this was possible, because the first presentation I found on the internet about PowerVC showed PowerVC Express on an 8203-E4A machine, which is a Power6 machine; the screenshots provided in that presentation were enough to tell me it was possible (don’t blame anybody for this). Then grep was my best friend to find where this option was hidden. Be aware that this option is only available for test purposes, so don’t open a PMR about it or it’ll be directly closed by IBM. Once again, if any IBMers are reading this, tell me if it is OK to publish this option; if not, I can remove it from the post.

    Enabling verbose and debug output

    PowerVC is not verbose at all, and when something is going wrong it’s sometimes difficult to check what is going on. First of all, the product is based on OpenStack, so you have access to all the OpenStack log files. These files are located in /var/log/nova, /var/log/glance and so on. By default, debug and verbose output are disabled for each OpenStack component. Enabling them is not supported by PowerVC, but it is possible. For instance, I had a problem with nova when registering a host; enabling verbose and debug mode in /etc/nova/nova.conf helped me a lot and let me check the ssh commands run on the Virtual I/O Server (look at the example below) :

    # grep -iE "verbose|debug" /etc/nova/nova.conf
    verbose=true
    debug=true
    # vi /var/log/nova/compute-192_168_0_100.log
    2014-02-16 22:07:48.523 13090 ERROR powervc_nova.virt.ibmpowervm.ivm.exception [req-a4bee79a-5eb8-43fd-8ca6-ed75ebee880f 04c4ca89f32046ed91e0493c9e554d1d 437b161186414e2bb0d4778cbd6fa14c] Unexpected exception while running IVM command.
    Command: mksyscfg -r lpar -i "max_virtual_slots=64,max_procs=4,lpar_env=aixlinux,desired_procs=1,min_procs=1,proc_mode=shared,virtual_eth_adapters=\"36/0/1//0/0\",desired_proc_units=0.100000000000000,sharing_mode=uncap,min_mem=512,desired_mem=512,virtual_eth_mac_base_value=fa3f3d3cae,max_proc_units=4,lpar_proc_compat_mode=default,name=priss-6712136f-000000cd,max_mem=4096,min_proc_units=0.1"
    Exit code: 1
    Stdout: []
    Stderr: ['[VIOSE01040181-0025] Value for attribute desired_proc_units is not valid.', '']
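
When digging through these logs, a tiny helper can pull out just the error lines and the failing IVM commands. A sketch (the function name is mine; the log path is an argument) :

```shell
# Sketch: extract the ERROR lines plus the Command/Stderr lines from a
# nova compute log, like the failed mksyscfg shown above.
nova_errors() {
  grep -E 'ERROR|^Command:|^Stderr:' "$1"
}
```

For example : nova_errors /var/log/nova/compute-192_168_0_100.log.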
    

    Using the PowerVC Rest API

    Systems engineers and systems administrators like me rarely use REST APIs. If you want to automate some PowerVC actions, such as deploying virtual machines without going through the web interface, you have to use the REST API provided with PowerVC. First of all, here are the places where you’ll find some useful documentation for the PowerVC REST API :

    • On the PowerVC infocenter, you’ll find good tips and tricks for using the REST API :
    • The PowerVC programming guide :

    PowerVC provides a script that uses the REST API to generate an API token, used for each call to the API. This script is written in Python, so I decided to take it as a reference to develop my own scripts based on it :

    • You first have to use powervc-get-token to generate a token used to call the API. In general, GET requests are used to query PowerVC (list virtual machines, list networks), and POST requests to create things (create a network, create a virtual machine).
    • Get an API token to begin, by using powervc-get-token :
    • # powervc-get-token 
      Password: 
      323806024c70455d84a7a1db900a4f89
      
    • To create a virtual machine you’ll need to know three things : the tenant, the network on which the VM will be deployed, and the image used to deploy the server.
    • Here is the script I used to get the tenant (url : /powervc/openstack/identity/v2.0/tenants) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          body = ""
      
          conn.request("GET", "/powervc/openstack/identity/v2.0/tenants", body, headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • By running the script I get the tenant id 437b161186414e2bb0d4778cbd6fa14c :
    • # ./powervc-get-tenants
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      {
          "tenants": [
              {
                  "description": "IBM Default Tenant", 
                  "enabled": true, 
                  "id": "437b161186414e2bb0d4778cbd6fa14c", 
                  "name": "ibm-default"
              }
          ], 
          "tenants_links": []
      }
      
    • Here is the script I used to get the network id (url :/powervc/openstack/network/v2.0/networks) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          print "Tenant ID = "+tenant_id
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          body = ""
      
          conn.request("GET", "/powervc/openstack/network/v2.0/networks", body, headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • By running the script I get the network id 83e233a7-34ef-4bf2-ae95-958046da770f :
    • # ./powervc-list-networks
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
      Tenant ID = 437b161186414e2bb0d4778cbd6fa14c
      {
          "networks": [
              {
                  "admin_state_up": true, 
                  "id": "83e233a7-34ef-4bf2-ae95-958046da770f", 
                  "name": "local_net", 
                  "provider:network_type": "vlan", 
                  "provider:physical_network": "default", 
                  "provider:segmentation_id": 1, 
                  "shared": false, 
                  "status": "ACTIVE", 
                  "subnets": [
                      "6b76f7e6-02fa-427f-9032-e8d28aaa6ef4"
                  ], 
                  "tenant_id": "437b161186414e2bb0d4778cbd6fa14c"
              }
          ]
      }
      
    • Here is the script I used to get the image id (url : /powervc/openstack/compute/v2/<tenant_id>/images) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          print "Tenant ID ="+tenant_id
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
          body = ""
      
          conn.request("GET", "/powervc/openstack/compute/v2/"+tenant_id+"/images", body, headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • By running the script I get the image id 0537da41-8542-41a0-b1b0-84ed75c6ed27 :
    • # ./powervc-list-images
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
      Tenant ID = 437b161186414e2bb0d4778cbd6fa14c
      {
          "images": [
              {
                  "id": "0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                  "links": [
                      {
                          "href": "http://localhost:8774/v2/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                          "rel": "self"
                      }, 
                      {
                          "href": "http://localhost:8774/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                          "rel": "bookmark"
                      }, 
                      {
                          "href": "http://192.168.0.12:9292/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                          "rel": "alternate", 
                          "type": "application/vnd.openstack.image"
                      }
                  ], 
                  "name": "ted_capture_201402161858"
              }
          ]
      }
      
    • With all this information, the token (a3a9904fa5a24a24aa6833358f54c7ce), the tenant id (437b161186414e2bb0d4778cbd6fa14c), the network id (83e233a7-34ef-4bf2-ae95-958046da770f) and the image id (0537da41-8542-41a0-b1b0-84ed75c6ed27), I created a script to create a virtual machine (url : /powervc/openstack/compute/v2/<tenant_id>/servers) :
    • import httplib
      import json
      import os
      import sys
      
      def main():
          token = raw_input("Please enter PowerVC token : ")
          print "PowerVC token used = "+token
          tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
          print "Tenant ID ="+tenant_id
      
          conn = httplib.HTTPSConnection('localhost')
          headers = {"X-Auth-Token":token, "Content-type":"application/json"}
      
          body = {
            "server": {
              "flavor": {
                "OS-FLV-EXT-DATA:ephemeral": 10,
                "disk": 10,
                "extra_specs": {
                  "powervm:proc_units": 1
                },
                "ram": 512,
                "vcpus": 1
              },
              "imageRef": "0537da41-8542-41a0-b1b0-84ed75c6ed27",
              "max_count": 1,
              "name": "api",
              "networkRef": "83e233a7-34ef-4bf2-ae95-958046da770f",
              "networks": [
                {
                "fixed_ip": "192.168.0.21",
                "uuid": "83e233a7-34ef-4bf2-ae95-958046da770f"
                }
              ]
            }
          }
      
          conn.request("POST", "/powervc/openstack/compute/v2/"+tenant_id+"/servers",
                       json.dumps(body), headers)
          response = conn.getresponse()
          raw_response = response.read()
          conn.close()
          json_data = json.loads(raw_response)
          print json.dumps(json_data, indent=4, sort_keys=True)
      
      if __name__ == "__main__":
          main()
      
    • Running the script will finally create the virtual machine; you can check that it is in deploying state in the PowerVC web interface :
    • api

      # ./powervc-create-vm 
      Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
      PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
      Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
      Tenant ID =437b161186414e2bb0d4778cbd6fa14c
      {
          "server": {
              "OS-DCF:diskConfig": "MANUAL", 
              "adminPass": "LE2bqbA2y87X", 
              "id": "0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
              "links": [
                  {
                      "href": "http://localhost:8774/v2/437b161186414e2bb0d4778cbd6fa14c/servers/0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
                      "rel": "self"
                  }, 
                  {
                      "href": "http://localhost:8774/437b161186414e2bb0d4778cbd6fa14c/servers/0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
                      "rel": "bookmark"
                  }
              ], 
              "security_groups": [
                  {
                      "name": "default"
                  }
              ]
          }
      }
      

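    The script above prompts for a PowerVC token; with Keystone v3 (used by PowerVC) you get one by POSTing a password-authentication body to the identity service's /v3/auth/tokens endpoint, and the token comes back in the X-Subject-Token response header. Here is a minimal sketch of how that body can be built; the user, password and project names ("root", "ibm-default") are only illustrative assumptions, adapt them to your installation :

```python
import json

def build_auth_body(user, password, project):
    # Keystone v3 password-authentication payload. POST this (as JSON) to
    # the identity service's /v3/auth/tokens endpoint of your PowerVC host;
    # the token is returned in the X-Subject-Token header of the response.
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"name": "Default"},
                        "password": password
                    }
                }
            },
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"name": "Default"}
                }
            }
        }
    }

# Example: "ibm-default" is assumed to be the PowerVC project name here
body = build_auth_body("root", "mypassword", "ibm-default")
print(json.dumps(body, indent=4, sort_keys=True))
```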
    Backup and restore

    PowerVC does not have any HA solution, so my advice is to run it on an IP alias and to have a second dormant PowerVC instance ready to be set up at the time you need it. To do so, my advice is to regularly run a powervc-backup (why not in crontab ?). If you need to restore PowerVC on the dormant instance, the only thing to do is to restore the backup (put it in /var/opt/ibm/powervc/backups before running powervc-restore). The backup/restore is just an export/import of each db2 database (cinder, glance, nova, …), so it can take space and time (in my case the backup takes 8GB and restoring it on the dormant instance took me 1 hour).
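    For the crontab idea above, a minimal sketch could look like the entry below. Note that the schedule and the log path are my own choices, and I am assuming powervc-backup accepts the same --noPrompt flag as powervc-restore to skip the interactive confirmation; check with your version before relying on it :

```shell
# crontab fragment : run powervc-backup every Sunday at 02:00 and keep a log.
# --noPrompt is assumed to skip the "Do you want to continue?" question.
0 2 * * 0 /usr/bin/powervc-backup --noPrompt >> /var/log/powervc-backup-cron.log 2>&1
```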

    • Backing up :
    • # powervc-backup 
      Continuing with this operation will stop all PowerVC services.  Do you want to continue?  (y/N):y
      PowerVC services stopped.
      Database CINDER backup completed.
      Database QTM_IBM backup completed.
      Database NOSQL backup completed.
      Database NOVA backup completed.
      Database GLANCE backup completed.
      Database KEYSTONE backup completed.
      Database and file backup completed. Backup data is in archive /var/opt/ibm/powervc/backups/20142199544840294/powervc_backup.tar.gz.
      PowerVC services started.
      PowerVC backup completed successfully.
      
    • Restoring :
    • # powervc-restore --noPrompt
      

    Places to check

    Finding information about PowerVC is not so simple; the product is still young and there is not much feedback and information about it. Here are a few places to check if you have any problems. Keep in mind that the community is very active and growing day by day :

    If I have one last word to say, this one will be : future. In my opinion PowerVC is the future for deployment on Power Systems. I had the chance to use both VMcontrol and PowerVC : both are powerful, but the second is so simple to use that I can easily say it will be adopted by IBM customers (has anyone used VMcontrol in production outside of PureSystems ?). Where VMcontrol has failed, PowerVC can succeed …. but looking in the code you’ll find some parts of VMcontrol (in the Activation Engine). So the ghost of VMcontrol is not so far away and will surely be kicked out by PowerVC. Once again, I hope it helps; comments are welcome, I really need them to be sure my posts are useful and simple to understand.


    4 thoughts on “PowerVC Express using local storage : Overview, tips and tricks, and lessons learned from experience”

    1. I like the private joke ;)
      Could you explain how you turn your scripts into executable forms ?

    2. Hi,

      Thx a lot, very nice tuto!
      One remark however : I tried to enable debug mode as you suggested by modifying the nova.conf file, and after that issues arose (I can’t recall the exact order) but one symptom was a similar message when running powervc-services status (or -validate):
      openstack-nova-conductor dead but pid file exists

      I just disabled the debug mode by commenting out
      debug=true
      and now everything is back to normal ….

      Very weird but worth sharing, I think.

      Thx again

    3. I have an issue with deploying an image using powerVC 1.3.0.0.

      Following are the errors and any assistance is much appreciated
      2016-03-18 15:23:29.403 29301 WARNING oslo_config.cfg [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] Option “username” from group “neutron” is deprecated. Use option “user-name” from group “neutron”.
      2016-03-18 15:23:31.513 29301 INFO oslo_messaging._drivers.impl_rabbit [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] Connecting to AMQP server on 127.0.0.1:5671
      2016-03-18 15:23:31.550 29301 INFO oslo_messaging._drivers.impl_rabbit [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] Connected to AMQP server on 127.0.0.1:5671
      2016-03-18 15:23:31.607 29301 INFO nova.osapi_compute.wsgi.server [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] 10.231.134.188,127.0.0.1 “POST /v2/940058af74e9448390801c308d8561f6/servers HTTP/1.1″ status: 202 len: 790 time: 2.7702479
      2016-03-18 15:23:31.631 29301 INFO oslo_messaging._drivers.impl_rabbit [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] Connecting to AMQP server on 127.0.0.1:5671
      2016-03-18 15:23:31.668 29301 INFO oslo_messaging._drivers.impl_rabbit [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] Connected to AMQP server on 127.0.0.1:5671
      2016-03-18 15:23:31.797 29300 INFO nova.osapi_compute.wsgi.server [req-f961aeb0-16cf-4597-8a80-6da8027f868b 30109fb4c47d226296ff4f279ce7ef8f230b3eade4a3c428ffba9a92bc187d24 55dc3a3bc80b4ff78f9c518090827433 – – -] 127.0.0.1 “GET /v2/55dc3a3bc80b4ff78f9c518090827433/servers/f8a69318-b814-48fd-abaa-5ee29f8c1ca8 HTTP/1.1″ status: 200 len: 2567 time: 0.2349088
      2016-03-18 15:23:31.808 29301 INFO nova.osapi_compute.wsgi.server [req-c22405b8-85b9-4d64-8c4e-e27fd61f2bec 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] 10.231.134.188,127.0.0.1 “POST /v2/940058af74e9448390801c308d8561f6/ibm-console-messages HTTP/1.1″ status: 200 len: 721 time: 0.0246091
      2016-03-18 15:23:35.108 29301 WARNING nova.scheduler.utils [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] Failed to compute_task_build_instances: NV-67B7376 No valid host was found. Platform Resource Scheduler is temporarily disconnected.
      Traceback (most recent call last):

      File “/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py”, line 142, in inner
      return func(*args, **kwargs)

      File “/usr/lib/python2.7/site-packages/nova/scheduler/manager.py”, line 84, in select_destinations
      filter_properties)

      File “/usr/lib/python2.7/site-packages/nova/scheduler/ibm/ego/ego_scheduler.py”, line 199, in select_destinations
      raise exception.NoValidHost(reason=six.text_type(ex))

      NoValidHost: NV-67B7376 No valid host was found. Platform Resource Scheduler is temporarily disconnected.

      2016-03-18 15:23:35.109 29301 WARNING nova.scheduler.utils [req-99d1c1e4-b598-4702-b36e-b5fd7a1a76b4 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 940058af74e9448390801c308d8561f6 – – -] [instance: f8a69318-b814-48fd-abaa-5ee29f8c1ca8] Setting instance to ERROR state.
      2016-03-18 15:23:35.695 29300 INFO nova.osapi_compute.wsgi.server [req-79c53111-b21e-4aca-a30e-8a91f8103758 30109fb4c47d226296ff4f279ce7ef8f230b3eade4a3c428ffba9a92bc187d24 55dc3a3bc80b4ff78f9c518090827433 – – -] 127.0.0.1 “GET /v2/55dc3a3bc80b4ff78f9c518090827433/servers/f8a69318-b814-48fd-abaa-5ee29f8c1ca8 HTTP/1.1″ status: 200 len: 4184 time: 0.2227721
