Unleash the true potential of SRIOV vNIC using vNIC failover!

I'm always working on a tight schedule; I never have time to write documentation because we're moving fast, very fast ... but not as fast as I want to ;-). A few months ago we were asked to put the TSM servers in our PowerVC environment. I thought it was a very, very bad idea to put a pet among the cattle: TSM servers are very specific and super I/O intensive in our environment (and are configured with plenty of rmt devices, which means we were trying to put lan-free stuff into OpenStack, which is not designed at all for this kind of thing). In my previous place we tried to put the TSM servers behind a virtualized environment (serving the network through Shared Ethernet Adapters) and it was an EPIC FAIL: a few weeks after putting the servers in production we decided to move back to physical I/O and dedicated network adapters. As we didn't want to make the same mistake in my current place, we decided not to go with Shared Ethernet Adapters. Instead we took the decision to use SRIOV vNICs. SRIOV vNICs have the advantage of being fully virtualized (LPM aware and super flexible), giving us the flexibility we wanted (moving TSM servers between sites if we need to put a host in maintenance mode or if we are facing any kind of outage). In my previous blog post about vNICs I was very happy with the performance but not with the reliability. I didn't want to go with NIB adapters for network redundancy, because it is an anti-virtualization way of doing things (we do not want to manage anything inside the VM; we want to let the virtualization layer do the job for us). Luckily for me the project was rescheduled to the end of the year, and we finally decided not to put the TSM servers into our big OpenStack, dedicating some hosts to the backup stuff instead. The latest versions of PowerVM, HMC and firmware arrived just in time to let me use the new SRIOV vNIC failover feature for this new TSM environment (fortunately for me we had some data center issues that allowed me to wait long enough to skip NIB and start production directly on SRIOV vNIC \o/). I delivered the first four servers to my backup team yesterday and I must admit that SRIOV vNIC failover is a killer feature for this kind of workload. Let's now see how to set this up!

Prerequisites

As always, using the latest features means you need to have everything up to date. In this case the minimal requirements for SRIOV vNIC failover are Virtual I/O Server 2.2.5.10, Hardware Management Console V8R8.6.0 with the latest patches, and an up-to-date firmware (i.e. FW860). Note that not all AIX versions support SRIOV vNIC; I'm only using AIX 7.2 TL1 SP1 here:

  • Check the Virtual I/O Servers are installed at 2.2.5.10:
  • # ioslevel
    2.2.5.10
    
  • Check the HMC is at the latest version (V8R8.6.0):
  • hscroot@myhmc:~> lshmc -V
    "version= Version: 8
     Release: 8.6.0
     Service Pack: 0
    HMC Build level 20161101.1
    MH01655: Required fix for HMC V8R8.6.0 (11-01-2016)
    ","base_version=V8R8.6.0
    "
    

    860

  • Check the firmware level is up to date on the Power System (I'm updating with updlic, then checking with lslic):
  • # updlic -o u -t sys -l latest -m reptilian-9119-MME-659707C -r mountpoint -d /home/hscroot/860_056/ -v
# lslic -m reptilian-9119-MME-65BA46F -F activated_level,activated_spname
    56,FW860.10
    

    fw

What is SRIOV vNIC failover and how does it work?

I'll not explain here what an SRIOV vNIC is; if you want to know more about it, just check my previous blog post on the topic: A first look at SRIOV vNIC adapters. What failover adds is the ability to define as "many" backing devices as you want for a vNIC adapter (the maximum is 6). For each backing device you choose the Virtual I/O Server on which the corresponding vnicserver will be created, and you set a failover priority that determines which backing device is active. Keep in mind that priorities work the exact same way as they do with Shared Ethernet Adapters: priority 10 is a higher priority than priority 20.

vnicvisio1

In the example shown in the images above and below, the vNIC is configured with two backing devices (on two different SRIOV adapters) with priorities 10 and 20. As long as there is no outage (for instance on the Virtual I/O Server or on the adapter itself), the physical port used will be the one with priority 10. If that adapter has, for instance, a hardware issue, we can either manually fall back to the second backing device or let the hypervisor do it for us by picking the operational backing device with the highest remaining priority (i.e. the lowest priority number). Easy. This gives us redundant, LPM-aware, high-performance adapters, fully virtualized. A MUST :-) !

vnicvisio2
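
To make the priority rule concrete, here is a tiny sketch of the selection logic (plain Python, purely illustrative, with made-up backing devices):

  # hypothetical list of (failover priority, operational?) backing devices
  backing_devices = [(10, False), (20, True), (30, True)]
  # the active device is the operational one with the lowest priority number
  # (lowest number = highest priority, exactly like Shared Ethernet Adapters)
  active = min(p for p, operational in backing_devices if operational)
  print(active)  # -> 20: priority 10 is down, so 20 takes the lead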

Creating an SRIOV vNIC failover adapter using the HMC GUI and administering it

To create or delete an SRIOV vNIC failover adapter (I'll just call it a vNIC for the rest of this post), the machine must be shut down or active (it is not possible to add a vNIC when a machine is booted in OpenFirmware). The only way to do this with the HMC GUI is to use the enhanced interface (no problem, as we will have no other choice in the near future anyway). Select the machine on which you want to create the adapter and click the "Virtual NICs" tab.

vnic1b

Click “Add Virtual NIC”:

vnic1c

Choose the "Physical Port Location Code" (the physical port of the SRIOV adapter) on which you want to create the vNIC. You can add from one to six backing devices ("backup adapters", by clicking the "Add Entry" button); only one backing device is active at a time. If the active one fails (adapter issue, network issue), the vNIC fails over to the next backing device depending on the "Failover priority". Be careful to spread the backing devices across the hosting Virtual I/O Servers, to be sure that losing a Virtual I/O Server will be seamless for your partition:

vnic1d

In the example above:

  • I’m creating a vNIC failover with “vNIC Auto Priority Failover” enabled.
  • Four VFs will be created: two on the VIOS ending with 88, two on the VIOS ending with 89.
  • Obviously, four vnicservers will be created on the VIOS (two on each).
  • The lowest priority number takes the lead. This means that if the first device with priority 10 fails, the active adapter will be the second one; if the second one with priority 20 then fails, the third one becomes active, and so on. Keep in mind that as long as your lowest-priority device is ok, nothing will happen if one of the other backup adapters fails. Be smart when choosing the priorities. As Yoda says, "Wise you must be!".
  • The physical ports are located on different CECs.

vnic1e

The "Advanced Virtual NIC Settings" are applied to all the backing devices that will be created (in the example above, 4). For instance, I'm using VLAN tagging on these ports, so I just need to set the "Port VLAN ID" once.

vnic1f

You can choose whether or not to allow the hypervisor to perform the failover/failback automatically, depending on the priorities you have set. If you click "Enable", the hypervisor will automatically fail over to the next operational backing device depending on the priorities. If it is disabled, only a user can trigger a failover operation (the same setting can be changed afterwards with chhwres; see the CLI section below).

vnic1g

Be careful: the priorities are designed the same way they are on Shared Ethernet Adapters. The lowest number in the failover priority is the highest failover priority, just as with Shared Ethernet Adapters. On the image below you can notice that priority 10, which is the highest failover priority, is active (it is the lowest number among 10, 20, 30 and 40).

vnic1h

After the creation of the vNIC you can check different things on the Virtual I/O Servers. You will notice that every entry added during the creation of the vNIC has a corresponding VF (virtual function) and a corresponding vnicserver (each vnicserver has a VF mapped onto it):

  • You can see that for each entry added when creating a vNIC you’ll have the corresponding VF device present on the Virtual I/O Servers:
  • vios1# lsdev -type adapter -field name physloc description | grep "VF"
    [..]
    ent3             U78CA.001.CSS08ZN-P1-C3-C1-T2-S5                                  PCIe3 4-Port 10GbE SR Adapter VF(df1028e21410e304)
    ent4             U78CA.001.CSS08EL-P1-C3-C1-T2-S6                                  PCIe3 4-Port 10GbE SR Adapter VF(df1028e21410e304)
    
    vios2# lsdev -type adapter -field name physloc description | grep "VF"
    [..]
    ent3             U78CA.001.CSS08ZN-P1-C4-C1-T2-S2                                  PCIe3 4-Port 10GbE SR Adapter VF(df1028e21410e304)
    ent4             U78CA.001.CSS08EL-P1-C4-C1-T2-S2                                  PCIe3 4-Port 10GbE SR Adapter VF(df1028e21410e304)
    
  • For each VF you’ll see the corresponding vnicserver devices:
  • vios1# lsdev -type adapter -virtual | grep vnicserver
    [..]
    vnicserver1      Available   Virtual NIC Server Device (vnicserver)
    vnicserver2      Available   Virtual NIC Server Device (vnicserver)
    
    vios2# lsdev -type adapter -virtual | grep vnicserver
    [..]
    vnicserver1      Available   Virtual NIC Server Device (vnicserver)
    vnicserver2      Available   Virtual NIC Server Device (vnicserver)
    
  • You can check the corresponding mapped VF for each vnicserver using the 'lsmap' command. One funny thing to check: as long as a backing device has never been activated (with the "Make the Backing Device Active" button in the GUI), the corresponding client name and client device will not be shown:
  • vios1# lsmap -all -vnic -fmt :
    [..]
    vnicserver1:U9119.MME.659707C-V2-C32898:6:lizard:AIX:ent3:Available:U78CA.001.CSS08ZN-P1-C3-C1-T2-S5:ent0:U9119.MME.659707C-V6-C6
    vnicserver2:U9119.MME.659707C-V2-C32899:6:N/A:N/A:ent4:Available:U78CA.001.CSS08EL-P1-C3-C1-T2-S6:N/A:U9119.MME.659707C-V6-C6
    
    vios2# lsmap -all -vnic
    [..]
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver1   U9119.MME.659707C-V1-C32898             6 N/A            N/A
    
    Backing device:ent3
    Status:Available
    Physloc:U78CA.001.CSS08ZN-P1-C4-C1-T2-S2
    Client device name:ent0
    Client device physloc:U9119.MME.659707C-V6-C6
    
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver2   U9119.MME.659707C-V1-C32899             6 N/A            N/A
    
    Backing device:ent4
    Status:Available
    Physloc:U78CA.001.CSS08EL-P1-C4-C1-T2-S2
    Client device name:N/A
    Client device physloc:U9119.MME.659707C-V6-C6
    
  • You can activate a backing device yourself just by clicking the "Make Backing Device Active" button in the GUI, and then check that the vnicserver is now logged in:
  • vnic1i
    vnic1j

    vios2# lsmap -all -vnic -fmt :
    [..]
    vnicserver1:U9119.MME.659707C-V1-C32898:6:lizard:AIX:ent3:Available:U78CA.001.CSS08ZN-P1-C4-C1-T2-S2:ent0:U9119.MME.659707C-V6-C6
    vnicserver2:U9119.MME.659707C-V1-C32899:6:N/A:N/A:ent4:Available:U78CA.001.CSS08EL-P1-C4-C1-T2-S2:N/A:U9119.MME.659707C-V6-C6
    
  • I noticed something that seemed pretty strange to me: when you perform a manual failover of the vNIC, auto priority failover is set to disabled. Remember to re-enable it after the manual operation has been performed:
  • vnic1k

    You can also check the status and the priority of the vNIC on the Virtual I/O Server using the vnicstat command. The command shows some good information: the state of the device, whether it is active or not (I noticed two different states in my tests: "active", meaning this is the VF/vnicserver currently in use, and "config_2", meaning the adapter is ready and available for a failover operation (there is probably another state when the link is down, but I didn't have the time to ask my network team to shut a port to verify this)), and finally the failover priority. The vnicstat command is a root command.

    vios1#  vnicstat vnicserver1
    
    --------------------------------------------------------------------------------
    VNIC Server Statistics: vnicserver1
    --------------------------------------------------------------------------------
    Device Statistics:
    ------------------
    State: active
    Backing Device Name: ent3
    
    Failover State: active
    Failover Readiness: operational
    Failover Priority: 10
    
    Client Partition ID: 6
    Client Partition Name: lizard
    Client Operating System: AIX
    Client Device Name: ent0
    Client Device Location Code: U9119.MME.659707C-V6-C6
    [..]
    
    vios2# vnicstat vnicserver1
    --------------------------------------------------------------------------------
    VNIC Server Statistics: vnicserver1
    --------------------------------------------------------------------------------
    Device Statistics:
    ------------------
    State: config_2
    Backing Device Name: ent3
    
    Failover State: inactive
    Failover Readiness: operational
    Failover Priority: 20
    [..]
    

    You can also check the vNIC server events in the errpt (client logins when a failover happens, and so on ...):

    # errpt | more
    8C577CB6   1202195216 I S vnicserver1    VNIC Transport Event
    60D73419   1202194816 I S vnicserver1    VNIC Client Login
    # errpt -aj 60D73419 | more
    ---------------------------------------------------------------------------
    LABEL:          VS_CLIENT_LOGIN
    IDENTIFIER:     60D73419
    
    Date/Time:       Fri Dec  2 19:48:06 2016
    Sequence Number: 10567
    Machine Id:      00C9707C4C00
    Node Id:         vios2
    Class:           S
    Type:            INFO
    WPAR:            Global
    Resource Name:   vnicserver1
    
    Description
    VNIC Client Login
    
    Probable Causes
    VNIC Client Login
    
    Failure Causes
    VNIC Client Login
    

    The same checks can be done from the HMC command line.

    Now we will do the same thing on the command line. I warn you, the commands are pretty huge!

    • List the SRIOV adapters (you will need their adapter IDs to create the vNICs):
    • # lshwres -r sriov --rsubtype adapter -m reptilian-9119-MME-65BA46F
      adapter_id=3,slot_id=21010012,adapter_max_logical_ports=64,config_state=sriov,functional_state=1,logical_ports=64,phys_loc=U78CA.001.CSS08XH-P1-C3-C1,phys_ports=4,sriov_status=running,alternate_config=0
      adapter_id=4,slot_id=21010013,adapter_max_logical_ports=64,config_state=sriov,functional_state=1,logical_ports=64,phys_loc=U78CA.001.CSS08XH-P1-C4-C1,phys_ports=4,sriov_status=running,alternate_config=0
      adapter_id=1,slot_id=21010022,adapter_max_logical_ports=64,config_state=sriov,functional_state=1,logical_ports=64,phys_loc=U78CA.001.CSS08RG-P1-C3-C1,phys_ports=4,sriov_status=running,alternate_config=0
      adapter_id=2,slot_id=21010023,adapter_max_logical_ports=64,config_state=sriov,functional_state=1,logical_ports=64,phys_loc=U78CA.001.CSS08RG-P1-C4-C1,phys_ports=4,sriov_status=running,alternate_config=0
      
    • List vNIC for virtual machine “lizard”:
    • lshwres -r virtualio  -m reptilian-9119-MME-65BA46F --rsubtype vnic --level lpar --filter "lpar_names=lizard"
      lpar_name=lizard,lpar_id=6,slot_num=6,desired_mode=ded,curr_mode=ded,auto_priority_failover=0,port_vlan_id=0,pvid_priority=0,allowed_vlan_ids=all,mac_addr=6ac53577b106,allowed_os_mac_addrs=all,"backing_devices=sriov/vios1/1/3/0/2700c003/2.0/2.0/50,sriov/vios2/2/1/0/27004003/2.0/2.0/60","backing_device_states=sriov/2700c003/0/Operational,sriov/27004003/1/Operational"
      
    • Create a vNIC with two backing devices: the first on Virtual I/O Server 1, adapter 1, physical port id 1, with failover priority 10; the second on Virtual I/O Server 2, adapter 3, physical port id 1, with failover priority 20. The vNIC will take the next available slot, which here is 6. As far as I can tell, the backing device input syntax is sriov/vios-name/vios-id/adapter-id/phys-port-id/capacity/failover-priority, the vios-id being optional when the name is given (WARNING: physical port numbering starts at 0):
    • # chhwres -r virtualio -m reptilian-9119-MME-65BA46F -o a -p lizard --rsubtype vnic -v -a 'port_vlan_id=3455,auto_priority_failover=1,backing_devices="sriov/vios1//1/1/2.0/10,sriov/vios2//3/1/2.0/20"'
      #lshwres -r virtualio -m reptilian-9119-MME-65BA46F --rsubtype vnic --level lpar --filter "lpar_names=lizard"
      lpar_name=lizard,lpar_id=6,slot_num=6,desired_mode=ded,curr_mode=ded,auto_priority_failover=1,port_vlan_id=3455,pvid_priority=0,allowed_vlan_ids=all,mac_addr=6ac53577b106,allowed_os_mac_addrs=all,"backing_devices=sriov/vios1/1/1/1/2700400b/2.0/2.0/10,sriov/vios2/2/3/1/2700c008/2.0/2.0/20","backing_device_states=sriov/2700400b/1/Operational,sriov/2700c008/0/Operational"
      
    • Add two backing devices (one on each Virtual I/O Server, on adapters 2 and 4, both on physical port id 1, with failover priorities 30 and 40) to the vNIC in slot 6:
    • # chhwres -r virtualio -m reptilian-9119-MME-65BA46F -o s --rsubtype vnic -p lizard -s 6 -a '"backing_devices+=sriov/vios1//2/1/2.0/30,sriov/vios2//4/1/2.0/40"'
      # lshwres -r virtualio -m reptilian-9119-MME-65BA46F --rsubtype vnic --level lpar --filter "lpar_names=lizard"
      lpar_name=lizard,lpar_id=6,slot_num=6,desired_mode=ded,curr_mode=ded,auto_priority_failover=1,port_vlan_id=3455,pvid_priority=0,allowed_vlan_ids=all,mac_addr=6ac53577b106,allowed_os_mac_addrs=all,"backing_devices=sriov/vios1/1/1/1/2700400b/2.0/2.0/10,sriov/vios2/2/3/1/2700c008/2.0/2.0/20,sriov/vios1/1/2/1/27008005/2.0/2.0/30,sriov/vios2/2/4/1/27010002/2.0/2.0/40","backing_device_states=sriov/2700400b/1/Operational,sriov/2700c008/0/Operational,sriov/27008005/0/Operational,sriov/27010002/0/Operational"
      
    • Change the failover priority of logical port 2700400b of the vNIC in slot 6 to 11:
    • # chhwres -m reptilian-9119-MME-65BA46F -r virtualio -o s --rsubtype vnicbkdev -p lizard -s 6 --logport 2700400b -a "failover_priority=11"
      # lshwres -r virtualio -m reptilian-9119-MME-65BA46F --rsubtype vnic --level lpar --filter "lpar_names=lizard"
      lpar_name=lizard,lpar_id=6,slot_num=6,desired_mode=ded,curr_mode=ded,auto_priority_failover=1,port_vlan_id=3455,pvid_priority=0,allowed_vlan_ids=all,mac_addr=6ac53577b106,allowed_os_mac_addrs=all,"backing_devices=sriov/vios1/1/1/1/2700400b/2.0/2.0/11,sriov/vios2/2/3/1/2700c008/2.0/2.0/20,sriov/vios1/1/2/1/27008005/2.0/2.0/30,sriov/vios2/2/4/1/27010002/2.0/2.0/40","backing_device_states=sriov/2700400b/1/Operational,sriov/2700c008/0/Operational,sriov/27008005/0/Operational,sriov/27010002/0/Operational"
      
    • Make logical port 27008005 active on the vNIC in slot 6 (note in the output below that this manual activation sets auto_priority_failover back to 0, as mentioned earlier):
    • # chhwres -m reptilian-9119-MME-65BA46F -r virtualio -o act --rsubtype vnicbkdev -p lizard  -s 6 --logport 27008005 
      # lshwres -r virtualio -m reptilian-9119-MME-65BA46F --rsubtype vnic --level lpar --filter "lpar_names=lizard"
      lpar_name=lizard,lpar_id=6,slot_num=6,desired_mode=ded,curr_mode=ded,auto_priority_failover=0,port_vlan_id=3455,pvid_priority=0,allowed_vlan_ids=all,mac_addr=6ac53577b106,allowed_os_mac_addrs=all,"backing_devices=sriov/vios1/1/1/1/2700400b/2.0/2.0/11,sriov/vios2/2/3/1/2700c008/2.0/2.0/20,sriov/vios1/1/2/1/27008005/2.0/2.0/30,sriov/vios2/2/4/1/27010002/2.0/2.0/40","backing_device_states=sriov/2700400b/0/Operational,sriov/2700c008/0/Operational,sriov/27008005/1/Operational,sriov/27010002/0/Operational"
      
    • Re-enable automatic failover on vNIC in slot 6:
    • # chhwres -m reptilian-9119-MME-65BA46F -r virtualio -o s --rsubtype vnic -p lizard  -s 6 -a "auto_priority_failover=1"
      # lshwres -r virtualio -m reptilian-9119-MME-65BA46F --rsubtype vnic --level lpar --filter "lpar_names=lizard"
      lpar_name=lizard,lpar_id=6,slot_num=6,desired_mode=ded,curr_mode=ded,auto_priority_failover=1,port_vlan_id=3455,pvid_priority=0,allowed_vlan_ids=all,mac_addr=6ac53577b106,allowed_os_mac_addrs=all,"backing_devices=sriov/vios1/1/1/1/2700400b/2.0/2.0/11,sriov/vios2/2/3/1/2700c008/2.0/2.0/20,sriov/vios1/1/2/1/27008005/2.0/2.0/30,sriov/vios2/2/4/1/27010002/2.0/2.0/40","backing_device_states=sriov/2700400b/1/Operational,sriov/2700c008/0/Operational,sriov/27008005/0/Operational,sriov/27010002/0/Operational"
      

    Testing the failover.

    It's now time to test whether the failover works as intended. The test is super simple: I will just shut off one of the two Virtual I/O Servers and check whether I lose some packets or not. I first check on which VIOS the active backing device is located:

    vnic1l

    I now need to shut down the Virtual I/O Server ending with 88 and check that the one ending with 89 takes the lead:

    *****88# shutdown -force 
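
    While the VIOS is going down, I keep a timestamped ping running against the partition to measure the loss (a quick sketch; lizard is the partition under test):

    # ping lizard | while read line; do echo "$(date +%H:%M:%S) $line"; done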
    

    Priorities 10 and 30 are on the Virtual I/O Server that was shut down, so the highest remaining priority, 20, is on the running Virtual I/O Server. The backing device hosted on this second Virtual I/O Server is now serving the network I/Os:

    vnic1m

    You can check the same thing from the command line on the remaining Virtual I/O Server:

    *****89# errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    60D73419   1202214716 I S vnicserver0    VNIC Client Login
    60D73419   1202214716 I S vnicserver1    VNIC Client Login
    *****89# vnicstat vnicserver1
    --------------------------------------------------------------------------------
    VNIC Server Statistics: vnicserver1
    --------------------------------------------------------------------------------
    Device Statistics:
    ------------------
    State: active
    Backing Device Name: ent3
    
    Failover State: active
    Failover Readiness: operational
    Failover Priority: 20
    
    

    During my tests the failover worked as I expected. You can see in the picture below that I only lost a single ping (between icmp_seq 64 and 66) during the failover/failback process.

    vnic1n

    In the partition I saw some messages in the errpt during the failover:

    # errpt | more
    4FB9389C   1202215816 I S ent0           VNIC Link Up
    F655DA07   1202215816 I S ent0           VNIC Link Down
    # errpt -a | more
    [..]
    SOURCE ADDRESS
    56FB 2DB8 A406
    Event
    physical link: DOWN   logical link: DOWN
    Status
    [..]
    SOURCE ADDRESS
    56FB 2DB8 A406
    Event
    physical link: UP   logical link: UP
    Status
    

    What about Live Partition Mobility?

    If you want a seamless LPM experience, without having to choose the destination adapter and physical port on which to map your current vNIC backing devices, just fill in the label and sublabel (the label is the most important) for each physical port of your SRIOV adapters. Then, during the LPM, if the labels are aligned between the two systems, the right physical port will be chosen automatically based on the label names:

    vnic1o
    vnic1p
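
    The labels can also be set from the HMC command line. Something like the following should do the job (a sketch: the physport subtype and the phys_port_label attribute are what my HMC exposes, so check the chhwres/lshwres man pages on your HMC level; the system name below is a placeholder):

    # chhwres -m source-9119-MME -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=0,phys_port_label=tsm10g"
    # lshwres -m source-9119-MME -r sriov --rsubtype physport --level eth -F adapter_id,phys_port_id,phys_port_label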

    The LPM worked like a charm and I didn't notice any particular problems during the move. vNIC failover and LPM work fine together as long as you take care of your SRIOV labels :-). I did notice on AIX 7.2 TL1 SP1 that there were no errpt messages in the partition itself, just in the Virtual I/O Servers ... weird :-)

    # errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    3EB09F5A   1202222416 I S Migration      Migration completed successfully
    

    Conclusion.

    No long story here: if you need performance AND flexibility, you absolutely have to use SRIOV vNIC failover adapters. This feature offers you the best of both worlds, giving you dedicated 10Gb adapters with a failover capability, without having to worry about LPM or a NIB configuration. It's not applicable in all cases, but it's definitely something to have for an environment such as TSM, or any network I/O intensive workload. Use it!

    About reptilians!

    Before you start reading this, keep your sense of humor, and note that what I say is not related to my workplace at all; it's a general way of thinking, not especially based on my own experience. Don't be offended, it's just a personal opinion based on things I may or may not have seen during my life. You've been warned.

    This blog was never a place to share my opinions about life and society, but I must admit that I should have done it before. Speaking about this kind of thing makes you feel alive, in a world where everything needs to be ok and where you no longer have the right to feel or express something about what you are living. There are a couple of good blog posts speaking about this kind of thing in the IT world, and I agree with everything said in them. Some of the authors of these posts are just telling what they love in their daily jobs, but I think it's also a way to say what they probably won't love in another one :-) :

    • Adam Leventhal’s “I’m not a resource”: here
    • Brendan Gregg's "Working at Netflix in 2016": here

    All of this to say that I work nights, I work weekends, and I'm thinking about PowerSystems/computers when I fall asleep. I always have new ideas and I always want to learn new things, discover new technologies and features. I truly, deeply love this, but being like this does not help me and will never help me in my daily job, for one single reason: in this world, the people who have the knowledge are not the people who take the technical decisions. It's sad but true. I'm just good at working as much as I can for the least money possible. Nobody cares if techs are happy or unhappy, want to stay or leave. It doesn't make any difference to anyone running a company. What's important is money. Everything is meaningless. We are no one, we are nothing, just numbers in an Excel spreadsheet. I'm probably saying this because I'm not good enough at anything to find an acceptable workplace. Once again, sad but true.

    Even worse, if you just want to follow what the industry is asking, you have to be everywhere and know everything. I know I'll be forced in the very near future to move to devops/Linux (I love Linux, I'm an RHCE certified engineer!). That's why, for a couple of years now, at night after my daily job is finished, I'm working again: working to understand how Docker works, working to install my own OpenStack on my own machines, working to understand Saltstack, Ceph, Python, Ruby, Go ... it's a never ending process. But it's still not enough for them! Not enough to be considered good, or a good enough guy to fit a job. I remember being asked to know about OpenStack, Cassandra, Hadoop, AWS, KVM, Linux, automation tools (Puppet this time), Docker and continuous integration for one single job application. First, I seriously doubt that anyone has all these skills and is good at each of them. Second, even if I were an expert at each one: look a few years back and it was the exact same thing, just with different products. You have to understand and be good at every new product in minutes. All of this to understand that one or two years after you are considered an "expert", you are bad at everything that exists in the industry. I'm really sick of this fight against something I can't control. Being a hard worker, clever enough to understand every new feature, is not enough nowadays. On top of that you also need to be a beautiful person with a nice perfect smile wearing a perfect suit. You also have to be on LinkedIn and be connected with the right people. And even if all of these boxes are checked, you still need to be lucky enough to be at the right place at the right moment. I'm so sick of this. Work doesn't pay. Only luck does. I don't want to live in this kind of world, but I have to. Anyway, this is just a "two cents" way of thinking. Everything is probably a big trick orchestrated by the reptilian lizard men! ^^. Be good at what you do and don't care about what people think of you (even your horrible French accent during your sessions) ... that's the most important thing!

    picture-of-reptilian-alien

    pvcctl : Using python Openstack api to code a PowerVC command line | Automating PowerVC and NovaLink (post)-installation with Ansible

    The world is changing fast, especially regarding sysadmin jobs and skills. Everybody has noticed that being a good sysadmin now implies two things. The first one is "being a good dev": not just someone who knows how to write a ksh or bash script, but someone who is able to write in one of these three languages: python, ruby, go. Most of my team members do not understand that. This is now almost mandatory; I truly believe what I'm saying here. The second one is having strong skills in automation tools. On this part I'm almost ok, being good at Chef, Saltstack and Ansible. Unfortunately for the world, the best tool is never the one that wins, and that's why Ansible is almost winning everywhere in the battle of automation. It is simple to understand why: Ansible is simple to use and to understand, and it is based on ssh. My opinion is that Ansible is ok for administration stuff but not ok at scale. Being based on ssh makes it a "push" model, and in my humble opinion push models are bad and pull models are the future. (One thing to say about that: this is just my opinion. I don't want this to end in never ending trolls on Twitter. Please write blog posts if you want to express yourself. I'm saying this because Twitter is becoming a place to troll and no longer a place to share skills and knowledge, like it was before.) This is said.

    The first part of this blog post will talk about a tool I am coding called pvcctl. It is a python tool allowing you to use PowerVC from the command line. It was also an opportunity for me to get better at python and to improve my skills in this language. Keep in mind that I'm not a developer; I'm just going to give you simple tips and tricks for using python to write your own tools to query and interact with OpenStack. I must admit that I've tried everything: httplib, librequest, Chef, Ansible, Saltstack. None of these solutions was ok for me. I finally ended up using the OpenStack python APIs to write this tool. It's not that hard, and it now allows me to write my own programs to interact with PowerVC. Once again, keep in mind that this tool fits my needs and will probably not fit yours. It is an example of how to write a tool based on the python OpenStack APIs, not an official tool or anything else.

    The second part of this blog post will talk about an Ansible playbook I've written to take care of the PowerVC installation and the NovaLink post-installation. The more machines I deploy, the more PowerVC servers I need (yeah yeah, I was forced to), and the more NovaLink installations I need too. Instead of doing the same thing over and over again, the best solution was to use an automation tool, and as Ansible is now the most common one on Linux, that's the one I chose.

    Using python Openstack api to code a PowerVC command line

    This part of the post will show you how to use the python OpenStack APIs to create scripts that query and interact with PowerVC. First of all, I know there are other ways to use the APIs, but (it's my opinion) I think that using the service-specific clients is the simplest way to understand and work with them. This part of the blog post will only talk about service-specific clients (i.e. novaclient, cinderclient, and so on ...). I want to thank Matthew Edmonds from the PowerVC team: he helped me better understand the APIs and gave me good advice. So a big shout out to you Matthew :-). Thank you.

    Initialize your script

    Sessions

    Almost all OpenStack tools use "rc" files to load authentication credentials and endpoints. As I wanted my tool to work the same way (i.e. by sourcing an rc file containing my credentials), I found that the best way to do this was to use a session. With a session you don't have to manage or work with any tokens, or worry about them at all: the session takes care of that for you. As you can see in the code below, the "OS_*" environment variables are used, so before running the tool all you have to do is export these variables. It's as simple as that:

    • An example "rc" file filled with the OS_* values (note that the crt file must be copied from the PowerVC host (/etc/pki/tls/certs/powervc.crt) to the host running the tool):
    • # cat powervcrc
      export OS_AUTH_URL=https://mypowervc:5000/v3/
      export OS_USERNAME=root
      export OS_PASSWORD=root
      export OS_TENANT_NAME=ibm-default
      export OS_REGION_NAME=RegionOne
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_CACERT=~/powervc_labp8.crt
      export OS_IMAGE_ENDPOINT=https://mypowervc:9292/
      export NOVACLIENT_DEBUG=0
      # source powervcrc
      
    • The python piece of code creating a session object:
    • from os import environ as env  # the OS_* variables exported by the sourced rc file
      from keystoneauth1.identity import v3
      from keystoneauth1 import session
      
      auth = v3.Password(auth_url=env['OS_AUTH_URL'],
                         username=env['OS_USERNAME'],
                         password=env['OS_PASSWORD'],
                         project_name=env['OS_TENANT_NAME'],
                         user_domain_name=env['OS_USER_DOMAIN_NAME'],
                         project_domain_name=env['OS_PROJECT_DOMAIN_NAME'])
      
      sess = session.Session(auth=auth, verify=env['OS_CACERT'])
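
    A quick sanity check that the session really authenticates (not needed by the tool itself): ask keystone for a token through the session; keystoneauth fetches and caches it transparently:

    print(sess.get_token())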
      

    The logger

    Instead of using the print statement each time I need to debug my script, I found that most OpenStack clients can be given a python logger object. With a logger you'll be able to see all your HTTP calls to the OpenStack APIs (your POST, PUT, GET and DELETE requests with their json bodies, their responses and their URLs). It is super useful for debugging your scripts and super simple to use. The piece of code below creates a logger object writing to my log directory. You'll see afterwards how to use a logger when creating a client object (a nova, cinder or neutron object):

    import logging
    import os
    
    # BASE_DIR was not shown in the original snippet; a typical definition is the
    # directory of the script itself (a logs/ directory is expected to exist there)
    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
    
    logger = logging.getLogger('pvcctl')
    hdlr = logging.FileHandler(BASE_DIR + "/logs/pvcctl.log")
    logger.addHandler(hdlr)
    logger.setLevel(logging.DEBUG)
    

    Here is an example of the output created by the logger with a novaclient (the novaclient was created specifying a logger object):

    REQ: curl -g -i --cacert "/data/tools/ditools/pvcctl/conf/powervc.crt" -X POST https://mypowervc/powervc/openstack/compute/v2.1/51488ae7be7e4ec59759ccab496c8793/servers/a3cea5b8-33b4-432e-88ec-e11e47941846/os-volume_attachments -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.19" -H "X-Auth-Token: {SHA1}9b8becc425d25fdfb98d5e4f055c71498d2e744f" -d '{"volumeAttachment": {"volumeId": "a90f04ce-feb2-4163-9b36-23765777c6a0"}}' 
    RESP: [200] Date: Fri, 28 Oct 2016 13:05:24 GMT Server: Apache Content-Length: 194 Content-Type: application/json X-Openstack-Nova-Api-Version: 2.19 Vary: X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-58addd19-7421-4c9f-a712-9c386f46b6cb Cache-control: max-age=0, no-cache, no-store, must-revalidate Pragma: no-cache Keep-Alive: timeout=5, max=93 Connection: Keep-Alive 
    RESP BODY: {"volumeAttachment": {"device": "/dev/sdb", "serverId": "a3cea5b8-33b4-432e-88ec-e11e47941846", "id": "a90f04ce-feb2-4163-9b36-23765777c6a0", "volumeId": "a90f04ce-feb2-4163-9b36-23765777c6a0"}} 
    

    The clients

    Each OpenStack service (nova, glance, neutron, swift, cinder, ...) comes with a python client API. In my tool I'm only using novaclient, cinderclient and neutronclient, but it would be the exact same thing if you wanted to use ceilometer or glance. Before doing anything else you have to install the clients you want to use (using your package manager (yum in my case) or pip):

    # yum install python2-novaclient.noarch
    # pip install python-neutronclient
    

    Initializing the clients

    After the clients are installed you can use them in your python scripts: import them and create the client objects, using the previously created session object (session=sess in my example below):

    from novaclient import client as client_nova
    from neutronclient.v2_0 import client as client_neutron
    from cinderclient import client as client_cinder
    
    nova = client_nova.Client(2.19, session=sess)
    neutron = client_neutron.Client(session=sess)
    cinder = client_cinder.Client(2.0, service_type="volume", session=sess)
    

    If your client can be created with a logger object, you can specify it at object creation time. Here is an example with novaclient:

    nova = client_nova.Client(2.19, session=sess, http_log_debug=True, logger=logger)
    

    Using the clients

    After the objects are created, using them is super simple. Here are a couple of examples:

    • Searching a vm (this will return a server object):
    • server = nova.servers.find(name=machine)
      
    • Renaming a vm:
    • server = nova.servers.find(name=machine)
      server.update(name=new_name)
      
    • Starting a vm:
    • server = nova.servers.find(name=machine)
      server.start()
      
    • Stopping a vm:
    • server = nova.servers.find(name=machine)
      server.stop()
      
    • Listing vms:
    • for server in nova.servers.list():
        name = getattr(server, 'OS-EXT-SRV-ATTR:hostname')
        print name
      
    • Find a vlan:
    • vlan = neutron.list_networks(name=vlan)
      
    • Creating a volume:
    • cinder.volumes.create(name=volume_name, size=size, volume_type=storage_template)
      

    And so on. Each client has its own methods; the best way to find which methods are available for each object is to check the official OpenStack API documentation.
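
    If you don't have the documentation at hand, plain python introspection on the client objects also does the job; for instance, to list the public methods of the servers manager:

    print([m for m in dir(nova.servers) if not m.startswith('_')])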

    What about PowerVC extensions? (using get, put, delete ...)

    If you have already read my blog posts about PowerVC, you probably know that PowerVC adds some extensions to OpenStack. This means that for the PowerVC extensions, the methods shipped with the OpenStack clients will not work; to be more specific, the methods needed to query or interact with the PowerVC extensions simply do not exist at all. The good part of these APIs is that they also expose the common HTTP methods: on each OpenStack client object, let's say nova, you can directly use the get, post, put, delete (and so on) methods. This way you can use the same object both for the regular API methods (let's say creating or renaming a server) and for the PowerVC extensions. For instance, "host-seas" is a PowerVC added extension (link here). You can simply use a novaclient to query or post something to the extension (the example below shows both a post and a get on PowerVC extensions):

    resp, host_seas = nova.client.get("/host-seas?network_id=" + net_id + "&vlan_id=" + vlan_id)
    resp, body = nova.client.post("/host-network-mapping", body=mapping_json)
    

    Here is another example, onboarding or unmanaging volumes (which are PowerVC extensions to OpenStack):

    resp, body = cinder.client.post("/os-hosts/" + oshost['host_name'] + "/onboard", body=onboard_json)
    resp, body = cinder.client.post("/os-hosts/" + oshost['host_name'] + "/unmanage", body=onboard_json)
    

    Working with json

    One last tip for writing your own python code using the OpenStack APIs: you'll probably see that you need to work with json. What is cool with python is that json can be handled as a dict object. It's super simple to use:

    • Importing json:
    • import json
      
    • Loading json:
    • json_load = json.loads('{ "ibm-extend": { "new_size": 0 } }')
      
    • Using the dict:
    • json_load['ibm-extend']['new_size'] = 200
      
    • Use it as a body in a post call (grow a volume):
    • resp, body = cinder.client.post("/volumes/" + volume.id + "/action", body=json_grow)
      

    The pvcctl tool

    Now that you have understood all this, I can tell you that I've written a tool called pvcctl based on the OpenStack python APIs. This tool is freely available on github. As I said before, it fits my needs and is an example of what can be done using the OpenStack APIs in python. Keep in mind that I'm not a developer and the code can probably be better, but this tool is used by my whole team on PowerVC, so it will probably be good enough to create shell scripts on top of it or for daily PowerVC administration. The tool can be found at this address: https://github.com/chmod666org/pvcctl. Give it a try and tell me what you think about it. Below are a couple of examples showing how to use the tool. You'll see, it's super simple:

    • Create a network:
    • # pvcctl network create name=newvlan id=666 cidr='10.10.20.20/24' dns1='8.8.8.8' dns2='8.8.9.9' gw='10.10.20.254'
      
    • Add description on a vm:
    • # pvcctl vm set_description vm=deckard desc="We call it Voight-Kampff"
      
    • Migrate a vm:
    • # pvcctl vm migrate vm=tyrell host=21AFF8V
      
    • Attach a volume to a vm:
    • # pvcctl vm attach_vol vm=tyrell vol=myvol
      
    • Create a vm:
    • # pvcctl vm create ip='10.14.20.240' ec_max=1 ec_min=0.1 ec=0.1 vp=1 vp_min=1 vp_max=4 mem=4096 mem_min=1024 mem_max=8192 weight=240 name=bcubcu disks="death" scg=ssp vlan=vlan-1331 image=kitchen-aix72 aggregate=hg2 user_data=testvm srr=yes
      
    • Create a volume:
    • # pvcctl volume create provider=mystorageprovider name=volume_test size=10
      
    • Grow a volume:
    • # pvcctl volume grow vol=test_volume size=50
      

    Automating PowerVC and NovaLink installation and post-installation with Ansible

    At the same time I released my pvcctl tool, I also had the idea that releasing my PowerVC and NovaLink playbook for Ansible would be a good thing. This playbook is not huge and does not do a lot of things, but I was using it a lot when deploying all my NovaLink hosts (I now have 16 MMEs managed by NovaLink) and when creating PowerVC servers for different kinds of projects. It's a shame that everybody in my company has not yet understood why having multiple PowerVC servers is just a wrong idea and a waste of time (I'm not surprised that between a good and a bad idea they prefer to choose the bad one :-) ; obvious when you never touch production at all but still hold the power to decide things). Anyway, this playbook is used for two things: first, preparing my NovaLink hosts (being sure I'm at the latest version of NovaLink, and that everything is configured as I want (ntp, dns, rsct)); second, installing PowerVC hosts (installing PowerVC is just super boring: you always have to install tons of rpms needed for dependencies, and if, like me, you do not have a satellite connection or access to the internet, it can be a real pain). The only thing you have to do is configure the inventories files and the group_vars files located in the playbook directory. The playbook can be found at this address: https://github.com/chmod666org/ansible-powervc-novalink.

    • Put the name of your NovaLink hosts in the hosts.novalink file:
    • # cat inventories/hosts.novalink
      nl1.lab.chmod666.org
      nl2.lab.chmod666.org
      [..]
      
    • Put the name of your PowerVC hosts in the hosts.powervc file:
    • # cat inventories/hosts.powervc
      pvc1.lab.chmod666.org
      pvc2.lab.chmod666.org
      [..]
      
    • Next prepare group_vars files for NovaLink …
    • ntpservers:
        - myntp1
        - myntp2
        - myntp3
      dnsservers:
        - 8.8.8.8
        - 8.8.9.9
      dnssearch:
        - lab.chmod666.org
      vepa_iface: ibmveth6
      repo: novalinkrepo
      
    • and PowerVC:
    • ntpservers:
        - myntp1
        - myntp2
        - myntp3
      dnsservers:
        - 8.8.8.8
        - 8.8.9.9
      dnssearch:
        - lab.chmod666.org
      repo_rhel: http://myrepo.lab.chmod666.org/rhel72le/
      repo_ibmtools: http://myrepo.lab.chmod666.org/ibmpowertools71le/
      repo_powervc: http://myrepo.lab.chmod666.org/powervc
      powervc_base: PowerVC_V1.3.1_for_Power_Linux_LE_RHEL_7.1_062016.tar.gz
      powervc_upd: powervc-update-ppcle-1.3.1.2.tgz
      powervc_rpm: [ 'python-dns-1.12.0-1.20150617git465785f.el7.noarch.rpm', 'selinux-policy-3.13.1-60.el7.noarch.rpm', 'selinux-policy-targeted-3.13.1-60.el7.noarch.rpm', 'python-fpconst-0.7.3-12.el7.noarch.rpm', 'python-pyasn1-0.1.6-2.el7.noarch.rpm', 'python-pyasn1-modules-0.1.6-2.el7.noarch.rpm', 'python-twisted-web-12.1.0-5.el7_2.ppc64le.rpm', 'sysfsutils-2.1.0-16.el7.ppc64le.rpm', 'SOAPpy-0.11.6-17.el7.noarch.rpm', 'SOAPpy-0.11.6-17.el7.noarch.rpm', 'python-twisted-core-12.2.0-4.el7.ppc64le.rpm', 'python-zope-interface-4.0.5-4.el7.ppc64le.rpm', 'pyserial-2.6-5.el7.noarch.rpm' ]
      powervc_base_version: 1.3.1.0
      powervc_upd_version: 1.3.1.2
      powervc_edition: cloud_powervm
      

    You then just have to run the playbook against the NovaLink and PowerVC hosts to perform the installation and post-installation:

    • Novalink post-install:
    • # ansible-playbook -i inventories/hosts.novalink site.yml
      
    • PowerVC install:
    • # ansible-playbook -i inventories/hosts.powervc site.yml
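
    Before running the playbooks, a quick ad-hoc ping is a cheap way to check that Ansible can reach every host of an inventory over ssh:

    # ansible -i inventories/hosts.novalink all -m ping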
      

    powervcansible

    Just to give you an example of one of the tasks of this playbook, here is the task in charge of installing PowerVC. Pretty simple :-) :

    ## install powervc
    - name: check previous installation
      command: bash -c "rpm -qa | grep ibmpowervc-"
      register: check_base
      ignore_errors: True
    - debug: var=check_base
    
    - name: install powervc binaries
      command: chdir=/tmp/powervc-{{ powervc_base_version }} /tmp/powervc-{{ powervc_base_version }}/install -s cloud_powervm
      environment:
        HOST_INTERFACE: "{{ ansible_default_ipv4.interface }}"
        EGO_ENABLE_SUPPORT_IPV6: N
        PATH: $PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.ppc64le/jre/bin/:/usr/sbin:/usr/bin
        ERL_EPMD_ADDRESS: "::ffff:127.0.1.1"
      when: check_base.rc == 1
    
    - name: check previous update
      command: rpm -q ibmpowervc-{{ powervc_upd_version }}-1.noarch
      register: check_upd
      ignore_errors: True
    - debug: var=check_upd
      
    - name: updating powervc
      command: chdir=/tmp/powervc-{{ powervc_upd_version }} /tmp/powervc-{{ powervc_upd_version }}/update -s 
      when: check_upd.rc == 1
    

    The goal here is not to explain how Ansible works, but to show you a simple example of what I'm doing with Ansible on my Linux boxes (all of this related to Power). If you want to check further, have a look at the playbook itself on github :-)

    Conclusion

    This blog post is just a way to show you my work on both pvcctl and the Ansible playbook for NovaLink and PowerVC. It's not a detailed blog post about deep technical stuff. I hope you'll give the tools a try and tell me what can be improved or changed. As always ... I hope it helps.

    Putting NovaLink in Production & more PowerVC (1.3.1.2) tips and tricks

    I've been quite busy, and writing the blog is getting more and more difficult with the amount of work I have, but I try to stick to it, as writing these blog posts is almost the only thing I can do properly in my whole life. So why do without? As my place is one of the craziest places I have ever worked in (for the good ... and the bad (I'll not talk here about how things are organized or about the recognition you get for your work, but be sure it is probably one of the main reasons I'll leave this place one day or another)), the PowerSystems growth is crazy and the number of AIX partitions we are managing with PowerVC never stops increasing; I think we are one of the biggest PowerVC customers in the whole world (I don't know if that is a good thing or not). Just to give you a couple of examples: we have here the biggest Power Enterprise Pool I have ever seen (384 Power8 mobile cores), the number of partitions managed by PowerVC is around 2600, and we have one PowerVC managing almost 30 hosts. You understood well ... these numbers are huge. It seems very funny, but it's not; the growth is a problem, a technical problem, and we are facing problems that most of you will never hit. I'm speaking about density and scalability. Happily for us, the "vertical" design of PowerVC can now be replaced by what I call a "horizontal" design: instead of putting all the nova instances on one single machine, we now have the possibility to spread the load across the hosts by using NovaLink. As we needed to solve these density and scalability problems, we decided to move all the P8 hosts to NovaLink (this process is still ongoing, but most of the engineering work is already done). As you now know, we are not deploying one host a year but generally a couple per month, and that's why we needed to find a solution to automate this. So this blog post will talk about all the things and best practices I have learned using and implementing NovaLink in a huge production environment (automated installation, tips and tricks, post-install, migration and so on). But we will not stop here: I'll also talk about the new things I have learned about PowerVC (1.3.1.2 and 1.3.0.1) and give more tips and tricks to use the product at its best. Before going any further, I first want to say a big thank you to the whole PowerVC team for their kindness and the precious time they gave us to advise and educate the OpenStack noob I am. (A special thanks to Drew Thorstensen for the long discussions we had about OpenStack and PowerVC. He is probably one of the most passionate guys I have ever met at IBM.)

    Novalink Automated installation

    I'll not write a big introduction: let's get to work, starting with NovaLink and how to automate the NovaLink installation process. Copy the content of the installation cdrom to a directory that can be served by an http server on your NIM server (I'm using my NIM server for the bootp and tftp parts). Note that I'm doing this with a tar command because there are symbolic links in the iso, and a simple cp would end up with a full filesystem.

    # loopmount -i ESD_-_PowerVM_NovaLink_V1.0.0.3_062016.iso -o "-V cdrfs -o ro" -m /mnt
    # tar cvf iso.tar /mnt/*
    # tar xvf iso.tar -C /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
    # ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
    total 320
    dr-xr-xr-x    2 root     system          256 Jul 28 17:54 .disk
    -r--r--r--    1 root     system          243 Apr 20 21:27 README.diskdefines
    -r--r--r--    1 root     system         3053 May 25 22:25 TRANS.TBL
    dr-xr-xr-x    3 root     system          256 Apr 20 11:59 boot
    dr-xr-xr-x    3 root     system          256 Apr 20 21:27 dists
    dr-xr-xr-x    3 root     system          256 Apr 20 21:27 doc
    dr-xr-xr-x    2 root     system         4096 Aug 09 15:59 install
    -r--r--r--    1 root     system       145981 Apr 20 21:34 md5sum.txt
    dr-xr-xr-x    2 root     system         4096 Apr 20 21:27 pics
    dr-xr-xr-x    3 root     system          256 Apr 20 21:27 pool
    dr-xr-xr-x    3 root     system          256 Apr 20 11:59 ppc
    dr-xr-xr-x    2 root     system          256 Apr 20 21:27 preseed
    dr-xr-xr-x    4 root     system          256 May 25 22:25 pvm
    lrwxrwxrwx    1 root     system            1 Aug 29 14:55 ubuntu -> .
    dr-xr-xr-x    3 root     system          256 May 25 22:25 vios
    

    Prepare the PowerVM NovaLink repository. The content of the repository can be found in the NovaLink iso image in pvm/repo/pvmrepo.tgz:

    # ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/
    total 720192
    -r--r--r--    1 root     system          223 May 25 22:25 TRANS.TBL
    -rw-r--r--    1 root     system         2106 Sep 05 15:56 pvm-install.cfg
    -r--r--r--    1 root     system    368722592 May 25 22:25 pvmrepo.tgz
    

    Extract the content of this tgz file in a directory that can be served by the http server:

    # mkdir /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
    # cp /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/pvmrepo.tgz /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
    # cd /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
    # gunzip pvmrepo.tgz
    # tar xvf pvmrepo.tar
    [..]
    x ./pool/non-free/p/pvm-core/pvm-core-dbg_1.0.0.3-160525-2192_ppc64el.deb, 54686380 bytes, 106810 media blocks.
    x ./pool/non-free/p/pvm-core/pvm-core_1.0.0.3-160525-2192_ppc64el.deb, 2244784 bytes, 4385 media blocks.
    x ./pool/non-free/p/pvm-core/pvm-core-dev_1.0.0.3-160525-2192_ppc64el.deb, 618378 bytes, 1208 media blocks.
    x ./pool/non-free/p/pvm-pkg-tools/pvm-pkg-tools_1.0.0.3-160525-492_ppc64el.deb, 170700 bytes, 334 media blocks.
    x ./pool/non-free/p/pvm-rest-server/pvm-rest-server_1.0.0.3-160524-2229_ppc64el.deb, 263084432 bytes, 513837 media blocks.
    # rm pvmrepo.tar 
    # ls -l 
    total 16
    drwxr-xr-x    2 root     system          256 Sep 11 13:26 conf
    drwxr-xr-x    2 root     system          256 Sep 11 13:26 db
    -rw-r--r--    1 root     system          203 May 26 02:19 distributions
    drwxr-xr-x    3 root     system          256 Sep 11 13:26 dists
    -rw-r--r--    1 root     system         3132 May 24 20:25 novalink-gpg-pub.key
    drwxr-xr-x    4 root     system          256 Sep 11 13:26 pool
    

    Copy the NovaLink boot files in a directory that can be served by your tftp server (I’m using /var/lib/tftpboot):

    # mkdir /var/lib/tftpboot
    # cp -r /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm /var/lib/tftpboot
    # ls -l /var/lib/tftpboot
    total 1016
    -r--r--r--    1 root     system         1120 Jul 26 20:53 TRANS.TBL
    -r--r--r--    1 root     system       494072 Jul 26 20:53 core.elf
    -r--r--r--    1 root     system          856 Jul 26 21:18 grub.cfg
    -r--r--r--    1 root     system        12147 Jul 26 20:53 pvm-install-config.template
    dr-xr-xr-x    2 root     system          256 Jul 26 20:53 repo
    dr-xr-xr-x    2 root     system          256 Jul 26 20:53 rootfs
    -r--r--r--    1 root     system         2040 Jul 26 20:53 sample_grub.cfg
    

    I still don't know why this is the case on AIX, but the tftp server searches for the grub.cfg in the root directory of your AIX system. It's not the case for my RedHat Enterprise Linux installations, but it is for the NovaLink/Ubuntu installation. Copy the sample_grub.cfg to /grub.cfg and modify the content of the file:

    • As the gateway, netmask and nameserver will be provided by the pvm-install-config.cfg file (the configuration file of the NovaLink installer; we will talk about this later), comment out those three lines.
    • The hostname will still be needed.
    • Modify the linux line and point to the vmlinux file provided in the NovaLink iso image.
    • Modify the live-installer to point to the filesystem.squashfs provided in the NovaLink iso image.
    • Modify the pvm-repo line to point to the pvm-repository directory we created before.
    • Modify the pvm-installer line to point to the NovaLink install configuration file (we will modify this one after).
    • Don't do anything with the pvm-vios line, as we are installing NovaLink on a system that already has Virtual I/O Servers installed (I'm not installing Scale Out systems, only high-end models).
    • I'll talk later about the pvmdisk line (this line is not present by default in the sample file provided in the NovaLink iso image).
    # cp /var/lib/tftpboot/sample_grub.cfg /grub.cfg
    # cat /grub.cfg
    # Sample GRUB configuration for NovaLink network installation
    set default=0
    set timeout=10
    
    menuentry 'PowerVM NovaLink Install/Repair' {
     insmod http
     insmod tftp
     regexp -s 1:mac_pos1 -s 2:mac_pos2 -s 3:mac_pos3 -s 4:mac_pos4 -s 5:mac_pos5 -s 6:mac_pos6 '(..):(..):(..):(..):(..):(..)' ${net_default_mac}
     set bootif=01-${mac_pos1}-${mac_pos2}-${mac_pos3}-${mac_pos4}-${mac_pos5}-${mac_pos6}
     regexp -s 1:prefix '(.*)\.(\.*)' ${net_default_ip}
    # Setup variables with values from Grub's default variables
     set ip=${net_default_ip}
     set serveraddress=${net_default_server}
     set domain=${net_ofnet_network_domain}
    # If tftp is desired, replace http with tftp in the line below
     set root=http,${serveraddress}
    # Remove comment after providing the values below for
    # GATEWAY_ADDRESS, NETWORK_MASK, NAME_SERVER_IP_ADDRESS
    # set gateway=10.10.10.1
    # set netmask=255.255.255.0
    # set namserver=10.20.2.22
      set hostname=nova0696010
    # In this sample file, the directory novalink is assumed to exist on the
    # BOOTP server and has the NovaLink ISO content
     linux /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/vmlinux \
     live-installer/net-image=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/filesystem.squashfs \
     pkgsel/language-pack-patterns= \
     pkgsel/install-language-support=false \
     netcfg/disable_dhcp=true \
     netcfg/choose_interface=auto \
     netcfg/get_ipaddress=${ip} \
     netcfg/get_netmask=${netmask} \
     netcfg/get_gateway=${gateway} \
     netcfg/get_nameservers=${nameserver} \
     netcfg/get_hostname=${hostname} \
     netcfg/get_domain=${domain} \
     debian-installer/locale=en_US.UTF-8 \
     debian-installer/country=US \
    # The directory novalink-repo on the BOOTP server contains the content
    # of the pvmrepo.tgz file obtained from the pvm/repo directory on the
    # NovaLink ISO file.
    # The directory novalink-vios on the BOOTP server contains the files
    # needed to perform a NIM install of VIOS server(s)
    #  pvmdebug=1
     pvm-repo=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/novalink-repo/ \
     pvm-installer-config=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg \
     pvm-viosdir=http://${serveraddress}/novalink-vios \
     pvmdisk=/dev/mapper/mpatha \
     initrd /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/install/netboot_initrd.gz
    }
    
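    A quick sanity check that the tftp server really serves /grub.cfg from the root directory (using the standard interactive tftp client; diff printing nothing means the served file matches):

    # tftp localhost
    tftp> get /grub.cfg /tmp/grub.cfg.check
    tftp> quit
    # diff /grub.cfg /tmp/grub.cfg.check
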

    Modify the pvm-install.cfg, the NovaLink installer configuration file. We just need to modify the [SystemConfig], [NovaLinkGeneralSettings], [NovaLinkNetworkSettings], [NovaLinkAPTRepoConfig] and [NovaLinkAdminCredentials] stanzas. My advice is to configure one NovaLink by hand (doing an installation directly with the iso image): after the installation, the configuration file built from the answers you gave during the installation is saved in /var/log/pvm-install/novalink-install.cfg, and you can copy it to your installation server as your template.

    # more /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
    [SystemConfig]
    serialnumber = XXXXXXXX
    lmbsize = 256
    
    [NovaLinkGeneralSettings]
    ntpenabled = True
    ntpserver = timeserver1
    timezone = Europe/Paris
    
    [NovaLinkNetworkSettings]
    dhcpip = DISABLED
    ipaddress = YYYYYYYY
    gateway = ZZZZZZZZ
    netmask = 255.255.255.0
    dns1 = 8.8.8.8
    dns2 = 8.8.9.9
    hostname = WWWWWWWW
    domain = lab.chmod666.org
    
    [NovaLinkAPTRepoConfig]
    downloadprotocol = http
    mirrorhostname = nimserver
    mirrordirectory = /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/
    mirrorproxy =
    
    [VIOSNIMServerConfig]
    novalink_private_ip = 192.168.128.1
    vios1_private_ip = 192.168.128.2
    vios2_private_ip = 192.168.128.3
    novalink_netmask = 255.255.128.0
    viosinstallprompt = False
    
    [NovaLinkAdminCredentials]
    username = padmin
    password = $6$N1hP6cJ32p17VMpQ$sdThvaGaR8Rj12SRtJsTSRyEUEhwPaVtCTvbdocW8cRzSQDglSbpS.jgKJpmz9L5SAv8qptgzUrHDCz5ureCS.
    userdescription = NovaLink System Administrator
    

    Finally modify the /etc/bootptab file and add a line matching your installation:

    # tail -1 /etc/bootptab
    nova0696010:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.16:ht=ethernet:sa=10.255.228.37:gw=10.20.65.1:sm=255.255.255.0:
    

    Don’t forget to set up an http server serving all the needed files. I know this configuration is super insecure, but honestly I don’t care: my NIM server is on a super secured network, only accessible by the VIOS and NovaLink partitions. So I’m good :-) :

    # cd /opt/freeware/etc/httpd/ 
    # grep -Ei "^Listen|^DocumentRoot" conf/httpd.conf
    Listen 80
    DocumentRoot "/"
    
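    From any box with curl you can quickly verify that the needed files are really served (assuming the NIM server answers as nimserver, like in the APT repository configuration used later); you should get a 200 for both the squashfs and the repository directory:

    # curl -sI http://nimserver/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/filesystem.squashfs | head -1
    HTTP/1.1 200 OK
    # curl -sI http://nimserver/export/nim/lpp_source/powervc/novalink/1.0.0.3/novalink-repo/ | head -1
    HTTP/1.1 200 OK
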


    Instead of doing this over and over at every NovaLink installation, I have written a custom script preparing my NovaLink installation files. What this script does is:

    • Prepare the pvm-install.cfg file.
    • Modify the grub.cfg file.
    • Add a line to the /etc/bootptab file.
    #  ./custnovainstall.ksh nova0696010 10.20.65.16 10.20.65.1 255.255.255.0
    #!/usr/bin/ksh
    
    novalinkname=$1
    novalinkip=$2
    novalinkgw=$3
    novalinknm=$4
    cfgfile=/export/nim/lpp_source/powervc/novalink/novalink-install.cfg
    desfile=/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
    grubcfg=/export/nim/lpp_source/powervc/novalink/grub.cfg
    grubdes=/grub.cfg
    
    echo "+--------------------------------------+"
    echo "NovaLink name: ${novalinkname}"
    echo "NovaLink IP: ${novalinkip}"
    echo "NovaLink GW: ${novalinkgw}"
    echo "NovaLink NM: ${novalinknm}"
    echo "+--------------------------------------+"
    echo "Cfg ref: ${cfgfile}"
    echo "Cfg file: ${cfgfile}.${novalinkname}"
    echo "+--------------------------------------+"
    
    typeset -u serialnumber
    serialnumber=$(echo ${novalinkname} | sed 's/nova//g')
    
    echo "SerNum: ${serialnumber}"
    
    cat ${cfgfile} | sed "s/serialnumber = XXXXXXXX/serialnumber = ${serialnumber}/g" | sed "s/ipaddress = YYYYYYYY/ipaddress = ${novalinkip}/g" | sed "s/gateway = ZZZZZZZZ/gateway = ${novalinkgw}/g" | sed "s/netmask = 255.255.255.0/netmask = ${novalinknm}/g" | sed "s/hostname = WWWWWWWW/hostname = ${novalinkname}/g" > ${cfgfile}.${novalinkname}
    cp ${cfgfile}.${novalinkname} ${desfile}
    cat ${grubcfg} | sed "s/  set hostname=WWWWWWWW/  set hostname=${novalinkname}/g" > ${grubcfg}.${novalinkname}
    cp ${grubcfg}.${novalinkname} ${grubdes}
    # nova1009425:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.15:ht=ethernet:sa=10.255.248.37:gw=10.20.65.1:sm=255.255.255.0:
    echo "${novalinkname}:bf=/var/lib/tftpboot/core.elf:ip=${novalinkip}:ht=ethernet:sa=10.255.248.37:gw=${novalinkgw}:sm=${novalinknm}:" >> /etc/bootptab
    

    Novalink installation: vSCSI or NPIV ?

    NovaLink is not designed to be installed on top of NPIV, it’s a fact. As it is designed to be installed on a totally new system without any Virtual I/O Servers configured, the NovaLink installation by default creates the Virtual I/O Servers, and using these VIOS the installation process creates backing devices on top of logical volumes created in the default VIOS storage pool. The NovaLink partition is then installed on top of these two logical volumes, which are mirrored at the end. This is how NovaLink proceeds on Scale Out systems.

    For High End systems NovaLink assumes you are going to install the NovaLink partition on top of vSCSI (I have personally tried with hdisk backed and SSP Logical Unit backed devices and both work ok). For those like me who want to install NovaLink on top of NPIV (I know this is not a good choice, but once again I was forced to do that) there still is a possibility to do it. (In my humble opinion the NPIV design is done for high performance, and the NovaLink partition is not going to be an I/O intensive partition. Even worse, our whole new design is based on NPIV for LPARs … it’s a shame as NPIV is not a solution designed for high density and high scalability. Every PowerVM system administrator should remember this. NPIV IS NOT A GOOD CHOICE FOR DENSITY AND SCALABILITY, USE IT FOR PERFORMANCE ONLY !!!. The story behind this is funny. I’m 100% sure that SSP is ten times a better choice to achieve density and scalability. I decided to open a poll on twitter asking this question: “Will you choose SSP or NPIV to design a scalable AIX cloud based on PowerVC ?”. I was 100% sure SSP would win and made a bet with a friend (I owe him beers now) that I’d be right. What was my surprise when seeing the results: 90% of people voted for NPIV. I’m sorry to say that guys, but there are two possibilities: 1/ You don’t really know what scalability and density mean because you never faced them, so that’s why you made the wrong choice. 2/ You know it and you’re just wrong :-) . This little story is another proof that IBM is not responsible for the dying of AIX and PowerVM … but unfortunately you are responsible for it, by not understanding that the only way to survive is to embrace highly scalable solutions like Linux is doing with Openstack and Ceph. It’s a fact. Period.)

    This said … if you are trying to install NovaLink on top of NPIV you’ll get an error. A workaround to this problem is to add the following line to the grub.cfg file:

     pvmdisk=/dev/mapper/mpatha \
    

    If you do that you’ll be able to install NovaLink on your NPIV disk, but the first installation will still fail at the “grub-install” step. Just re-run the installation a second time and the grub-install command will work ok :-) (I’ll explain how to avoid this second issue later).

    One workaround to this second issue is to recreate the initrd after adding a line in the debian-installer config file, as shown below.
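
    For reference, here is the line in question (I detail how to rebuild the initrd with it in the "Deep dive into the initrd" section at the end of this post):

    # grep bootdev opt/ibm/pvm-install/data/pvm-install-preseed.cfg
    d-i grub-installer/bootdev string /dev/mapper/mpatha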

    Fully automated installation by example

    • Here the core.elf file is downloaded by tftp; you can see that the grub.cfg file is searched in /.
    • The installer is starting.
    • The vmlinux is downloaded (http).
    • The root.squashfs is downloaded (http).
    • The pvm-install.cfg configuration file is downloaded (http).
    • pvm services are started. At this time, if you are running in co-management mode, you’ll see the red lock in the HMC Server status.
    • The Linux and NovaLink installation is ongoing.
    • System is ready.

    Novalink code auto update

    When adding a NovaLink host to PowerVC, the powervc packages coming from the PowerVC management host are installed on the NovaLink partition. You can check this during the installation; here is what’s going on when the NovaLink host is added to PowerVC:


    # cat /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
    ################################################################################
    Starting the IBM PowerVC Novalink Installation on:
    2016-09-11T16:42:05+02:00
    ################################################################################
    
    LOG file is /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
    
    2016-09-11T16:42:05.18+02:00 Installation directory is /opt/ibm/powervc
    2016-09-11T16:42:05.18+02:00 Installation source location is /tmp/powervc_img_temp_1473611916_1627713/powervc-1.3.1.2
    [..]
    Setting up python-neutron (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
    Setting up neutron-common (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
    Setting up neutron-plugin-ml2 (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
    Setting up ibmpowervc-powervm-network (1.3.1.2) ...
    Setting up ibmpowervc-powervm-oslo (1.3.1.2) ...
    Setting up ibmpowervc-powervm-ras (1.3.1.2) ...
    Setting up ibmpowervc-powervm (1.3.1.2) ...
    W: --force-yes is deprecated, use one of the options starting with --allow instead.
    
    ***************************************************************************
    IBM PowerVC Novalink installation
     successfully completed at 2016-09-11T17:02:30+02:00.
     Refer to
     /opt/ibm/powervc/log/powervc_install_2016-09-11-165617.log
     for more details.
    ***************************************************************************
    


    Installing the missing deb packages if NovaLink host was added before PowerVC upgrade

    If the NovaLink host was added in PowerVC 1.3.1.1 and you then updated to PowerVC 1.3.1.2, you have to update the packages by hand because there is a little bug during the update of some packages:

    • From the PowerVC management host copy the latest packages to the NovaLink host:
    • # scp /opt/ibm/powervc/images/powervm/powervc-powervm-compute-1.3.1.2.tgz padmin@nova0696010:~
      padmin@nova0696010's password:
      powervc-powervm-compute-1.3.1.2.tgz
      
    • Update the packages on the NovaLink host
    • # tar xvzf powervc-powervm-compute-1.3.1.2.tgz
      # cd powervc-1.3.1.2/packages/powervm
      # dpkg -i nova-powervm_2.0.3-160816-48_all.deb
      # dpkg -i networking-powervm_2.0.1-160816-6_all.deb
      # dpkg -i ceilometer-powervm_2.0.1-160816-17_all.deb
      # /opt/ibm/powervc/bin/powervc-services restart
      

    rsct and pvm deb update

    Never forget to install the latest rsct and pvm packages after the installation. You can clone the official IBM repositories for the pvm and rsct files (check my previous post about NovaLink for more details about cloning the repository). Then create two files in /etc/apt/sources.list.d, one for pvm, the other for rsct:

    # vi /etc/apt/sources.list.d/pvm.list
    deb http://nimserver/export/nim/lpp_source/powervc/novalink/nova/debian novalink_1.0.0 non-free
    # vi /etc/apt/sources.list.d/rsct.list
    deb http://nimserver/export/nim/lpp_source/powervc/novalink/rsct/ubuntu xenial main
    # dpkg -l | grep -i rsct
    ii  rsct.basic                                3.2.1.0-15300                           ppc64el      Reliable Scalable Cluster Technology - Basic
    ii  rsct.core                                 3.2.1.3-16106-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Core
    ii  rsct.core.utils                           3.2.1.3-16106-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Utilities
    # dpkg -l | grep -i pvm
    ii  pvm-cli                                   1.0.0.3-160516-1488                     all          Power VM Command Line Interface
    ii  pvm-core                                  1.0.0.3-160525-2192                     ppc64el      PVM core runtime package
    ii  pvm-novalink                              1.0.0.3-160525-1000                     ppc64el      Meta package for all PowerVM Novalink packages
    ii  pvm-rest-app                              1.0.0.3-160524-2229                     ppc64el      The PowerVM NovaLink REST API Application
    ii  pvm-rest-server                           1.0.0.3-160524-2229                     ppc64el      Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink 
    # apt-get install rsct.core rsct.basic
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      docutils-common libpaper-utils libpaper1 python-docutils python-roman
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
      rsct.core.utils src
    The following packages will be upgraded:
      rsct.core rsct.core.utils src
    3 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
    Need to get 9,356 kB of archives.
    After this operation, 548 kB disk space will be freed.
    [..]
    # apt-get install pvm-novalink
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      docutils-common libpaper-utils libpaper1 python-docutils python-roman
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
      pvm-core pvm-rest-app pvm-rest-server pypowervm
    The following packages will be upgraded:
      pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
    5 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
    Need to get 287 MB of archives.
    After this operation, 203 kB of additional disk space will be used.
    Do you want to continue? [Y/n] Y
    [..]
    

    After the installation, here is what you should have if everything was updated properly:

    # dpkg -l | grep rsct
    ii  rsct.basic                                3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Basic
    ii  rsct.core                                 3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Core
    ii  rsct.core.utils                           3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Utilities
    # dpkg -l | grep pvm
    ii  pvm-cli                                   1.0.0.3-160516-1488                     all          Power VM Command Line Interface
    ii  pvm-core                                  1.0.0.3.1-160713-2441                   ppc64el      PVM core runtime package
    ii  pvm-novalink                              1.0.0.3.1-160714-1152                   ppc64el      Meta package for all PowerVM Novalink packages
    ii  pvm-rest-app                              1.0.0.3.1-160713-2417                   ppc64el      The PowerVM NovaLink REST API Application
    ii  pvm-rest-server                           1.0.0.3.1-160713-2417                   ppc64el      Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink
    

    Novalink post-installation (my ansible way to do that)

    You all know by now that I’m not very fond of doing the same things over and over again, that’s why I have created an ansible post-install playbook especially for the NovaLink post installation. You can download it here: nova_ansible. Then install ansible on a host that has ssh access to all your NovaLink partitions and run the ansible playbook:

    • Untar the ansible playbook:
    • # mkdir /srv/ansible
      # cd /srv/ansible
      # tar xvf novalink_ansible.tar 
      
    • Modify the group_vars/novalink.yml to fit your environment:
    • # cat group_vars/novalink.yml
      ntpservers:
        - ntpserver1
        - ntpserver2
      dnsservers:
        - 8.8.8.8
        - 8.8.9.9
      dnssearch:
        - lab.chmod666.org
      vepa_iface: ibmveth6
      repo: nimserver
      
    • Share the root ssh key with the NovaLink hosts (be careful: by default NovaLink does not allow root login, you have to modify the sshd configuration file; see the sketch after this list).
    • Put all your NovaLink hosts into the inventory file:
    • #cat inventories/hosts.novalink
      [novalink]
      nova65a0cab
      nova65ff4cd
      nova10094ef
      nova06960ab
      
    • Run ansible-playbook and you’re done:
    • # ansible-playbook -i inventories/hosts.novalink site.yml
      
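    About the root login, here is a minimal sketch of what I mean by modifying the sshd configuration (this assumes the stock Ubuntu openssh-server shipped with NovaLink; the hostname and key are examples, and you should obviously harden this back once the playbook has run):

    $ ssh padmin@nova0696010
    padmin@nova0696010:~$ sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
    padmin@nova0696010:~$ sudo service ssh restart
    padmin@nova0696010:~$ sudo mkdir -p /root/.ssh
    padmin@nova0696010:~$ echo "ssh-rsa AAAA... root@ansiblehost" | sudo tee -a /root/.ssh/authorized_keys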

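    And if you only need to replay the post-install on one freshly delivered partition, the standard --limit flag of ansible-playbook avoids touching the whole inventory:

    # ansible-playbook -i inventories/hosts.novalink site.yml --limit nova0696010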

    More details about NovaLink

    MGMTSWITCH vswitch automatic creation

    Do not try to create the MGMTSWITCH by yourself: the NovaLink installer is doing it for you. As my Virtual I/O Servers are installed using the IBM Provisioning Toolkit for PowerVM … I was creating the MGMTSWITCH at that time, but I was wrong. You can see this in the file /var/log/pvm-install/pvminstall.log on the NovaLink partition:

    # cat /var/log/pvm-install/pvminstall.log
    Fri Aug 12 17:26:07 UTC 2016: PVMDebug = 0
    Fri Aug 12 17:26:07 UTC 2016: Running initEnv
    [..]
    Fri Aug 12 17:27:08 UTC 2016: Using user provided pvm-install configuration file
    Fri Aug 12 17:27:08 UTC 2016: Auto Install set
    [..]
    Fri Aug 12 17:27:44 UTC 2016: Auto Install = 1
    Fri Aug 12 17:27:44 UTC 2016: Validating configuration file
    Fri Aug 12 17:27:44 UTC 2016: Initializing private network configuration
    Fri Aug 12 17:27:45 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o c
    Fri Aug 12 17:27:46 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o n -i 3 -n MGMTSWITCH -p 4094 -t 1
    Fri Aug 12 17:27:49 UTC 2016: Start setupinstalldisk operation for /dev/mapper/mpatha
    Fri Aug 12 17:27:49 UTC 2016: Running updatedebconf
    Fri Aug 12 17:56:06 UTC 2016: Pre-seeding disk recipe
    

    NPIV lpar creation problem !

    As you know my environment is crazy: every lpar we create has 4 virtual fibre channel adapters, obviously two on fabric A and two on fabric B, and obviously again each fabric must be present on each Virtual I/O Server. So to sum up, an lpar must have access to fabric A and B using VIOS1, and to fabric A and B using VIOS2. Unfortunately there was a little bug in the current NovaLink (1.0.0.3) code and all the lpars were created with only two adapters. The PowerVC team gave me a patch handling this particular issue by patching the npiv.py file. This patch needs to be installed on the NovaLink partition itself:

    # cd /usr/lib/python2.7/dist-packages/powervc_nova/virt/ibmpowervm/pvm/volume
    # sdiff npiv.py.back npiv.bck
    


    I’m intentionally not giving you the solution here (by just copying/pasting code) because the issue has been addressed: an APAR (IT16534) was opened for it and it is resolved in version 1.3.1.2.

    From NovaLink to HMC …. and the opposite

    One of the challenge for me was to be sure everything was working ok regarding LPM and NovaLink. So I decided to test different cases:

    • From NovaLink host to NovaLink host (didn’t have any trouble) :-)
    • From NovaLink host to HMC host (didn’t have any trouble) :-)
    • From HMC host to NovaLink host (had a trouble) :-(

    Once again, this issue preventing HMC to NovaLink LPM from working correctly is related to storage. A patch is ongoing, but let me explain this issue a little bit (only if you absolutely have to move an LPAR from HMC to NovaLink and you are in the same case as I am):

    PowerVC is not doing the mapping to the destination Virtual I/O Servers correctly: it is trying to map fabric A twice on VIOS1 and fabric B twice on VIOS2. Luckily for us you can do the migration by hand:

    • Do the LPM operation from PowerVC and check on the HMC side how PowerVC is doing the mapping (log on the HMC to check this):
    • #  lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
      time=08/31/2016 18:53:27,"text=HSCE2124 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios1/2//fcs2,3/vios2/1//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command failed."
      
    • One interesting point you can see here is that the NovaLink user used for LPM is not padmin but wlp. Have a look on the NovaLink machine if you are a little bit curious.

    • If you double check the mapping you’ll see that PowerVC is mixing up the VIOS. Just rerun the command with the mappings in the right order (see the breakdown below) and you’ll be able to do HMC to NovaLink LPM (by the way, PowerVC automatically detects that the host has changed for this lpar (moved outside of PowerVC)):
    • # migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i '"virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"',shared_proc_pool_id=0 -o m
      # lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
      time=08/31/2016 19:13:00,"text=HSCE2123 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command was executed successfully."
      
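    If you have to redo this by hand, each element of virtual_fc_mappings reads (as far as I understand the HMC syntax) as client-slot/vios-name/vios-id/[vios-slot]/[vios-fc-port]. Comparing the failed and the fixed command, only the VIOS side of client slots 6 and 3 changes:

    # failed (generated by PowerVC): 6/vios1/2//fcs2 and 3/vios2/1//fcs2
    # fixed (run by hand)          : 6/vios2/1//fcs2 and 3/vios1/2//fcs2
    # slots 4 and 5 were already mapped on the right Virtual I/O Server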

    One more time, don't worry about this issue, a patch is on the way. But I thought it was interesting to talk about it, just to show you how PowerVC is handling this (user, key sharing, check on the HMC).

    Deep dive into the initrd

    I am curious and there is no way to change this. As I wanted to know how the NovaLink installer works, I had to look into the netboot_initrd.gz file. There is a lot of interesting stuff to check in this initrd. Run the commands below on a Linux partition if you also want to have a look:

    # scp nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz .
    # gunzip netboot_initrd
    # cpio -i < netboot_initrd
    185892 blocks
    

    The installer is located in opt/ibm/pvm-install:

    # ls opt/ibm/pvm-install/data/
    40mirror.pvm  debpkgs.txt  license.txt  nimclient.info  pvm-install-config.template  pvm-install-preseed.cfg  rsct-gpg-pub.key  vios_diagram.txt
    # ls opt/ibm/pvm-install/bin
    assignio.py        envsetup        installpvm                    monitor        postProcessing    pvmwizardmain.py  restore.py        switchnetworkcfg  vios
    cfgviosnetwork.py  functions       installPVMPartitionWizard.py  network        procmem           recovery          setupinstalldisk  updatedebconf     vioscfg
    chviospasswd       getnetworkinfo  ioadapter                     networkbridge  pvmconfigdata.py  removemem         setupviosinstall  updatenimsetup    welcome.py
    editpvmconfig      initEnv         mirror                        nimscript      pvmtime           resetsystem       summary.py        user              wizpkg
    

    You can for instance check what the installer is exactly doing. Let's take again the example of the MGMTSWITCH creation: looking at the installer scripts confirms what I said above, the installer creates the switch itself.


    Remember, I told you before that I had a problem with the installation on NPIV. You can avoid installing NovaLink twice by modifying the debian installer directly in the initrd, adding a line in the file opt/ibm/pvm-install/data/pvm-install-preseed.cfg (you have to rebuild the initrd after doing this):

    # grep bootdev opt/ibm/pvm-install/data/pvm-install-preseed.cfg
    d-i grub-installer/bootdev string /dev/mapper/mpatha
    # find | cpio -H newc -o > ../new_initrd_file
    # gzip -9 ../new_initrd_file
    # scp ../new_initrd_file.gz nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz
    

    You can also find good examples of pvmctl commands here:

    # grep -R pvmctl *
    pvmctl lv create --size $LV_SIZE --name $LV_NAME -p id=$vid
    pvmctl scsi create --type lv --vg name=rootvg --lpar id=1 -p id=$vid --stor-id name=$LV_NAME
    
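    pvmctl is also the day-to-day CLI on the NovaLink partition itself; if you just want to poke around safely, the list actions are read-only:

    # pvmctl sys list
    # pvmctl vios list
    # pvmctl lpar list
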

    Troubleshooting

    NovaLink is not PowerVC, so here is a little reminder of what I do to troubleshoot NovaLink:

    • Installation troubleshooting:
    • # cat /var/log/pvm-install/pvminstall.log
      
    • Neutron Agent log (always double check this one):
    • # cat /var/log/neutron/neutron-powervc-pvm-sea-agent.log
      
    • Nova logs for this host are not accessible on the PowerVC management host anymore, so check it on the NovaLink partition if needed:
    • # cat /var/log/nova/nova-compute.log
      
    • pvmctl logs:
    • # cat /var/log/pvm/pvmctl.log
      

    One last thing to add about NovaLink. One thing I like a lot is that NovaLink makes hourly and daily backups of the system and of the VIOS. These backups are stored in /var/backups/pvm:

    # crontab -l
    # VIOS hourly backups - at 15 past every hour except for midnight
    15 1-23 * * * /usr/sbin/pvm-backup --type vios --frequency hourly
    # Hypervisor hourly backups - at 15 past every hour except for midnight
    15 1-23 * * * /usr/sbin/pvm-backup --type system --frequency hourly
    # VIOS daily backups - at 15 past midnight
    15 0    * * * /usr/sbin/pvm-backup --type vios --frequency daily
    # Hypervisor daily backups - at 15 past midnight
    15 0    * * * /usr/sbin/pvm-backup --type system --frequency daily
    # ls -l /var/backups/pvm
    total 4
    drwxr-xr-x 2 root pvm_admin 4096 Sep  9 00:15 9119-MME*0265FF47B
    
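    The same binary can of course be run by hand, for instance right before a risky change (same flags as in the crontab above):

    # /usr/sbin/pvm-backup --type system --frequency daily
    # ls -ltr /var/backups/pvm
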

    More PowerVC tips and tricks

    Let's finish this blog post with more PowerVC tips and tricks. Before giving you the tricks I have to warn you: none of these tricks are supported by PowerVC, use them at your own risk OR contact your support before doing anything. You may break and destroy everything if you are not aware of what you are doing. So please be very careful using all these tricks. YOU HAVE BEEN WARNED !!!!!!

    Accessing and querying the database

    This first trick is funny and will allow you to query and modify the PowerVC database. Once again, do this at your own risk. One of the issues I had was strange: I do not remember how it happened exactly, but some of my luns that were not attached to any host were still showing an attachment count equal to 1, and I had no way to remove it. Even worse, someone had deleted these luns on the SVC side. So these luns were what I call "ghost luns": non-existing but non-deletable luns (I also had to remove the storage provider related to these luns). The only way to fix this was to change the state to detached directly in the cinder database. Be careful, this trick only works with MariaDB.

    First get the database password: take the encrypted password from the /opt/ibm/powervc/data/powervc-db.conf file and decode it to obtain the clear password:

    # grep ^db_password /opt/ibm/powervc/data/powervc-db.conf
    db_password = aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==
    # python -c "from powervc_keystone.encrypthandler import EncryptHandler; print EncryptHandler().decode('aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==')"
    OhnhBBS_gvbCcqHVfx2N
    # mysql -u root -p cinder
    Enter password:
    MariaDB [cinder]> show tables;
    +----------------------------+
    | Tables_in_cinder           |
    +----------------------------+
    | backups                    |
    | cgsnapshots                |
    | consistencygroups          |
    | driver_initiator_data      |
    | encryption                 |
    [..]
    

    Then get the lun uuid on the PowerVC gui for the lun you want to change, and follow the commands below:


    MariaDB [cinder]> select * from volume_attachment where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
    *************************** 1. row ***************************
       created_at: 2016-05-26 08:52:51
       updated_at: 2016-05-26 08:54:23
       deleted_at: 2016-05-26 08:54:23
          deleted: 1
               id: ce4238b5-ea39-4ce1-9ae7-6e305dd506b1
        volume_id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
    attached_host: NULL
    instance_uuid: 44c7a72c-610c-4af1-a3ed-9476746841ab
       mountpoint: /dev/sdb
      attach_time: 2016-05-26 08:52:51
      detach_time: 2016-05-26 08:54:23
      attach_mode: rw
    attach_status: attached
    1 row in set (0.01 sec)
    MariaDB [cinder]> select * from volumes where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
    *************************** 1. row ***************************
                     created_at: 2016-05-26 08:51:57
                     updated_at: 2016-05-26 08:54:23
                     deleted_at: NULL
                        deleted: 0
                             id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
                         ec2_id: NULL
                        user_id: 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9
                     project_id: 1471acf124a0479c8d525aa79b2582d0
                           host: pb01_mn_svc_qual
                           size: 1
              availability_zone: nova
                         status: available
                  attach_status: attached
                   scheduled_at: 2016-05-26 08:51:57
                    launched_at: 2016-05-26 08:51:59
                  terminated_at: NULL
                   display_name: dummy
            display_description: NULL
              provider_location: NULL
                  provider_auth: NULL
                    snapshot_id: NULL
                 volume_type_id: e49e9cc3-efc3-4e7e-bcb9-0291ad28df42
                   source_volid: NULL
                       bootable: 0
              provider_geometry: NULL
                       _name_id: NULL
              encryption_key_id: NULL
               migration_status: NULL
             replication_status: disabled
    replication_extended_status: NULL
        replication_driver_data: NULL
            consistencygroup_id: NULL
                    provider_id: NULL
                    multiattach: 0
                previous_status: NULL
    1 row in set (0.00 sec)
    MariaDB [cinder]> update volume_attachment set attach_status='detached' where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    MariaDB [cinder]> update volumes set attach_status='detached' where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
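    Before restarting anything I like to double check that the update really went through (same query, one shot through the mysql client):

    # mysql -u root -p cinder -e "select attach_status from volumes where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78'"
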

    The second issue I had was about having some machines in deleted state while the reality was that the HMC had just rebooted and, for an unknown reason, these machines were seen as 'deleted' … but they were not. Using this trick I was able to force a re-evaluation of each machine in this state:

    #  mysql -u root -p nova
    Enter password:
    MariaDB [nova]> select * from instance_health_status where health_state='WARNING';
    +---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
    | created_at          | updated_at          | deleted_at | deleted | id                                   | health_state | reason                                                                                                                                                                                                                | unknown_reason_details |
    +---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
    | 2016-07-11 08:58:37 | NULL                | NULL       |       0 | 1af1805c-bb59-4bc9-8b6d-adeaeb4250f3 | WARNING      | [{"resource_local": "server", "display_name": "p00ww6754398", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "1af1805c-bb59-4bc9-8b6d-adeaeb4250f3"}]                |                        |
    | 2015-07-31 16:53:50 | 2015-07-31 18:49:50 | NULL       |       0 | 2668e808-10a1-425f-a272-6b052584557d | WARNING      | [{"resource_local": "server", "display_name": "multi-vol", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "2668e808-10a1-425f-a272-6b052584557d"}]                         |                        |
    | 2015-08-03 11:22:38 | 2015-08-03 15:47:41 | NULL       |       0 | 2934fb36-5d91-48cd-96de-8c16459c50f3 | WARNING      | [{"resource_local": "server", "display_name": "clouddev-test-754df319-00000038", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "2934fb36-5d91-48cd-96de-8c16459c50f3"}] |                        |
    | 2016-07-11 09:03:59 | NULL                | NULL       |       0 | 3fc42502-856b-46a5-9c36-3d0864d6aa4c | WARNING      | [{"resource_local": "server", "display_name": "p00ww3254401", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "3fc42502-856b-46a5-9c36-3d0864d6aa4c"}]                |                        |
    | 2015-07-08 20:11:48 | 2015-07-08 20:14:09 | NULL       |       0 | 54d02c60-bd0e-4f34-9cb6-9c0a0b366873 | WARNING      | [{"resource_local": "server", "display_name": "p00wb3740870", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "54d02c60-bd0e-4f34-9cb6-9c0a0b366873"}]                    |                        |
    | 2015-07-31 17:44:16 | 2015-07-31 18:49:50 | NULL       |       0 | d5ec2a9c-221b-44c0-8573-d8e3695a8dd7 | WARNING      | [{"resource_local": "server", "display_name": "multi-vol-sp5", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "d5ec2a9c-221b-44c0-8573-d8e3695a8dd7"}]                     |                        |
    +---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
    6 rows in set (0.00 sec)
    MariaDB [nova]> update instance_health_status set health_state='PENDING',reason='' where health_state='WARNING';
    Query OK, 6 rows affected (0.00 sec)
    Rows matched: 6  Changed: 6  Warnings: 0
    


    The ceilometer issue

    When updating from PowerVC 1.3.0.1 to 1.3.1.1, PowerVC changes the database backend from DB2 to MariaDB. This is a good thing, but the update works by exporting all the data to flat files and then re-inserting it into the MariaDB database record per record. I had a huge problem because of this, just because my ceilometer database was huge due to the number of machines I have and the number of operations we have run on PowerVC since it went into production. The DB insert took more than 3 days and never finished. If you don't need the ceilometer data, my advice is to change the retention from the default of 270 days to 2 hours:

    # powervc-config metering event_ttl --set 2 --unit hr 
    # ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf
    

    If this is not enough and you are still experiencing problems during the update, the best way is to flush the entire ceilometer database before the update:

    # /opt/ibm/powervc/bin/powervc-services stop
    # /opt/ibm/powervc/bin/powervc-services db2 start
    # /bin/su - pwrvcdb -c "db2 drop database ceilodb2"
    # /bin/su - pwrvcdb -c "db2 CREATE DATABASE ceilodb2 AUTOMATIC STORAGE YES ON /home/pwrvcdb DBPATH ON /home/pwrvcdb USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM PAGESIZE 16384 RESTRICTIVE"
    # /bin/su - pwrvcdb -c "db2 connect to ceilodb2 ; db2 grant dbadm on database to user ceilometer"
    # /opt/ibm/powervc/bin/powervc-dbsync ceilometer
    # /bin/su - pwrvcdb -c "db2 connect TO ceilodb2; db2 CALL GET_DBSIZE_INFO '(?, ?, ?, 0)' > /tmp/ceilodb2_db_size.out; db2 terminate" > /dev/null
    

    Multi tenancy ... how to deal with a huge environment

    As my environment is growing bigger and bigger, I faced a couple of people trying to force me to multiply the number of PowerVC machines we have. As Openstack is a solution designed to handle both density and scalability, I said that doing this is just non-sense. Seriously, people who still believe in this have not understood anything about the cloud, Openstack and PowerVC. Fortunately we found a solution acceptable by everybody. As we are creating what we call "building blocks", we had to find a way to isolate one "block" from another. The solution for host isolation is called multi tenancy isolation. For the storage side we are just going to play with quotas. By doing this, a user will be able to manage a couple of hosts and the associated storage (storage templates) without having the right to do anything on the others:


    Before doing anything create the tenant (or project) and a user associated with it:

    # cat /opt/ibm/powervc/version.properties | grep cloud_enabled
    cloud_enabled = yes
    # cat ~/powervcrc
    export OS_USERNAME=root
    export OS_PASSWORD=root
    export OS_TENANT_NAME=ibm-default
    export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
    export OS_IDENTITY_API_VERSION=3
    export OS_CACERT=/etc/pki/tls/certs/powervc.crt
    export OS_REGION_NAME=RegionOne
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_COMPUTE_API_VERSION=2.25
    export OS_NETWORK_API_VERSION=2.0
    export OS_IMAGE_API_VERSION=2
    export OS_VOLUME_API_VERSION=2
    # source powervcrc
    # openstack project create hb01
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description |                                  |
    | domain_id   | default                          |
    | enabled     | True                             |
    | id          | 90d064b4abea4339acd32a8b6a8b1fdf |
    | is_domain   | False                            |
    | name        | hb01                             |
    | parent_id   | default                          |
    +-------------+----------------------------------+
    # openstack role list
    +----------------------------------+---------------------+
    | ID                               | Name                |
    +----------------------------------+---------------------+
    | 1a76014f12594214a50c36e6a8e3722c | deployer            |
    | 54616a8b136742098dd81eede8fd5aa8 | vm_manager          |
    | 7bd6de32c14d46f2bd5300530492d4a4 | storage_manager     |
    | 8260b7c3a4c24a38ba6bee8e13ced040 | deployer_restricted |
    | 9b69a55c6b9346e2b317d0806a225621 | image_manager       |
    | bc455ed006154d56ad53cca3a50fa7bd | admin               |
    | c19a43973db148608eb71eb3d86d4735 | service             |
    | cb130e4fa4dc4f41b7bb4f1fdcf79fc2 | self_service        |
    | f1a0c1f9041d4962838ec10671befe33 | vm_user             |
    | f8cf9127468045e891d5867ce8825d30 | viewer              |
    +----------------------------------+---------------------+
    # useradd hb01_admin
    # openstack role add --project hb01 --user hb01_admin admin
    

    Then associate with the tenant each host group (aggregates in Openstack terms) that is allowed for it, using the filter_tenant_id metadata (you have to put your allowed hosts in a host group to enable this feature). For each allowed host group, add this field to the metadata of the host group (first find the tenant id):

    # openstack project list
    +----------------------------------+-------------+
    | ID                               | Name        |
    +----------------------------------+-------------+
    | 1471acf124a0479c8d525aa79b2582d0 | ibm-default |
    | 90d064b4abea4339acd32a8b6a8b1fdf | hb01        |
    | b79b694c70734a80bc561e84a95b313d | powervm     |
    | c8c42d45ef9e4a97b3b55d7451d72591 | service     |
    | f371d1f29c774f2a97f4043932b94080 | project1    |
    +----------------------------------+-------------+
    # openstack aggregate list
    +----+---------------+-------------------+
    | ID | Name          | Availability Zone |
    +----+---------------+-------------------+
    |  1 | Default Group | None              |
    | 21 | aggregate2    | None              |
    | 41 | hg2           | None              |
    | 43 | hb01_mn       | None              |
    | 44 | hb01_me       | None              |
    +----+---------------+-------------------+
    # nova aggregate-set-metadata hb01_mn filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf 
    Metadata has been successfully updated for aggregate 43.
    | Id | Name    | Availability Zone | Hosts             | Metadata                                                                                                                                   
    | 43 | hb01_mn | -                 | '9119MME_1009425' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=4', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70' |
    # nova aggregate-set-metadata hb01_me filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf 
    Metadata has been successfully updated for aggregate 44.
    | Id | Name    | Availability Zone | Hosts             | Metadata                                                                                                                                   
    | 44 | hb01_me | -                 | '9119MME_0696010' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=2', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70' |
    

    To make this work, add the AggregateMultiTenancyIsolation filter to the scheduler_default_filters in the nova.conf file and restart the nova services:

    # grep scheduler_default_filter /etc/nova/nova.conf
    scheduler_default_filters = RamFilter,CoreFilter,ComputeFilter,RetryFilter,AvailabilityZoneFilter,ImagePropertiesFilter,ComputeCapabilitiesFilter,MaintenanceFilter,PowerVCServerGroupAffinityFilter,PowerVCServerGroupAntiAffinityFilter,PowerVCHostAggregateFilter,PowerVMNetworkFilter,PowerVMProcCompatModeFilter,PowerLMBSizeFilter,PowerMigrationLicenseFilter,PowerVMMigrationCountFilter,PowerVMStorageFilter,PowerVMIBMiMobilityFilter,PowerVMRemoteRestartFilter,PowerVMRemoteRestartSameHMCFilter,PowerVMEndianFilter,PowerVMGuestCapableFilter,PowerVMSharedProcPoolFilter,PowerVCResizeSameHostFilter,PowerVCDROFilter,PowerVMActiveMemoryExpansionFilter,PowerVMNovaLinkMobilityFilter,AggregateMultiTenancyIsolation
    # powervc-services restart
    

    We are done regarding the hosts.

    Enabling quotas

    To allow one user/tenant to create volumes on only one storage provider, we first need quotas to be enabled. Check the quota rules in the cinder policy file:

    # grep quota /opt/ibm/powervc/policy/cinder/policy.json
        "volume_extension:quotas:show": "",
        "volume_extension:quotas:update": "rule:admin_only",
        "volume_extension:quotas:delete": "rule:admin_only",
        "volume_extension:quota_classes": "rule:admin_only",
        "volume_extension:quota_classes:validate_setup_for_nested_quota_use": "rule:admin_only",
    
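    You can display the current quotas of a tenant before touching anything (standard cinder quota-show, here with the id of the hb01 tenant created earlier):

    # cinder --service-type volume quota-show 90d064b4abea4339acd32a8b6a8b1fdf
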

    Then set the quota to 0 for every non-allowed storage template of this tenant, and leave only the one you want with a non-zero value. Easy:

    # cinder --service-type volume type-list
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    |                  ID                  |                     Name                    | Description | Is_Public |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    | 53434872-a0d2-49ea-9683-15c7940b30e5 |               svc2 base template            |      -      |    True   |
    | e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 |               svc1 base template            |      -      |    True   |
    | f45469d5-df66-44cf-8b60-b226425eee4f |                     svc3                    |      -      |    True   |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    # cinder --service-type volume quota-update --volumes 0 --volume-type "svc2" 90d064b4abea4339acd32a8b6a8b1fdf
    # cinder --service-type volume quota-update --volumes 0 --volume-type "svc3" 90d064b4abea4339acd32a8b6a8b1fdf
    +-------------------------------------------------------+----------+
    |                        Property                       |  Value   |
    +-------------------------------------------------------+----------+
    |                    backup_gigabytes                   |   1000   |
    |                        backups                        |    10    |
    |                       gigabytes                       | 1000000  |
    |              gigabytes_svc2 base template             | 10000000 |
    |              gigabytes_svc1 base template             | 10000000 |
    |                     gigabytes_svc3                    |    -1    |
    |                  per_volume_gigabytes                 |    -1    |
    |                       snapshots                       |  100000  |
    |             snapshots_svc2 base template              |  100000  |
    |             snapshots_svc1 base template              |  100000  |
    |                     snapshots_svc3                    |    -1    |
    |                        volumes                        |  100000  |
    |            volumes_svc2 base template                 |  100000  |
    |            volumes_svc1 base template                 |    0     |
    |                      volumes_svc3                     |    0     |
    +-------------------------------------------------------+----------+
    # powervc-services stop
    # powervc-services start
    

    By doing this you have enabled the isolation between the two tenants. Then use the appropriate user to do the appropriate task, as in the sketch below.
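
    To work as the hb01 tenant afterwards, here is a minimal sketch of a dedicated rc file derived from the powervcrc shown above (hb01_admin is the user created earlier, the password is obviously an example):

    # cat ~/powervcrc.hb01
    export OS_USERNAME=hb01_admin
    export OS_PASSWORD=xxxxxxxx
    export OS_TENANT_NAME=hb01
    export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
    export OS_IDENTITY_API_VERSION=3
    export OS_CACERT=/etc/pki/tls/certs/powervc.crt
    export OS_REGION_NAME=RegionOne
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    # source ~/powervcrc.hb01
    # openstack volume list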

    PowerVC cinder above the Petabyte

    Now that quotas are enabled, use this command if you want to be able to have more than one petabyte of data managed by PowerVC:

    # cinder --service-type volume quota-class-update --gigabytes -1 default
    # powervc-services stop
    # powervc-services start
    

    PowerVC cinder above 10000 luns

    Change the osapi_max_limit in cinder.conf if you want to go above the 10000 lun limit (check every cinder configuration file; the one in cinder.conf is for the global number of volumes):

    # grep ^osapi_max_limit cinder.conf
    osapi_max_limit = 15000
    # powervc-services stop
    # powervc-services start
    

    Snapshot and consistency group

    There is a new cool feature available with the latest version of PowerVC (1.3.1.2). This feature allows you to create snapshots of volumes (only on SVC and Storwize for the moment). You now have the possibility to create consistency groups (groups of volumes) and to create snapshots of these consistency groups, allowing you for instance to back up a volume group directly from Openstack. I'm doing the example below using the command line because I think it is easier to understand with these commands rather than showing you the same thing with the rest api:

    First create a consistency group:

    # cinder --service-type volume type-list
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    |                  ID                  |                     Name                    | Description | Is_Public |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    | 53434872-a0d2-49ea-9683-15c7940b30e5 |              svc2 base template             |      -      |    True   |
    | 862b0a8e-cab4-400c-afeb-99247838f889 |             p8_ssp base template            |      -      |    True   |
    | e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 |               svc1 base template            |      -      |    True   |
    | f45469d5-df66-44cf-8b60-b226425eee4f |                     svc3                    |      -      |    True   |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    # cinder --service-type volume consisgroup-create --name foovg_cg "svc1 base template"
    +-------------------+-------------------------------------------+
    |      Property     |                   Value                   |
    +-------------------+-------------------------------------------+
    | availability_zone |                    nova                   |
    |     created_at    |         2016-09-11T21:10:58.000000        |
    |    description    |                    None                   |
    |         id        |    950a5193-827b-49ab-9511-41ba120c9ebd   |
    |        name       |                  foovg_cg                 |
    |       status      |                  creating                 |
    |    volume_types   | [u'e49e9cc3-efc3-4e7e-bcb9-0291ad28df42'] |
    +-------------------+-------------------------------------------+
    # cinder --service-type volume consisgroup-list
    +--------------------------------------+-----------+----------+
    |                  ID                  |   Status  |   Name   |
    +--------------------------------------+-----------+----------+
    | 950a5193-827b-49ab-9511-41ba120c9ebd | available | foovg_cg |
    +--------------------------------------+-----------+----------+
    

    Create volume in this consistency group:

    # cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol1 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
    # cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol2 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
    +------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
    |           Property           |                                                                          Value                                                                           |
    +------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
    |         attachments          |                                                                            []                                                                            |
    |      availability_zone       |                                                                           nova                                                                           |
    |           bootable           |                                                                          false                                                                           |
    |     consistencygroup_id      |                                                           950a5193-827b-49ab-9511-41ba120c9ebd                                                           |
    |          created_at          |                                                                2016-09-11T21:23:02.000000                                                                |
    |         description          |                                                                           None                                                                           |
    |          encrypted           |                                                                          False                                                                           |
    |        health_status         | {u'health_value': u'PENDING', u'id': u'8d078772-00b5-45fc-89c8-82c63e2c48ed', u'value_reason': u'PENDING', u'updated_at': u'2016-09-11T21:23:02.669372'} |
    |              id              |                                                           8d078772-00b5-45fc-89c8-82c63e2c48ed                                                           |
    |           metadata           |                                                                            {}                                                                            |
    |       migration_status       |                                                                           None                                                                           |
    |         multiattach          |                                                                          False                                                                           |
    |             name             |                                                                        foovg_vol2                                                                        |
    |    os-vol-host-attr:host     |                                                                           None                                                                           |
    | os-vol-tenant-attr:tenant_id |                                                             1471acf124a0479c8d525aa79b2582d0                                                             |
    |      replication_status      |                                                                         disabled                                                                         |
    |             size             |                                                                           200                                                                            |
    |         snapshot_id          |                                                                           None                                                                           |
    |         source_volid         |                                                                           None                                                                           |
    |            status            |                                                                         creating                                                                         |
    |          updated_at          |                                                                           None                                                                           |
    |           user_id            |                                             0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                             |
    |         volume_type          |                                                                   svc1 base template                                                                     |
    +------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
    

    You're now able to attach these two volumes to a machine from the PowerVC GUI:


    # lsmpio -q
    Device           Vendor Id  Product Id       Size    Volume Name
    ------------------------------------------------------------------------------
    hdisk0           IBM        2145                 64G volume-aix72-44c7a72c-000000e0-
    hdisk1           IBM        2145                100G volume-snap1-dab0e2d1-130a
    hdisk2           IBM        2145                100G volume-snap2-5e863fdb-ab8c
    hdisk3           IBM        2145                200G volume-foovg_vol1-3ba0ff59-acd8
    hdisk4           IBM        2145                200G volume-foovg_vol2-8d078772-00b5
    # cfgmgr
    # lspv
    hdisk0          00c8b2add70d7db0                    rootvg          active
    hdisk1          00f9c9f51afe960e                    None
    hdisk2          00f9c9f51afe9698                    None
    hdisk3          none                                None
    hdisk4          none                                None
    
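    The two disks have no PVID yet. If you want to put the new volumes to use right away you can build a volume group on top of them; a minimal sketch, reusing the hdisk names above (the volume group name is mine):

    # mkvg -y foovg hdisk3 hdisk4
    # lsvg -p foovg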

    Then you can create a snapshot of these two volumes. It's that easy :-) :

    # cinder --service-type volume cgsnapshot-create 950a5193-827b-49ab-9511-41ba120c9ebd
    +---------------------+--------------------------------------+
    |       Property      |                Value                 |
    +---------------------+--------------------------------------+
    | consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
    |      created_at     |      2016-09-11T21:31:12.000000      |
    |     description     |                 None                 |
    |          id         | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
    |         name        |                 None                 |
    |        status       |               creating               |
    +---------------------+--------------------------------------+
    # cinder --service-type volume cgsnapshot-list
    +--------------------------------------+-----------+------+
    |                  ID                  |   Status  | Name |
    +--------------------------------------+-----------+------+
    | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f | available |  -   |
    +--------------------------------------+-----------+------+
    # cinder --service-type volume cgsnapshot-show 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f
    +---------------------+--------------------------------------+
    |       Property      |                Value                 |
    +---------------------+--------------------------------------+
    | consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
    |      created_at     |      2016-09-11T21:31:12.000000      |
    |     description     |                 None                 |
    |          id         | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
    |         name        |                 None                 |
    |        status       |              available               |
    +---------------------+--------------------------------------+
    

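    If you later need to materialize this snapshot as new volumes, recent cinder clients provide a consisgroup-create-from-src operation; a sketch reusing the cgsnapshot id above (the group name is mine, and I have not validated this one against every PowerVC level):

    # cinder --service-type volume consisgroup-create-from-src --cgsnapshot 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f --name foovg_cg_restore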

    Conclusion

    Please keep in mind that the content of this blog post comes from real life and production examples. I hope you now better understand that scalability, density, fast deployment, snapshots and multi tenancy are features that are absolutely needed in the AIX world. As you can see the PowerVC team is moving fast. Probably faster than every customer I have ever seen. I must admit they are right: doing this is the only way to face the Linux x86 offering. And I must confess it is damn fun to work on those things. I'm so happy to have the best of two worlds: AIX/PowerSystems and Openstack. This is the only direction we have to take if we want AIX to survive. So please stop being scared or unconvinced by these solutions; they are damn good and production ready. Face and embrace the future and stop looking at the past. As always I hope it helps.

    Enhance your AIX package management with yum and nim over http

    As AIX is getting older and older our old favorite OS is still trying to struggle versus the mighty Linux and the fantastic Solaris (no sarcasm in that sentence, I truly believe what I say). You may have noticed that -with time- IBM is slowly but surely moving from proprietary code to something more open (ie. PowerVC/Openstack projects, integration with Chef, Linux on Power and tons of other examples). I'm deviating a little bit from the main topic of this blog post, but speaking about open source I have many things to say. If someone from my company is reading this post please note that it is my point of view ... but I'm still sure that we are going the WRONG way by not being more open and by not publishing on github. Starting from now every AIX IT shop in the world must consider using OpenSource software (git, chef, ansible, zsh and so on) instead of maintaining homemade tools, or worse, paying for tools that are 100 % of the time worse than OpenSource tools. Even better, every IT admin and every team must consider sharing their sources with the rest of the world for one single good reason: "Alone we can do so little, together we can do so much". Every company not considering this today is doomed. Take example on Bloomberg, Facebook (sharing all their Chef cookbooks with the world), twitter: they're all using github to share their opensource projects. Even military, police and banks are doing the same. They're still secure but they are open to the world, ready to work together to make and create things better and better. All of this to introduce you to new things coming to AIX. Instead of reinventing the wheel IBM had the great idea to use already well established tools. It was the case for Openstack/PowerVC and it is also the case for the tools I'll talk about in this post. The first is yum (yellowdog updater modified). Instead of installing rpm packages by hand you now have the possibility to use yum and to definitely end the rpm dependency nightmare that we have all had since AIX 5L was released. Next, instead of using the proprietary nimsh protocol to install filesets (bff packages) you can now tell the nim server and the nimclient to do this over http/https (secure is only for the authentication as far as I know) (an open protocol :-) ). By doing this you will enhance the way you are managing packages on AIX. Do this now on every AIX system you install: yum everywhere, and stop using NFS ... we're now in an http world :-)

    yum: the yellow dog updater modified

    I'm not going to explain what yum is. If you don't know you're not in the right place. Just note that my advice, starting from now, is to use yum to install every software of the AIX toolbox (ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/). IBM is providing an official repository that can be mirrored on your own site to avoid having to use a proxy or having access to the internet from your servers (you must admit that this is almost impossible and every big company will try to avoid it). Let's start by trying to install yum:

    Installing yum

    IBM is providing an archive with all the rpm packages needed to install and use yum on an AIX server, you can find this archive here: ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/yum_bundle_v1.tar. Just download it and install every rpm in it and yum will be available on your system, simple as that:

    A specific version of the rpm binary is mandatory to use yum, so before doing anything update the rpm.rte fileset. As AIX is rpm "aware" it already has an rpm database, but this one is not manageable by yum. The installation of rpm in version 4.9.1.3 or greater is needed; this installation will migrate the existing rpm database to a new one usable by yum. The fileset in the right version can be found here: ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/INSTALLP/ppc/

    • By default the rpm command is installed by an AIX fileset:
    • # which rpm
      /usr/bin/rpm
      # lslpp -w /usr/bin/rpm
        File                                        Fileset               Type
        ----------------------------------------------------------------------------
        /usr/bin/rpm                                rpm.rte               File
      # rpm --version
      RPM version 3.0.5
      
    • The rpm database is located in /usr/opt/freeware/packages :
    • # pwd
      /usr/opt/freeware/packages
      # ls -ltr
      total 5096
      -rw-r--r--    1 root     system         4096 Jul 01 2011  triggerindex.rpm
      -rw-r--r--    1 root     system         4096 Jul 01 2011  conflictsindex.rpm
      -rw-r--r--    1 root     system        20480 Jul 21 00:54 nameindex.rpm
      -rw-r--r--    1 root     system        20480 Jul 21 00:54 groupindex.rpm
      -rw-r--r--    1 root     system      2009224 Jul 21 00:54 packages.rpm
      -rw-r--r--    1 root     system       647168 Jul 21 00:54 fileindex.rpm
      -rw-r--r--    1 root     system        20480 Jul 21 00:54 requiredby.rpm
      -rw-r--r--    1 root     system        81920 Jul 21 00:54 providesindex.rpm
      
    • Install the rpm.rte fileset in the right version (4.9.1.3):
    • # file rpm.rte.4.9.1.3
      rpm.rte.4.9.1.3: backup/restore format file
      # installp -aXYgd . rpm.rte
      +-----------------------------------------------------------------------------+
                          Pre-installation Verification...
      +-----------------------------------------------------------------------------+
      Verifying selections...done
      Verifying requisites...done
      Results...
      
      SUCCESSES
      ---------
        Filesets listed in this section passed pre-installation verification
        and will be installed.
      
        Selected Filesets
        -----------------
        rpm.rte 4.9.1.3                             # RPM Package Manager
      [..]
      #####################################################
              Rebuilding RPM Data Base ...
              Please wait for rpm_install background job termination
              It will take a few minutes
      [..]
      Installation Summary
      --------------------
      Name                        Level           Part        Event       Result
      -------------------------------------------------------------------------------
      rpm.rte                     4.9.1.3         USR         APPLY       SUCCESS
      rpm.rte                     4.9.1.3         ROOT        APPLY       SUCCESS
      
    • After the installation check that you have the correct version of rpm, you can also notice some changes in the rpm database files:
    • # rpm --version
      RPM version 4.9.1.3
      # ls -ltr /usr/opt/freeware/packages
      total 25976
      -rw-r--r--    1 root     system         4096 Jul 01 2011  triggerindex.rpm
      -rw-r--r--    1 root     system         4096 Jul 01 2011  conflictsindex.rpm
      -rw-r--r--    1 root     system        20480 Jul 21 00:54 nameindex.rpm
      -rw-r--r--    1 root     system        20480 Jul 21 00:54 groupindex.rpm
      -rw-r--r--    1 root     system      2009224 Jul 21 00:54 packages.rpm
      -rw-r--r--    1 root     system       647168 Jul 21 00:54 fileindex.rpm
      -rw-r--r--    1 root     system        20480 Jul 21 00:54 requiredby.rpm
      -rw-r--r--    1 root     system        81920 Jul 21 00:54 providesindex.rpm
      -rw-r--r--    1 root     system            0 Jul 21 01:08 .rpm.lock
      -rw-r--r--    1 root     system         8192 Jul 21 01:08 Triggername
      -rw-r--r--    1 root     system         8192 Jul 21 01:08 Conflictname
      -rw-r--r--    1 root     system        28672 Jul 21 01:09 Dirnames
      -rw-r--r--    1 root     system       221184 Jul 21 01:09 Basenames
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Sha1header
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Requirename
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Obsoletename
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Name
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Group
      -rw-r--r--    1 root     system       815104 Jul 21 01:09 Packages
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Sigmd5
      -rw-r--r--    1 root     system         8192 Jul 21 01:09 Installtid
      -rw-r--r--    1 root     system        86016 Jul 21 01:09 Providename
      -rw-r--r--    1 root     system       557056 Jul 21 01:09 __db.004
      -rw-r--r--    1 root     system     83894272 Jul 21 01:09 __db.003
      -rw-r--r--    1 root     system      7372800 Jul 21 01:09 __db.002
      -rw-r--r--    1 root     system        24576 Jul 21 01:09 __db.001
      

    Then install yum. Please note that I already have some rpms installed on my current system, that's why I'm not installing db or gdbm. If your system is free of any rpm, install all the rpms found in the archive:

    # tar xvf yum_bundle_v1.tar
    x curl-7.44.0-1.aix6.1.ppc.rpm, 584323 bytes, 1142 media blocks.
    x db-4.8.24-3.aix6.1.ppc.rpm, 2897799 bytes, 5660 media blocks.
    x gdbm-1.8.3-5.aix5.2.ppc.rpm, 56991 bytes, 112 media blocks.
    x gettext-0.10.40-8.aix5.2.ppc.rpm, 1074719 bytes, 2100 media blocks.
    x glib2-2.14.6-2.aix5.2.ppc.rpm, 1686134 bytes, 3294 media blocks.
    x pysqlite-1.1.7-1.aix6.1.ppc.rpm, 51602 bytes, 101 media blocks.
    x python-2.7.10-1.aix6.1.ppc.rpm, 23333701 bytes, 45574 media blocks.
    x python-devel-2.7.10-1.aix6.1.ppc.rpm, 15366474 bytes, 30013 media blocks.
    x python-iniparse-0.4-1.aix6.1.noarch.rpm, 37912 bytes, 75 media blocks.
    x python-pycurl-7.19.3-1.aix6.1.ppc.rpm, 162093 bytes, 317 media blocks.
    x python-tools-2.7.10-1.aix6.1.ppc.rpm, 830446 bytes, 1622 media blocks.
    x python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm, 158584 bytes, 310 media blocks.
    x readline-6.1-2.aix6.1.ppc.rpm, 489547 bytes, 957 media blocks.
    x sqlite-3.7.15.2-2.aix6.1.ppc.rpm, 1334918 bytes, 2608 media blocks.
    x yum-3.4.3-1.aix6.1.noarch.rpm, 1378777 bytes, 2693 media blocks.
    x yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm, 62211 bytes, 122 media blocks.
    
    # rpm -Uvh curl-7.44.0-1.aix6.1.ppc.rpm glib2-2.14.6-2.aix5.2.ppc.rpm pysqlite-1.1.7-1.aix6.1.ppc.rpm python-2.7.10-1.aix6.1.ppc.rpm \
        python-devel-2.7.10-1.aix6.1.ppc.rpm python-iniparse-0.4-1.aix6.1.noarch.rpm python-pycurl-7.19.3-1.aix6.1.ppc.rpm \
        python-tools-2.7.10-1.aix6.1.ppc.rpm python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm yum-3.4.3-1.aix6.1.noarch.rpm yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm
    Preparing...                ########################################### [100%]
       1:python                 ########################################### [  9%]
       2:pysqlite               ########################################### [ 18%]
       3:python-iniparse        ########################################### [ 27%]
       4:glib2                  ########################################### [ 36%]
       5:yum-metadata-parser    ########################################### [ 45%]
       6:curl                   ########################################### [ 55%]
       7:python-pycurl          ########################################### [ 64%]
       8:python-urlgrabber      ########################################### [ 73%]
       9:yum                    ########################################### [ 82%]
      10:python-devel           ########################################### [ 91%]
      11:python-tools           ########################################### [100%]
    

    Yum is now ready to be configured and used :-)

    # which yum
    /usr/bin/yum
    # yum --version
    3.4.3
      Installed: yum-3.4.3-1.noarch at 2016-07-20 23:24
      Built    : None at 2016-06-22 14:13
      Committed: Sangamesh Mallayya  at 2014-05-29
    

    Setting up yum and your private yum repository for AIX

    A private repository

    As nobody wants to use the official IBM repository directly over the internet, the goal here is to create your own repository: download the whole content of the official repository and "serve" this directory (the one where you downloaded all the rpms) on a private http server (yum is using http/https obviously :-) ).

    • Using wget, download the content of the whole official repository. You can notice here that IBM is providing the needed metadata (the repodata directory); if you don't have this repodata directory yum can't work properly. It can be recreated using the createrepo command available on all good Linux distros :-) (see the sketch after this list):
    • # wget -r ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/
      # ls -ltr
      [..]
      drwxr-xr-x    2 root     system         4096 Jul 11 22:08 readline
      drwxr-xr-x    2 root     system          256 Jul 11 22:08 rep-gtk
      drwxr-xr-x    2 root     system         4096 Jul 11 22:08 repodata
      drwxr-xr-x    2 root     system         4096 Jul 11 22:08 rpm
      drwxr-xr-x    2 root     system         4096 Jul 11 22:08 rsync
      drwxr-xr-x    2 root     system          256 Jul 11 22:08 ruby
      drwxr-xr-x    2 root     system          256 Jul 11 22:09 rxvt
      drwxr-xr-x    2 root     system         4096 Jul 11 22:09 samba
      drwxr-xr-x    2 root     system          256 Jul 11 22:09 sawfish
      drwxr-xr-x    2 root     system          256 Jul 11 22:09 screen
      drwxr-xr-x    2 root     system          256 Jul 11 22:09 scrollkeeper
      
    • Configure your web server (here it's just an alias because I'm using my http server for other things):
    • # more httpd.conf
      [..]
      Alias /aixtoolbox/  "/apps/aixtoolbox/"
      <Directory "/apps/aixtoolbox/">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Require all granted
      </Directory>
      
      
    • Restart your webserver and check your repository is accessible.

    • That's it, the private repository is ready.

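    If you ever mirror the rpms without the repodata directory, you can rebuild the metadata yourself from any Linux box (or anywhere the createrepo command is available); a minimal sketch, assuming the mirror lives in /apps/aixtoolbox:

    # createrepo /apps/aixtoolbox/
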
    Configuring yum

    On the client just modify the /opt/freeware/etc/yum/yum.conf or add a file in /opt/freeware/etc/yum/yum.repos.d to point to your private repository:

    # cat /opt/freeware/etc/yum/yum.conf
    [main]
    cachedir=/var/cache/yum
    keepcache=1
    debuglevel=2
    logfile=/var/log/yum.log
    exactarch=1
    obsoletes=1
    
    [AIX_Toolbox]
    name=AIX ToolBox Repository
    baseurl=http://nimserver:8080/aixtoolbox/
    enabled=1
    gpgcheck=0
    
    # PUT YOUR REPOS HERE OR IN separate files named file.repo
    # in /etc/yum/repos.d
    

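    If you prefer to keep yum.conf untouched, the same stanza can live in its own file in yum.repos.d; a minimal sketch (the file name is mine):

    # cat /opt/freeware/etc/yum/yum.repos.d/aixtoolbox.repo
    [AIX_Toolbox]
    name=AIX ToolBox Repository
    baseurl=http://nimserver:8080/aixtoolbox/
    enabled=1
    gpgcheck=0
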
    That's it, the client is ready.

    Chef recipe to install and configure yum

    My readers all know that I'm using Chef as a configuration management tool. As you are going to do this on every single system you have, I think giving you the Chef recipe installing and configuring yum can be useful (if you don't care about it just skip it and go to the next section). If you are not using a configuration management tool maybe this simple example will help you to move on and stop doing this by hand or writing ksh scripts. I have to do that on tons of systems, so for me it's just mandatory. Here is my recipe doing all the job: configuring and installing yum, and installing some RPMs:

    directory '/var/tmp/yum' do
      action :create
    end
    
    remote_file '/var/tmp/yum/rpm.rte.4.9.1.3'  do
      source "http://#{node['nimserver']}/powervc/rpm.rte.4.9.1.3"
      action :create
    end
    
    execute "Do the toc" do
      command 'inutoc /var/tmp/yum'
      not_if { File.exist?('/var/tmp/yum/.toc') }
    end
    
    bff_package 'rpm.rte' do
      source '/var/tmp/yum/rpm.rte.4.9.1.3'
      action :install
    end
    
    tar_extract "http://#{node['nimserver']}/powervc/yum_bundle_v1.tar" do
      target_dir '/var/tmp/yum'
      compress_char ''
      user 'root'
      group 'system'
    end
    
    # installing some rpm needed for yum
    # installing some rpm needed for yum (one uniquely named resource per rpm,
    # otherwise Chef complains about duplicate resource names)
    for rpm in [ 'curl-7.44.0-1.aix6.1.ppc.rpm', 'python-pycurl-7.19.3-1.aix6.1.ppc.rpm', 'python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm', 'glib2-2.14.6-2.aix5.2.ppc.rpm', 'yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm', 'python-iniparse-0.4-1.aix6.1.noarch.rpm', 'pysqlite-1.1.7-1.aix6.1.ppc.rpm'  ]
      execute "installing #{rpm}" do
        command "rpm -Uvh /var/tmp/yum/#{rpm}"
        not_if "rpm -qa | grep $(echo #{rpm} | sed 's/.aix6.1//' | sed 's/.aix5.2//' | sed 's/.rpm//')"
      end
    end
    
    # updating python
    execute "updating python" do
      command "rpm -Uvh /var/tmp/yum/python-devel-2.7.10-1.aix6.1.ppc.rpm /var/tmp/yum/python-2.7.10-1.aix6.1.ppc.rpm"
      not_if "rpm -qa | grep python-2.7.10-1"
    end
    
    # installing yum
    execute "installing yum" do
      command "rpm -Uvh /var/tmp/yum/yum-3.4.3-1.aix6.1.noarch.rpm"
      not_if "rpm -qa | grep yum-3.4.3.1.noarch"
    end
    
    # changing yum configuration
    template '/opt/freeware/etc/yum/yum.conf' do
      source 'yum.conf.erb'
    end
    
    # installing some software with aix yum
    for soft in [ 'bash', 'bzip2', 'curl', 'emacs', 'gzip', 'screen', 'vim-enhanced', 'wget', 'zlib', 'zsh', 'patch', 'file', 'lua', 'nspr', 'git' ] do
      execute "install #{soft}" do
        command "yum -y install #{soft}"
      end
    end
    
    # removing temporary file
    execute 'removing /var/tmp/yum' do
      command 'rm -rf /var/tmp/yum'
      only_if { File.exist?('/var/tmp/yum') }
    end
    
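    To apply it I just run chef-client with the recipe in the run list; a sketch, assuming the recipe is named aix_yum (adapt the name to your own cookbook):

    # chef-client -o 'recipe[aix_yum]'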


    After running the chef recipe yum is fully usable \o/.


    Using yum on AIX: what you need to know

    yum is usable just like it is on a Linux system, but you may hit some issues when using it on AIX. For instance you can get this kind of error:

    # yum check
    AIX-rpm-7.2.0.1-2.ppc has missing requires of rpm
    AIX-rpm-7.2.0.1-2.ppc has missing requires of popt
    AIX-rpm-7.2.0.1-2.ppc has missing requires of file-libs
    AIX-rpm-7.2.0.1-2.ppc has missing requires of nss
    

    If you are not aware of the purpose of AIX-rpm please read this. This rpm is what I call a meta package: it does not install anything. It exists because the rpm database does not know anything about things (binaries, libraries) installed by standard AIX filesets. By default rpms are not "aware" of what is installed by a fileset (bff), but most rpms depend on things installed by filesets. When you install a fileset ... let's say it installs a library like libc.a ... AIX runs the updtvpkg program to rebuild this AIX-rpm and says "this rpm will resolve any rpm dependency issue for libc.a". So first, never try to uninstall this rpm; second, it's not a real problem if this rpm has missing dependencies ... as it is providing nothing. If you really want to see which dependencies AIX-rpm resolves, run the following command:

    # rpm -q --provides AIX-rpm-7.2.0.1-2.ppc | grep libc.a
    libc.a(aio.o)
    # lslpp -w /usr/lib/libc.a
      File                                        Fileset               Type
      ----------------------------------------------------------------------------
      /usr/lib/libc.a                             bos.rte.libc          Symlink
    
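    As a side note, if you ever want to rebuild AIX-rpm by hand (this is what the system triggers for you when filesets are installed), you can run updtvpkg yourself:

    # /usr/sbin/updtvpkg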

    If you want to get rid of these messages just install the missing rpms ... using yum:

    # yum -y install popt file-libs
    

    A few examples

    Here are a few examples of software installation using yum:

    • Installing git:
    • # yum install git
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package git.ppc 0:4.3.20-4 will be installed
      --> Finished Dependency Resolution
      
      Dependencies Resolved
      
      ================================================================================================================================================================================================
       Package                                    Arch                                       Version                                         Repository                                          Size
      ================================================================================================================================================================================================
      Installing:
       git                                        ppc                                        4.3.20-4                                        AIX_Toolbox                                        215 k
      
      Transaction Summary
      ================================================================================================================================================================================================
      Install       1 Package
      
      Total size: 215 k
      Installed size: 889 k
      Is this ok [y/N]: y
      Downloading Packages:
      Running Transaction Check
      Running Transaction Test
      Transaction Test Succeeded
      Running Transaction
        Installing : git-4.3.20-4.ppc                                                                                                                                                             1/1
      
      Installed:
        git.ppc 0:4.3.20-4
      
      Complete!
      
    • Removing git :
    • # yum remove git
      Setting up Remove Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package git.ppc 0:4.3.20-4 will be erased
      --> Finished Dependency Resolution
      
      Dependencies Resolved
      
      ================================================================================================================================================================================================
       Package                                   Arch                                      Version                                           Repository                                          Size
      ================================================================================================================================================================================================
      Removing:
       git                                       ppc                                       4.3.20-4                                          @AIX_Toolbox                                       889 k
      
      Transaction Summary
      ================================================================================================================================================================================================
      Remove        1 Package
      
      Installed size: 889 k
      Is this ok [y/N]: y
      Downloading Packages:
      Running Transaction Check
      Running Transaction Test
      Transaction Test Succeeded
      Running Transaction
        Erasing    : git-4.3.20-4.ppc                                                                                                                                                             1/1
      
      Removed:
        git.ppc 0:4.3.20-4
      
      Complete!
      
    • List the available repos:
    • # yum repolist
      repo id                                                                                repo name                                                                                          status
      AIX_Toolbox                                                                            AIX ToolBox Repository                                                                             233
      repolist: 233
      
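    • You can also update packages just like on Linux; a quick sketch (the package name is just an example):
    • # yum check-update
      # yum -y update bash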

    Getting rid of nimsh: USE HTTPS !

    A new feature available in the latest version of AIX (7.2) allows you to use nim over http. It is a long awaited feature for different reasons (it's just my opinion). I personally don't like proprietary protocols such as nimsh and secure nimsh ... and neither do security teams. Who has never experienced installation problems because of nimsh ports not being opened, because of IDS, because of security teams? Using http or https is the solution: there is no company not allowing http or https! This protocol is so widely used, secured and spread across a lot of products that everybody trusts it. I personally prefer opening one single port than struggling to open all the nimsh ports. You'll understand that using http is far better than using nimsh. Before explaining this in details here are a few things you need to know: nimhttp is only available on the latest version of AIX (7.2 SP0/1/2), same for the nimclient; if there is a problem using http the nimclient will automatically fall back to NFS; and only certain nim operations are available over http (see the "Allowed operations" section below).

    Configuring the nim server

    To use nim over http (nimhttp) your nim server must be deployed on at least an AIX 7.2 server (mine is updated to the latest service pack (SP2)). Start the nimhttp service on the nim server to allow nim to use http for its operations:

    # oslevel -s
    7200-00-02-1614
    # startsrc -s nimhttp
    0513-059 The nimhttp Subsystem has been started. Subsystem PID is 11665728.
    # lssrc -a | grep nimhttp
     nimhttp                           11665728     active
    

    The nimhttp service listens on port 4901; this port is defined in the /etc/services file:

    # grep nimhttp /etc/services
    nimhttp         4901/tcp
    nimhttp         4901/udp
    # netstat -Aan | grep 4901
    f1000e0004a483b8 tcp4       0      0  *.4901                 *.*                    LISTEN
    # rmsock f1000e0004a483b8 tcpcb
    The socket 0xf1000e0004a48008 is being held by proccess 14811568 (nimhttpd).
    # ps -ef | grep 14811568
        root 14811568  4456760   0 04:03:22      -  0:02 /usr/sbin/nimhttpd -v
    

    If you want to enable crypto/ssl to encrypt the http authentication, just add -a "-c" to your command line. This "-c" argument tells nimhttp to start in secure mode and encrypt the authentication:

    # startsrc -s nimhttp -a "-c"
    0513-059 The nimhttp Subsystem has been started. Subsystem PID is 14811570.
    # ps -ef | grep nimhttp
        root 14811570  4456760   0 22:57:51      -  0:00 /usr/sbin/nimhttpd -v -c
    

    Starting the service for the first time will create an httpd.conf file in the root home directory:

    # grep ^document_root ~/httpd.conf
    document_root=/export/nim/
    # grep ^service.log ~/httpd.conf
    service.log=/var/adm/ras/nimhttp.log
    

    If you choose to enable the secure authentication nimhttp will use the pem certificate files used by nim. If you are already using secure nimsh you don't have to run the "nimconfig -c" command. If it is the first time, this command will create the two pem files (root and server in /ssl_nimsh/certs) (check my blog post about secure nimsh for more information about that):

    # nimconfig -c
    # grep ^ssl. ~/httpd.conf
    ssl.cert_authority=/ssl_nimsh/certs/root.pem
    ssl.pemfile=/ssl_nimsh/certs/server.pem
    

    The document_root of the http server defines the resources nimhttp will "serve". The default one is /export/nim (the default nim place for all nim resources (spot, mksysb, lpp_source)) and cannot be changed today (I think it is now ok on SP2, I'll update the blog post as soon as the test is done). Unfortunately for me one of my production nim servers was created by someone not very aware of AIX and ... resources are not in /export/nim (I had to recreate my own nim because of that :-( )

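    If a later service pack really allows changing the document_root (untested on my side, as said above), the change would just be an edit of ~/httpd.conf followed by a restart of the service; a sketch, with a hypothetical /export/mynim location:

    # stopsrc -s nimhttp
    # vi ~/httpd.conf
    # grep ^document_root ~/httpd.conf
    document_root=/export/mynim/
    # startsrc -s nimhttp
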
    On the client side ?

    On the client side you have nothing to do. If you're using AIX 7.2 and nimhttp is enabled, the client will automatically use http for communication (if it is enabled on the nim server). Just note that if you're using nimhttp in secure mode, you must enable your nimclient in secure mode too:

    # nimclient -c
    Received 2788 Bytes in 0.0 Seconds
    0513-044 The nimsh Subsystem was requested to stop.
    0513-077 Subsystem has been changed.
    0513-059 The nimsh Subsystem has been started. Subsystem PID is 13500758.
    # stopsrc -s nimsh
    # startsrc -s nimsh
    

    Changing nimhttp port

    You can easily change the port on which nimhttp is listening by modifying the /etc/services file. Here is an example with port 443 (I know it is not a good idea to use this one, but it's just for the example):

    #nimhttp                4901/tcp
    #nimhttp                4901/udp
    nimhttp         443/tcp
    nimhttp         443/udp
    # stopsrc -s nimhttp
    # startsrc -s nimhttp -a "-c"
    # netstat -Aan | grep 443
    f1000e00047fb3b8 tcp4       0      0  *.443                 *.*                   LISTEN
    # rmsock f1000e00047fb3b8 tcpcb
    The socket 0xf1000e00047fb008 is being held by proccess 14811574 (nimhttpd).
    

    Same on the client side: just change the /etc/services file and use your nimclient as usual:

    # grep nimhttp /etc/services
    #nimhttp                4901/tcp
    #nimhttp                4901/udp
    nimhttp         443/tcp
    nimhttp         443/udp
    # nimclient -l
    

    To be sure I'm not using nfs anymore I'm removing all entries from my /etc/exports file. I know it will only work in some cases (for some types of resources) as nimesis refills the file even if this one is empty:

    # > /etc/exports
    # exportfs -uav
    exportfs: 1831-184 unexported /export/nim/bosinst_data/golden-vios-2233-08192014-bosinst_data
    exportfs: 1831-184 unexported /export/nim/spot/golden-vios-22422-05072016-spot/usr
    exportfs: 1831-184 unexported /export/nim/spot/golden-vios-22410-22012015-spot/usr
    exportfs: 1831-184 unexported /export/nim/mksysb
    exportfs: 1831-184 unexported /export/nim/hmc
    exportfs: 1831-184 unexported /export/nim/lpp_source
    [..]
    

    Let’s do this

    Let's now try this with a simple example. I'm here installing powervp on a machine using a cust operation from the nimclient; on the client I'm doing it like I always have, running the exact same command as before. Super simple:

    # nimclient -o cust -a lpp_source=powervp1100-lpp_source -a filesets=powervp.rte
    
    +-----------------------------------------------------------------------------+
                        Pre-installation Verification...
    +-----------------------------------------------------------------------------+
    Verifying selections...done
    Verifying requisites...done
    Results...
    
    SUCCESSES
    ---------
      Filesets listed in this section passed pre-installation verification
      and will be installed.
    
      Selected Filesets
      -----------------
      powervp.rte 1.1.0.0                         # PowerVP for AIX
    
      << End of Success Section >>
    
    +-----------------------------------------------------------------------------+
                       BUILDDATE Verification ...
    +-----------------------------------------------------------------------------+
    Verifying build dates...done
    FILESET STATISTICS
    ------------------
        1  Selected to be installed, of which:
            1  Passed pre-installation verification
      ----
        1  Total to be installed
    
    +-----------------------------------------------------------------------------+
                             Installing Software...
    +-----------------------------------------------------------------------------+
    
    installp: APPLYING software for:
            powervp.rte 1.1.0.0
    
    0513-071 The syslet Subsystem has been added.
    Finished processing all filesets.  (Total time:  4 secs).
    
    +-----------------------------------------------------------------------------+
                                    Summaries:
    +-----------------------------------------------------------------------------+
    
    Installation Summary
    --------------------
    Name                        Level           Part        Event       Result
    -------------------------------------------------------------------------------
    powervp.rte                 1.1.0.0         USR         APPLY       SUCCESS
    powervp.rte                 1.1.0.0         ROOT        APPLY       SUCCESS
    
    

    On the server side I'm checking /var/adm/ras/nimhttp.log (the log file for nimhttp) and I can see that files are transferred from the server to the client using the http protocol. So it works great.

    # Thu Jul 21 23:44:19 2016        Request Type is GET
    Thu Jul 21 23:44:19 2016        Mime not supported
    Thu Jul 21 23:44:19 2016        Sending Response Header "200 OK"
    Thu Jul 21 23:44:19 2016        Sending file over socket 6. Expected length is 600
    Thu Jul 21 23:44:19 2016        Total length sent is 600
    Thu Jul 21 23:44:19 2016        handle_httpGET: Entering cleanup statement
    Thu Jul 21 23:44:20 2016        nim_http: queue socket create product (memory *)200739e8
    Thu Jul 21 23:44:20 2016        nim_http: 200739e8 6 200947e8 20098138
    Thu Jul 21 23:44:20 2016        nim_http: file descriptor is 6
    Thu Jul 21 23:44:20 2016        nim_buffer: (resize) buffer size is 0
    Thu Jul 21 23:44:20 2016        file descriptor is : 6
    Thu Jul 21 23:44:20 2016        family is : 2 (AF_INET)
    Thu Jul 21 23:44:20 2016        source address is : 10.14.33.253
    Thu Jul 21 23:44:20 2016        socks: Removing socksObject 2ff1ec80
    Thu Jul 21 23:44:20 2016        socks: 200739e8 132 <- 87 bytes (SSL)
    Thu Jul 21 23:44:20 2016        nim_buffer: (append) len is 87, buffer length is 87
    Thu Jul 21 23:44:20 2016        nim_http: data string passed to get_http_request: "GET /export/nim/lpp_source/powervp/powervp.1.1.0.0.bff HTTP/1.1
    

    Let's do the same thing with a fileset coming from a bigger lpp_source (in fact a simages one for the latest release of AIX 7.2):

    # nimclient -o cust -a lpp_source=7200-00-02-1614-lpp_source -a filesets=bos.loc.utf.en_KE
    [..]
    

    Looking on the nim server I notice that files are transferred from the server to the client, but NOT just my fileset and its dependencies .... the whole lpp_source (seriously? uh? why?)

    # tail -f /var/adm/ras/nimhttp.log
    Thu Jul 21 23:28:39 2016        Request Type is GET
    Thu Jul 21 23:28:39 2016        Mime not supported
    Thu Jul 21 23:28:39 2016        Sending Response Header "200 OK"
    Thu Jul 21 23:28:39 2016        Sending file over socket 6. Expected length is 4482048
    Thu Jul 21 23:28:39 2016        Total length sent is 4482048
    Thu Jul 21 23:28:39 2016        handle_httpGET: Entering cleanup statement
    Thu Jul 21 23:28:39 2016        nim_http: queue socket create product (memory *)200739e8
    Thu Jul 21 23:28:39 2016        nim_http: 200739e8 6 200947e8 20098138
    Thu Jul 21 23:28:39 2016        nim_http: file descriptor is 6
    Thu Jul 21 23:28:39 2016        nim_buffer: (resize) buffer size is 0
    Thu Jul 21 23:28:39 2016        file descriptor is : 6
    Thu Jul 21 23:28:39 2016        family is : 2 (AF_INET)
    Thu Jul 21 23:28:39 2016        source address is : 10.14.33.253
    Thu Jul 21 23:28:39 2016        socks: Removing socksObject 2ff1ec80
    Thu Jul 21 23:28:39 2016        socks: 200739e8 132 <- 106 bytes (SSL)
    Thu Jul 21 23:28:39 2016        nim_buffer: (append) len is 106, buffer length is 106
    Thu Jul 21 23:28:39 2016        nim_http: data string passed to get_http_request: "GET /export/nim/lpp_source/7200-00-02-1614/installp/ppc/X11.fnt.7.2.0.0.I HTTP/1.1
    

    If you have a deeper look at what the nimclient is doing when using nimhttp .... it is just transferring the whole lpp_source from the server to the client and then installing the needed fileset from a local filesystem. Filesets are stored in /tmp, so be sure your /tmp is big enough to store your biggest lpp_source (a quick check is sketched after the listing below). Maybe this will be changed in the future but it is like it is for the moment :-) . The nimclient creates a temporary directory prefixed "_nim_dir_" to store the lpp_source:

    root@nim_server:/export/nim/lpp_source/7200-00-02-1614/installp/ppc# du -sm .
    7179.57 .
    root@nim_client:/tmp/_nim_dir_5964094/export/nim/lpp_source/7200-00-02-1614/installp/ppc# du -sm .
    7179.74 .
    
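    As announced above, before launching a cust operation on a big lpp_source it is worth checking the free space in /tmp and growing the filesystem if needed; a quick sketch (the size to add is yours to choose):

    # df -g /tmp
    # chfs -a size=+8G /tmp
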

    More details ?

    You can notice while running a cust operation from the nim client that nimhttp is also running in the background (on the client itself). The truth is that the nimhttp binary running on the client acts as an http client. In the output below the http client is getting the file Java8_64.samples.jnlp.8.0.0.120.U:

    # ps -ef |grep nim
        root  3342790 16253432   6 23:29:10  pts/0  0:00 /bin/ksh /usr/lpp/bos.sysmgt/nim/methods/c_installp -afilesets=bos.loc.utf.en_KE -alpp_source=s00va9932137:/export/nim/lpp_source/7200-00-02-1614
        root  6291880 13893926   0 23:29:10  pts/0  0:00 /bin/ksh /usr/lpp/bos.sysmgt/nim/methods/c_script -alocation=s00va9932137:/export/nim/scripts/s00va9954403.script
        root 12190194  3342790  11 23:30:06  pts/0  0:00 /usr/sbin/nimhttp -f /export/nim/lpp_source/7200-00-02-1614/installp/ppc/Java8_64.samples.jnlp.8.0.0.120.U -odest -s
        root 13500758  4325730   0 23:23:29      -  0:00 /usr/sbin/nimsh -s -c
        root 13893926 15991202   0 23:29:10  pts/0  0:00 /bin/ksh -c /var/adm/nim/15991202/nc.1469222947
        root 15991202 16974092   0 23:29:07  pts/0  0:00 nimclient -o cust -a lpp_source=7200-00-02-1614-lpp_source -a filesets=bos.loc.utf.en_KE
        root 16253432  6291880   0 23:29:10  pts/0  0:00 /bin/ksh /tmp/_nim_dir_6291880/script
    

    You can also use nimhttp as a client to download files directly from the nim server. Here I'm just listing the content of /export/nim/lpp_source from the client:

    # nimhttp -f /export/nim/lpp_source -o dest=/tmp -v
    nimhttp: (source)       /export/nim/lpp_source
    nimhttp: (dest_dir)     /tmp
    nimhttp: (verbose)      debug
    nimhttp: (master_ip)    nimserver
    nimhttp: (master_port)  4901
    
    sending to master...
    size= 59
    pull_request= "GET /export/nim/lpp_source HTTP/1.1
    Connection: close
    
    "
    Writing 1697 bytes of data to /tmp/export/nim/lpp_source/.content
    Total size of datalen is 1697. Content_length size is 1697.
    # cat /tmp/export/nim/lpp_source/.content
    DIR: 71-04-02-1614 0:0 00240755 256
    DIR: 7100-03-00-0000 0:0 00240755 256
    DIR: 7100-03-01-1341 0:0 00240755 256
    DIR: 7100-03-02-1412 0:0 00240755 256
    DIR: 7100-03-03-1415 0:0 00240755 256
    DIR: 7100-03-04-1441 0:0 00240755 256
    DIR: 7100-03-05-1524 0:0 00240755 256
    DIR: 7100-04-00-1543 0:0 00240755 256
    DIR: 7100-04-01-1543 0:0 00240755 256
    DIR: 7200-00-00-0000 0:0 00240755 256
    DIR: 7200-00-01-1543 0:0 00240755 256
    DIR: 7200-00-02-1614 0:0 00240755 256
    FILE: MH01609.iso 0:0 00100644 1520027648
    FILE: aixtools.python.2.7.11.4.I 0:0 00100644 50140160
    

    Here I'm just downloading a python fileset !

    # nimhttp -f /export/nim/lpp_source/aixtools.python.2.7.11.4.I -o dest=/tmp -v
    [..]
    Writing 65536 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    Writing 69344 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    Writing 7776 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    Total size of datalen is 50140160. Content_length size is 50140160.
    # ls -l /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    -rw-r--r--    1 root     system     50140160 Jul 23 01:21 /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    
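    Nothing prevents you from installing a fileset downloaded this way by hand; a quick sketch reusing the python fileset above (inutoc builds the .toc installp needs):

    # cd /tmp/export/nim/lpp_source
    # inutoc .
    # installp -aXYgd . all
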

    Allowed operations

    All cust operations on nim object types lpp_source, installp_bundle, fix_bundle, script, and file_res, in push or pull mode, are working great with nimhttp. Here are a few examples (from the official doc, thanks to Paul F for that ;-) ):

    • Push:
    • # nim -o cust -a file_res=obj_name client_obj_name
      # nim -o cust -a script=obj_name client_obj_name
      # nim -o cust -a lpp_source=obj_name -a filesets="fileset names to install" client_obj_name
      # nim -o cust -a lpp_source=obj_name -a installp_bundle=obj_name client_obj_name
      # nim -o cust -a lpp_source=obj_name -a fixes=update_all client_obj_name
      
    • Pull:
    • # nimclient -o cust -a lpp_source=obj_name -a filesets="fileset names to install"
      # nimclient -o cust -a file_res=obj_name
      # nimclient -o cust -a script=obj_name
      # nimclient -o cust -a lpp_source=obj_name -a installp_bundle=obj_name
      # nimclient -o cust -a lpp_source=obj_name -a fixes=update_all
      

    Proxying: use your own http server

    You can use your own webserver to host the content; the nimhttp binary will then just act as a proxy between your client and your http server. I have tried to do it but didn't succeed with that; I'll let you know if I find the solution:

    # grep proxy ~/httpd.conf
    service.proxy_port=80
    enable_proxy=yes
    

    Conclusion: "about administration and post-installation"

    Just a few words about best practices of post-installation and administration on AIX. One of the major purposes of this blog post is to prove to you that you need to get rid of an old way of working. The first thing to do is to always try using http or https instead of NFS. To give you an example of that, I'm always using http to transfer my files whatever they are (configuration, product installation and so on ...). With an automation tool such as Chef it is so simple to integrate the download of a file from an http server that you must now avoid using NFS ;-) . The second good practice is to never install things "by hand"; using yum is one of the reflexes you need to have instead of using the rpm command (Linux users will laugh reading that ... I'm laughing writing that, using yum is something I have been doing for more than 10 years ... but for AIX admins it's still not the case and not so simple to understand :-) ). As always I hope it helps.

    About blogging

    I just wanted to say one word about blogging because I got a lot of questions about this (from friends, readers, managers, haters, lovers). I'm doing this for two reasons. The first one is that writing and explaining things forces me to better understand what I'm doing and forces me to always discover new features, new bugs, new everything. Second, I'm doing this for you, my readers, because I remember how useful blogs were to me when I began with AIX (Chris and Nigel are the best examples of that). I don't care about being the best or the worst. I'm just me. I'm doing this because I love it, that's all. Even if managers, recruiters or anybody else don't care about it I'll continue to do this whatever happens. I agree with them, "it does not prove anything at all". I'm just like you, a standard admin trying to do his job at his best. Sorry for the two-month "break" from blogging but it was really crazy at work and in my life. Take care all. Haters gonna hate.