IBM Oct 8 2013 Announcement: A technical point of view after Enterprise 2013

PowerVM and Power Systems lovers like me have probably heard a lot of things since Oct 8 2013. Unfortunately, with the big mass of information, some of you, like me, may be lost among all these new things coming. I personally feel the need to clarify things from a technical point of view and to take a deeper look at these announcements. Finding technical material in the announcement itself was not easy, but Enterprise 2013 gave us a lot of information about the shape of things to come. Is this a new era for Power Systems? Are our jobs going to change with the rise of the cloud? Will PowerVM be replaced by KVM in a few years? Will AIX be replaced by Linux on Power? I can easily give you an answer: NO! Have a look below and you will be as excited as I am. Power Systems are awesome, and they are going to be even better with all these new products :-).

PowerVC

PowerVC stands for Power Virtualization Center. PowerVC lets you manage your virtualized infrastructure by capturing, deploying, creating and moving virtual machines. It is based on OpenStack and has to be installed on a Linux machine (Red Hat Enterprise Linux 6, running on x86 or Power). PowerVC runs on top of a Power Systems infrastructure (Power6 or Power7 hardware, Hardware Management Console or IVM). It comes in two editions: standard (allowing Power6 and Power7 management with an HMC) and express (allowing only Power7 management with an IVM). At the time of writing this post, PowerVC manages storage only for the IBM V series and IBM SVC. Brocade SAN switches are the only ones supported. In a few words, here is what you have to remember:

  • PowerVC runs on top of Hardware and HMC/IVM, like VMcontrol.
  • PowerVC allows you to manage and create (capture, deploy, …) virtual machines.
  • PowerVC allows you to move and automatically place virtual machines (resource pooling, dynamic virtual machines placement).
  • PowerVC is based on the OpenStack APIs. IBM modifications and enhancements to OpenStack are committed back to the community.
  • PowerVC only runs on RHEL 6.4 (x86 or Power); RHEL support is not included.
  • PowerVC express manages Power7, Power7+ hardware.
  • PowerVC standard manages Power6, Power7 and Power7+ hardware.
  • An SVC-family storage system is mandatory (SVC/V7000/V3700/V3500).
  • PowerVC installs neither the Virtual I/O Server nor the IVM. The virtual infrastructure has to be in place beforehand; it is a mandatory prerequisite.
  • PowerVC express edition needs at least Virtual I/O Server 2.2.1.5; storage can be pre-zoned (I don’t know if it can be pre-masked). It is limited to five managed hosts with a maximum of 100 managed LPARs.
  • PowerVC standard edition needs HMC 7.7.8 and Virtual I/O Server 2.2.3.0; storage can’t be pre-zoned, and the only supported SAN switches are Brocade. It is limited to ten managed hosts with a total of 400 LPARs.
    [Screenshot: PowerVC1]
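
Since PowerVC is built on OpenStack, the standard OpenStack REST APIs should be reachable on the management host. As a purely illustrative sketch (the hostname, port, tenant and credentials below are my assumptions, not documented PowerVC values), requesting a Keystone token, which could then be used against the Nova compute API to list or deploy virtual machines, would look like this:

# curl -s -X POST http://powervc-host:5000/v2.0/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"tenantName": "ibm-default", "passwordCredentials": {"username": "admin", "password": "secret"}}}'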

    In my opinion PowerVC will replace VMcontrol in the near future. It’s a shame for me because I spent so much time on VMcontrol, but I think it’s a good thing for Power Systems. PowerVC seems to be easy to deploy, easy to manage, and seems to have a nice look and feel. My only regret: it only runs on Linux :-(. So keep an eye on this product because it’s the future of Power Systems management. You can see it as a VMware vCenter for Power Systems ;-)

    [Screenshot: PowerVC2]

PowerVP

PowerVP was first known as Sleuth and was initially an internal IBM tool. It’s a performance analysis tool looking from the whole machine down to the LPAR. No joke, you can compare it to the lssrad command: it seems to be a graphical version of it. Nice visuals tell you how the hardware resources are assigned and consumed by the LPARs. Three views are accessible:

  • The System Topology view shows the hardware topology, how busy the chips are, and how busy the traffic between chips is:
    [Screenshots: PowerVP2, PowerVP1]

  • The Node view shows you how each core of a chip is consumed and gives you information on memory controllers, buses, and traffic from/to remote I/O:
    [Screenshot: PowerVP3]

  • The Partition view seems to be more classical and gives you information on CPU, memory, disk, Ethernet, cache and memory affinity. You can drill down on each of these statistics :-) :
    [Screenshot: PowerVP4]
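
For reference, since PowerVP is compared to lssrad above, here is the command in question. On a running AIX partition it prints the placement of CPUs and memory per affinity domain (REF1/SRAD), which is roughly the information the System Topology view turns into graphics:

# lssrad -av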

PowerVP needs some agents. For partition drill-down an agent seems to be mandatory (oh no, not again); for whole-system monitoring a kind of “super-agent” is also needed (to be installed on one partition?).

[Screenshot: PowerVPcollectors]

I don’t know why, but the PowerVP graphical interface is a Java/Swing application. In 2013 everybody wants a web interface; I really do not understand this choice… All data can be monitored on the fly and recorded (for later analysis and comparison). The product is included in PowerVM Enterprise Edition and needs at least firmware 770, or 780 for high-end machines.

PowerVM Virtual I/O Server 2.2.3.0

Shared Storage Pool 4 (SSP4)

As a big fan of Shared Storage Pools, I had long awaited SSPv4. It enables a few cool things; the one I was waiting for the most is SSP mirroring. You can now mask LUNs from two different SAN arrays, and the mirroring is performed by the Virtual I/O Server. Nothing to do on the Virtual I/O client :-). This new feature comes with a new command: failgrp. By default, when the SSP is created, the failover group is named Default. You can also check which pv belongs to which failover group with the new pv command; this command lets you check the SAN array id by looking at the pv’s udid. It also lets you check whether pvs are capable (for example, pvs coming from iSCSI are not SSP capable). Here are a few cool command examples (sorry, without the output; these are deduced from recent tests by Nigel G. :-)).

# failgrp -list
# pv -list -capable
# failgrp -create -fg SANARRAY2: hdisk12 hdisk13 hdisk14
# failgrp -modify -fg Default -attr fg_name=SANARRAY1
# pv -add -fg SANARRAY1: hdisk15 hdisk16 SANARRAY2: hdisk17 hdisk18
# pv -list

SSP4 also simplifies SSP management with new commands. The lu command allows you to create a “lu” in the SSP and allocate it to a vhost in one command.

  • lu creation and mapping in one command :
# lu -create -lu lpar1rootvg -size 32G -vadapter vhost3
  • lu creation and mapping in two commands :
# lu -create -lu lpar1rootvg
# lu -map -lu lpar1rootvg -vadapter vhost3

The last thing to say about SSPv4: you can now remove a LUN from the storage pool! With all these new features, SSP can be used on production servers. Finally.
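
Based on the new pv command family, removing a disk from the pool should look something like the line below. This is a hedged guess at the exact syntax, deduced like the examples above, so check the 2.2.3.0 documentation before relying on it:

# pv -remove -pv hdisk12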

Throwing away Control Channel from Shared Ethernet Adapter

To simplify Virtual I/O Server management and SEA creation, it is now possible to create an SEA failover configuration without creating and specifying any control channel adapter. Both Virtual I/O Servers can now detect that an SEA failover pair has been created and use their default virtual adapter for the control channel. You can find more details on this subject on Scott Vetter‘s blog here. A funny thing to notice is that an APAR was created this year on this subject. This APAR, IV37193, leaks the name of Project K2 (the IBM internal name for PowerVM simplification). OK, to sum up, here are the technical things to remember about this Shared Ethernet Adapter simplification:

  • The creation of such an SEA requires HMC V7.7.8, Virtual I/O Server 2.2.3.0, and firmware 780 (it seems to be supported only on Power7 and Power7+).
  • The end user can’t see it but the control channel function is using the SEA‘s default adapter.
  • The end user can’t see it but the control channel function is using a special VLAN id : 4095.
  • Standard SEA with classic control channel adapter can still be created.
  • My understanding of this feature is that it has a limitation: you can create only one Shared Ethernet Adapter per virtual switch. It seems that the HMC checks that there is only one adapter with priority 1 and one adapter with priority 2 per virtual switch.
  • The command to create the SEA is still the same; if you omit the ctl_chan attribute, the SEA will automatically use the management VLAN 4095 for the control channel (see the example after this list).
  • A special discovery protocol is implemented on the Virtual I/O Server to automatically enable the “ghost” control channel.
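
To illustrate the point about the unchanged command, here is what a simplified SEA creation should look like, with no ctl_chan attribute given. The device names are examples of mine (assume ent0 is the physical adapter and ent4 the trunked virtual adapter):

# mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto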

… a few other things

The VIOS advisor is modified: it can now monitor Shared Storage Pools and NPIV adapters, and the part command is updated to do so. There are also some additional statistics and listings for the SEA.
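
As a reminder, the advisor is driven by the part command on the Virtual I/O Server; the syntax itself does not change, only the collected metrics are extended. A 30-minute collection looks like this:

# part -i 30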

Virtual network adapters are updated with a ping feature to check whether the adapter is up or down. A virtual adapter can now be considered down and can sense a physical link loss/failure. Is this feature the end of the netmon.cf file in PowerHA?
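
If this feature surfaces as a link-polling attribute on the client’s virtual Ethernet adapter, enabling it should look something like the command below. The poll_uplink attribute name is my assumption based on what ships around this level, so check the adapter attributes on your system; the -P flag defers the change to the next reboot since the adapter is in use:

# chdev -l ent0 -a poll_uplink=yes -P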

Conclusion, OpenPower, Power8, KVM

I can’t finish this post without writing a few lines about the huge global things to come. A new consortium called OpenPower was created to work on Power8. It means that some third-party manufacturers will be able to build their own implementations of Power8 processors. Talking about Power8, it is on the way and seems to be a real performance monster; the first entry-class systems will probably be available in Q3/Q4 2014. But Power8 is not going to be released alone: KVM is going to be ported to Power8 hardware. Keep in mind that this will not provide nested virtualization (the purpose is not to run KVM on AIX, or to run AIX on KVM). A Power8 machine will be able to run a special firmware letting you run a Linux KVM host and build and run Linux PPC virtual machines. At the time of writing this post, users will be able to switch between this firmware and the classic one running PowerVM. It is pretty exciting!

I hope this post clarifies the current situation regarding all these awesome new announcements.

10 thoughts on “IBM Oct 8 2013 Announcement: A technical point of view after Enterprise 2013”

  1. Thank you for having clarified these announcements!

    It seems that Big Blue wants to make the Power platform an alternative choice to the carnivorous VMware/x86 couple.

    To achieve this goal, sales of Power systems must increase; that is the reason why the OpenPower consortium was created.

    The more Power survives, the more AIX survives.

    In case you don’t know it already, I recommend you read this useful website in order to stay tuned to the Power business: http://itjungle.com

    • Hi Laurent,

      Thanks for your feedback once again. It’s always cool to see that people are following the blog :-).

      Thank you for the itjungle link too, I’m one of their readers.

      I agree with you, IBM probably wants to replace some VMware environments with a PowerVM/Linux on Power/PowerVC solution. But everybody is worried about the future of AIX… There are still some questions I’m asking myself about the future:
      1/ Why did they invest $1B in Linux on Power? There is still a lot of work to do on AIX (and I’m talking about AIX only: I want a ZFS-like filesystem, an installation method over HTTP, real management of the boot sequence (seriously, AIX is still using the inittab file in 2013???), …)
      2/ Who is going to use Linux on Power? Applications are already running on x86, and it’s cheaper! Nobody wants to invest to adapt x86 code to Power. Major enterprise products are not running on Linux on Power (Oracle, SAP, …)
      3/ Why are they porting KVM to Power if they still believe in the PowerVM/AIX couple? IBM already has the best virtualization solution in the entire world. Why do they want to port KVM to Power?
      4/ They already tried to do this in 2001/2002 and it was an epic fail. Why would they make the same mistake again? Seriously, who is running Linux on the z platform (maybe in the US, but in Europe?)? IBM is saying, “We succeeded in doing this on z, let’s do the same on p”.
      5/ The problem is that Power sales are decreasing over time (the latest China numbers show ~40% fewer Power sales…); in my opinion they have to cut prices for AIX/Power and not try to sell something that is not working (Linux on Power…)

      Do you agree with me on these points?

      Regards,

      Benoit.

      • I agree with you, Linux on Power is an exotic platform and a lot of software doesn’t natively run on it (you can however purchase PowerVM Lx86, but it’s silly if you want to reduce infrastructure costs).

        But the Power8 CPU seems to be a serious monster in terms of performance, and I think that some software vendors would be interested in proposing “big data” software which requires performance. The fact that cheaper Power8-based servers are coming is an incentive factor.

        I think IBM wants to keep the traditional Power business (AIX, IBM i, …) and gain new business (big data; Solaris/HP-UX/z/OS customers).

        I agree, there is a lot of work to do on AIX.

        /etc/inittab has been deprecated for a long time in the Unix world but is still used in AIX (and unfortunately sometimes by the AIX admins themselves…).

        I don’t know if systemd is the best response (in my opinion, no). I think the best way is to provide tools that automatically manage entries in /etc/rc.d/rc* like Linux distros do…

        GPFS would be a response to ZFS, but I don’t know ZFS features well :-)

        About KVM on Power, I think it’s simply going the same way as x86: a free hypervisor (KVM) with a free operating system (Linux).

        I’m not sure that the goal of KVM on Power is to be hosted only on IBM hardware :-)

        Linux is the future.
        I will not be surprised to see the next major VIOS release be Linux-based…

  2. Hi Benoit,

    Thanks for the post.

    No one can resist the avalanche of the open source ecosystem, not even Big Blue.

    Taking the OpenStack example: no one can or wants to produce a similar product in terms of quality and adaptability, not because they cannot, but because no business model can support (from idea, design and conception to marketing) the funding of a product that could compete with OpenStack and follow its enormous growth.

    That’s why IBM, HP, Dell and even VMware joined the OpenStack foundation immediately; the strangest case is Oracle, who put its foot into the OpenStack foundation without making much noise, through the Nimbula company, which is a 100% subsidiary of Larry Ellison’s firm.

    Almost any “innovation” in IT comes from open source (cloud, big data, etc.), and large companies are beginning to open their APIs to the standards. One day we will see the source code of AIX, PowerHA and PowerVM opened under a free license, who knows.

  3. Hi Benoit,
    just to be accurate on “Is this feature the end of the netmon.cf file in PowerHA?”:
    netmon.cf is not primarily used by PowerHA.
    It is first used by the RSCT services, and then PowerHA may use the state information.
    Phil

  4. Hey Benoit,

    Thanks for this wonderful post. I am trying to understand the infrastructure of how PowerVC works. Does it install on the HMC and manage all the connected servers from there, or does it install on a Linux OS running on top of an IBM Power server? Or am I really not getting the concept clearly? I would appreciate your help.
    Thanks,

    • Hi:

      – PowerVC is installed on a separate Linux machine, x86 or PPC.
      – PowerVC talks to the HMC through a REST API to manage servers and machines.
      – You can choose to install PowerVC inside or outside a Power server. Some customers install it in an x86 virtual machine, others install it in a Linux PPC virtual machine.

      Hope it helps.

      Regards,

      B.
