For a year now the new trend at work has been to save money. For one of his new projects, a client bought two P720 (E4C) servers with PowerVM express edition. This client needs two AIX 6.1 partitions on each server. The first idea was not to use PowerVM at all, but these two servers were shipped with two SAS controllers (three 146GB disks on each one) and two 4-port Ethernet adapters. Without virtualization, none of the I/O can be redundant: there is no way to mirror disks between the two SAS controllers, and no way to give each partition two ports from different Ethernet adapters (no LHEA). After a lot of discussion I convinced the client to use virtualization; these two P720 were shipped with PowerVM and it would be a shame not to use it. But with this fabulous idea comes one problem: I work in a large bank environment, I had never used PowerVM express edition, and I work with an HMC, not with IVM. This post describes all the mistakes I made during the deployment of these two P720:
Do not plug a PowerSystem with PowerVM express edition on an HMC
PowerVM express edition can only be managed by an Integrated Virtualization Manager; you can't attach your system to an HMC and run a single Virtual I/O Server on it. If by mistake (like me) the system was attached to an HMC, the only thing you can do is create partitions, but you can't create a Virtual I/O Server. Before trying to install the Virtual I/O Server (using the system CD-ROM; you're not dreaming, in 2013 you still need to burn a Virtual I/O Server CD-ROM ...) you have to reset the service processor through the ASMI.
Install the Virtual I/O Server with a CDrom
If you want to run a system with PowerVM express and an Integrated Virtualization Manager, the first Virtual I/O Server installation is performed from the CD-ROM. The Virtual I/O Server CD-ROM is shipped with the Integrated Virtualization Manager; when you install it, the system automatically detects that it is not managed by an HMC, and the IVM http process is automatically started at Virtual I/O Server boot time. Believe it or not, if this is your first time and you are coming from an HMC environment, this is not so trivial to understand, true story ...
On a system running the Integrated Virtualization Manager the Virtual I/O Server itself is a single point of failure. All the operations can be performed from the Virtual I/O Server itself (starting and stopping LPARs, defining LPARs, and so on), so my advice is to secure this access. Before putting an IP address on this Virtual I/O Server, always create an Etherchannel (on two different Ethernet adapters) to secure the access to your Integrated Virtualization Manager:
$ lsdev -type adapter -field name physloc | grep -E "ent0|ent5"
$ mkvdev -lnagg ent0,ent5 -attr mode=8023ad hash_mode=src_dst_port
$ lsdev -dev ent8 -attr adapter_names,mode,hash_mode
If and only if the Etherchannel adapter is correctly configured and working, use the mktcpip command to set an address on this new Etherchannel adapter:
$ mktcpip -hostname ivm1 -inetaddr 10.10.20.198 -interface en8 -netmask 255.255.255.0 -gateway 10.10.20.254 -nsrvaddr 10.10.20.20 -nsrvdomain domain.test -start
Networking: Virtual Ethernet Bridge
By default the Virtual I/O Server is configured with four Virtual Ethernet Adapters, which you can't delete. My advice is to use one of these adapters to create a Shared Ethernet Adapter:
# lsdev -type virtual
[..]
ent9        Available   Virtual I/O Ethernet Adapter (l-lan)
ent10       Available   Virtual I/O Ethernet Adapter (l-lan)
ent11       Available   Virtual I/O Ethernet Adapter (l-lan)
ent12       Available   Virtual I/O Ethernet Adapter (l-lan)
[..]
# lsdev -type adapter -field name physloc
[..]
ent9        U8202.E4C.063D89T-V1-C3-T1
ent10       U8202.E4C.063D89T-V1-C4-T1
ent11       U8202.E4C.063D89T-V1-C5-T1
ent12       U8202.E4C.063D89T-V1-C6-T1
[..]
Once again, with PowerVM express edition the Virtual I/O Server is a single point of failure, so secure the Shared Ethernet Adapter by creating an Etherchannel adapter before the Shared Ethernet Adapter creation:
$ lsdev -dev ent13 -attr adapter_names,mode,hash_mode
value
ent3,ent1,ent7,ent6
8023ad
src_dst_port
$ lsdev -type adapter -field name physloc | grep -Ew "ent3|ent1|ent7|ent6"
ent1        U78AA.001.WZSJAUX-P1-C7-T2
ent3        U78AA.001.WZSJAUX-P1-C7-T4
ent6        U78AA.001.WZSJAUX-P1-C2-T3
ent7        U78AA.001.WZSJAUX-P1-C2-T4
The Shared Ethernet Adapter can be created through the Integrated Virtualization Manager, and my advice is to do it this way to prevent command line errors. In the Integrated Virtualization Manager the Shared Ethernet Adapter is called a Virtual Ethernet Bridge. In the I/O Management tab go to Virtual Ethernet Bridging, select the Etherchannel adapter, and click apply. This will create a Shared Ethernet Adapter.
On the Virtual I/O Server you can check that the Shared Ethernet Adapter has been created:
$ lsdev -type sea
name        status      description
ent14       Available   Shared Ethernet Adapter
$ lsdev -dev ent14 -attr virt_adapters,real_adapter,pvid
value
ent9
ent13
1
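For reference, the same Shared Ethernet Adapter could also have been created from the command line with mkvdev; a sketch, with adapter names taken from the listings above and a default PVID of 1 matching the pvid shown by lsdev:

```shell
# bridge the virtual adapter ent9 over the Etherchannel ent13
# (adapter names from the listings above; PVID 1 is the default VLAN)
$ mkvdev -sea ent13 -vadapter ent9 -default ent9 -defaultid 1
```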
Client LPARs can use this Shared Ethernet Adapter to access the network; just choose this adapter when creating the LPAR.
Storage: always mirror backing devices
If you are using internal disks for your client LPARs, always remember to mirror backing devices by assigning two different backing devices to the client LPAR (mirrorvg or mklvcopy is then used on the client itself). To avoid mistakes, create two different storage pools and create the backing devices two by two (one on each storage pool):
$ lssp
Pool        Size(mb)   Free(mb)   Alloc Size(mb)   BDs   Type
rootvg        279552     209920              256     0   LVPOOL
ctl_sas0      279808      15616              128     6   LVPOOL
ctl_sas1      279808      15616              128     6   LVPOOL
Map backing devices from each storage pool to the clients, and be sure to map backing devices two by two:
$ lssp -bd -sp ctl_sas0
Name           Size(mb)   VTD       SVSA
lpar1_sas0_r      32768   vtscsi0   vhost0
lpar2_sas0_r      32768   vtscsi1   vhost1
$ lssp -bd -sp ctl_sas1
Name           Size(mb)   VTD       SVSA
lpar1_sas1_r      32768   vtscsi2   vhost0
lpar2_sas1_r      32768   vtscsi3   vhost1
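Storage pools and mappings like the ones above can be built with mksp and mkbdsp; a minimal sketch, assuming hypothetical hdisk names for the SAS disks (the pool names and vhost adapters are those shown above):

```shell
# create one storage pool per SAS controller (hdisk names are assumptions)
$ mksp ctl_sas0 hdisk1
$ mksp ctl_sas1 hdisk4
# create and map one backing device from each pool to lpar1 (vhost0)
$ mkbdsp -sp ctl_sas0 32G -bd lpar1_sas0_r -vadapter vhost0
$ mkbdsp -sp ctl_sas1 32G -bd lpar1_sas1_r -vadapter vhost0
```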
On the client LPAR, be sure to mirror the volume group using the mirrorvg command:
# extendvg rootvg hdisk1
# mirrorvg rootvg hdisk1
# lsvg -p rootvg
rootvg:
PV_NAME     PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
hdisk0      active     511         306        102..00..00..102..102
hdisk1      active     511         322        102..16..00..102..102
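After mirroring rootvg, don't forget the standard AIX follow-up: rebuild the boot image on the new mirror copy and update the boot list so the LPAR can boot from either disk (disk names as in the listing above):

```shell
# recreate the boot image on the newly added mirror disk
# bosboot -ad /dev/hdisk1
# allow the LPAR to boot from either disk
# bootlist -m normal hdisk0 hdisk1
```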
Use the mkvt command to open a client's console from the Virtual I/O Server
# mkvt -id 2
AIX Version 6
Copyright IBM Corporation, 1982, 2013.
Console login:
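To leave the console, type ~. at the prompt; if a console session is left open or hung, it can be force-closed from another Virtual I/O Server session with rmvt:

```shell
# force-close the virtual terminal of partition 2
$ rmvt -id 2
```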
You can also use classic HMC commands from the Virtual I/O Server itself; here is an example with lssyscfg:
$ lssyscfg -r lpar -F name,lpar_env
ivm1,vioserver
lpar1,aixlinux
lpar2,aixlinux
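Other HMC-style commands work the same way; for example, partitions can be activated and stopped directly from the Virtual I/O Server with chsysstate (a sketch, using the partition names listed above):

```shell
# activate lpar1
$ chsysstate -r lpar -o on -n lpar1
# immediate shutdown of lpar2
$ chsysstate -r lpar -o shutdown --immed -n lpar2
```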
Hope this helps.