Running Docker on PowerSystems using Ubuntu and Redhat ppc64le (docker, registry, compose and swarm with haproxy)

Every blog post I have read for the past couple of months mentions Docker, that's a fact! I have never been so stressed in years, because our jobs are changing. It is not my choice or my will, but what we were doing a couple of years ago, and what we are doing now, is going to disappear sooner than I thought. The world of infrastructure as we know it is dying, and so are sysadmin jobs. I would never have thought this was something that could happen to me during my career, but here we are. Old Unix systems are slowly dying and Linux virtual machines are becoming less and less popular. One part of my career plan was to be excellent on two different systems, Linux and AIX, but I now have to recognize I probably made a mistake thinking it would save me from unemployment or from any bullshit job. We're all going to die, that's certain, but the reality is that I would rather work on something fun and innovative than be stuck on old stuff forever. We have had Openstack for a while and we now have Docker. As no employer will look at a candidate with no Docker experience I had to learn it (in fact I have been using Docker for more than a year now; my Twitter followers already know this). I don't want to be one of the social rejects of a world that is changing too fast. Computer science is living its own car-industry crisis and we are the blue collars who will be left behind. There is no choice; there won't be a place for everyone and you will not be the only one fighting in the pit trying to be hired. You have to react now or slowly die … like all the sysadmins I see in banks getting worse and worse. Moving them to Openstack was a real challenge (still not completed); I can't imagine trying to make them work on Docker. On the other hand I am also surrounded by excellent people (I have to say I met a true genius a couple of years ago) who are doing crazy things. Unfortunately for me they are not working with me (they are in big companies (ie. RedHat/Oracle/Big Blue) or in other places where people tend to understand that something is changing and going on). I feel like I am bad at everything I do. Unemployable. But I don't want to die. I still have the energy to work on new things and Docker is a part of it.

One of my challenges was/is to migrate all our infrastructure services to Docker, not just for the fun of it but to be able to easily reproduce this infrastructure over and over again. The goal here is to run every infrastructure service in a Docker container and try, at least, to make them highly available. We are going to see how to do that on PowerSystems, using Ubuntu or Redhat ppc64le to run our Docker engine and containers. We will then create our own Docker base images (Ubuntu and Redhat ones) and push them into our custom-made registry. Then we will create containers for our applications (I'll just give some examples here: a web server and grafana/influxdb). Finally we will try Swarm to make these containers highly available by creating "global/replicas" services.

This blog post is also here to prove that Power is an architecture on which you can do the exact same things as on x86. Having Ubuntu 16.04 LTS available on the ppc64le arch is a damn good thing because it provides a lot of Opensource products (graphite, grafana, influxdb, all the web servers, and so on). Let's do everything to become a killer DevOps. I have done this for sysadmin stuff, so why the hell would I not be capable of providing the same effort on DevOps things? I'm not that bad, at least I try.

Image 1

Installing the docker-engine

Red Hat Enterprise Linux ppc64el

Unfortunately for our "little" community the current Red Hat Enterprise repositories for the ppc64le arch do not provide the Docker packages. IBM is providing a repository at this address http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/. On my side I'm mirroring this repository on my local site (with wget) and creating my own repository from it, as my servers have no access to the internet. Keep in mind that this repository is not up to date with the latest version of Docker. At the time I'm writing this blog post Docker 1.13 is available but this repository is still serving Docker 1.12. Not exactly what we want for a technology like Docker (we absolutely want to keep the engine up to date):

# wget --mirror http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/
# wget --mirror http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/misc_ppc64el/
# cat docker.repo
[docker-ppc64le-misc]
name=docker-ppc64le-misc
baseurl=http://nimprod:8080/dockermisc-ppc64el/
enabled=1
gpgcheck=0
[docker-ppc64le]
name=docker-ppc64le
baseurl=http://nimprod:8080/docker-ppc64el/
enabled=1
gpgcheck=0
# yum info docker.ppc64le
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Installed Packages
Name        : docker
Arch        : ppc64le
Version     : 1.12.0
Release     : 0.ael7b
Size        : 77 M
Repo        : installed
From repo   : docker-ppc64le
Summary     : The open-source application container engine
URL         : https://dockerproject.org
License     : ASL 2.0
Description : Docker is an open source project to build, ship and run any application as a
[..]
# yum search swarm
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
============================================================================================================================== N/S matched: swarm ==============================================================================================================================
docker-swarm.ppc64le : Docker Swarm is native clustering for Docker.
[..]
# yum -y install docker
[..]
Downloading packages:
(1/3): docker-selinux-1.12.0-0.ael7b.noarch.rpm                                                                                                                                                                                                          |  27 kB  00:00:00
(2/3): libtool-ltdl-2.4.2-20.el7.ppc64le.rpm                                                                                                                                                                                                             |  50 kB  00:00:00
(3/3): docker-1.12.0-0.ael7b.ppc64le.rpm                                                                                                                                                                                                                 |  16 MB  00:00:00
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                                                            33 MB/s |  16 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libtool-ltdl-2.4.2-20.el7.ppc64le                                                                                                                                                                                                                            1/3
  Installing : docker-selinux-1.12.0-0.ael7b.noarch                                                                                                                                                                                                                         2/3
setsebool:  SELinux is disabled.
  Installing : docker-1.12.0-0.ael7b.ppc64le                                                                                                                                                                                                                                3/3
rhel72/productid                                                                                                                                                                                                                                         | 1.6 kB  00:00:00
  Verifying  : docker-selinux-1.12.0-0.ael7b.noarch                                                                                                                                                                                                                         1/3
  Verifying  : docker-1.12.0-0.ael7b.ppc64le                                                                                                                                                                                                                                2/3
  Verifying  : libtool-ltdl-2.4.2-20.el7.ppc64le                                                                                                                                                                                                                            3/3

Installed:
  docker.ppc64le 0:1.12.0-0.ael7b

Dependency Installed:
  docker-selinux.noarch 0:1.12.0-0.ael7b                                                                                                   libtool-ltdl.ppc64le 0:2.4.2-20.el7

Complete!
# systemctl start docker
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.0
[..]

Enabling the device-mapper direct disk mode (instead of loop)

By default on RHEL, after installing the docker packages and starting the engine, Docker uses an LVM loop device to create its pool (where the images and the containers will be stored). This is not recommended and not good for production usage. That's why on every docker engine host I'm creating a dedicated dockervg volume group for this pool. Red Hat provides, with the Project Atomic, a tool called docker-storage-setup that configures the thin pool for you (on another volume group).
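
Before reconfiguring anything you can check that the engine is indeed sitting on loopback devices; a quick look at docker info (sample output, the exact lines depend on the Docker version):

# docker info | grep -i "loop\|data file"
 Data file:                 /dev/loop0
 Metadata file:             /dev/loop1
 Data loop file:            /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file:        /var/lib/docker/devicemapper/devicemapper/metadata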

# git clone https://github.com/projectatomic/docker-storage-setup.git
# cd docker-storage-setup
# make install

Create a volume group on a physical volume, configure and run docker-storage-setup:

# docker-storage-setup --reset
# systemctl stop docker
# rm -rf /var/lib/docker
# pvcreate /dev/mapper/mpathb
  Physical volume "/dev/mapper/mpathb" successfully created
# vgcreate dockervg /dev/mapper/mpathb
  Volume group "dockervg" successfully created
# cat /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
VG=dockervg
SETUP_LVM_THIN_POOL=yes
DATA_SIZE=70%FREE
# /usr/bin/docker-storage-setup
  Rounding up size to full physical extent 104.00 MiB
  Logical volume "docker-pool" created.
  Logical volume "docker-pool" changed.
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true "

I don't know why, but on the version of Docker I am running the DOCKER_STORAGE_OPTIONS variable (in /etc/sysconfig/docker-storage) was not read. I had to manually edit the systemd unit to let Docker use my thin pool device:

# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true
# systemctl daemon-reload
# systemctl start docker
# docker info
[..]
Storage Driver: devicemapper
 Pool Name: dockervg-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 20.45 MB
 Data Space Total: 74.94 GB
 Data Space Available: 74.92 GB
 Metadata Space Used: 77.82 kB
 Metadata Space Total: 109.1 MB
 Metadata Space Available: 109 MB
 Thin Pool Minimum Free Space: 7.494 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
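
Instead of editing the unit file shipped with the package (it will be overwritten at the next update), a cleaner way is a systemd drop-in pointing back to the options file generated by docker-storage-setup; a minimal sketch, assuming the /etc/sysconfig/docker-storage file shown above:

# mkdir -p /etc/systemd/system/docker.service.d
# cat /etc/systemd/system/docker.service.d/storage.conf
[Service]
EnvironmentFile=-/etc/sysconfig/docker-storage
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_STORAGE_OPTIONS
# systemctl daemon-reload
# systemctl restart docker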

Ubuntu 16.04 LTS ppc64le

As always on Ubuntu everything is super easy. I'm just deploying an Ubuntu 16.04 LTS and running a single apt install to get the docker engine. Neat. Just for your information, as my servers do not have any access to the internet, I'm using a tool called apt-mirror to mirror the official Ubuntu repositories. The tool can easily be found on GitHub at this address: https://github.com/apt-mirror/apt-mirror. You then just have to specify which arch and which repositories you want to clone on your local site:

# cat /etc/apt/mirror.list
[..]
set defaultarch       ppc64el
[..]
set use_proxy         on
set http_proxy        proxy:8080
set proxy_user        benoit
set proxy_password    mypasswd
[..]
deb http://ports.ubuntu.com/ubuntu-ports xenial main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports xenial-security main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports xenial-updates main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports xenial-backports main restricted universe multiverse
# /usr/local/bin/apt-mirror
Downloading 152 index files using 20 threads...
Begin time: Fri Feb 17 14:36:03 2017
[20]... [19]... [18]... [17]... [16]... [15]... [14]... [13]... [12]... [11]... [10]... [9]... [8]... [7]... [6]... [5].

After having downloaded the packages, create a repository based on these deb files and make it accessible through http.
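
How you publish the mirror is up to you; a minimal sketch, assuming the default apt-mirror base path (/var/spool/apt-mirror) and a web server already serving /var/www/html on port 8080 of the mirror host (the hostname below is the one used later in this post). Expose the mirrored tree, then point every Ubuntu docker host to it:

# ln -s /var/spool/apt-mirror/mirror/ports.ubuntu.com/ubuntu-ports /var/www/html/ubuntu-ports
# cat /etc/apt/sources.list
deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports xenial main restricted universe multiverse
deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports xenial-updates main restricted universe multiverse
deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports xenial-security main restricted universe multiverse
# apt update

Once the mirror is reachable from the docker hosts, install Docker: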

# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
# uname -a
Linux dockermachine1 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux
# apt install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
[..]
Setting up docker.io (1.10.3-0ubuntu6) ...
Adding group `docker' (GID 116) ...
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

On ubuntu use aufs

I strongly recommend keeping aufs as the default storage driver for containers and images. I'm simply creating and mounting /var/lib/docker on another disk with a lot of space available and that's it:

# pvcreate /dev/mapper/mpathb
  Physical volume "/dev/mapper/mpathb" successfully created
# vgcreate dockervg /dev/mapper/mpathb
  Volume group "dockervg" successfully created
# lvcreate -n dockerlv -L99G dockervg
  Logical volume "dockerlv" created.
# mkfs.ext4 /dev/dockervg/dockerlv
[..]
# echo "/dev/mapper/dockervg-dockerlv /var/lib/docker/ ext4 errors=remount-ro 0       1" >> /etc/fstab
# systemctl stop docker
# mount /var/lib/docker
# systemctl start docker
# df -h | grep docker
/dev/mapper/dockervg-dockerlv   98G   61M   93G   1% /var/lib/docker
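
Once docker is restarted on this volume you can confirm that aufs is the storage driver in use (sample output):

# docker info | grep -A1 "Storage Driver"
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs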

The docker-compose case

If you're installing Docker on an Ubuntu host everything is easy, as docker-compose is available in the official Ubuntu repositories. Just run an apt install docker-compose and you're done.

# apt install docker-compose
[..]
# docker-compose -v
docker-compose version 1.5.2, build unknown

On RedHat, compose is not available in the repository delivered by IBM. docker-compose is just a Python program and can be downloaded and installed via pip. Download compose on a machine with internet access, then use pip to install it on the docker host:

On the machine having the access to the internet:

# mkdir compose
# pip install --proxy "http://benoit:mypasswd@myproxy:8080"  --download="compose" docker-compose --force --upgrade
[..]
Successfully downloaded docker-compose cached-property six backports.ssl-match-hostname PyYAML ipaddress enum34 colorama requests jsonschema docker texttable websocket-client docopt dockerpty functools32 docker-pycreds
# scp -r compose dockerhost:~
docker_compose-1.11.1-py2.py3-none-any.whl                                                                                                                                                                                                    100%   83KB  83.4KB/s   00:00
cached_property-1.3.0-py2.py3-none-any.whl                                                                                                                                                                                                    100% 8359     8.2KB/s   00:00
six-1.10.0-py2.py3-none-any.whl                                                                                                                                                                                                               100%   10KB  10.1KB/s   00:00
backports.ssl_match_hostname-3.5.0.1.tar.gz                                                                                                                                                                                                   100% 5605     5.5KB/s   00:00
PyYAML-3.12.tar.gz                                                                                                                                                                                                                            100%  247KB 247.1KB/s   00:00
ipaddress-1.0.18-py2-none-any.whl                                                                                                                                                                                                             100%   17KB  17.1KB/s   00:00
enum34-1.1.6-py2-none-any.whl                                                                                                                                                                                                                 100%   12KB  12.1KB/s   00:00
colorama-0.3.7-py2.py3-none-any.whl                                                                                                                                                                                                           100%   19KB  19.5KB/s   00:00
requests-2.11.1-py2.py3-none-any.whl                                                                                                                                                                                                          100%  503KB 502.8KB/s   00:00
jsonschema-2.6.0-py2.py3-none-any.whl                                                                                                                                                                                                         100%   39KB  38.6KB/s   00:00
docker-2.1.0-py2.py3-none-any.whl                                                                                                                                                                                                             100%  103KB 102.9KB/s   00:00
texttable-0.8.7.tar.gz                                                                                                                                                                                                                        100% 9829     9.6KB/s   00:00
websocket_client-0.40.0.tar.gz                                                                                                                                                                                                                100%  192KB 191.6KB/s   00:00
docopt-0.6.2.tar.gz                                                                                                                                                                                                                           100%   25KB  25.3KB/s   00:00
dockerpty-0.4.1.tar.gz                                                                                                                                                                                                                        100%   14KB  13.6KB/s   00:00
functools32-3.2.3-2.zip                                                                                                                                                                                                                       100%   33KB  33.3KB/s   00:00
docker_pycreds-0.2.1-py2.py3-none-any.whl                                                                                                                                                                                                     100% 4474     4.4KB/s   00:00

On the machine running docker:

# rpm -ivh python2-pip-8.1.2-5.el7.noarch.rpm
# cd compose
# pip install docker-compose -f ./ --no-index
[..]
Successfully installed colorama-0.3.7 docker-2.1.0 docker-compose-1.11.1 ipaddress-1.0.18 jsonschema-2.6.0
# docker-compose -v
docker-compose version 1.11.1, build 7c5d5e4

Creating your docker base images and running your first application (a web server)

Regardless of which Linux distribution you have chosen, you now need a docker base image to run your first containers. You have two choices: download an image from the internet and modify it for your own needs, or create an image yourself based on your current OS.

Downloading an image from the internet

From a machine having access to the internet, install the docker engine and pull the Ubuntu image. Using the docker save command, create a tar archive of the image. This archive can then be imported on any docker engine using the docker load command:

  • On the machine having access to the internet:
  • # docker pull ppc64le/ubuntu
    # docker save ppc64le/ubuntu > /tmp/ppc64le_ubuntu.tar
    
  • On your docker engine host:
  • # docker load  < ppc64le_ubuntu.tar
    4fad21ac6351: Loading layer [==================================================>] 173.5 MB/173.5 MB
    625e647dc584: Loading layer [==================================================>] 15.87 kB/15.87 kB
    8505832e8bea: Loading layer [==================================================>] 9.216 kB/9.216 kB
    9bca281924ab: Loading layer [==================================================>] 4.608 kB/4.608 kB
    289bda1cbd14: Loading layer [==================================================>] 3.072 kB/3.072 kB
    Loaded image: ppc64le/ubuntu:latest
    # docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ppc64le/ubuntu      latest              1967d889e07f        3 months ago        167.9 MB
    

The problem is that this image is not customized for your/my own needs. By this I mean the repositories used by the image are "pointing" to the official Ubuntu repositories, which will obviously not work if you have no access to the internet. We now have to modify the image for our needs. Run a container and launch a shell, then modify the sources.list with your local repository. Then commit this image to validate the changes made inside it (this generates a new image based on the current one plus your modifications):

# docker run -it ppc64le/ubuntu /bin/bash
# rm /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main/ xenial main" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main/ xenial-updates main" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main/ xenial-security main" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial restricted" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates restricted" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security restricted" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial universe" >> /etc/apt/sources.list
# echo "#deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates universe" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security universe" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial multiverse" >> /etc/apt/sources.list
# echo "#deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates multiverse" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security multiverse" >> /etc/apt/sources.list
# exit
# docker ps -a
# docker commit
# docker commit a9506bd5dd30 ppc64le/ubuntucust
sha256:423c13b604dee8d24dae29566cd3a2252e4060270b71347f8d306380b8b6817d
# docker images

Test that the image is working by creating a new image based on the one just created. I'm creating a dockerfile to do this. I'm not explaining here how dockerfiles work, there are plenty of tutorials on the internet for that. To sum up, you need to know the basics of Docker to read this blog post ;-).

# cat dockerfile
FROM ppc64le/ubuntucust

RUN apt-get -y update && apt-get -y install apache2

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR

EXPOSE 80

CMD [ "-D", "FOREGROUND" ]
ENTRYPOINT ["/usr/sbin/apache2"]

I'm building the image and calling it ubuntu_apache2 (this image will run a single apache2 server and expose port 80):

# docker build -t ubuntu_apache2 . 
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ppc64le/ubuntucust
 ---> 423c13b604de
Step 2 : RUN apt-get -y update && apt-get -y install apache2
 ---> Running in 5f868988bf5c
Get:1 http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main xenial InRelease [247 kB]
Get:2 http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main xenial-updates InRelease [102 kB]
Get:3 http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main xenial-security InRelease [102 kB]
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
Processing triggers for libc-bin (2.23-0ubuntu4) ...
Processing triggers for systemd (229-4ubuntu11) ...
Processing triggers for sgml-base (1.26+nmu4ubuntu1) ...
 ---> 4256ac36c0f7
Removing intermediate container 5f868988bf5c
Step 3 : EXPOSE 80
 ---> Running in fc72a50d3f1d
 ---> 3c273b0e2c3f
Removing intermediate container fc72a50d3f1d
Step 4 : CMD -D FOREGROUND
 ---> Running in 112d87a2f1e6
 ---> e6ddda152e97
Removing intermediate container 112d87a2f1e6
Step 5 : ENTRYPOINT /usr/sbin/apache2
 ---> Running in 6dab9b99f945
 ---> bed93aae55b3
Removing intermediate container 6dab9b99f945
Successfully built bed93aae55b3
# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED              SIZE
ubuntu_apache2       latest              bed93aae55b3        About a minute ago   301.8 MB
ppc64le/ubuntucust   latest              423c13b604de        7 minutes ago        167.9 MB
ppc64le/ubuntu       latest              1967d889e07f        3 months ago         167.9 MB

Run a container with this image and expose the port 80:

# docker run -d -it -p 80:80 ubuntu_apache2
49916e3703c1cf0a671be10984b3215478973c0fd085490a61142b8959495732
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
49916e3703c1        ubuntu_apache2      "/usr/sbin/apache2 -D"   12 seconds ago      Up 10 seconds       0.0.0.0:80->80/tcp   high_brattain
# ps -ef | grep -i apache
root     11282 11267  0 11:04 pts/1    00:00:00 /usr/sbin/apache2 -D FOREGROUND
33       11302 11282  0 11:04 pts/1    00:00:00 /usr/sbin/apache2 -D FOREGROUND
33       11303 11282  0 11:04 pts/1    00:00:00 /usr/sbin/apache2 -D FOREGROUND
root     11382  3895  0 11:04 pts/0    00:00:00 grep --color=auto -i apache

From another host, test that the service is running by using curl (you can see here that you reach the default index page of the Ubuntu apache2 server):

# curl mydockerhost
  <body>
    <div class="main_page">
      <div class="page_header floating_element">
        <img src="/icons/ubuntu-logo.png" alt="Ubuntu Logo" class="floating_element"/>
        <span class="floating_element">
          Apache2 Ubuntu Default Page
[..]

Creating your own image

You can also create your own image from scratch. For RHEL based systems (CentOS, Fedora), an awesome script is available in the Docker project's contrib directory to do the job for you. This script is called mkimage-yum.sh and can be downloaded directly from GitHub. Have a look at it if you want the exact details (mknod, yum --installroot, …). The script will create a tar file and import it. After running the script you will have a new image available to use:

# wget https://raw.githubusercontent.com/docker/docker/master/contrib/mkimage-yum.sh
# chmod +x mkimage-yum.sh 
# ./mkimage-yum.sh baserehel72
[..]
+ tar --numeric-owner -c -C /tmp/base.sh.bxma2T .
+ docker import - baserhel72:7.2
sha256:f8b80847b4c7fe03d2cfdeda0756a7aa857eb23ab68e5c954cf3f0cb01f61562
+ docker run -i -t --rm baserhel72:7.2 /bin/bash -c 'echo success'
success
+ rm -rf /tmp/base.sh.bxma2T
# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED              SIZE
baserhel72           7.2                 f8b80847b4c7        About a minute ago   309.1 MB
[..]

I'm running a web server to be sure everything is working (same thing as on Ubuntu: httpd installation and exposing port 80). Below are the dockerfile and the image build:

# cat dockerfile
FROM baserhel72:7.2

RUN yum -y update && yum -y upgrade && yum -y install httpd

EXPOSE 80

CMD [ "-D", "FOREGROUND" ]
ENTRYPOINT ["/usr/sbin/httpd"]
# docker build -t rhel_httpd .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM baserhel72:7.2
 ---> 0c22a33fc079
Step 2 : RUN yum -y update && yum -y upgrade && yum -y install httpd
 ---> Running in 74c79763c56f
[..]
Dependency Installed:
  apr.ppc64le 0:1.4.8-3.el7                apr-util.ppc64le 0:1.5.2-6.el7
  httpd-tools.ppc64le 0:2.4.6-40.el7       mailcap.noarch 0:2.1.41-2.el7

Complete!
 ---> 73094e173c1b
Removing intermediate container 74c79763c56f
Step 3 : EXPOSE 80
 ---> Running in 045b86d1a6dc
 ---> f032c1569201
Removing intermediate container 045b86d1a6dc
Step 4 : CMD -D FOREGROUND
 ---> Running in 9edc1cc2540d
 ---> 6d5d27171cba
Removing intermediate container 9edc1cc2540d
Step 5 : ENTRYPOINT /usr/sbin/httpd
 ---> Running in 8280382d61f0
 ---> f937439d4359
Removing intermediate container 8280382d61f0
Successfully built f937439d4359

Again I'm launching a container and checking the service is available by curling the docker host. You can see that the image is based on RedHat … the default page is the RHEL test page:

# docker run -d -p 80:80 rhel_httpd
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
30d090b2f0d1        rhel_httpd          "/usr/sbin/httpd -D F"   3 seconds ago       Up 1 seconds        0.0.0.0:80->80/tcp   agitated_boyd
# curl localhost
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http//www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
                <title>Test Page for the Apache HTTP Server on Red Hat Enterprise Linux</title>
[..]

Creating your own docker registry

We now have our base Docker images but we want to make them available on every docker host without having to recreate them over and over again. To do so we are going to create what we call a docker registry. This registry will allow us to distribute our images across the different docker hosts. Neat :-). When you install Docker the docker-distribution package is also installed and ships a binary called "registry". So why not run the registry … in a Docker container?

  • Verify you have the registry command on the system:
  • # which registry
    /usr/bin/registry
    # registry --version
    registry github.com/docker/distribution v2.3.0+unknown
    
  • The package containing the registry is docker-distribution:
  • # yum whatprovides /usr/bin/registry
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
    docker-distribution-2.3.0-2.ael7b.ppc64le : Docker toolset to pack, ship, store, and deliver content
    Repo        : @docker
    Matched from:
    Filename    : /usr/bin/registry
    

It's a "chicken or egg" question, but you obviously need a base image to create your registry image. As we have created our images locally, we will now use one of them (the RedHat one) to run the docker registry in a container. Here are the steps we are going to follow.

  • Create a dockerfile based on the RedHat image we just created. This dockerfile will contain the registry binary (registry) (COPY ./registry), the registry config file (config.yml) (COPY ./config.yml) and a wrapper script allowing its execution (entrypoint.sh) (COPY ./entrypoint.sh). We will also secure the registry with a password using an htpasswd file (RUN htpasswd). Finally we will make the volumes /var/lib/registry and /certs available (VOLUME) and expose port 5000 (EXPOSE). Obviously the necessary directories will be created (RUN mkdir) and the needed tools will be installed (RUN yum). I'm also generating here the htpasswd file with the user regimguser and the password regimguser:
  • # cat dockerfile
    FROM ppc64le/rhel72:7.2
    
    RUN yum update && yum upgrade && yum -y install httpd-tools
    RUN mkdir /etc/registry && mkdir /certs
    
    COPY ./registry /usr/bin/registry
    COPY ./entrypoint.sh /entrypoint.sh
    COPY ./config.yml /etc/registry/config.yml
    
    RUN htpasswd -b -B -c /etc/registry/registry_passwd regimguser regimguser
    
    VOLUME ["/var/lib/registry", "/certs"]
    EXPOSE 5000
    
    ENTRYPOINT ["./entrypoint.sh"]
    
    CMD ["/etc/registry/config.yml"]
    
  • Copy the registry binary to the directory containing the dockerfile:
  • # cp /usr/bin/registry .
    
  • Create an entrypoint.sh file in the directory containing the dockerfile. This script will launch the registry binary:
  • # cat entrypoint.sh
    #!/bin/sh
    
    set -e
    exec /usr/bin/registry "$@"
    
  • Create a configuration file for the registry in the directory containing the dockerfile and name it config.yml. This configuration file defines where to store the registry data, the certificates, and the authentication method for the registry (we are using an htpasswd file):
  • version: 0.1
    storage:
      filesystem:
        rootdirectory: /var/lib/registry
      delete:
        enabled: true
    http:
      addr: :5000
      tls:
          certificate: /certs/domain.crt
          key: /certs/domain.key
    auth:
      htpasswd:
        realm: basic-realm
        path: /etc/registry/registry_passwd
    
  • Build the image:
  • # docker build -t registry .
    Sending build context to Docker daemon 13.57 MB
    Step 1 : FROM ppc64le/rhel72:7.2
     ---> 9005cbc9c7f6
    Step 2 : RUN yum update && yum upgrade && yum -y install httpd-tools
     ---> Using cache
     ---> de34fdf3864e
    Step 3 : RUN mkdir /etc/registry && mkdir /certs
     ---> Using cache
     ---> c801568b6944
    Step 4 : COPY ./registry /usr/bin/registry
     ---> Using cache
     ---> 49927e0a90b8
    Step 5 : COPY ./entrypoint.sh /entrypoint.sh
     ---> Using cache
    [..]
    Removing intermediate container 261f2b380556
    Successfully built ccef43825f21
    # docker images
    REPOSITORY                                          TAG                 IMAGE ID            CREATED             SIZE
                                                                16d35e8c1177        About an hour ago   361 MB
    registry                                            latest              4287d4e389dc        2 hours ago         361 MB
    

We now need to generate certificates and place them in the right directories to make the registry secure:

  • Generate an ssl certificate:
  • # cd /certs
    # openssl req  -newkey rsa:4096 -nodes -sha256 -keyout /certs/domain.key  -x509 -days 365 -out /certs/domain.crt
    Generating a 4096 bit RSA private key
    .............................................................................................................................................................++
    ..........................................................++
    writing new private key to '/certs/domain.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    [..]
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:
    State or Province Name (full name) []:
    Locality Name (eg, city) [Default City]:
    Organization Name (eg, company) [Default Company Ltd]:
    Organizational Unit Name (eg, section) []:
    Common Name (eg, your name or your server's hostname) []:dockerengineppc64le.chmod666.org
    Email Address []:
    
  • Copy the certificates on every docker engine host that will need to access the registry:
  • # mkdir /etc/docker/certs.d/dockerengineppc64le.chmod666.org\:5000/
    # cp /certs/domain.crt /etc/docker/certs.d/dockerengineppc64le.chmod666.org\:5000/ca.crt
    # cp /certs/domain.crt /etc/pki/ca-trust/source/anchors/dockerengineppc64le.chmod666.org.crt
    # update-ca-trust
    
  • Restart docker:
  • # systemctl restart docker
    

Now that everything is ok regarding the image and the certificates, let's run the registry container, then upload and download an image to/from the registry:

  • Run the container, expose port 5000 (-p 5000:5000), make sure the registry will be started when docker starts (--restart=always), let the container access the certificates we created before (-v /certs:/certs), and store the images in /var/lib/registry (-v /var/lib/registry:/var/lib/registry):
  • # docker run -d -p 5000:5000 --restart=always -v /certs:/certs -v /var/lib/registry:/var/lib/registry --name registry registry
    51ad253616be336bcf5a1508bf48b059f01ebf20a0772b35b5686b4012600c46
    # docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
    51ad253616be        registry            "./entrypoint.sh /etc"   10 seconds ago      Up 8 seconds        0.0.0.0:5000->5000/tcp   registry
    
  • Connect to the registry using docker login (the user and password created before will be asked). Then push and pull an image to be sure everything is working. The only way to list the images available in the registry is to call the registry API and check the catalog (a tag-listing example follows right after this list):
  • # docker login https://dockerengineppc64le.chmod666.org:5000
    Username (regimguser): regimguser
    Password:
    Login Succeeded
    # docker tag grafana dockerengineppc64le.chmod666.org:5000/ppc64le/grafana
    # docker push dockerengineppc64le.chmod666.org:5000/ppc64le/grafana
    The push refers to a repository [dockerengineppc64le.chmod666.org:5000/ppc64le/grafana]
    82bca1cb11d8: Pushed
    9c1f2163c216: Pushing [==>                                                ] 22.83 MB/508.9 MB
    1df85fc1eaaf: Mounted from ppc64le/ubuntucust
    289bda1cbd14: Mounted from ppc64le/ubuntucust
    9bca281924ab: Mounted from ppc64le/ubuntucust
    8505832e8bea: Mounted from ppc64le/ubuntucust
    625e647dc584: Mounted from ppc64le/ubuntucust
    4fad21ac6351: Mounted from ppc64le/ubuntucust
    [..]
    latest: digest: sha256:88eef1b47ec57dd255aa489c8a494c11be17eb35ea98f38a63ab9f5690c26c1f size: 1984
    # curl --cacert /certs/domain.crt -X GET https://regimguser:regimguser@dockerengineppc64le.chmod666.org:5000/v2/_catalog
    {"repositories":["ppc64le/grafana","ppc64le/ubuntucust"]}
    # docker pull dockerengineppc64le.chmod666.org:5000/ppc64le/grafana
    Using default tag: latest
    latest: Pulling from ppc64le/grafana
    Digest: sha256:88eef1b47ec57dd255aa489c8a494c11be17eb35ea98f38a63ab9f5690c26c1f
    Status: Image is up to date for dockerengineppc64le.chmod666.org:5000/ppc64le/grafana:latest
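
Listing the tags of a given repository works the same way through the registry v2 API (the credentials and the repository name are the ones used above):

# curl --cacert /certs/domain.crt -X GET https://regimguser:regimguser@dockerengineppc64le.chmod666.org:5000/v2/ppc64le/grafana/tags/list
{"name":"ppc64le/grafana","tags":["latest"]}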
    

Running a more complex application (grafana + influxdb)

One of the applications I'm running is grafana, used with influxdb as a datasource. We will see here how to run grafana and influxdb in docker containers on a ppc64le Redhat docker host:

Build the grafana docker image

First create the dockerfile. You have now seen a lot of dockerfiles in this blog post so I'll not explain this one in detail. The docker engine is running on Redhat but the image used here is an Ubuntu one, as grafana and influxdb are available in the Ubuntu repositories.

# cat /data/docker/grafana/dockerfile
FROM ppc64le/ubuntucust

RUN apt-get update && apt-get -y install grafana gosu

VOLUME ["/var/lib/grafana", "/var/log/grafana", "/etc/grafana"]

EXPOSE 3000

COPY ./run.sh /run.sh

ENTRYPOINT ["/run.sh"]

Here is the entrypoint script that will run grafana when the docker container starts:

# cat /data/docker/grafana/run.sh
#!/bin/bash -e

: "${GF_PATHS_DATA:=/var/lib/grafana}"
: "${GF_PATHS_LOGS:=/var/log/grafana}"
: "${GF_PATHS_PLUGINS:=/var/lib/grafana/plugins}"

chown -R grafana:grafana "$GF_PATHS_DATA" "$GF_PATHS_LOGS"
chown -R grafana:grafana /etc/grafana

if [ ! -z "${GF_INSTALL_PLUGINS}" ]; then
  OLDIFS=$IFS
  IFS=','
  for plugin in ${GF_INSTALL_PLUGINS}; do
    grafana-cli plugins install ${plugin}
  done
  IFS=$OLDIFS
fi

exec gosu grafana /usr/sbin/grafana  \
  --homepath=/usr/share/grafana             \
  --config=/etc/grafana/grafana.ini         \
  cfg:default.paths.data="$GF_PATHS_DATA"   \
  cfg:default.paths.logs="$GF_PATHS_LOGS"   \
  cfg:default.paths.plugins="$GF_PATHS_PLUGINS"

Then build grafana image:

# cd /data/docker/grafana
# docker build -t grafana .
Step 3 : VOLUME ["/var/lib/grafana", "/var/log/grafana", "/etc/grafana"]
 ---> Running in 7baf11e2a2b6
 ---> f3449dd17ad4
Removing intermediate container 7baf11e2a2b6
Step 4 : EXPOSE 3000
 ---> Running in 89e10b7bfa5e
 ---> cdc65141d2f4
Removing intermediate container 89e10b7bfa5e
Step 5 : COPY ./run.sh /run.sh
 ---> 0a75c203bc8e
Removing intermediate container 885719ef1fde
Step 6 : ENTRYPOINT /run.sh
 ---> Running in 56f8b7d1274a
 ---> 4ca5c23b9aba
Removing intermediate container 56f8b7d1274a
Successfully built 4ca5c23b9aba
# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
grafana              latest              4ca5c23b9aba        32 seconds ago      676.8 MB
ppc64le/ubuntucust   latest              c9274707505e        12 minutes ago      167.9 MB
ppc64le/ubuntu       latest              1967d889e07f        3 months ago        167.9 MB

Run it and verify it works ok:

# docker run -d -it -p 443:3000 grafana
19bdd6c82a37a7275edc12e91668530fc1d52699542dae1e17901cce59f1230a
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                   NAMES
19bdd6c82a37        grafana             "/run.sh"           26 seconds ago      Up 24 seconds       0.0.0.0:443->3000/tcp   kickass_mcclintock
# docker logs 19bdd6c82a37
2017/02/17 15:28:36 [I] Starting Grafana
2017/02/17 15:28:36 [I] Version: master, Commit: NA, Build date: 1970-01-01 00:00:00 +0000 UTC
2017/02/17 15:28:36 [I] Configuration Info
Config files:
  [0]: /usr/share/grafana/conf/defaults.ini
  [1]: /etc/grafana/grafana.ini
Command lines overrides:
  [0]: default.paths.data=/var/lib/grafana
  [1]: default.paths.logs=/var/log/grafana
Paths:
  home: /usr/share/grafana
  data: /var/lib/grafana
[..]

grafana

Build the influxdb docker image

Same job for the influxdb image, this one is also based on the Ubuntu image. Here is the dockerfile (as always: package installation, volumes, exposed ports). You can see I'm also including a configuration file for influxdb:

# cat /data/docker/influxdb/dockerfile
FROM ppc64le/ubuntucust

RUN apt-get update && apt-get -y install influxdb

VOLUME ["/var/lib/influxdb"]

EXPOSE 8086 8083

COPY influxdb.conf /etc/influxdb.conf

COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/influxd"]
# cat influxdb.conf
[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"

[admin]
  enabled = true
# cat entrypoint.sh
#!/bin/bash
set -e

if [ "${1:0:1}" = '-' ]; then
    set -- influxd "$@"
fi

exec "$@"

Then build influxdb image:

# docker build -t influxdb .
[..]
Step 3 : VOLUME ["/var/lib/influxdb"]
 ---> Running in f3570a5a6c91
 ---> 014035e3134c
Removing intermediate container f3570a5a6c91
Step 4 : EXPOSE 8086 8083
 ---> Running in 590405701bfc
 ---> 25f557aae499
Removing intermediate container 590405701bfc
Step 5 : COPY influxdb.conf /etc/influxdb.conf
 ---> c58397a5ae7b
Removing intermediate container d22132ec9925
Step 6 : COPY entrypoint.sh /entrypoint.sh
 ---> 25e931d39bbc
Removing intermediate container 680eacd6597e
Step 7 : ENTRYPOINT /entrypoint.sh
 ---> Running in 0695135e81c0
 ---> 44ed7385ae61
Removing intermediate container 0695135e81c0
Step 8 : CMD /usr/bin/influxd
 ---> Running in f59cbcd5f199
 ---> 073eeeb78055
Removing intermediate container f59cbcd5f199
Successfully built 073eeeb78055
# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
influxdb             latest              073eeeb78055        28 seconds ago      202.7 MB
grafana              latest              4ca5c23b9aba        11 minutes ago      676.8 MB
ppc64le/ubuntucust   latest              c9274707505e        23 minutes ago      167.9 MB
ppc64le/ubuntu       latest              1967d889e07f        3 months ago        167.9 MB

Run an influxdb container to verify it works ok:

# docker run -d -it -p 8080:8083 influxdb
c0c042c7bc1a361d1bcff403ed243651eac88270738cfc390e35dfd434cfc457
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
c0c042c7bc1a        influxdb            "/entrypoint.sh /usr/"   4 seconds ago       Up 1 seconds        0.0.0.0:8080->8086/tcp   amazing_goldwasser
19bdd6c82a37        grafana             "/run.sh"                10 minutes ago      Up 10 minutes       0.0.0.0:443->3000/tcp    kickass_mcclintock
#  docker logs c0c042c7bc1a

 8888888           .d888 888                   8888888b.  888888b.
   888            d88P"  888                   888  "Y88b 888  "88b
   888            888    888                   888    888 888  .88P
   888   88888b.  888888 888 888  888 888  888 888    888 8888888K.
   888   888 "88b 888    888 888  888  Y8bd8P' 888    888 888  "Y88b
   888   888  888 888    888 888  888   X88K   888    888 888    888
   888   888  888 888    888 Y88b 888 .d8""8b. 888  .d88P 888   d88P
 8888888 888  888 888    888  "Y88888 888  888 8888888P"  8888888P"

2017/02/17 15:39:08 InfluxDB starting, version 0.10.0, branch unknown, commit unknown, built unknown
2017/02/17 15:39:08 Go version go1.6rc1, GOMAXPROCS set to 16

influx

docker-compose

Now that we have two images, one for grafana and one for influxdb, let's make them work together. To do so we will use docker-compose. docker-compose allows you to describe the containers you want to run in a yml file and to link them together. You can see below there are two different entries. The influxdb one tells which image I'm going to use, the container name, the ports that will be exposed on the docker host (equivalent of -p 8080:8083 with a docker run command) and the volumes (-v with a docker run command). For the grafana container everything is almost the same except the "links" part. The grafana container should be able to "talk" to the influxdb one (to use influxdb as a datasource). The "links" stanza of the yml file tells compose to add an entry with the influxdb ip and name to the /etc/hosts file of the grafana container. When you configure grafana you will then be able to use the "influxdb" name to access the database:

# cat docker-compose.yml
influxdb:
  image: influxdb:latest
  container_name: influxdb
  ports:
    - "8080:8083"
    - "80:8086"
  volumes:
    - "/data/docker/influxdb/var/lib/influxdb:/var/lib/influxdb"

grafana:
  image: grafana:latest
  container_name: grafana
  ports:
    - "443:3000"
  links:
    - influxdb
  volumes:
    - "/data/docker/grafana/var/lib/grafana:/var/lib/grafana"
    - "/data/docker/grafana/var/log/grafana:/var/log/grafana"

To create the containers just run the "docker-compose up" command (from the directory containing the yml file); this will create all the containers described in the yml file. Same thing for destroying them: run a "docker-compose down".

# docker-compose up -d
Creating influxdb
Creating grafana
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                                                    NAMES
5df7f3d58631        grafana:latest      "/run.sh"                About a minute ago   Up About a minute   0.0.0.0:443->3000/tcp                                    grafana
727dfc6763e1        influxdb:latest     "/entrypoint.sh /usr/"   About a minute ago   Up About a minute   8083/tcp, 0.0.0.0:80->8086/tcp, 0.0.0.0:8080->8086/tcp   influxdb
# docker-compose down
Stopping grafana ... done
Stopping influxdb ... done
Removing grafana ... done
Removing influxdb ... done
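
While the containers are running you can check that the "links" entry did its job inside the grafana container; a quick look at its /etc/hosts (the address below is just an example, yours will differ):

# docker exec grafana cat /etc/hosts | grep influxdb
172.17.0.2      influxdb 727dfc6763e1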

Just to prove that everything is working, I'm logging into the influxdb container and pushing some data to the database using the NOAA_data.txt file provided by the influxdb project (this is just sample data).

# docker exec -it 15845e92152f /bin/bash
# apt-get install influxdb-client
# cd /var/lib/influxdb ; influx -import -path=NOAA_data.txt -precision=s
2017/02/17 17:00:35 Processed 1 commands
2017/02/17 17:00:35 Processed 76290 inserts
2017/02/17 17:00:35 Failed 0 inserts

I'm finally logging into grafana (from a browser) and configuring the access to the database. Right after doing this I can create graphs based on the data.
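
If you prefer the command line, the same datasource can be added through the grafana HTTP API; a sketch, assuming the default admin/admin account, the 443:3000 port mapping from the compose file (grafana still speaks plain http here) and the NOAA_water_database created by the sample data import (hostname and names are examples, adapt them):

# curl -u admin:admin -H "Content-Type: application/json" -X POST http://mydockerhost:443/api/datasources -d '{"name":"influxdb","type":"influxdb","url":"http://influxdb:8086","access":"proxy","database":"NOAA_water_database","isDefault":true}'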

grafanaok1
grafanaok2

Creating a swarm cluster

Be very careful when starting with swarm. There are two different types of "swarm": the swarm before docker 1.12 (called docker-swarm) and the swarm starting with docker 1.12 (called swarm mode). As the first version of swarm is already deprecated we will use the swarm mode that comes with docker 1.12. In this case there is no need to install additional software, swarm mode is embedded in the docker binaries. Swarm mode is used with the "docker service" commands to create what we call services (multiple containers running across the swarm cluster with rules/constraints applied on them: create the containers on all the hosts, only on a couple of nodes, and so on). First initialize swarm mode on the machines (I'll only use two nodes in my swarm cluster in the examples below), and on all the worker nodes be sure you are logged in to the registry (certificates copied, docker login done):

We will set up the swarm cluster on two nodes just to show you a simple example of the power of this technology. The first step is to choose a leader (there is one leader among the managers; the leader is responsible for the orchestration and the management of the swarm cluster, and if the leader has an issue one of the other managers will take the lead) and a worker (you can have as many workers as you want in the swarm cluster). In the examples below the manager/leader prompt will be node1(manager)# and the worker prompt will be node2(worker)#. Use the "docker swarm init" command to create your leader. The advertise address is the public address of the machine. The command will give you the commands to launch on the other managers or workers to allow them to join the cluster. Be sure the tcp port 2377 is reachable from all the nodes to the leader/managers. Last thing to add: swarm services rely on an overlay network, you need to create it to be able to create your swarm services:

node1(manager)# docker swarm init --advertise-addr 10.10.10.49
Swarm initialized: current node (813ompnl4c7f4ilkxqy0faj59) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-69tw66gb9jwfl8y46ujeemj3p5v85ikrqvwmqzb2x32kqmek8e-a9dv25loilaor6jfmcdq8je6h \
    10.10.10.49:2377

To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-69tw66gb9jwfl8y46ujeemj3p5v85ikrqvwmqzb2x32kqmek8e-9e82z5k7qrzxsk2autu9ajt3r \
    10.10.10.49:2377
node1(manager)# docker node ls
ID                           HOSTNAME                   STATUS  AVAILABILITY  MANAGER STATUS
813ompnl4c7f4ilkxqy0faj59 *  swarm1.chmod666.org  Ready   Active        Leader
node1(manager)# docker network create -d overlay mynet
8mv5ydu9vokx
node1(manager)# docker network ls
8mv5ydu9vokx        mynet               overlay             swarm

On the worker node run the command to join the cluster and verify all the nodes are Ready and Active. This will mean that you are ready to use the swarm cluster:

node2(worker)# docker swarm join --token SWMTKN-1-69tw66gb9jwfl8y46ujeemj3p5v85ikrqvwmqzb2x32kqmek8e-a9dv25loilaor6jfmcdq8je6h 10.10.10.49:2377
This node joined a swarm as a worker.
node1(manager)# docker node ls
ID                           HOSTNAME                   STATUS  AVAILABILITY  MANAGER STATUS
813ompnl4c7f4ilkxqy0faj59 *  swarm1.chmod666.org        Ready   Active        Leader
bh7mhv3hg1x98b9j6lu00c3ef    swarm2.chmod666.org        Ready   Active

The cluster is up and ready. Before working with it we need to find a solution to share the data of our application across the cluster. The best solution (from my point of view) is to use gluster, but for the convenience of this blog post I'll just create a small nfs server on the leader node and mount the data on the worker node (for a production setup the nfs server should be externalized, i.e. mounted from a NAS):
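
A minimal sketch of that nfs setup, assuming /nfs holds the shared data and 10.10.10.0/24 is the cluster network (adapt the network and the export options to your environment):

node1(manager)# cat /etc/exports
/nfs 10.10.10.0/24(rw,sync,no_root_squash)
node1(manager)# exportfs -ra
node2(worker)# mount -t nfs swarm1.chmod666.org:/nfs /nfs

The export and the mount can then be verified: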

node1(manager)# exportfs
/nfs            
node2(worker)# mount | grep nfs
[..]
swarm1.chmod666.org:/nfs on /nfs type nfs4 (rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.48,local_lock=none,addr=10.10.10.49)

Running an application in a swarm cluster

We now have the swarm cluster ready to run some services, but we first need a service. I'll use a web application I created called whoami (inspired by the emilevauge/whoami application), which just displays the hostname and the ip address of the node running the service. I'm first creating a dockerfile allowing me to build a container image ready to run any cgi ksh script. The dockerfile copies a configuration file into /etc/httpd/conf.d and serves the files in /var/www/mysite under a /whoami/ alias:

# cd /data/dockerfile/httpd
# cat dockerfile
FROM swarm1.chmod666.org:5000/ppc64le/rhel72:latest

RUN yum -y install httpd
RUN mkdir /var/www/mysite && chown apache:apache /var/www/mysite

EXPOSE 80

COPY ./mysite.conf /etc/httpd/conf.d
VOLUME ["/var/www/html", "/var/www/mysite"]

CMD [ "-D", "FOREGROUND" ]
ENTRYPOINT ["/usr/sbin/httpd"]
# cat mysite.conf
Alias /whoami/ "/var/www/mysite/"

<Directory "/var/www/mysite/">
  AddHandler cgi-script .ksh
  DirectoryIndex whoami.ksh
  Options Indexes FollowSymLinks ExecCGI
  AllowOverride None
  Require all granted
</Directory>

I’m then building the image and pushing it into my private registry. The image is now available for download on any node of the swarm cluster:

# docker build -t httpd .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM dockerengineppc64le.chmod666.org:5000/ppc64le/rhel72:latest
 ---> 9005cbc9c7f6
Step 2 : RUN yum -y install httpd
 ---> Using cache
 ---> 1bc91df747cd
[..]
 ---> Using cache
 ---> afb3cf77eb8a
Step 8 : ENTRYPOINT /usr/sbin/httpd
 ---> Using cache
 ---> 187da163e084
Successfully built 187da163e084
# docker tag httpd swarm1.chmod666.org:5000/ppc64le/httpd
# docker push swarm1.chmod666.org:5000/ppc64le/httpd
The push refers to a repository [swarm1.chmod666.org:5000/ppc64le/httpd]
92d958e708cc: Layer already exists
[..]
latest: digest: sha256:3b1521432c9704ca74707cd2f3c77fb342a957c919787efe9920f62a26b69e26 size: 1156

Now that the image is ready we will create the application; it's just a single ksh script and a css file.

# ls /nfs/docker/whoami/
table-responsive.css  whoami.ksh
# cat whoami.ksh
#!/usr/bin/bash

hostname=$(hostname)
uname=$(uname -a)
ip=$(hostname -I)
date=$(date)
env=$(env)
echo ""
echo "<html>"
echo "<head>"
echo "  <title>Docker exemple</title>"
echo "  <link href="table-responsive.css" media="screen" type="text/css" rel="stylesheet" />"
echo "</head>"
echo "<body>"
echo "<h1><span class="blue"><<span>Docker<span class="blue"><span> <span class="yellow">on PowerSystems ppc64le</pan></h1>"
echo "<h2>Created with passion by <a href="http://chmod666.org" target="_blank">chmod666.org</a></h2>"
echo "<table class="container">"
echo "  <thead>"
echo "    <tr>"
echo "      <th><h1>type</h1></th>"
echo "      <th><h1>value</h1></th>"
echo "    </tr>"
echo "  </thead>"
echo "  <tbody>"
echo "    <tr>"
echo "      <td>hostname</td>"
echo "      <td>${hostname}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>uname</td>"
echo "      <td>${uname}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>ip</td>"
echo "      <td>${ip}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>date</td>"
echo "      <td>${date}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>httpd env</td>"
echo "      <td>SERVER_SOFTWARE:${SERVER_SOFTWARE},SERVER_NAME:${SERVER_NAME},SERVER_PROTOCOL:${SERVER_PROTOCOL}</td>"
echo "    </tr>"
echo "  </tbody>"
echo "</table>"
echo "  </tbody>"
echo "</table>"
echo "</body>"
echo "</html>"

Just to be sure the web application is working, run the image on the worker node (without swarm):

# docker run -d -p 80:80 -v /nfs/docker/whoami/:/var/www/mysite --name httpd swarm1.chmod666.org:5000/ppc64le/httpd
a75095b23bc31715ac95d9bb57a7a161b06ef3e6a0f4eb4ed708cf60d03c0e5d
# curl localhost/whoami/
[..]
    
      hostname
      a75095b23bc3
    
    
      uname
      Linux a75095b23bc3 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
    
    
      ip
      172.17.0.2 
    
    
      date
      Wed Feb 22 14:33:59 UTC 2017
#  docker rm a75095b23bc3 -f
a75095b23bc3

We are now ready to create a swarm service with our application. Verify the swarm cluster health and create a service in global mode. Global mode means swarm will create one docker container per node.

node1(manager)# docker node ls
ID                           HOSTNAME                   STATUS  AVAILABILITY  MANAGER STATUS
813ompnl4c7f4ilkxqy0faj59 *  swarm1.chmod666.org        Ready   Active        Leader
bh7mhv3hg1x98b9j6lu00c3ef    swarm2.chmod666.org        Ready   Active
node1(manager)# docker service create --name whoami --mount type=bind,source=/nfs/docker/whoami/,destination=/var/www/mysite --mode global --publish 80:80 --network mynet  swarm1.chmod666.org:5000/ppc64le/httpd
7l8c4stcl3zgiijf6oe2hvu1r
node1(manager) # docker service ls
ID            NAME    REPLICAS  IMAGE                                         COMMAND
7l8c4stcl3zg  whoami  global    swarm1.chmod666.org:5000/ppc64le/httpd

Verify there is one container available on each swarm node:

node1(manager)# docker service ps 7l8c4stcl3zg
ID                         NAME        IMAGE                                         NODE                       DESIRED STATE  CURRENT STATE          ERROR
2sa543un5v4hpvwgouyorhndm  whoami      swarm1.chmod666.org:5000/ppc64le/httpd        swarm1.chmod666.org        Running        Running 2 minutes ago
5061eogr8wimt9al6uss1wet2   \_ whoami  swarm2.chmod666.org:5000/ppc64le/httpd        swarm2.chmod666.org        Running        Running 2 minutes ago

I'm now accessing the webservice with both dns names (swarm1 and swarm2) and verifying that I reach a different container on each http request (a command-line equivalent of this check is sketched after the screenshots below):

  • When accessing swarm1.chmod666.org (I'm seeing a docker hostname and ip)
  • node1

  • When accessing swarm2.chmod666.org (I'm seeing a docker hostname and ip different from the first one)
  • node2
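The screenshots are not reproduced here, but the same check can be done from the command line (a minimal sketch; the grep just pulls the hostname row out of the generated html):

# curl -s http://swarm1.chmod666.org/whoami/ | grep -A1 '<td>hostname</td>'
# curl -s http://swarm2.chmod666.org/whoami/ | grep -A1 '<td>hostname</td>'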

You will now say: ok, that's great! But that's not "redundant". In fact it is, because swarm ships with a very cool feature called the routing mesh. When you create a service in the swarm cluster with the --publish option, every swarm node listens on this port, even the nodes on which no container of the service is running. If you access any node on this port you will reach a container; by this I mean that by accessing swarm1.chmod666.org you may reach a container running on swarm2.chmod666.org. On the next http request you may reach any of the containers running for this service. Let's try creating a service with 10 replicas and access the same node over and over again.
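One detail not shown here: since the service name whoami is reused, the global service created above presumably has to be removed before creating the replicated one, with something like:

node1(manager)# docker service rm whoami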

node1(manager)# docker service create --name whoami --mount type=bind,source=/nfs/docker/whoami/,destination=/var/www/mysite --replicas 10 --publish 80:80 --network mynet  swarm1.chmod666.org:5000/ppc64le/httpd
el7nyiuga1vxtfgzktpfahucw
node1(manager)# docker service ls
ID            NAME    REPLICAS  IMAGE                                         COMMAND
el7nyiuga1vx  whoami  10/10     swarm1.chmod666.org:5000/ppc64le/httpd
node2(worker)# docker service ps el7nyiuga1vx
ID                         NAME          IMAGE                                         NODE                       DESIRED STATE  CURRENT STATE                ERROR
bed84pmdjy6c0758g3r52mmsq  whoami.1      swarm1.chmod666.org:5000/ppc64le/httpd        swarm2.chmod666.org        Running        Running 46 seconds ago
dgdj4ygqdr476e156osk8dd95  whoami.2      swarm1.chmod666.org:5000/ppc64le/httpd        swarm2.chmod666.org        Running        Running 46 seconds ago  
ba2ni51fo96eo6c4qfir90t7q  whoami.3      swarm1.chmod666.org:5000/ppc64le/httpd        swarm2.chmod666.org        Running        Running 48 seconds ago
9qkwigxkrqje48do39ru3cv2h  whoami.4      swarm1.chmod666.org:5000/ppc64le/httpd        swarm2.chmod666.org        Running        Running 40 seconds ago
3hgwwdly23ovafv1g0jvegu16  whoami.5      swarm1.chmod666.org:5000/ppc64le/httpd        swarm2.chmod666.org        Running        Running 43 seconds ago
0f3y844yqfbll2lmb954ro3cy  whoami.6      swarm1.chmod666.org:5000/ppc64le/httpd        swarm1.chmod666.org        Running        Running 51 seconds ago
0955dz84rv4gpb4oqv8libahd  whoami.7      swarm1.chmod666.org:5000/ppc64le/httpd        swarm1.chmod666.org        Running        Running 42 seconds ago
c05hrs9h0mm6ghxxdxc1afco9  whoami.8      swarm1.chmod666.org:5000/ppc64le/httpd        swarm1.chmod666.org        Running        Running 50 seconds ago
03qcbiuxlk13p60we0ke6vqka  whoami.9      swarm1.chmod666.org:5000/ppc64le/httpd        swarm1.chmod666.org        Running        Running 54 seconds ago
0otgw4ncka81hlxgyt82z36zj  whoami.10     swarm1.chmod666.org:5000/ppc64le/httpd        swarm1.chmod666.org        Running        Running 48 seconds ago
node1(manager)# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                    NAMES
a25404371765        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   5 minutes ago       Up 4 minutes        80/tcp                   whoami.7.0955dz84rv4gpb4oqv8libahd
07c38a306a68        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   5 minutes ago       Up 4 minutes        80/tcp                   whoami.4.9qkwigxkrqje48do39ru3cv2h
e88a8c8a3639        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   5 minutes ago       Up 5 minutes        80/tcp                   whoami.8.c05hrs9h0mm6ghxxdxc1afco9
f73a84cc6622        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   5 minutes ago       Up 5 minutes        80/tcp                   whoami.1.bed84pmdjy6c0758g3r52mmsq
757be5ec73a4        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   5 minutes ago       Up 5 minutes        80/tcp                   whoami.3.ba2ni51fo96eo6c4qfir90t7q
51ad253616be        registry                                              "./entrypoint.sh /etc"   45 hours ago        Up 2 hours          0.0.0.0:5000->5000/tcp   registry
node2(worker)# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
f015b0da7f2e        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   6 minutes ago       Up 5 minutes        80/tcp              whoami.5.3hgwwdly23ovafv1g0jvegu16
4b7452245406        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   6 minutes ago       Up 5 minutes        80/tcp              whoami.10.0otgw4ncka81hlxgyt82z36zj
71722a2d7f38        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   6 minutes ago       Up 5 minutes        80/tcp              whoami.6.0f3y844yqfbll2lmb954ro3cy
01bc73d6fdf7        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   6 minutes ago       Up 5 minutes        80/tcp              whoami.9.03qcbiuxlk13p60we0ke6vqka
438c0d553550        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   6 minutes ago       Up 5 minutes        80/tcp              whoami.2.dgdj4ygqdr476e156osk8dd95

Let’s now try accessing the service. I’m modifying my whoami.ksh just to print the information I need (the hostname).

cat /nfs/docker/whoami/whoami.ksh
#!/usr/bin/bash

hostname=$(hostname)
uname=$(uname -a)
ip=$(hostname -I)
date=$(date)
env=$(env)
echo ""
echo "hostname: ${hostname}"
echo "ip: ${ip}"
echo "uname:${uname}"
# for i in $(seq 1 10) ; do echo "[CALL $1]" ; curl -s http://swarm1.chmod666.org/whoami/ ; done
[CALL ]
hostname: f015b0da7f2e
ip: 10.255.0.14 10.255.0.2 172.18.0.7 10.0.0.12 10.0.0.2
uname:Linux f015b0da7f2e 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 4b7452245406
ip: 10.255.0.11 10.255.0.2 172.18.0.6 10.0.0.9 10.0.0.2
uname:Linux 4b7452245406 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 438c0d553550
ip: 10.0.0.5 10.0.0.2 172.18.0.4 10.255.0.7 10.255.0.2
uname:Linux 438c0d553550 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 71722a2d7f38
ip: 10.255.0.10 10.255.0.2 172.18.0.5 10.0.0.8 10.0.0.2
uname:Linux 71722a2d7f38 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 01bc73d6fdf7
ip: 10.255.0.6 10.255.0.2 172.18.0.3 10.0.0.4 10.0.0.2
uname:Linux 01bc73d6fdf7 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: a25404371765
ip: 10.255.0.9 10.255.0.2 172.18.0.7 10.0.0.7 10.0.0.2
uname:Linux a25404371765 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 07c38a306a68
ip: 10.255.0.8 10.255.0.2 172.18.0.6 10.0.0.6 10.0.0.2
uname:Linux 07c38a306a68 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: e88a8c8a3639
ip: 10.255.0.4 10.255.0.2 172.18.0.5 10.0.0.3 10.0.0.2
uname:Linux e88a8c8a3639 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: f73a84cc6622
ip: 10.255.0.12 10.255.0.2 172.18.0.4 10.0.0.10 10.0.0.2
uname:Linux f73a84cc6622 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 757be5ec73a4
ip: 10.255.0.13 10.255.0.2 172.18.0.3 10.0.0.11 10.0.0.2
uname:Linux 757be5ec73a4 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux

I'm doing ten calls here and I'm reaching a different docker container on each call; I can see that by checking the hostname. This shows that the routing mesh is working correctly.

HAproxy

To access the service through a single ip I'm installing an haproxy server on another host (an Ubuntu ppc64le host). I'm then modifying the configuration file to point to my swarm nodes. The haproxy will check the availability of the web application and will round robin the requests between the two docker hosts. If one of the docker swarm nodes fails, all requests will be sent to the remaining alive node.

# apt-get install haproxy
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  liblua5.3-0
Suggested packages:
[..]
Setting up haproxy (1.6.3-1ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu13) ...
Processing triggers for ureadahead (0.100.0-19) ...
# cat /etc/haproxy/haproxy.cfg
frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back

backend http_back
   balance roundrobin
   server swarm1.chmod666.org 10.10.10.48:80 check
   server swarm2.chmod666.org 10.10.10.49:80 check
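This snippet relies on the defaults section shipped with the Ubuntu haproxy package (mode http, logging and timeouts); as a sketch, the relevant part typically looks like this:

defaults
   mode    http
   timeout connect 5s
   timeout client  50s
   timeout server  50s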

I'm again changing the whoami.ksh script just to print the hostname. Then from another host I'm running 10000 http requests against the public ip of my haproxy server and counting how many requests each container served. By doing this we can see two things. The haproxy service is correctly spreading the requests across each swarm node (I'm reaching ten different containers), and the swarm routing mesh is working fine: all the requests are almost equally spread among the running containers. You can see the session spread in the haproxy stats page and in the curl example:

# cat /nfs/docker/whoami/whoami.ksh
#!/usr/bin/bash

hostname=$(hostname)
uname=$(uname -a)
ip=$(hostname -I)
date=$(date)
env=$(env)
echo ""
echo "${hostname}"
# for i in $(seq 1 10000) ; do curl -s http://10.10.10.50/whoami/ ; done  | sort | uniq -c
    999 01bc73d6fdf7
   1003 07c38a306a68
    993 438c0d553550
    998 4b7452245406
   1006 71722a2d7f38
    996 757be5ec73a4
   1004 a25404371765
   1004 e88a8c8a3639
    995 f015b0da7f2e
   1002 f73a84cc6622

haproxy1

I'm finally shutting down one of the worker nodes. We can see two things here. The service was created with 10 replicas, and shutting down one node results in the creation of 5 more containers on the remaining node. By checking the haproxy stats page we also see that one node is detected as down and all the requests are sent to the remaining one. We have our highly available docker service (to be totally redundant we would also need to run haproxy on two different hosts with a "floating" ip, which I'll not explain here):

# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                    NAMES
82fe21465b96        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   About a minute ago   Up 29 seconds       80/tcp                   whoami.5.2d0t99pjide4w7nenzrribjph
71a4c51460ef        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   About a minute ago   Up 21 seconds       80/tcp                   whoami.9.5f9qkx6t47vvjt8b9k5jhj79h
5830f0696cca        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   About a minute ago   Up 32 seconds       80/tcp                   whoami.6.eso8uwhx6ij2we2iabmzx3tdu
dbc2b731c547        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   About a minute ago   Up 16 seconds       80/tcp                   whoami.2.8tc8zoxrpdell4f4d8zsr0rlw
050aacdf8126        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   About a minute ago   Up 23 seconds       80/tcp                   whoami.10.ej8ahxzzp8bw3pybc6fib17qh
a25404371765        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   2 hours ago          Up 2 hours          80/tcp                   whoami.7.0955dz84rv4gpb4oqv8libahd
07c38a306a68        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   2 hours ago          Up 2 hours          80/tcp                   whoami.4.9qkwigxkrqje48do39ru3cv2h
e88a8c8a3639        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   2 hours ago          Up 2 hours          80/tcp                   whoami.8.c05hrs9h0mm6ghxxdxc1afco9
f73a84cc6622        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   2 hours ago          Up 2 hours          80/tcp                   whoami.1.bed84pmdjy6c0758g3r52mmsq
757be5ec73a4        swarm1.chmod666.org:5000/ppc64le/httpd:latest   "/usr/sbin/httpd -D F"   2 hours ago          Up 2 hours          80/tcp                   whoami.3.ba2ni51fo96eo6c4qfir90t7q
51ad253616be        registry                                              "./entrypoint.sh /etc"   2 days ago           Up 4 hours          0.0.0.0:5000->5000/tcp   registry

haproxy2

Conclusion

docker_ascii

What we have reviewed in this blog post is pretty cool. The PowerSystems ecosystem is capable of doing the exact same thing as the x86 one. Everything here is proved. PowerSystems are definitely ready to run Linux. The mighty RedHat and the incredible Ubuntu both provide a viable way to enter the world of DevOps on PowerSystems. We no longer need to recompile everything or search for this or that package not available on Linux. The Ubuntu repository is huge. I was super impressed by the variety of packages available that run on Power. A few days ago RedHat finally joined the OpenPower foundation and I can assure you that this is big news. Maybe people still don't believe in the spread of PowerSystems, but things are slowly changing, and with the first OpenPower servers running on Power9 I can assure you (at least I want to believe) that things will change. Regarding Docker I was/am a big x86 user of the solution, I'm running this blog and all my "personal" services on Docker, and I have to recognize that the ppc64le Linux distributions provide the exact same value as x86. Hire me if you want to do such things (DevOps on Power). They probably won't want to do anything about Linux On Power in my company (I still have faith as we have purchased 120 socket pairs of Redhat ppc64le ;-) ;-) ).

Last words: sorry for not publishing more blog posts these days but I'm not living the best part of my life, at work (nobody cares about what I'm doing, I'm just nothing …) and personally (different health problems for me and the people I love). Please accept my apologies.

Enhance your AIX package management with yum and nim over http

As AIX is getting older and older our old favorite OS is still trying to struggle against the mighty Linux and the fantastic Solaris (no sarcasm in that sentence, I truly believe what I say). You may have noticed that -with time- IBM is slowly but surely moving from proprietary code to something more open (ie. PowerVC/Openstack projects, integration with Chef, Linux on Power and tons of other examples). I'm deviating a little bit from the main topic of this blog post, but speaking about open source I have many things to say. If someone from my company is reading this post please note that it is my point of view … but I'm still sure that we are going the WRONG way by not being more open, and not publishing on github. Starting from now every AIX IT shop in the world must consider using OpenSource software (git, chef, ansible, zsh and so on) instead of maintaining homemade tools, or worse, paying for tools that are 100% of the time worse than OpenSource tools. Even better, every IT admin and every team must consider sharing their sources with the rest of the world for one single good reason: "Alone we can do so little, together we can do so much". Every company not considering this today is doomed. Take the example of Bloomberg, Facebook (sharing all their Chef cookbooks with the world), twitter: they're all using github to share their opensource projects. Even the military, the police and banks are doing the same. They're still secure but they are open to the world, ready to work together to make and create better and better things. All of this to introduce you to new things coming to AIX. Instead of reinventing the wheel IBM had the great idea to use already well established tools. It was the case for Openstack/PowerVC and it is also the case for the tools I'll talk about in this post. It is the case for yum (yellowdog updater modified). Instead of installing rpm packages by hand you now have the possibility to use yum and to definitely end the rpm dependency nightmare that we have all had since AIX 5L was released. Next, instead of using the proprietary nimsh protocol to install filesets (bff packages) you can now tell the nim server and nimclient to do this over http/https (secure is only for the authentication as far as I know) (an open protocol :-) ). By doing this you will enhance the way you are managing packages on AIX. Do this now on every AIX system you install, yum everywhere and stop using NFS … we're now in an http world :-)

yum: the yellow dog updater modified

I'm not going to explain to you what yum is. If you don't know you're not in the right place. Just note that my advice starting from now is to use yum to install every piece of software from the AIX toolbox (ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/). IBM is providing an official repository that can be mirrored on your own site to avoid having to use a proxy or having access to the internet from your servers (you must admit that this is almost impossible and every big company will try to avoid it). Let's start by installing yum:

Installing yum

IBM is providing an archive with all the rpms needed to install and use yum on an AIX server, you can find this archive here: ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/yum_bundle_v1.tar. Just download it, install every rpm in it and yum will be available on your system, simple as that:

A specific version of the rpm binary is mandatory to use yum, so before doing anything update the rpm.rte fileset. As AIX is rpm "aware" it already has an rpm database, but this one is not manageable by yum. The installation of rpm in version 4.9.1.3 or greater is needed. This installation will migrate the existing rpm database to a new one usable by yum. The fileset in the right version can be found here: ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/INSTALLP/ppc/

  • By default the rpm command is installed by an AIX fileset:
  • # which rpm
    /usr/bin/rpm
    # lslpp -w /usr/bin/rpm
      File                                        Fileset               Type
      ----------------------------------------------------------------------------
      /usr/bin/rpm                                rpm.rte               File
    # rpm --version
    RPM version 3.0.5
    
  • The rpm database is located in /usr/opt/freeware/packages :
  • # pwd
    /usr/opt/freeware/packages
    # ls -ltr
    total 5096
    -rw-r--r--    1 root     system         4096 Jul 01 2011  triggerindex.rpm
    -rw-r--r--    1 root     system         4096 Jul 01 2011  conflictsindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 nameindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 groupindex.rpm
    -rw-r--r--    1 root     system      2009224 Jul 21 00:54 packages.rpm
    -rw-r--r--    1 root     system       647168 Jul 21 00:54 fileindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 requiredby.rpm
    -rw-r--r--    1 root     system        81920 Jul 21 00:54 providesindex.rpm
    
  • Install the rpm.rte fileset in the right version (4.9.1.3):
  • # file rpm.rte.4.9.1.3
    rpm.rte.4.9.1.3: backup/restore format file
    # installp -aXYgd . rpm.rte
    +-----------------------------------------------------------------------------+
                        Pre-installation Verification...
    +-----------------------------------------------------------------------------+
    Verifying selections...done
    Verifying requisites...done
    Results...
    
    SUCCESSES
    ---------
      Filesets listed in this section passed pre-installation verification
      and will be installed.
    
      Selected Filesets
      -----------------
      rpm.rte 4.9.1.3                             # RPM Package Manager
    [..]
    #####################################################
            Rebuilding RPM Data Base ...
            Please wait for rpm_install background job termination
            It will take a few minutes
    [..]
    Installation Summary
    --------------------
    Name                        Level           Part        Event       Result
    -------------------------------------------------------------------------------
    rpm.rte                     4.9.1.3         USR         APPLY       SUCCESS
    rpm.rte                     4.9.1.3         ROOT        APPLY       SUCCESS
    
  • After the installation check you have the correct version of rpm, you can also notice some changes in the rpm database files:
  • # rpm --version
    RPM version 4.9.1.3
    # ls -ltr /usr/opt/freeware/packages
    total 25976
    -rw-r--r--    1 root     system         4096 Jul 01 2011  triggerindex.rpm
    -rw-r--r--    1 root     system         4096 Jul 01 2011  conflictsindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 nameindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 groupindex.rpm
    -rw-r--r--    1 root     system      2009224 Jul 21 00:54 packages.rpm
    -rw-r--r--    1 root     system       647168 Jul 21 00:54 fileindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 requiredby.rpm
    -rw-r--r--    1 root     system        81920 Jul 21 00:54 providesindex.rpm
    -rw-r--r--    1 root     system            0 Jul 21 01:08 .rpm.lock
    -rw-r--r--    1 root     system         8192 Jul 21 01:08 Triggername
    -rw-r--r--    1 root     system         8192 Jul 21 01:08 Conflictname
    -rw-r--r--    1 root     system        28672 Jul 21 01:09 Dirnames
    -rw-r--r--    1 root     system       221184 Jul 21 01:09 Basenames
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Sha1header
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Requirename
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Obsoletename
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Name
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Group
    -rw-r--r--    1 root     system       815104 Jul 21 01:09 Packages
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Sigmd5
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Installtid
    -rw-r--r--    1 root     system        86016 Jul 21 01:09 Providename
    -rw-r--r--    1 root     system       557056 Jul 21 01:09 __db.004
    -rw-r--r--    1 root     system     83894272 Jul 21 01:09 __db.003
    -rw-r--r--    1 root     system      7372800 Jul 21 01:09 __db.002
    -rw-r--r--    1 root     system        24576 Jul 21 01:09 __db.001
    

Then install yum. Please note that I already have some rpms installed on my current system, that's why I'm not installing db or gdbm. If your system is free of any rpm, install all the rpms found in the archive:

# tar xvf yum_bundle_v1.tar
x curl-7.44.0-1.aix6.1.ppc.rpm, 584323 bytes, 1142 media blocks.
x db-4.8.24-3.aix6.1.ppc.rpm, 2897799 bytes, 5660 media blocks.
x gdbm-1.8.3-5.aix5.2.ppc.rpm, 56991 bytes, 112 media blocks.
x gettext-0.10.40-8.aix5.2.ppc.rpm, 1074719 bytes, 2100 media blocks.
x glib2-2.14.6-2.aix5.2.ppc.rpm, 1686134 bytes, 3294 media blocks.
x pysqlite-1.1.7-1.aix6.1.ppc.rpm, 51602 bytes, 101 media blocks.
x python-2.7.10-1.aix6.1.ppc.rpm, 23333701 bytes, 45574 media blocks.
x python-devel-2.7.10-1.aix6.1.ppc.rpm, 15366474 bytes, 30013 media blocks.
x python-iniparse-0.4-1.aix6.1.noarch.rpm, 37912 bytes, 75 media blocks.
x python-pycurl-7.19.3-1.aix6.1.ppc.rpm, 162093 bytes, 317 media blocks.
x python-tools-2.7.10-1.aix6.1.ppc.rpm, 830446 bytes, 1622 media blocks.
x python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm, 158584 bytes, 310 media blocks.
x readline-6.1-2.aix6.1.ppc.rpm, 489547 bytes, 957 media blocks.
x sqlite-3.7.15.2-2.aix6.1.ppc.rpm, 1334918 bytes, 2608 media blocks.
x yum-3.4.3-1.aix6.1.noarch.rpm, 1378777 bytes, 2693 media blocks.
x yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm, 62211 bytes, 122 media blocks.
# rpm -Uvh curl-7.44.0-1.aix6.1.ppc.rpm glib2-2.14.6-2.aix5.2.ppc.rpm pysqlite-1.1.7-1.aix6.1.ppc.rpm python-2.7.10-1.aix6.1.ppc.rpm python-devel-2.7.10-1.aix6.1.ppc.rpm python-iniparse-0.4-1.ai
x6.1.noarch.rpm python-pycurl-7.19.3-1.aix6.1.ppc.rpm python-tools-2.7.10-1.aix6.1.ppc.rpm python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm yum-3.4.3-1.aix6.1.noarch.rpm yum-metadata-parser-1.1.4-
1.aix6.1.ppc.rpm
# Preparing...                ########################################### [100%]
   1:python                 ########################################### [  9%]
   2:pysqlite               ########################################### [ 18%]
   3:python-iniparse        ########################################### [ 27%]
   4:glib2                  ########################################### [ 36%]
   5:yum-metadata-parser    ########################################### [ 45%]
   6:curl                   ########################################### [ 55%]
   7:python-pycurl          ########################################### [ 64%]
   8:python-urlgrabber      ########################################### [ 73%]
   9:yum                    ########################################### [ 82%]
  10:python-devel           ########################################### [ 91%]
  11:python-tools           ########################################### [100%]

Yum is now ready to be configured and used :-)

# which yum
/usr/bin/yum
# yum --version
3.4.3
  Installed: yum-3.4.3-1.noarch at 2016-07-20 23:24
  Built    : None at 2016-06-22 14:13
  Committed: Sangamesh Mallayya  at 2014-05-29

Setting up yum and your private yum repository for AIX

A private repository

As nobody wants to use the official IBM repository available directly on the internet, the goal here is to create your own repository. Download all the content of the official repository and "serve" this directory (the one where you downloaded all the rpms) with a private http server (yum is using http/https obviously :-) ).

  • Using wget, download the content of the whole official repository. You can notice here that IBM is providing the metadata needed (the repodata directory); if you don't have this repodata directory yum can't work properly. It can be created with the createrepo command available on all good Linux distros (see the sketch after this list) :-) :
  • # wget -r ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/
    # ls -ltr
    [..]
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 readline
    drwxr-xr-x    2 root     system          256 Jul 11 22:08 rep-gtk
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 repodata
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 rpm
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 rsync
    drwxr-xr-x    2 root     system          256 Jul 11 22:08 ruby
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 rxvt
    drwxr-xr-x    2 root     system         4096 Jul 11 22:09 samba
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 sawfish
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 screen
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 scrollkeeper
    
  • Configure your web server (here it's just an alias because I'm using my http server for other things):
  • # more httpd.conf
    [..]
    Alias /aixtoolbox/  "/apps/aixtoolbox/"
    <Directory "/apps/aixtoolbox/">
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Require all granted
    </Directory>
    
    
  • Restart your webserver and check your repository is accessible:
  • repo

  • That’s it the private repository is ready.
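For reference, if you ever mirror a set of rpms that does not come with the repodata directory, it can be regenerated on any Linux box with createrepo (a minimal sketch; the path is an assumption matching the alias above):

# createrepo /apps/aixtoolbox/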

Configuring yum

On the client just modify the /opt/freeware/etc/yum/yum.conf or add a file in /opt/freeware/etc/yum/yum.repos.d to point to your private repository:

# cat /opt/freeware/etc/yum/yum.conf
[main]
cachedir=/var/cache/yum
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1

[AIX_Toolbox]
name=AIX ToolBox Repository
baseurl=http://nimserver:8080/aixtoolbox/
enabled=1
gpgcheck=0

# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum/repos.d

That’s it the client is ready.
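Alternatively, the same definition can live in its own file under yum.repos.d instead of yum.conf (a minimal sketch; the file name is an assumption):

# cat /opt/freeware/etc/yum/yum.repos.d/aixtoolbox.repo
[AIX_Toolbox]
name=AIX ToolBox Repository
baseurl=http://nimserver:8080/aixtoolbox/
enabled=1
gpgcheck=0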

Chef recipe to install and configure yum

My readers all know that I'm using Chef as a configuration management tool. As you are going to do this on every single system you have, I think giving you the Chef recipe installing and configuring yum can be useful (if you don't care about it just skip it and go to the next section). If you are not using a configuration management tool maybe this simple example will help you move on and stop doing this by hand or writing ksh scripts. I have to do that on tons of systems so for me it's just mandatory. Here is my recipe to do the whole job: configuring and installing yum, and installing some rpms:

directory '/var/tmp/yum' do
  action :create
end

remote_file '/var/tmp/yum/rpm.rte.4.9.1.3'  do
  source "http://#{node['nimserver']}/powervc/rpm.rte.4.9.1.3"
  action :create
end

execute "Do the toc" do
  command 'inutoc /var/tmp/yum'
  not_if { File.exist?('/var/tmp/yum/.toc') }
end

bff_package 'rpm.rte' do
  source '/var/tmp/yum/rpm.rte.4.9.1.3'
  action :install
end

tar_extract "http://#{node['nimserver']/powervc/yum_bundle_v1.tar" do
  target_dir '/var/tmp/yum'
  compress_char ''
  user 'root'
  group 'system'
end

# installing some rpm needed for yum
for rpm in [ 'curl-7.44.0-1.aix6.1.ppc.rpm', 'python-pycurl-7.19.3-1.aix6.1.ppc.rpm', 'python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm', 'glib2-2.14.6-2.aix5.2.ppc.rpm', 'yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm', 'python-iniparse-0.4-1.aix6.1.noarch.rpm', 'pysqlite-1.1.7-1.aix6.1.ppc.rpm'  ]
  execute "installing yum" do
    command "rpm -Uvh /var/tmp/yum/#{rpm}"
    not_if "rpm -qa | grep $(echo #{rpm} | sed 's/.aix6.1//' | sed 's/.aix5.2//' | sed 's/.rpm//')"
  end
end

# updating python
execute "updating python" do
  command "rpm -Uvh /var/tmp/yum/python-devel-2.7.10-1.aix6.1.ppc.rpm /var/tmp/yum/python-2.7.10-1.aix6.1.ppc.rpm"
  not_if "rpm -qa | grep python-2.7.10-1"
end

# installing yum
execute "installing yum" do
  command "rpm -Uvh /var/tmp/yum/yum-3.4.3-1.aix6.1.noarch.rpm"
  not_if "rpm -qa | grep yum-3.4.3.1.noarch"
end

# changing yum configuration
template '/opt/freeware/etc/yum/yum.conf' do
  source 'yum.conf.erb'
end

# installing some software with aix yum
for soft in [ 'bash', 'bzip2', 'curl', 'emacs', 'gzip', 'screen', 'vim-enhanced', 'wget', 'zlib', 'zsh', 'patch', 'file', 'lua', 'nspr', 'git' ] do
  execute "install #{soft}" do
    command "yum -y install #{soft}"
  end
end

# removing temporary file
execute 'removing /var/tmp/yum' do
  command 'rm -rf /var/tmp/yum'
  only_if { File.exist?('/var/tmp/yum') }
end

chef_yum1
chef_yum2
chef_yum3

After running the chef recipe yum is fully usable \o/ :

chef_yum4

Using yum on AIX: what you need to know

yum is usable just like it is on a Linux system, but you may hit some issues when using it on AIX. For instance you can get this kind of error:

# yum check
AIX-rpm-7.2.0.1-2.ppc has missing requires of rpm
AIX-rpm-7.2.0.1-2.ppc has missing requires of popt
AIX-rpm-7.2.0.1-2.ppc has missing requires of file-libs
AIX-rpm-7.2.0.1-2.ppc has missing requires of nss

If you are not aware of the purpose of AIX-rpm please read this. This rpm is what I call a meta package. It does not install anything. This rpm exists because the rpm database does not know anything about things (binaries, libraries) installed by standard AIX filesets. By default rpms are not "aware" of what is installed by a fileset (bff), but most rpms depend on things installed by filesets. When you install a fileset … let's say it installs a library like libc.a, AIX runs the updtvpkg program to rebuild this AIX-rpm and says "this rpm will resolve any rpm dependency issue for libc.a". So first, never try to uninstall this rpm; second, it's not a real problem if this rpm has missing dependencies … as it is providing nothing. If you really want to see what dependencies AIX-rpm resolves, run the following command:

# rpm -q --provides AIX-rpm-7.2.0.1-2.ppc | grep libc.a
libc.a(aio.o)
# lslpp -w /usr/lib/libc.a
  File                                        Fileset               Type
  ----------------------------------------------------------------------------
  /usr/lib/libc.a                             bos.rte.libc          Symlink

If you want to get rid of these messages just install the missing rpm … using yum:

# yum -y install popt file-libs

A few examples

Here are a few examples of software installation using yum:

  • Installing git:
  • # yum install git
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.ppc 0:4.3.20-4 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================================================================================================================================
     Package                                    Arch                                       Version                                         Repository                                          Size
    ================================================================================================================================================================================================
    Installing:
     git                                        ppc                                        4.3.20-4                                        AIX_Toolbox                                        215 k
    
    Transaction Summary
    ================================================================================================================================================================================================
    Install       1 Package
    
    Total size: 215 k
    Installed size: 889 k
    Is this ok [y/N]: y
    Downloading Packages:
    Running Transaction Check
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
      Installing : git-4.3.20-4.ppc                                                                                                                                                             1/1
    
    Installed:
      git.ppc 0:4.3.20-4
    
    Complete!
    
  • Removing git :
  • # yum remove git
    Setting up Remove Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.ppc 0:4.3.20-4 will be erased
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================================================================================================================================
     Package                                   Arch                                      Version                                           Repository                                          Size
    ================================================================================================================================================================================================
    Removing:
     git                                       ppc                                       4.3.20-4                                          @AIX_Toolbox                                       889 k
    
    Transaction Summary
    ================================================================================================================================================================================================
    Remove        1 Package
    
    Installed size: 889 k
    Is this ok [y/N]: y
    Downloading Packages:
    Running Transaction Check
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
      Erasing    : git-4.3.20-4.ppc                                                                                                                                                             1/1
    
    Removed:
      git.ppc 0:4.3.20-4
    
    Complete!
    
  • List the available repos:
  • yum repolist
    repo id                                                                                repo name                                                                                          status
    AIX_Toolbox                                                                            AIX ToolBox Repository                                                                             233
    repolist: 233
    

Getting rid of nimsh: USE HTTPS !

A new feature now available in the latest version of AIX (7.2) allows you to use nim over http. It is a long awaited feature for different reasons (it's just my opinion). I personally don't like proprietary protocols such as nimsh and secure nimsh … security teams neither. Who has never experienced installation problems because of nimsh ports not opened, because of ids, because of security teams? Using http or https is the solution. No company blocks http or https! This protocol is so widely used, secured and spread in a lot of products that everybody trusts it. I personally prefer opening one single port rather than struggling to open all the nimsh ports. You'll understand that using http is far better than using nimsh. Before explaining this in detail, here are a few things you need to know. nimhttp is only available on the latest version of AIX (7.2 SP0/1/2), same for the nimclient. If there is a problem using http the nimclient will automatically fall back to NFS mode. Only certain nim operations are available over http (see the "Allowed operations" section below).

Configuring the nim server

To use nim over http (nimhttp) your nim server must be deployed on at least an AIX 7.2 server (mine is updated to the latest service pack (SP2)). Start the nimhttp service on the nim server to allow nim to use http for its operations:

# oslevel -s
7200-00-02-1614
# startsrc -s nimhttp
0513-059 The nimhttp Subsystem has been started. Subsystem PID is 11665728.
# lssrc -a | grep nimhttp
 nimhttp                           11665728     active

The nimhttp service listens on port 4901; this port is defined in /etc/services:

# grep nimhttp /etc/services
nimhttp         4901/tcp
nimhttp         4901/udp
# netstat -an | grep 4901
tcp4       0      0  *.4901                 *.*                    LISTEN
# rmsock f1000e0004a483b8 tcpcb
The socket 0xf1000e0004a48008 is being held by proccess 14811568 (nimhttpd).
# ps -ef | grep 14811568
    root 14811568  4456760   0 04:03:22      -  0:02 /usr/sbin/nimhttpd -v

If you want to enable crypto/ssl to encrypt the http authentication, just add -a "-c" to your startsrc command line. This "-c" argument tells nimhttp to start in secure mode and encrypt the authentication:

# startsrc -s nimhttp -a "-c"
0513-059 The nimhttp Subsystem has been started. Subsystem PID is 14811570.
# ps -ef | grep nimhttp
    root 14811570  4456760   0 22:57:51      -  0:00 /usr/sbin/nimhttpd -v -c

Starting the service for the first time creates an httpd.conf file in root's home directory:

# grep ^document_root ~/httpd.conf
document_root=/export/nim/
# grep ^service.log ~/httpd.conf
service.log=/var/adm/ras/nimhttp.log

If you choose to enable the secure authentication, nimhttp will use the pem certificate files used by nim. If you are already using secure nimsh you don't have to run the "nimconfig -c" command. If it is the first time, this command will create the two pem files (root and server in /ssl_nimsh/certs) (check my blog post about secure nimsh for more information about that):

# nimconfig -c
# grep ^ssl. ~/httpd.conf
ssl.cert_authority=/ssl_nimsh/certs/root.pem
ssl.pemfile=/ssl_nimsh/certs/server.pem

The document_root of the http server defines the resources nimhttp will "serve". The default one is /export/nim (the default nim place for all nim resources (spot, mksysb, lpp_source)) and cannot be changed today (I think it is now ok on SP2, I'll update the blog post as soon as the test is done). Unfortunately for me one of my production nim servers was created by someone not very aware of AIX and … resources are not in /export/nim (I had to recreate my own nim server because of that :-( )

On the client side ?

On the client side you have nothing to do. If you're using AIX 7.2 and nimhttp is enabled on the nim server, the client will automatically use http for communication. Just note that if you're using nimhttp in secure mode, you must enable your nimclient in secure mode too:

# nimclient -c
Received 2788 Bytes in 0.0 Seconds
0513-044 The nimsh Subsystem was requested to stop.
0513-077 Subsystem has been changed.
0513-059 The nimsh Subsystem has been started. Subsystem PID is 13500758.
# stopsrc -s nimsh
# startsrc -s nimsh

Changing nimhttp port

You can easily change the port on which nimhttp is listening by modifying the /etc/services file. Here is an example with port 443 (I know it is not a good idea to use this one but it's just for the example):

#nimhttp                4901/tcp
#nimhttp                4901/udp
nimhttp         443/tcp
nimhttp         443/udp
# stopsrc -s nimhttp
# startsrc -s nimhttp -a "-c"
# netstat -Aan | grep 443
f1000e00047fb3b8 tcp4       0      0  *.443                 *.*                   LISTEN
# rmsock f1000e00047fb3b8 tcpcb
The socket 0xf1000e00047fb008 is being held by proccess 14811574 (nimhttpd).

Do the same on the client side: just change the /etc/services file and use your nimclient as usual:

# grep nimhttp /etc/services
#nimhttp                4901/tcp
#nimhttp                4901/udp
nimhttp         443/tcp
nimhttp         443/udp
# nimclient -l

To be sure I'm not using nfs anymore I'm removing all the entries in my /etc/exports file. I know that this will only work for some cases (some types of resources) as nimesis refills the file even if it is empty:

# > /etc/exports
# exportfs -uav
exportfs: 1831-184 unexported /export/nim/bosinst_data/golden-vios-2233-08192014-bosinst_data
exportfs: 1831-184 unexported /export/nim/spot/golden-vios-22422-05072016-spot/usr
exportfs: 1831-184 unexported /export/nim/spot/golden-vios-22410-22012015-spot/usr
exportfs: 1831-184 unexported /export/nim/mksysb
exportfs: 1831-184 unexported /export/nim/hmc
exportfs: 1831-184 unexported /export/nim/lpp_source
[..]

Let’s do this

Let's now try this with a simple example. I'm installing powervp on a machine using a cust operation from the nimclient; on the client I'm doing it just like I always have, running the exact same command as before. Super simple:

# nimclient -o cust -a lpp_source=powervp1100-lpp_source -a filesets=powervp.rte

+-----------------------------------------------------------------------------+
                    Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-installation verification
  and will be installed.

  Selected Filesets
  -----------------
  powervp.rte 1.1.0.0                         # PowerVP for AIX

  << End of Success Section >>

+-----------------------------------------------------------------------------+
                   BUILDDATE Verification ...
+-----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
------------------
    1  Selected to be installed, of which:
        1  Passed pre-installation verification
  ----
    1  Total to be installed

+-----------------------------------------------------------------------------+
                         Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
        powervp.rte 1.1.0.0

0513-071 The syslet Subsystem has been added.
Finished processing all filesets.  (Total time:  4 secs).

+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
powervp.rte                 1.1.0.0         USR         APPLY       SUCCESS
powervp.rte                 1.1.0.0         ROOT        APPLY       SUCCESS

On the server side I'm checking /var/adm/ras/nimhttp.log (the log file for nimhttp) and I can see that the files are transferred from the server to the client using the http protocol. So it works great.

# Thu Jul 21 23:44:19 2016        Request Type is GET
Thu Jul 21 23:44:19 2016        Mime not supported
Thu Jul 21 23:44:19 2016        Sending Response Header "200 OK"
Thu Jul 21 23:44:19 2016        Sending file over socket 6. Expected length is 600
Thu Jul 21 23:44:19 2016        Total length sent is 600
Thu Jul 21 23:44:19 2016        handle_httpGET: Entering cleanup statement
Thu Jul 21 23:44:20 2016        nim_http: queue socket create product (memory *)200739e8
Thu Jul 21 23:44:20 2016        nim_http: 200739e8 6 200947e8 20098138
Thu Jul 21 23:44:20 2016        nim_http: file descriptor is 6
Thu Jul 21 23:44:20 2016        nim_buffer: (resize) buffer size is 0
Thu Jul 21 23:44:20 2016        file descriptor is : 6
Thu Jul 21 23:44:20 2016        family is : 2 (AF_INET)
Thu Jul 21 23:44:20 2016        source address is : 10.14.33.253
Thu Jul 21 23:44:20 2016        socks: Removing socksObject 2ff1ec80
Thu Jul 21 23:44:20 2016        socks: 200739e8 132 <- 87 bytes (SSL)
Thu Jul 21 23:44:20 2016        nim_buffer: (append) len is 87, buffer length is 87
Thu Jul 21 23:44:20 2016        nim_http: data string passed to get_http_request: "GET /export/nim/lpp_source/powervp/powervp.1.1.0.0.bff HTTP/1.1

Let's do the same thing with a fileset coming from a bigger lpp_source (in fact a simages one for the latest release of AIX 7.2):

# nimclient -o cust -a lpp_source=7200-00-02-1614-lpp_source -a filesets=bos.loc.utf.en_KE
[..]

Looking at the nim server I notice that files are transferred from the server to the client, but NOT just my fileset and its dependencies .... the whole lpp_source (seriously? uh? why?)

# tail -f /var/adm/ras/nimhttp.log
Thu Jul 21 23:28:39 2016        Request Type is GET
Thu Jul 21 23:28:39 2016        Mime not supported
Thu Jul 21 23:28:39 2016        Sending Response Header "200 OK"
Thu Jul 21 23:28:39 2016        Sending file over socket 6. Expected length is 4482048
Thu Jul 21 23:28:39 2016        Total length sent is 4482048
Thu Jul 21 23:28:39 2016        handle_httpGET: Entering cleanup statement
Thu Jul 21 23:28:39 2016        nim_http: queue socket create product (memory *)200739e8
Thu Jul 21 23:28:39 2016        nim_http: 200739e8 6 200947e8 20098138
Thu Jul 21 23:28:39 2016        nim_http: file descriptor is 6
Thu Jul 21 23:28:39 2016        nim_buffer: (resize) buffer size is 0
Thu Jul 21 23:28:39 2016        file descriptor is : 6
Thu Jul 21 23:28:39 2016        family is : 2 (AF_INET)
Thu Jul 21 23:28:39 2016        source address is : 10.14.33.253
Thu Jul 21 23:28:39 2016        socks: Removing socksObject 2ff1ec80
Thu Jul 21 23:28:39 2016        socks: 200739e8 132 <- 106 bytes (SSL)
Thu Jul 21 23:28:39 2016        nim_buffer: (append) len is 106, buffer length is 106
Thu Jul 21 23:28:39 2016        nim_http: data string passed to get_http_request: "GET /export/nim/lpp_source/7200-00-02-1614/installp/ppc/X11.fnt.7.2.0.0.I HTTP/1.1

If you take a deeper look at what the nimclient is doing when using nimhttp .... it is just transferring the whole lpp_source from the server to the client and then installing the needed fileset from a local filesystem. Filesets are stored in /tmp so be sure you have a /tmp big enough to store your biggest lpp_source. Maybe this will be changed in the future but it is like it is for the moment :-) . The nimclient creates a temporary directory prefixed with "_nim_dir_" to store the lpp_source:

root@nim_server:/export/nim/lpp_source/7200-00-02-1614/installp/ppc# du -sm .
7179.57 .
root@nim_client:/tmp/_nim_dir_5964094/export/nim/lpp_source/7200-00-02-1614/installp/ppc# du -sm .
7179.74 .

More details ?

You can notice while running a cust operation from the nim client that nimhttp is also running in the background (on the client itself). The truth is that the nimhttp binary running on the client acts as an http client. In the output below this http client is getting the file Java8_64.samples.jnlp.8.0.0.120.U from the server:

# ps -ef |grep nim
    root  3342790 16253432   6 23:29:10  pts/0  0:00 /bin/ksh /usr/lpp/bos.sysmgt/nim/methods/c_installp -afilesets=bos.loc.utf.en_KE -alpp_source=s00va9932137:/export/nim/lpp_source/7200-00-02-1614
    root  6291880 13893926   0 23:29:10  pts/0  0:00 /bin/ksh /usr/lpp/bos.sysmgt/nim/methods/c_script -alocation=s00va9932137:/export/nim/scripts/s00va9954403.script
    root 12190194  3342790  11 23:30:06  pts/0  0:00 /usr/sbin/nimhttp -f /export/nim/lpp_source/7200-00-02-1614/installp/ppc/Java8_64.samples.jnlp.8.0.0.120.U -odest -s
    root 13500758  4325730   0 23:23:29      -  0:00 /usr/sbin/nimsh -s -c
    root 13893926 15991202   0 23:29:10  pts/0  0:00 /bin/ksh -c /var/adm/nim/15991202/nc.1469222947
    root 15991202 16974092   0 23:29:07  pts/0  0:00 nimclient -o cust -a lpp_source=7200-00-02-1614-lpp_source -a filesets=bos.loc.utf.en_KE
    root 16253432  6291880   0 23:29:10  pts/0  0:00 /bin/ksh /tmp/_nim_dir_6291880/script

You can use nimhttp as a client to download files directly from the nim server. Here I'm just listing the content of /export/nim/lpp_source from the client:

# nimhttp -f /export/nim/lpp_source -o dest=/tmp -v
nimhttp: (source)       /export/nim/lpp_source
nimhttp: (dest_dir)     /tmp
nimhttp: (verbose)      debug
nimhttp: (master_ip)    nimserver
nimhttp: (master_port)  4901

sending to master...
size= 59
pull_request= "GET /export/nim/lpp_source HTTP/1.1
Connection: close

"
Writing 1697 bytes of data to /tmp/export/nim/lpp_source/.content
Total size of datalen is 1697. Content_length size is 1697.
# cat /tmp/export/nim/lpp_source/.content
DIR: 71-04-02-1614 0:0 00240755 256
DIR: 7100-03-00-0000 0:0 00240755 256
DIR: 7100-03-01-1341 0:0 00240755 256
DIR: 7100-03-02-1412 0:0 00240755 256
DIR: 7100-03-03-1415 0:0 00240755 256
DIR: 7100-03-04-1441 0:0 00240755 256
DIR: 7100-03-05-1524 0:0 00240755 256
DIR: 7100-04-00-1543 0:0 00240755 256
DIR: 7100-04-01-1543 0:0 00240755 256
DIR: 7200-00-00-0000 0:0 00240755 256
DIR: 7200-00-01-1543 0:0 00240755 256
DIR: 7200-00-02-1614 0:0 00240755 256
FILE: MH01609.iso 0:0 00100644 1520027648
FILE: aixtools.python.2.7.11.4.I 0:0 00100644 50140160

Here I'm just downloading a python fileset !

# nimhttp -f /export/nim/lpp_source/aixtools.python.2.7.11.4.I -o dest=/tmp -v
[..]
Writing 65536 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
Writing 69344 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
Writing 7776 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
Total size of datalen is 50140160. Content_length size is 50140160.
# ls -l /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
-rw-r--r--    1 root     system     50140160 Jul 23 01:21 /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I

Allowed operations

All cust operations on nim objects of type lpp_source, installp_bundle, fix_bundle, script and file_res, in push or pull mode, work great with nimhttp. Here are a few examples (from the official doc, thanks to Paul F for that ;-) ):

  • Push:
  • # nim -o cust -a file_res=obj_name client_obj_name
    # nim -o cust -a script=obj_name client_obj_name
    # nim -o cust -a lpp_source=obj_name -a filesets=fileset names to install client_obj_name
    # nim -o cust -a lpp_source=obj_name -a installp_bundle=obj_name client_obj_name
    # nim -o cust -a lpp_source=obj_name -a fixes=update_all client_obj_name
    
  • Pull:
  • # nimclient -o cust -a lpp_source=obj_name -a filesets=fileset names to install
    # nimclient -o cust -a file_res=obj_name
    # nimclient -o cust -a script=obj_name
    # nimclient -o cust -a lpp_source=obj_name -a installp_bundle=obj_name
    # nimclient -o cust -a lpp_source=obj_name -a fixes=update
    

Proxying: use your own http server

You can use your own webserver to host the content, and the nimhttp binary will just act as a proxy between your client and your http server. I tried to do it but didn't succeed; I'll let you know if I find the solution:

# grep proxy ~/httpd.conf
service.proxy_port=80
enable_proxy=yes

Conclusion: "about administration and post-installation"

Just a few words about best practices of post-installation and administration on AIX. One of the major purposes of this blog post is to prove to you that you need to get rid of an old way of working. The first thing to do is to always try using http or https instead of NFS. To give you an example, I'm always using http to transfer my files, whatever they are (configuration, product installation and so on ...). With an automation tool such as Chef it is so simple to integrate the download of a file from an http server that you must now avoid using NFS ;-) . The second good practice is to never install things "by hand"; using yum is one of the reflexes you need to have instead of using the rpm command (Linux users will laugh reading that ... I'm laughing writing it, using yum is something I've been doing for more than 10 years ... but for AIX admins it's still not the case and not so simple to understand :-) ). As always I hope it helps.

About blogging

I just wanted to say one word about blogging because I get a lot of questions about it (from friends, readers, managers, haters, lovers). I'm doing this for two reasons. The first one is that writing and explaining things forces me to better understand what I'm doing and forces me to always discover new features, new bugs, new everything. Second, I'm doing this for you, my readers, because I remember how useful blogs were to me when I began AIX (Chris and Nigel are the best examples of that). I don't care about being the best or the worst. I'm just me. I'm doing this because I love it, that's all. Even if managers, recruiters or anybody else don't care about it I'll continue to do this whatever happens. I agree with them: "It does not prove anything at all". I'm just like you, a standard admin trying to do his job at his best. Sorry for the two months "break" from blogging but it was really crazy at work and in my life. Take care all. Haters gonna hate.