vmware.vmware_rest (https://galaxy.ansible.com/vmware/vmware_rest) is a new Ansible Collection for VMware. You can use it to manage the guests of your vCenter. If you're familiar with Ansible and VMware, you will notice this Collection overlaps with some features of community.vmware. You may think the two collections are competing and that this is a waste of resources. It's not that simple.
A bit of context is necessary to fully understand why that's not exactly the case. The development of the community.vmware collection started during the vCenter 6.0 cycle. At that time, the de facto SDK to build Python applications was pyvmomi, which you may know as the vSphere SDK for Python. This Python library relies on the SOAP interface that has been around for more than a decade. By comparison, the vSphere REST interface was still a novelty. Support for some important APIs was missing and documentation was limited.
Today, the situation has evolved. Pyvmomi is not actively maintained anymore and some new services are only exposed on the REST interface; a good example is the tagging API. VMware has also introduced a new Python SDK called vSphere Automation SDK (https://github.com/vmware/vsphere-automation-sdk-python) to consume this new API. For instance, this is what the community.vmware.vmware_tag_info module uses underneath.
This new SDK comes at a cost for the users. They need to pull an extra Python dependency in addition to pyvmomi, and to make the situation worse, this library is not on PyPI (see: https://github.com/vmware/vsphere-automation-sdk-python/issues/38), Python's official package repository. They need to install it from GitHub instead. This is a source of confusion for our users.
From a development perspective, we don't like it when a module needs to load a gazillion Python dependencies, because this slows down the execution and is a source of complexity. But we cannot ditch pyvmomi immediately, because a lot of modules rely on it. We could potentially rewrite these modules to use the vSphere Automation SDK.
These modules are already stable and of high quality. Many users depend on them. Modifying these modules to use the vSphere Automation SDK is risky. Any single regression would have wide impact.
Our users would be frustrated by such a transition, especially because it would bring absolutely zero new features to them. It also means we would have to reproduce the exact same behaviour, missing an opportunity to improve the modules.
Technically speaking, an application that consumes a REST interface doesn't really need an SDK. An SDK can be handy sometimes, for instance for the authentication, but overall a standard HTTP client should be enough. After all, REST interfaces are supposed to be simple to consume.
The vSphere REST API is not always consistent, but it's well documented. VMware maintains a tool called vmware-openapi-generator (https://github.com/vmware/vmware-openapi-generator) to extract its description in a machine-readable format (Swagger 2.0 or OpenAPI 3).
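As an illustration, a session can be opened and the guest inventory listed with nothing more than curl and jq; the sketch below assumes a vSphere 6.5+ style endpoint, and the host name and credentials are placeholders:
# open an API session; the returned token is reused for the subsequent calls
TOKEN=$(curl -sk -u 'administrator@vsphere.local:secret' -X POST https://vcenter.example.com/rest/com/vmware/cis/session | jq -r .value)
# list the virtual machines known to this vCenter
curl -sk -H "vmware-api-session-id: ${TOKEN}" https://vcenter.example.com/rest/vcenter/vm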
During our quest for a solution to this Python dependency problem, we designed a Proof of Concept (PoC). It was based on a set of modules with no dependency on any third-party library. And of course, these modules were auto-generated. We mentioned the conclusion of the PoC back in March during the VMware / Ansible community weekly meeting (https://github.com/ansible/community/issues/423).
The feedback convinced us we were on the right path. And here we are, 5 months later: the first beta release of vmware.vmware_rest has just been announced on Ansible's blog!
Since January 2020, every new Ansible VMware pull request is tested by the CI against a real VMware lab. Creating the CI environment against a real VMware lab has been a long journey, which I'll share in this blog post.
Ansible VMware provides more than 170 modules, each of them dedicated to a specific area. You can use them to manage your ESXi hosts, the vCenters, the vSAN, do the common day-to-day guest management, etc.
Our modules are maintained by a community of contributors, and a large number of the Pull Requests (PR) are contributions from newcomers. The classic scenario is a user who’s found a problem or a limitation with a module, and would like to address it.
This is the reason why our contributors are not necessarily developers. The average contributor doesn't necessarily have advanced Python experience, so we can hardly ask them to write Python unit-tests. Requiring this level of work creates a barrier to contribution; it would be a source of confusion and frustration, and we would lose a lot of valuable contributions. However, they are power users. They have a great understanding of VMware and Ansible, and so we maintain a test playbook for most of the modules.
Previously, when a new change was submitted, the CI was running the light Ansible sanity test-suite and an integration test against govcsim, a VMware API simulator (https://github.com/vmware/govmomi/tree/master/vcsim).
govcsim is a handy piece of software; you can start it locally to mock a vSphere infrastructure. But it doesn't fully support some important VMware components like network devices or datastores. As a consequence, the core reviewers were asked to download the changeset locally, and run the functional tests against their own vSphere lab.
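For reference, vcsim is easy to run locally; a minimal sketch, assuming a recent Go toolchain, with an arbitrary listen address:
go install github.com/vmware/govmomi/vcsim@latest
~/go/bin/vcsim -l 127.0.0.1:8989
# the test playbooks then target https://127.0.0.1:8989/sdk with certificate validation disabled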
In our context, a vSphere lab is:
– a vCenter instance
– 2 ESXi hosts
– 2 NFS datastores, with some pre-existing files.
We also had challenges with our test environments. Functional tests destroy or create network switches, enable IPv6, add new datastores, and rarely if ever restore the system to its initial configuration once complete. This left the labs in disarray, and the mess compounded with each series of tests. Consequently, the reviews were slow, and we were wasting days fixing our infrastructures. Since the tests were not reproducible and were run locally, it was hard to distinguish set-up errors from actual issues, and therefore hard to provide meaningful feedback to contributors: was this error coming from my set-up? We had to manually copy/paste the error back to the contributor, sometimes several days after the initial commit.
This was a frustrating situation for us, and for the contributors. But well, we’ve spent years doing that…
You may think we like to suffer, which is probably true to some extent, but the real problem is that it's rather complex to automate the full deployment of a lab. vCenter ships as an appliance VM in the OVA format, and it has to be deployed on an ESXi host. Officially, ESXi can't be virtualized, unless it runs on an ESXi host itself. In addition, we use Evaluation licenses; as a consequence, we cannot rely on features like snapshotting, and we have to redeploy our lab every 60 days.
We can do better! Some others did!
The Ansible network modules were facing similar challenges. Network devices are required to fully validate a change, but it's costly to stack and keep hundreds of devices in operation just for validation.
They've decided to invest in OpenStack and a CI solution called Zuul-CI (https://zuul-ci.org/). I don't want to elaborate too much on Zuul since the topic itself is worth a book. But basically, every time a change gets pushed, Zuul spawns a multi-node test environment, prepares the test execution using… Ansible, yeah! And finally, it runs the tests and collects the results. This environment makes use of appliances coming from the vendors; an appliance is basically just a VM. OpenStack is pretty flexible for this use-case, especially when you've got top-notch support from the providers.
Let’s build some VMware Cloud images!
To run a VM in a cloud environment, it has to match the following requirements:
use a single disk image, a qcow2 in the case of OpenStack
support the hardware exposed by the hypervisor, qemu-kvm in our case
configure itself according to the metadata exposed by the cloud provider (IP, SSH keys, etc). This service is handled by Cloud-init most of the time; a minimal local check is sketched below.
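For instance, you can check locally that an image honours this contract by feeding it a NoCloud seed; a minimal sketch, where the file names and the SSH key are placeholders:
cat > meta-data <<'EOF'
instance-id: test-01
local-hostname: test-01
EOF
cat > user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@laptop
EOF
# cloud-localds ships with the cloud-image-utils package
cloud-localds seed.iso user-data meta-data
The resulting seed.iso just needs to be attached to the VM as a CD-ROM; Cloud-init picks it up at boot time.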
ESXi cloud image
For ESXi, the first step was to deploy ESXi on libvirt/qemu-kvm. This works fine as long as we avoid virtio. And with a bit more effort, we can automate the process ( https://github.com/virt-lightning/esxi-cloud-images ). But our VM is not yet self-configuring. We need an alternative to Cloud-init. This is what esxi-cloud-init ( https://github.com/goneri/esxi-cloud-init/ ) does for us. It reads the cloud metadata, prepares the network configuration of the ESXi host, and also injects the SSH keys.
The image build process is rather simple once you’ve got libvirt and virt-install on your machine:
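The real logic lives in the esxi-cloud-images repository; the lines below are only a hedged outline, and the build script name and ISO path are assumptions:
git clone https://github.com/virt-lightning/esxi-cloud-images
cd esxi-cloud-images
# drive an unattended ESXi installation inside a local qemu-kvm guest,
# then package the resulting disk as a qcow2 cloud image
./build.sh ~/iso/VMware-VMvisor-Installer-7.0.iso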
We also wanted to deploy vCenter on our instances, but this is daunting: vCenter has a slow installation process, it requires an ESXi host, and it is extremely sensitive to any form of network configuration change…
So the initial strategy was to spawn an ESXi instance and deploy vCenter on it. This is handled by ansible-role-vcenter-instance ( https://github.com/goneri/ansible-role-vcenter-instance ). The full process takes about 25 minutes.
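If you want to reproduce it, the general shape is the following; the playbook and inventory names are assumptions, and the role's README documents the real variables:
# install the role straight from GitHub
ansible-galaxy install git+https://github.com/goneri/ansible-role-vcenter-instance
# then apply it against the freshly booted ESXi instance
ansible-playbook -i inventory.yaml deploy-vcenter.yaml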
We became operational, but the deployment process overwhelmed our lab. Additionally, the ESXi instance (16GB of RAM) was too heavy to run on a laptop. I started investigating new options.
Technically speaking, the vCenter Server Appliance, or VCSA, is based on Photon OS, VMware's own Linux distribution, and the VM actually comes with 15 large disks. This is a bit problematic since our final cloud image must be a single disk and as small as possible. I came up with the following strategy:
connect to the running VCSA, move all the content from the extra partitions to the main partition, and drop the extra disks from /etc/fstab
do some extra things regarding the network and Cloud-init configuration.
stop the server
extract the raw disk image from the ESXi datastore
convert it to the qcow2 format
and voilà! You’ve got a nice cloud image of your vCenter.
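The extraction and conversion steps above boil down to something like this, assuming SSH is enabled on the ESXi host; the datastore path and file names are placeholders:
# the *-flat.vmdk file on the datastore holds the raw disk content
scp root@esxi.example.com:/vmfs/volumes/datastore1/vcsa/vcsa-flat.vmdk .
# convert it to a compressed qcow2 image
qemu-img convert -f raw -O qcow2 -c vcsa-flat.vmdk vcsa.qcow2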
To simplify the deployment, I use the following tool: https://github.com/goneri/deploy-vmware-ci. It will use virt-lightning to spawn the nodes, and do the post-configuration with Ansible. It reuses the roles that we consume in the CI to configure the vCenter, the host names, and populate the datastore.
In this example, I use it to start my ESXi environment on my Lenovo T580 laptop; the full run takes 15 minutes: https://asciinema.org/a/349246
Being able to redeploy a work environment in 15 minutes has been a life changer. I often recreate it several times a day. In addition, the local deployment workflow reproduces what we do in the CI, which is handy to validate a changeset or troubleshoot a problem.
The CI integration
Each Ansible module is different, which makes for different test requirements. We've got four topologies:
vcenter_only: one single vCenter instance
vcenter_1esxi_with_nested: one vCenter with an ESXi host; this ESXi is capable of starting a nested VM.
vcenter_1_esxi_without_nested: the same, but this time we don't start nested VMs. Compared to the previous case, this set-up is compatible with all our providers.
vcenter_2_esxi_without_nested: well, like the previous one, but with a second ESXi host, for instance to test HA or migration.
We split the hours-long test execution time across the different environments. Here is an example of a job result:
As you can see, we still run govcsim in the CI, even though it's superseded by a real test environment. Since the govcsim jobs run faster, we assume that a failure there would also show up against the real lab, and we abort the other jobs. This is a way to save time and resources.
I would like to thank Chuck Copello for the helpful review of this blog post.
This post compares the start-up duration of the most popular Cloud images. By start-up, I mean the time until we've got an operational SSH server.
For this test, I use a pet project called Virt-Lightning ( https://virt-lightning.org/ ). This tool allows any Linux user to start standard Cloud images locally. It prepares the meta-data and starts a VM in your local libvirt. It's very handy for people like me, who work on Linux and spend the day starting new VMs. The images are in the QCow2 format, and it uses the OpenStack meta-data format. Technically speaking, the performance should match what you get with OpenStack.
Actually, OpenStack is often slightly slower because it does some extra operations. It may need to create a volume on Ceph, or prepare extra network configuration.
The 2.0.0 release of Virt-Lightning exposes a public API. My test scenario is built on top of that. It uses Python to pull the different images and create a VM from each of them 10 times in a row.
All the images are public; Virt-Lightning can fetch them with the vl pull foo command:
vl pull centos-6
During the boot process, the VM sets up a static network configuration, resizes the filesystem, creates a user, and injects an SSH key.
By default, Virt-Lightning uses a static network configuration because it's faster, and it gives better performance when we start a large number of VMs at the same time. I chose to stick with this.
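For reference, a naive shell equivalent of the measurement; it assumes vl up only returns once the VM answers over SSH, which is one reason the real scenario goes through the Python API instead:
for i in $(seq 10); do
    /usr/bin/time -f "run $i: %e seconds" vl up
    vl down
done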
I did my tests on my Lenovo T580, which comes with NVMe storage and 32GB of memory. I would be curious to see the results for the same scenario on a regular spinning disk.
The target images
For this test, I compare the following Linux distributions: CentOS, Debian, Fedora, Ubuntu and OpenSUSE. As far as I know, there is no public Cloud image available for the other common distributions. If you think I’m wrong, please post a comment below.
I also included the latest FreeBSD, NetBSD and OpenBSD releases. These projects don't provide official Cloud Images, which is why I reuse the unofficial ones from https://bsd-cloud-image.org/.
The lack of a pre-existing Windows image is the reason why this OS is not included.
Results
Debian 10 is by far the fastest image with an impressive 15s on average. Basically, 5s less than any other Cloud Image.
Regarding the BSDs, FreeBSD is the only system able to resize the root filesystem without a reboot. Consequently, OpenBSD and NetBSD need to boot twice in a row, which explains the big difference. The NetBSD kernel hardware probe is rather slow; for instance, it takes 5s to initialize the ATA bus of the CDROM. This is the reason why its results look rather bad.
About Ubuntu, I was surprised by the boot duration of Ubuntu 18.04. It is about two times longer than for 16.04. 20.04 is a bit better, but we are still far from the 15s of 14.04. I would be curious to know the origin of this. Maybe AppArmor?
CentOS 6 results are not really consistent. They vary between 17.9s and 25.21s. This is the largest delta compared with the other distributions. That being said, CentOS 6 is rather old and won't be supported anymore at the end of the year.
Conclusions
All the recent Linux images are based on systemd. It would be great to extract the metrics from systemd-analyze to understand what impacts the performance the most.
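For instance, once logged into a freshly booted guest, the standard systemd commands already give a good first picture:
systemd-analyze                  # kernel + initrd + userspace durations
systemd-analyze blame            # per-unit start-up time, slowest first
systemd-analyze critical-chain   # the chain of units that gates the boot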
Most of the time, when I deploy a test VM, the very first thing I do is install some important packages. This scenario may be covered later in another blog post.
I'm a big fan of BURP for maintaining my backups. This article explains how to reuse the PuppetMaster CA for authentication. I use the Debian burp package on Wheezy.
First, you need to generate the dhfile.pem on both the server and the agent:
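A typical way to do it with openssl; the 2048-bit size and the destination path are my assumptions, adjust them to match your burp configuration:
openssl dhparam -out /etc/burp/dhfile.pem 2048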
I bought the Antidote 8 spell checker and installed it yesterday. The tool is really impressive and pleasant to use.
Installation on Debian Sid is not supported, but the software can still be used. I still need to see whether I can integrate it with Firefox (Iceweasel) and Thunderbird (Icedove).
To avoid a problem with kernels >= 3, a small workaround is needed, described here: http://www.debian-fr.org/certains-logiciels-dysfonctionnent-en-changeant-de-noyau-t42688.html
# wget https://mail.gnome.org/archives/evolution-list/2003-December/txtBEWSVk2eft.txt -O /tmp/uname.c
$ (echo '#define _GNU_SOURCE'; cat /tmp/uname.c) > /tmp/fake-uname.c
$ gcc -shared -fPIC -ldl /tmp/fake-uname.c -o /opt/Druide/Antidote8/Programmes64/fake-uname.so
All that is left is to add the following two lines at the beginning of the /opt/Druide/Antidote8/Programmes64/Antidote8 script:
export LD_PRELOAD=/opt/Druide/Antidote8/Programmes64/fake-uname.so
export RELEASE=$(uname -r | sed 's/^\(...\)/\1.0-antidote-fix/g')