Tag: vmware

  • Ansible: How we prepare the vSphere instances of the VMware CI

    As explained briefly in CI of the Ansible modules for VMware: a retrospective, the Ansible CI uses OpenStack to spawn ephemeral vSphere labs. Our CI tests are run against them.

    A full vSphere deployment is a long process that requires quite a lot of resources. In addition to that, vSphere is rather picky regarding its execution environment.

    The CI of the VMware modules for Ansible runs on OpenStack. Our OpenStack providers use KVM-based hypervisors and expect images in the qcow2 format.

    In this blog post, we will explain how we prepare a cloud image of vSphere (also called a golden image).

    a full lab running on libvirt

    First thing: get a large ESXi instance

    The vSphere (VCSA) installation process depends on an ESXi host. In our case we use a script and Virt-Lightning to prepare and run an ESXi image on libvirt (see the sketch after the list below). But you can use your own ESXi node as long as it meets the following minimal constraints:

    • 12GB of memory
    • 50GB of disk space
    • 2 vCPUs
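
    For reference, this is roughly how such a node can be described for Virt-Lightning. The entry below is a hypothetical virt-lightning.yaml snippet: the key names are illustrative, check the Virt-Lightning documentation for the exact format.

    - name: esxi-1
      distro: esxi-7.0.0        # hypothetical image name, use the ESXi image you built
      memory: 12288             # 12GB of memory
      vcpus: 2
      root_disk_size: 50        # 50GB of disk space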

    Deploy the vSphere VM (VCSA)

    For this, I use my own role called goneri.ansible-role-vcenter-instance. It delegates the deployment to the vcsa-deploy command. As a result, you don’t need any human interaction during the full process. This is handy if you want to deploy your vSphere in a CI environment.

    At the end of the process, you’ve got a large VM running on your ESXi node.

    In my case, all these steps are handled by the following playbook: https://github.com/virt-lightning/vcsa_to_qcow2/blob/master/install_vcsa.yml
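
    As a rough sketch, a playbook consuming the role looks like the one below. The variable names are hypothetical; the real ones are documented in the role and used in install_vcsa.yml.

    - hosts: localhost
      gather_facts: false
      roles:
        - role: goneri.ansible-role-vcenter-instance
          vars:
            # hypothetical variable names, check the role documentation
            esxi_hostname: 192.168.123.5
            esxi_username: root
            esxi_password: '!234AaAa56'
            vcenter_hostname: vcenter.test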

    Tune up the instance

    Before you shut down the freshly created VM, you will want to make some adjustments.
    I use the following playbook for this: prepare_vm.yml

    During this step, I ensure that:

    • Cloud-Init is installed,
    • the root account is enabled with a real shell,
    • the virtio drivers are available

    Cloud-Init is the de facto tool that handles all the post-configuration tasks we can expect from a cloud image: injecting the user’s SSH key, resizing the filesystem, creating a user account, etc.
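
    The corresponding tasks are rather simple. Here is a minimal sketch of what prepare_vm.yml does for the first two points above; the task details are illustrative, the real playbook is linked above.

    - name: Install Cloud-Init
      ansible.builtin.package:
        name: cloud-init
        state: present

    - name: Give the root account a real shell
      ansible.builtin.user:
        name: root
        shell: /bin/bash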

    By default, the vSphere VCSA comes with a gazillion disks, which is a problem in a cloud environment where an instance is associated with a single disk image.
    So I also move the content of the different partitions onto the root filesystem and adjust /etc/fstab to remove all the references to the other disks. This way I only have to maintain one qcow2 image.
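
    A minimal sketch of the /etc/fstab clean-up, assuming the extra disks show up as LVM volumes under /dev/mapper and the root filesystem itself is not one of them (the pattern is illustrative, the real logic lives in prepare_vm.yml):

    - name: Drop the fstab entries that point at the extra disks
      ansible.builtin.lineinfile:
        path: /etc/fstab
        regexp: '^/dev/mapper/'
        state: absent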

    All these steps are handled by the following playbook: prepare_vm.yml

    Prepare the final Qcow2 image

    At this stage, the VM is still running, so I shut it down.
    Once this is done, I extract the raw image of the disk using the curl command:

    curl -v -k --user 'root:!234AaAa56' -o vCenterServerAppliance.raw 'https://192.168.123.5/folder/vCenter-Server-Appliance/vCenter-Server-Appliance-flat.vmdk?dcPath=ha%252ddatacenter&dsName=local'
    • root:!234AaAa56 is my login and password
    • vCenterServerAppliance.raw is the name of the local file
    • 192.168.123.5 is the IP address of my ESXi
    • vCenter-Server-Appliance is the name of the vSphere instance
    • vCenter-Server-Appliance-flat.vmdk is the associated raw disk

    The local .raw file is large (50GB); ensure you’ve got enough free space.
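
    If you prefer to stay in Ansible, the ansible.builtin.get_url module can perform the same download; a minimal equivalent of the curl call above:

    - name: Download the flat VMDK from the ESXi datastore
      ansible.builtin.get_url:
        url: "https://192.168.123.5/folder/vCenter-Server-Appliance/vCenter-Server-Appliance-flat.vmdk?dcPath=ha%252ddatacenter&dsName=local"
        dest: ./vCenterServerAppliance.raw
        url_username: root
        url_password: '!234AaAa56'
        force_basic_auth: true
        validate_certs: false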

    You can finally convert the raw file to a qcow2 file. You can use QEMU’s qemu-img for that; it will work fine, but the image will be monstrously large. I instead use virt-sparsify from the libguestfs project. This command will reduce the size of the image to the bare minimum.

    virt-sparsify --tmp tmp --compress --convert qcow2 vCenterServerAppliance.raw vSphere.qcow2

    Conclusion

    You can upload the image to your OpenStack project with the following command:

    openstack image create --disk-format qcow2 --file vSphere.qcow2 --property hw_qemu_guest_agent=no vSphere

    If your OpenStack provider uses Ceph, you will probably want to reconvert the image to a flat raw file before the upload. With vSphere 6.7U3 and before, you need to force the use of an e1000 NIC. For that, add --property hw_vif_model=e1000 to the command above.
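
    If you drive your OpenStack project with Ansible, the same upload can be expressed with the openstack.cloud.image module. A minimal sketch, assuming the openstack.cloud collection is installed and a clouds.yaml is configured:

    - name: Upload the vSphere image, forcing an e1000 NIC (vSphere 6.7U3 and before)
      openstack.cloud.image:
        name: vSphere
        filename: vSphere.qcow2
        disk_format: qcow2
        properties:
          hw_qemu_guest_agent: "no"
          hw_vif_model: e1000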

    I’ve just run the whole process with vSphere 7.0.0U1 in 1h30 (on a Lenovo T580 laptop). I use the ./run.sh script from https://github.com/virt-lightning/vcsa_to_qcow2, which automates everything.

    The final result is certainly not supported by VMware, but we’ve already run hundreds of successful CI jobs with this kind of vSphere instance. The CI prepares a fresh lab in around 10 minutes.

  • vmware_rest: why a new Ansible Collection?

    vmware.vmware_rest (https://galaxy.ansible.com/vmware/vmware_rest) is a new Ansible Collection for VMware. You can use it to manage the guests of your vCenter. If you’re familiar with Ansible and VMware, you will notice this Collection overlaps with some features of community.vmware. You may think the two collections are competing and that this is a waste of resources. It’s not that simple.

    A bit of context is necessary to fully understand why it’s not exactly the case. The development of the community.vmware collection started during the vCenter 6.0 cycle. At that time, the de facto SDK to build Python applications was pyvmomi, which you may know as the vSphere SDK for Python. This Python library relies on the SOAP interface that has been around for more than a decade. By comparison, the vSphere REST interface was still a novelty. Support for some important APIs was missing and the documentation was limited.

    Today, the situation has evolved. Pyvmomi is not actively maintained anymore and some new services are only exposed on the REST interface; a good example is the tagging API. VMware has also introduced a new Python SDK called vSphere Automation SDK (https://github.com/vmware/vsphere-automation-sdk-python) to consume this new API. For instance, this is what community.vmware.vmware_tag_info uses underneath.

    This new SDK comes at a cost for the users. They need to pull an extra Python dependency in addition to pyvmomi and, to make the situation worse, this library is not on PyPI, Python’s official package repository (see: https://github.com/vmware/vsphere-automation-sdk-python/issues/38). They need to install it from GitHub instead. This is a source of confusion for our users.

    From a development perspective, we don’t like it when a module needs to load a gazillion Python dependencies, because this slows down the execution time and it’s a source of complexity. But we cannot ditch pyvmomi immediately because a lot of modules rely on it. We could potentially rewrite these modules to use the vSphere Automation SDK.

    However, these modules are already stable and of high quality, and many users depend on them. Modifying them to use the vSphere Automation SDK is risky: any single regression would have a wide impact.

    Our users would be frustrated by such a transition, especially because it would bring absolutely zero new features to them. It also means we would have to reproduce the exact same behaviour, and we would miss an opportunity to improve the modules.

    Technically speaking, an application that consumes a REST interface doesn’t really need an SDK. An SDK can be handy sometimes, for instance for the authentication, but overall a standard HTTP client should be enough; a REST API is meant to be simple to consume.
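
    For instance, opening a session against the vCenter REST endpoint only takes a plain HTTP call. Here is a minimal sketch with ansible.builtin.uri; the host and credential variables are placeholders:

    - name: Open a vSphere REST API session without any SDK
      ansible.builtin.uri:
        url: "https://{{ vcenter_hostname }}/rest/com/vmware/cis/session"
        method: POST
        user: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        force_basic_auth: true
        validate_certs: false
      register: auth
      # the returned token is then passed to the next calls
      # through the vmware-api-session-id HTTP header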

    The vSphere REST API is not always consistent, but it’s well documented. VMware maintains a tool called vmware-openapi-generator (https://github.com/vmware/vmware-openapi-generator) to extract it in a machine-readable format (Swagger 2.0 or OpenAPI 3).

    During our quest for a solution to this Python dependency problem, we designed a Proof of Concept (PoC). It was based on a set of modules with no dependency on any third-party library. And of course, these modules were auto-generated. We presented the conclusion of the PoC back in March during the VMware / Ansible community weekly meeting (https://github.com/ansible/community/issues/423).

    The feedback convinced us we were on the right path. And here we are, 5 months later: the first beta release of vmware.vmware_rest has just been announced on Ansible’s blog!

  • CI of the Ansible modules for VMware: a retrospective

    Simple VMware Provisioning, Management and Deprovisioning

    Since January 2020, every new Ansible VMware pull request is tested by the CI against a real VMware lab. Building this CI environment around a real VMware lab has been a long journey, which I’ll share in this blog post.

    Ansible VMware provides more than 170 modules, each of them dedicated to a specific area. You can use them to manage your ESXi hosts, the vCenters, the vSAN, do the common day-to-day guest management, etc.

    Our modules are maintained by a community of contributors, and a large number of the Pull Requests (PR) are contributions from newcomers. The classic scenario is a user who’s found a problem or a limitation with a module, and would like to address it.

    This is the reason why our contributors are not necessarily developers. The average contributor doesn’t necessarily have advanced Python experience, so we can hardly ask them to write Python unit tests. Requiring this level of work would create a barrier to contribution, be a source of confusion and frustration, and cost us a lot of valuable contributions. However, our contributors are power users: they have a great understanding of VMware and Ansible, and so we maintain a test playbook for most of the modules.
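
    For instance, a functional test for a module is just a regular playbook task that exercises it against the lab, along these lines (the datacenter and guest names below are illustrative):

    - name: Collect information about a test guest
      community.vmware.vmware_guest_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: false
        datacenter: DC0
        name: test_vm_1
      register: guest_info

    - name: Validate the result
      ansible.builtin.assert:
        that:
          - guest_info.instance.hw_name == 'test_vm_1'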

    Previously, when a new change was submitted, the CI was running the lightweight Ansible sanity test suite and an integration test against govcsim, a VMware API simulator (https://github.com/vmware/govmomi/tree/master/vcsim).

    govcsim is a handy piece of software; you can start it locally to mock a vSphere infrastructure. But it doesn’t fully support some important VMware components like network devices or datastores. As a consequence, the core reviewers were asked to download the changeset locally and run the functional tests against their own vSphere lab.

    In our context, a vSphere lab is:

    – a vCenter instance

    – 2 ESXi hosts

    – 2 NFS datastores, with some pre-existing files.

    Our test environment was also a challenge in itself. Functional tests destroy or create network switches, enable IPv6, add new datastores, and rarely, if ever, restore the system to its initial configuration once complete. This left the labs in disarray, and the mess compounded with each series of tests. Consequently, the reviews were slow, and we were wasting days fixing our infrastructure. Since the tests were run locally and were not reproducible, it was hard to distinguish set-up errors from actual issues, and therefore hard to provide meaningful feedback to contributors: is this error coming from my set-up? I had to manually copy/paste the error back to the contributor, sometimes several days after the initial commit.

    This was a frustrating situation for us, and for the contributors. But well, we’ve spent years doing that…

    You may think we like to suffer, which is probably true to some extent, but the real problem is that it’s rather complex to automate the full deployment of a lab. vSphere is an appliance VM distributed in the OVA format. It has to be deployed on an ESXi host. Officially, ESXi can’t be virtualized, unless it runs on an ESXi host itself. In addition, we use evaluation licenses, and as a consequence, we cannot rely on features like snapshotting, and we have to redeploy our lab every 60 days.

    We can do better! Some others did!

    The Ansible network modules were facing similar challenges. Network devices are required to fully validate a change, but it’s costly to stack and keep hundreds of devices in operation just for validation.

    They decided to invest in OpenStack and a CI solution called Zuul-CI (https://zuul-ci.org/). I don’t want to elaborate too much on Zuul since the topic itself is worth a book. But basically, every time a change gets pushed, Zuul spawns a multi-node test environment, prepares the test execution using… Ansible, yeah! And finally, it runs the tests and collects the results. This environment makes use of appliances coming from the vendors; each appliance is basically just a VM. OpenStack is pretty flexible for this use-case, especially when you’ve got top-notch support from the providers.

    Let’s build some VMware Cloud images!

    To run in a cloud environment, a VM has to meet the following requirements:

    • use one single disk image, a qcow2 in the case of OpenStack,
    • support the hardware exposed by the hypervisor, qemu-kvm in our case,
    • configure itself according to the metadata exposed by the cloud provider (IP, SSH keys, etc). This service is handled by Cloud-init most of the time (see the example below).
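
    For the last point, the instance typically receives a user-data document from the provider at boot time. An illustrative #cloud-config snippet; the user name and key are placeholders:

    #cloud-config
    users:
      - name: zuul
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... zuul@ci   # placeholder key
    growpart:
      mode: auto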

    ESXi cloud image

    For ESXi, the first step was to deploy ESXi on libvirt/qemu-kvm. This works fine as long as we avoid virtio. And with a bit more effort, we can automate the process (https://github.com/virt-lightning/esxi-cloud-images). But our VM is not yet self-configuring; we need an alternative to Cloud-init. This is what esxi-cloud-init (https://github.com/goneri/esxi-cloud-init/) does for us: it reads the cloud metadata, prepares the network configuration of the ESXi host, and injects the SSH keys.

    The image build process is rather simple once you’ve got libvirt and virt-install on your machine:

    $ git clone https://github.com/virt-lightning/esxi-cloud-images

    $ cd esxi-cloud-images

    $ ./build.sh ~/Downloads/VMware-VMvisor-Installer-7.0.0-15525992.x86_64.iso

    (…)

    $ ls esxi-6.7.0-20190802001-STANDARD.qcow2

    The image can run on OpenStack, but also on libvirt. Virt-Lightning (https://virt-lightning.org/) is the tool we use to spawn our environment locally.

    vCenter cloud image too?

    Update: see Ansible: How we prepare the vSphere instances of the VMware CI for a more detailed explanation of the VCSA deployment process.

    We wanted to deploy vCenter on our instance, but this is daunting. vCenter has a slow installation process, it requires an ESXi host, and it is extremely sensitive to any form of network configuration change…

    So the initial strategy was to spawn an ESXi instance and deploy vCenter on it. This is handled by ansible-role-vcenter-instance (https://github.com/goneri/ansible-role-vcenter-instance). The full process takes about 25 minutes.

    We became operational, but the deployment process overwhelmed our lab. Additionally, the ESXi instance (16GB of RAM) was too demanding to run on a laptop. I started investigating new options.

    Technically speaking, the vCenter Server Appliance, or VCSA, is based on Photon OS, VMware’s Linux distribution, and the VM actually comes with 15 large disks. This is a bit problematic since our final cloud image must be a single disk and be as small as possible. I came up with the following strategy:

    1. connect to the running VCSA, move all the content from the extra partitions to the main partition, and drop the extra disks from /etc/fstab
    2. do some extra things regarding the network and Cloud-init configuration.
    3. stop the server
    4. extract the raw disk image from the ESXi datastore
    5. convert it to the qcow2 format
    6. and voilà! You’ve got a nice cloud image of your vCenter.

    All the steps are automated by the following tool:  https://github.com/virt-lightning/vcsa_to_qcow2. It also enables virtio for better performance.

    Preparing a development environment locally

    To simplify the deployment, I use the following tool: https://github.com/goneri/deploy-vmware-ci. It uses virt-lightning to spawn the nodes and does the post-configuration with Ansible. It reuses the roles that we consume in the CI to configure the vCenter and the host names, and to populate the datastore.

    In this example, I use it to start my ESXi environment on my Lenovo T580 laptop; the full run takes 15 minutes: https://asciinema.org/a/349246

    Being able to redeploy a work environment in 15 minutes has been a life changer. I often recreate it several times a day. In addition, the local deployment workflow reproduces what we do in the CI, which is handy to validate a changeset or troubleshoot a problem.

    The CI integration

    Each Ansible module is different, which makes for different test requirements. We’ve got four topologies:

    • vcenter_only: one single vCenter instance
    • vcenter_1esxi_with_nested: one vCenter with an ESXi host; this ESXi is capable of starting a nested VM.
    • vcenter_1_esxi_without_nested: the same, but this time we don’t start a nested VM. Compared to the previous case, this set-up is compatible with all our providers.
    • vcenter_2_esxi_without_nested: like the previous one, but with a second ESXi host, for instance to test HA or migration.

    The nodeset definition is done in the following file: https://github.com/ansible/ansible-zuul-jobs/blob/master/zuul.d/nodesets.yaml
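
    A nodeset simply lists the labels that make up a topology. A rough sketch of a two-node nodeset; the labels below are hypothetical, the real ones live in the file above:

    - nodeset:
        name: vcenter_1esxi_without_nested
        nodes:
          - name: vcenter
            label: vmware-vcsa-7.0   # hypothetical label
          - name: esxi1
            label: vmware-esxi-7.0   # hypothetical label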

    We split the hours-long test execution time across the different environments.

    Note that we still run govcsim in the CI, even though it’s superseded by a real test environment. Since the govcsim jobs run faster, we assume that a change failing there would also fail against the real lab, and we abort the other jobs. This is a way to save time and resources.

    I would like to thank Chuck Copello for the helpful review of this blog post.