Blog

  • vmware_rest: why a new Ansible Collection?

    vmware.vmware_rest (https://galaxy.ansible.com/vmware/vmware_rest) is a new Ansible Collection for VMware. You can use it to manage the guests of your vCenter. If you’re familiar with Ansible and VMware, you will notice this Collection overlaps with some features of community.vmware. You may think the two collections are competing and that this is a waste of resources. It’s not that simple.

    A bit of context is necessary to fully understand why that’s not exactly the case. The development of the community.vmware collection started during the vCenter 6.0 cycle. At that time, the de facto SDK to build Python applications was pyvmomi, which you may know as the vSphere SDK for Python. This Python library relies on the SOAP interface that has been around for more than a decade. By comparison, the vSphere REST interface was still a novelty. Support for some important APIs was missing and documentation was limited.

    Today, the situation has evolved. Pyvmomi is not actively maintained anymore and some new services are only exposed on the REST interface; a good example is the tagging API. VMware has also introduced a new Python SDK called vSphere Automation SDK (https://github.com/vmware/vsphere-automation-sdk-python) to consume this new API. For instance, this is what community.vmware.vmware_tag_info uses underneath.

    This new SDK comes at a cost for the users. They need to pull an extra Python dependency in addition to pyvmomi and, to make the situation worse, this library is not on PyPI (see: https://github.com/vmware/vsphere-automation-sdk-python/issues/38), Python’s official package index. They need to install it from GitHub instead. This is a source of confusion for our users.

    From a development perspective, we don’t like it when a module needs to load a gazillion Python dependencies, because this slows down the execution time and is a source of complexity. But we cannot ditch pyvmomi immediately because a lot of modules rely on it. We could potentially rewrite these modules to use the vSphere Automation SDK.

    These modules are already stable and of high quality. Many users depend on them. Modifying these modules to use the vSphere Automation SDK is risky. Any single regression would have wide impact.

    Our users would be frustrated by such a transition, especially because it would bring absolutely zero new features to them. It also means we would have to reproduce the exact same behaviour, and we would miss an opportunity to improve the modules.

    Technically speaking, an application that consumes a REST interface doesn’t really need an SDK. An SDK can be handy sometimes, for instance for the authentication, but overall a standard HTTP client should be enough. After all, simplicity is the whole point of REST.
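
    To illustrate the point, here is a minimal sketch that opens a session against the vSphere REST API and lists the guests with nothing but Python’s standard library. The host name and the credentials are placeholders, and a real module would obviously need proper certificate validation and error handling.

    #!/usr/bin/env python3
    # Minimal sketch: consume the vSphere REST API with the standard library only.
    # The vCenter host name and the credentials are placeholders.
    import base64
    import json
    import ssl
    import urllib.request

    VCENTER = "https://vcenter.test"

    # Lab only: skip TLS verification of the self-signed vCenter certificate
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    # Open a session: POST /rest/com/vmware/cis/session with HTTP Basic auth
    credentials = base64.b64encode(b"administrator@vsphere.local:secret").decode()
    request = urllib.request.Request(
        f"{VCENTER}/rest/com/vmware/cis/session",
        method="POST",
        headers={"Authorization": f"Basic {credentials}"},
    )
    with urllib.request.urlopen(request, context=context) as response:
        session_id = json.load(response)["value"]

    # Reuse the session token to list the guests
    request = urllib.request.Request(
        f"{VCENTER}/rest/vcenter/vm",
        headers={"vmware-api-session-id": session_id},
    )
    with urllib.request.urlopen(request, context=context) as response:
        for vm in json.load(response)["value"]:
            print(vm["name"], vm["power_state"])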

    The vSphere REST API is not always consistent, but it’s well documented. VMware maintains a tool called vmware-openapi-generator (https://github.com/vmware/vmware-openapi-generator) to extract it in a machine-readable format (Swagger 2.0 or OpenAPI 3).
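
    To give an idea of what such a specification contains, here is a small sketch that walks a spec file exported with vmware-openapi-generator (the vcenter.json file name is just an example) and lists the operations it describes:

    #!/usr/bin/env python3
    # Rough sketch: list the operations described in a Swagger 2.0 / OpenAPI 3 file.
    # "vcenter.json" is a placeholder for a file exported with vmware-openapi-generator.
    import json

    with open("vcenter.json") as fd:
        spec = json.load(fd)

    for path, operations in spec.get("paths", {}).items():
        for verb, details in operations.items():
            if verb not in ("get", "post", "put", "patch", "delete"):
                continue  # skip "parameters" and other non-verb keys
            print(f"{verb.upper():7} {path} -> {details.get('operationId', 'n/a')}")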

    During our quest for a solution to this Python dependency problem, we designed a Proof of Concept (PoC). It was based on a set of modules with no dependency on any third-party library. And of course, these modules were auto-generated. We mentioned the conclusion of the PoC back in March during the VMware / Ansible community weekly meeting (https://github.com/ansible/community/issues/423).

    The feedback convinced us we were on the right path. And here we are, five months later. The first beta release of vmware.vmware_rest has just been announced on Ansible’s blog!

  • CI of the Ansible modules for VMware: a retrospective

    Simple VMware Provisioning, Management and Deprovisioning

    Since January 2020, every new Ansible VMware pull request is tested by the CI against a real VMware lab. Building this CI environment around a real VMware lab has been a long journey, which I’ll share in this blog post.

    Ansible VMware provides more than 170 modules, each of them dedicated to a specific area. You can use them to manage your ESXi hosts, the vCenters, the vSAN, do the common day-to-day guest management, etc.

    Our modules are maintained by a community of contributors, and a large number of the Pull Requests (PR) are contributions from newcomers. The classic scenario is a user who’s found a problem or a limitation with a module, and would like to address it.

    This is the reason why our contributors are not necessarily developers. The average contributor doesn’t necessarily have advanced Python experience, so we can hardly ask them to write Python unit-tests. Requiring this level of work creates a barrier to contribution: it would be a source of confusion and frustration, and we would lose a lot of valuable contributions. However, they are power users. They have a great understanding of VMware and Ansible, and so we maintain a test playbook for most of the modules.

    Previously, when a new change was submitted, the CI was running the light Ansible sanity test-suite and an integration test against govcsim, a VMware API simulator (https://github.com/vmware/govmomi/tree/master/vcsim).

    govcsim is a handy piece of software; you can start it locally to mock a vSphere infrastructure. But it doesn’t fully support some important VMware components like network devices or datastores. As a consequence, the core reviewers were asked to download the changeset locally and run the functional tests against their own vSphere lab.

    In our context, a vSphere lab is:

    – a vCenter instance

    – 2 ESXi hosts

    – 2 NFS datastores, with some pre-existing files.

    Our test environment itself was also a challenge. Functional tests destroy or create network switches, enable IPv6, add new datastores, and rarely, if ever, restore the system to its initial configuration once complete. This left the labs in disarray, and the damage compounded with each series of tests. Consequently, the reviews were slow, and we were wasting days fixing our infrastructures. Since the tests were not reproducible and were run locally, it was hard to distinguish set-up errors from actual issues, and therefore hard to provide meaningful feedback to contributors: is this error coming from my set-up? I had to manually copy/paste the error back to the contributor, sometimes several days after the initial commit.

    This was a frustrating situation for us, and for the contributors. But well, we’ve spent years doing that…

    You may think we like to suffer, which is probably true to some extent, but the real problem is that it’s rather complex to automate the full deployment of a lab. vCenter is shipped as an appliance VM in the OVA format. It has to be deployed on an ESXi host. Officially, ESXi can’t be virtualized, unless it runs on another ESXi host. In addition, we use Evaluation licenses; as a consequence, we cannot rely on features like snapshotting, and we have to redeploy our lab every 60 days.

    We can do better! Some others did!

    The Ansible network modules were facing similar challenges. Network devices are required to fully validate a change, but it’s costly to stack hundreds of devices and keep them operational just for validation.

    They decided to invest in OpenStack and a CI solution called Zuul-CI (https://zuul-ci.org/). I don’t want to elaborate too much on Zuul since the topic itself is worth a book. But basically, every time a change gets pushed, Zuul spawns a multi-node test environment, prepares the test execution using… Ansible, yeah! And finally, it runs the tests and collects the results. This environment makes use of appliances coming from the vendors; each one is basically just a VM. OpenStack is pretty flexible for this use-case, especially when you’ve got top-notch support from the providers.

    Let’s build some VMware Cloud images!

    To run a VM in a cloud environment, it has to match the following requirements:

    • use one single disk image, a qcow2 in the case of OpenStack
    • support the hardware exposed by the hypervisor, qemu-kvm in our case
    • configure itself according to the metadata exposed by the cloud provider (IP, SSH keys, etc.). This service is handled by Cloud-init most of the time.

    ESXi cloud image

    For ESXi, the first step was to deploy ESXi on libvirt/qemu-kvm. This works fine as long as we avoid virtio. And with a bit more effort, we can automate the process (https://github.com/virt-lightning/esxi-cloud-images). But our VM is not yet self-configuring. We need an alternative to Cloud-init. This is what esxi-cloud-init (https://github.com/goneri/esxi-cloud-init/) does for us. It reads the cloud metadata, prepares the network configuration of the ESXi host, and also injects the SSH keys.
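
    For the curious, the idea boils down to something like the sketch below. This is a simplified illustration based on the OpenStack meta_data.json format, not the actual esxi-cloud-init code, and the config-drive mount point is an arbitrary example.

    #!/usr/bin/env python3
    # Simplified sketch: read the OpenStack-style metadata exposed by the cloud
    # provider and extract what a cloud-init replacement needs.
    # /mnt/configdrive is an arbitrary mount point used for the example.
    import json

    with open("/mnt/configdrive/openstack/latest/meta_data.json") as fd:
        metadata = json.load(fd)

    hostname = metadata.get("hostname", "esxi01")
    ssh_keys = list(metadata.get("public_keys", {}).values())

    print(f"hostname to apply: {hostname}")
    for key in ssh_keys:
        # On ESXi, the keys would end up in root's authorized_keys file
        print(f"key to inject: {key.strip()}")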

    The image build process is rather simple once you’ve got libvirt and virt-install on your machine:

    $ git clone https://github.com/virt-lightning/esxi-cloud-images

    $ cd esxi-cloud-images

    $ ./build.sh ~/Downloads/VMware-VMvisor-Installer-7.0.0-15525992.x86_64.iso

    (…)

    $ ls esxi-6.7.0-20190802001-STANDARD.qcow2

    The image can run on OpenStack, but also on libvirt. Virt-Lightning (https://virt-lightning.org/) is the tool we use to spawn our environment locally.

    vCenter cloud image too?

    Update: see Ansible: How we prepare the vSphere instances of the VMware CI for a more detailed explanation of the VCSA deployment process.

    We wanted to deploy vCenter on our instance, but this is daunting. vCenter has a slow installation process, it requires an ESXi host, and is extremely sensitive to any form of network configuration changes…

    So the initial strategy was to spawn an ESXi instance and deploy vCenter on it. This is handled by ansible-role-vcenter-instance (https://github.com/goneri/ansible-role-vcenter-instance). The full process takes about 25 minutes.

    We became operational, but the deployment process overwhelmed our lab. Additionally, the ESXi instance (16GB of RAM) was too heavy to run on a laptop. I started investigating new options.

    Technically speaking, the vCenter Server Appliance, or VCSA, is based on Photon Linux, the Linux distribution of VMware, and the VM actually comes with 15 large disks. This is a bit problematic since our final cloud image must be a single disk and be as small as possible. I developed this strategy:

    1. connect to the running VCSA, move all the content from the extra partitions to the main partition, and drop the extra disks from /etc/fstab
    2. do some extra adjustments to the network and Cloud-init configuration.
    3. stop the server
    4. extract the raw disk image from the ESXi datastore
    5. convert it to the qcow2 format
    6. and voilà! You’ve got a nice cloud image of your vCenter.

    All the steps are automated by the following tool:  https://github.com/virt-lightning/vcsa_to_qcow2. It also enables virtio for better performance.
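
    The disk conversion itself (steps 4 and 5) boils down to a qemu-img call. Here is a hedged sketch of that part, driven from Python, with placeholder file names:

    #!/usr/bin/env python3
    # Sketch of steps 4 and 5: turn the disk pulled from the ESXi datastore into
    # a compressed qcow2 image. The file names are placeholders.
    import subprocess

    SRC = "vcsa-disk1.vmdk"   # disk image downloaded from the datastore
    DST = "vcsa.qcow2"        # the single-disk cloud image we want to publish

    subprocess.run(
        ["qemu-img", "convert", "-p", "-c", "-O", "qcow2", SRC, DST],
        check=True,
    )
    # Quick sanity check of the result
    subprocess.run(["qemu-img", "info", DST], check=True)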

    Preparing development environment locally

    To simplify the deployment, I use the following tool: https://github.com/goneri/deploy-vmware-ci. It uses virt-lightning to spawn the nodes and does the post-configuration with Ansible. It reuses the roles that we consume in the CI to configure the vCenter, set the host names, and populate the datastore.

    In this example, I use it to start my ESXi environment on my Lenovo T580 laptop; the full run takes 15 minutes: https://asciinema.org/a/349246

    Being able to redeploy a work environment in 15 minutes has been a life changer. I often recreate it several times a day. In addition, the local deployment workflow reproduces what we do in the CI, which makes it handy to validate a changeset or troubleshoot a problem.

    The CI integration

    Each Ansible module is different, which makes for different test requirements. We’ve got four topologies:

    • vcenter_only: one single vCenter instance
    • vcenter_1esxi_with_nested: one vCenter with an ESXi; this ESXi is capable of starting a nested VM
    • vcenter_1_esxi_without_nested: the same, but this time we don’t start a nested VM. Compared to the previous case, this set-up is compatible with all our providers
    • vcenter_2_esxi_without_nested: well, like the previous one, but with a second ESXi, for instance to test HA or migration

    The nodeset definition is done in the following file: https://github.com/ansible/ansible-zuul-jobs/blob/master/zuul.d/nodesets.yaml

    We split the hours-long test execution time across the different environments. An example of a job result:

    As you can see, we still run govcsim in the CI, even if it’s superseded by a real test environment. Since the govcsim jobs run faster, we assume that a failure there would also fail against the real lab, and we abort the other jobs. This is a way to save time and resources.

    I would like to thank Chuck Copello for the helpful review of this blog post.

  • Cloud images, which one is the fastest?

    Introduction

    This post compares the start-up duration of the most popular Cloud images. By start-up, I mean the time until we’ve got an operational SSH server.

    For this test, I use a pet project called Virt-Lightning (https://virt-lightning.org/). This tool allows any Linux user to start standard Cloud images locally. It prepares the meta-data and starts a VM in your local libvirt. It’s very handy for people like me, who work on Linux and spend the day starting new VMs. The images are in the qcow2 format, and it uses the OpenStack meta-data format. Technically speaking, the performance should match what you get with OpenStack.

    Actually, OpenStack is often slightly slower because it does some extra operations. It may need to create a volume on Ceph, or prepare extra network configuration.

    The 2.0.0 release of Virt-Lightning exposes a public API. My test scenario is built on top of that. It uses Python to pull the different images and create a VM from each of them 10 times in a row.
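
    The measurement itself is simple: start the clock when the VM boots, and stop it as soon as the SSH port answers. The snippet below is a simplified sketch of that loop; the real scenario drives Virt-Lightning through its Python API, whereas here I only probe a placeholder IP address on TCP port 22.

    #!/usr/bin/env python3
    # Simplified sketch of the measurement: how long until the guest answers on
    # port 22. The 192.168.123.5 address is a placeholder.
    import socket
    import time

    def wait_for_ssh(address, port=22, timeout=120):
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            try:
                with socket.create_connection((address, port), timeout=1):
                    return time.monotonic() - start
            except OSError:
                time.sleep(0.1)
        raise TimeoutError(f"no SSH server on {address}:{port} after {timeout}s")

    elapsed = wait_for_ssh("192.168.123.5")
    print(f"distro=centos-7, elapsed_time={elapsed:06.2f}")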

    All the images are public; Virt-Lightning can fetch them with the vl pull foo command:

    vl pull centos-6

    During the boot process, the VM will set up a static network configuration, resize the filesystem, create a user, and inject an SSH key.

    By default, Virt-Lightning uses a static network configuration because it’s faster, and it gives better performance when we start a large number of VMs at the same time. I chose to stick with this.

    I did my tests on my Lenovo T580, which comes with NVMe storage and 32GB of memory. I would be curious to see the results of the same scenario on a regular spinning disk.

    The target images

    For this test, I compare the following Linux distributions: CentOS, Debian, Fedora, Ubuntu and OpenSUSE. As far as I know, there is no public Cloud image available for the other common distributions. If you think I’m wrong, please post a comment below.

    I also included the latest FreeBSD, NetBSD and OpenBSD releases. They don’t provide official Cloud images, which is why I reuse the unofficial ones from https://bsd-cloud-image.org/.

    The lack of a pre-existing Windows Cloud image is the reason why this OS is not included.

    Results

    Debian 10 is the fastest of the current images, with an impressive 15s on average; basically 5s less than most of the other Cloud images.

    Regarding the BSDs, FreeBSD is the only one able to resize the root filesystem without a reboot. Consequently, OpenBSD and NetBSD need to boot twice in a row. This explains the big difference. The NetBSD kernel hardware probe is also rather slow; for instance, it takes 5s to initialize the ATA bus of the CDROM. This is the reason why its results look rather bad.

    Regarding Ubuntu, I was surprised by the boot duration of Ubuntu 18.04. It is about two times longer than for 16.04. 20.04 is a bit better, but still far from the 15s of 14.04. I would be curious to know the origin of this. Maybe AppArmor?

    The CentOS 6 results are not really consistent. They vary between 17.9s and 25.2s. This is the largest delta compared with the other distributions. That being said, CentOS 6 is rather old, and won’t be supported anymore at the end of the year.

    Conclusions

    All the recent Linux images are based on systemd. It would be great to extract the metrics from systemd-analyze to understand what impacts the performance the most.
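
    For instance, something along the lines of the sketch below could collect the worst offenders from each guest over SSH. The host address is a placeholder, and the duration parsing is deliberately naive; it only understands the plain "456ms" and "1.234s" forms.

    #!/usr/bin/env python3
    # Sketch: pull the slowest units from a guest with systemd-analyze blame.
    # root@192.168.123.5 is a placeholder; the duration parsing is naive and only
    # understands the plain "456ms" / "1.234s" forms.
    import subprocess

    def to_seconds(token):
        if token.endswith("ms"):
            return float(token[:-2]) / 1000
        if token.endswith("s"):
            return float(token[:-1])
        return None  # "1min 30s" style entries are ignored in this sketch

    output = subprocess.run(
        ["ssh", "root@192.168.123.5", "systemd-analyze", "blame"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in output.splitlines()[:10]:
        fields = line.split()
        if len(fields) < 2:
            continue
        seconds = to_seconds(fields[0])
        if seconds is not None:
            print(f"{seconds:8.3f}s  {fields[-1]}")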

    Most of the time, when I deploy a test VM, the very first thing I do is install some important packages. This scenario may be covered later in another blog post.

    Raw results for each image

    CentOS 6

    image from: https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2
    Date: Thu, 08 Aug 2019 13:28:32 GMT
    Size: 806748160

    distro=centos-6, elapsed_time=023.20
    distro=centos-6, elapsed_time=021.41
    distro=centos-6, elapsed_time=024.97
    distro=centos-6, elapsed_time=025.21
    distro=centos-6, elapsed_time=020.29
    distro=centos-6, elapsed_time=020.67
    distro=centos-6, elapsed_time=020.13
    distro=centos-6, elapsed_time=019.83
    distro=centos-6, elapsed_time=020.09
    distro=centos-6, elapsed_time=017.92

    The average is 21.3s.

    CentOS 7

    image from: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
    Date: Wed, 22 Apr 2020 12:24:07 GMT
    Size: 858783744

    distro=centos-7, elapsed_time=020.88
    distro=centos-7, elapsed_time=020.51
    distro=centos-7, elapsed_time=020.42
    distro=centos-7, elapsed_time=020.58
    distro=centos-7, elapsed_time=020.18
    distro=centos-7, elapsed_time=021.14
    distro=centos-7, elapsed_time=020.74
    distro=centos-7, elapsed_time=020.80
    distro=centos-7, elapsed_time=020.48
    distro=centos-7, elapsed_time=020.15

    Average: 20.5s

    CentOS 8

    image from: https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2
    Date: Mon, 13 Jan 2020 21:57:45 GMT
    Size: 716176896

    distro=centos-8, elapsed_time=023.55
    distro=centos-8, elapsed_time=023.27
    distro=centos-8, elapsed_time=024.39
    distro=centos-8, elapsed_time=023.61
    distro=centos-8, elapsed_time=023.52
    distro=centos-8, elapsed_time=023.49
    distro=centos-8, elapsed_time=023.53
    distro=centos-8, elapsed_time=023.30
    distro=centos-8, elapsed_time=023.34
    distro=centos-8, elapsed_time=023.67

    Average: 23.5s

    Debian 9

    image from: https://cdimage.debian.org/cdimage/openstack/current-9/debian-9-openstack-amd64.qcow2
    Date: Wed, 29 Jul 2020 09:59:59 GMT
    Size: 594190848

    distro=debian-9, elapsed_time=020.69
    distro=debian-9, elapsed_time=020.59
    distro=debian-9, elapsed_time=020.16
    distro=debian-9, elapsed_time=020.30
    distro=debian-9, elapsed_time=020.02
    distro=debian-9, elapsed_time=020.01
    distro=debian-9, elapsed_time=020.71
    distro=debian-9, elapsed_time=020.48
    distro=debian-9, elapsed_time=020.65
    distro=debian-9, elapsed_time=020.57

    Average is 20.4s.

    Debian 10

    image from: https://cdimage.debian.org/cdimage/openstack/current-10/debian-10-openstack-amd64.qcow2
    Date: Sat, 01 Aug 2020 20:10:01 GMT
    Size: 530629120

    distro=debian-10, elapsed_time=015.25
    distro=debian-10, elapsed_time=015.28
    distro=debian-10, elapsed_time=014.88
    distro=debian-10, elapsed_time=015.07
    distro=debian-10, elapsed_time=015.39
    distro=debian-10, elapsed_time=015.35
    distro=debian-10, elapsed_time=015.47
    distro=debian-10, elapsed_time=014.94
    distro=debian-10, elapsed_time=015.57
    distro=debian-10, elapsed_time=015.57

    Average is 15.2s

    Debian testing

    Debian testing is a rolling release, so I won’t include it in the charts, but I found interesting to include it in the results.

    image from: https://cdimage.debian.org/cdimage/openstack/testing/debian-testing-openstack-amd64.qcow2
    Date: Mon, 01 Jul 2019 08:39:27 GMT
    Size: 536621056

    distro=debian-testing, elapsed_time=015.07
    distro=debian-testing, elapsed_time=015.03
    distro=debian-testing, elapsed_time=014.93
    distro=debian-testing, elapsed_time=015.33
    distro=debian-testing, elapsed_time=014.85
    distro=debian-testing, elapsed_time=015.53
    distro=debian-testing, elapsed_time=014.94
    distro=debian-testing, elapsed_time=015.22
    distro=debian-testing, elapsed_time=015.19
    distro=debian-testing, elapsed_time=014.86

    Average 15s

    Fedora 31

    image from: https://download.fedoraproject.org/pub/fedora/linux/releases/31/Cloud/x86_64/images/Fedora-Cloud-Base-31-1.9.x86_64.qcow2
    Date: Wed, 23 Oct 2019 23:06:38 GMT
    Size: 355350528

    distro=fedora-31, elapsed_time=020.48
    distro=fedora-31, elapsed_time=020.39
    distro=fedora-31, elapsed_time=020.37
    distro=fedora-31, elapsed_time=020.30
    distro=fedora-31, elapsed_time=020.29
    distro=fedora-31, elapsed_time=020.31
    distro=fedora-31, elapsed_time=020.50
    distro=fedora-31, elapsed_time=020.51
    distro=fedora-31, elapsed_time=020.27
    distro=fedora-31, elapsed_time=020.91

    Average 20.4s

    Fedora 32

    image from: https://download.fedoraproject.org/pub/fedora/linux/releases/32/Cloud/x86_64/images/Fedora-Cloud-Base-32-1.6.x86_64.qcow2
    Date: Wed, 22 Apr 2020 22:36:57 GMT
    Size: 302841856

    distro=fedora-32, elapsed_time=021.68
    distro=fedora-32, elapsed_time=022.43
    distro=fedora-32, elapsed_time=022.17
    distro=fedora-32, elapsed_time=023.06
    distro=fedora-32, elapsed_time=022.23
    distro=fedora-32, elapsed_time=022.83
    distro=fedora-32, elapsed_time=022.54
    distro=fedora-32, elapsed_time=021.46
    distro=fedora-32, elapsed_time=022.37
    distro=fedora-32, elapsed_time=023.14

    Average: 22.4s

    FreeBSD 11.4

    image from: https://bsd-cloud-image.org/images/freebsd/11.4/freebsd-11.4.qcow2
    Date: Wed, 05 Aug 2020 01:24:32 GMT
    Size: 412895744

    distro=freebsd-11.4, elapsed_time=030.68
    distro=freebsd-11.4, elapsed_time=030.64
    distro=freebsd-11.4, elapsed_time=030.29
    distro=freebsd-11.4, elapsed_time=030.29
    distro=freebsd-11.4, elapsed_time=029.86
    distro=freebsd-11.4, elapsed_time=029.74
    distro=freebsd-11.4, elapsed_time=029.90
    distro=freebsd-11.4, elapsed_time=029.77
    distro=freebsd-11.4, elapsed_time=030.04
    distro=freebsd-11.4, elapsed_time=029.70

    Average 30s

    FreeBSD 12.1

    image from: https://bsd-cloud-image.org/images/freebsd/12.1/freebsd-12.1.qcow2
    Date: Wed, 05 Aug 2020 01:46:11 GMT
    Size: 479029760

    distro=freebsd-12.1, elapsed_time=029.78
    distro=freebsd-12.1, elapsed_time=030.32
    distro=freebsd-12.1, elapsed_time=029.56
    distro=freebsd-12.1, elapsed_time=029.60
    distro=freebsd-12.1, elapsed_time=029.76
    distro=freebsd-12.1, elapsed_time=029.89
    distro=freebsd-12.1, elapsed_time=029.55
    distro=freebsd-12.1, elapsed_time=029.66
    distro=freebsd-12.1, elapsed_time=029.31
    distro=freebsd-12.1, elapsed_time=029.77

    Average 29.7

    NetBSD 8.2

    image from: https://bsd-cloud-image.org/images/netbsd/8.2/netbsd-8.2.qcow2
    Date: Wed, 05 Aug 2020 02:06:57 GMT
    Size: 155385856

    distro=netbsd-8.2, elapsed_time=066.71
    distro=netbsd-8.2, elapsed_time=067.80
    distro=netbsd-8.2, elapsed_time=067.15
    distro=netbsd-8.2, elapsed_time=066.97
    distro=netbsd-8.2, elapsed_time=066.84
    distro=netbsd-8.2, elapsed_time=067.01
    distro=netbsd-8.2, elapsed_time=066.98
    distro=netbsd-8.2, elapsed_time=067.73
    distro=netbsd-8.2, elapsed_time=067.34
    distro=netbsd-8.2, elapsed_time=066.90

    Average 67.1

    NetBSD 9.0

    image from: https://bsd-cloud-image.org/images/netbsd/9.0/netbsd-9.0.qcow2
    Date: Wed, 05 Aug 2020 02:25:11 GMT
    Size: 149291008

    distro=netbsd-9.0, elapsed_time=067.04
    distro=netbsd-9.0, elapsed_time=066.92
    distro=netbsd-9.0, elapsed_time=066.89
    distro=netbsd-9.0, elapsed_time=067.24
    distro=netbsd-9.0, elapsed_time=067.41
    distro=netbsd-9.0, elapsed_time=067.13
    distro=netbsd-9.0, elapsed_time=066.14
    distro=netbsd-9.0, elapsed_time=066.75
    distro=netbsd-9.0, elapsed_time=067.25
    distro=netbsd-9.0, elapsed_time=066.60

    Average: 66.9s

    OpenBSD 6.6

    image from: https://bsd-cloud-image.org/images/openbsd/6.7/openbsd-6.7.qcow2
    Date: Wed, 05 Aug 2020 04:09:44 GMT
    Size: 520704512

    distro=openbsd-6.6, elapsed_time=048.80
    distro=openbsd-6.6, elapsed_time=049.72
    distro=openbsd-6.6, elapsed_time=049.07
    distro=openbsd-6.6, elapsed_time=048.36
    distro=openbsd-6.6, elapsed_time=049.28
    distro=openbsd-6.6, elapsed_time=049.12
    distro=openbsd-6.6, elapsed_time=049.36
    distro=openbsd-6.6, elapsed_time=049.80
    distro=openbsd-6.6, elapsed_time=048.05
    distro=openbsd-6.6, elapsed_time=049.71

    Average: 49.1s

    OpenBSD 6.7

    image from: https://bsd-cloud-image.org/images/openbsd/6.7/openbsd-6.7.qcow2
    Date: Wed, 05 Aug 2020 04:09:44 GMT
    Size: 520704512

    distro=openbsd-6.7, elapsed_time=048.81
    distro=openbsd-6.7, elapsed_time=048.96
    distro=openbsd-6.7, elapsed_time=049.86
    distro=openbsd-6.7, elapsed_time=049.12
    distro=openbsd-6.7, elapsed_time=049.75
    distro=openbsd-6.7, elapsed_time=050.63
    distro=openbsd-6.7, elapsed_time=050.85
    distro=openbsd-6.7, elapsed_time=049.92
    distro=openbsd-6.7, elapsed_time=048.98
    distro=openbsd-6.7, elapsed_time=050.83

    Average: 49.7s

    Ubuntu 14.04

    image from: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
    Date: Thu, 07 Nov 2019 15:38:05 GMT
    Size: 264897024

    distro=ubuntu-14.04, elapsed_time=014.40
    distro=ubuntu-14.04, elapsed_time=014.42
    distro=ubuntu-14.04, elapsed_time=014.94
    distro=ubuntu-14.04, elapsed_time=015.44
    distro=ubuntu-14.04, elapsed_time=015.64
    distro=ubuntu-14.04, elapsed_time=014.59
    distro=ubuntu-14.04, elapsed_time=015.02
    distro=ubuntu-14.04, elapsed_time=015.22
    distro=ubuntu-14.04, elapsed_time=015.44
    distro=ubuntu-14.04, elapsed_time=015.44

    Average: 15s

    Ubuntu 16.04

    image from: https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
    Date: Thu, 13 Aug 2020 08:36:38 GMT
    Size: 309657600

    distro=ubuntu-16.04, elapsed_time=015.13
    distro=ubuntu-16.04, elapsed_time=015.39
    distro=ubuntu-16.04, elapsed_time=015.42
    distro=ubuntu-16.04, elapsed_time=015.62
    distro=ubuntu-16.04, elapsed_time=015.29
    distro=ubuntu-16.04, elapsed_time=015.60
    distro=ubuntu-16.04, elapsed_time=015.62
    distro=ubuntu-16.04, elapsed_time=015.21
    distro=ubuntu-16.04, elapsed_time=015.62
    distro=ubuntu-16.04, elapsed_time=015.67

    Average: 15.4

    Ubuntu 18.04

    image from: https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
    Date: Wed, 12 Aug 2020 16:58:30 GMT
    Size: 357302272

    distro=ubuntu-18.04, elapsed_time=028.58
    distro=ubuntu-18.04, elapsed_time=028.25
    distro=ubuntu-18.04, elapsed_time=028.36
    distro=ubuntu-18.04, elapsed_time=028.45
    distro=ubuntu-18.04, elapsed_time=028.79
    distro=ubuntu-18.04, elapsed_time=028.28
    distro=ubuntu-18.04, elapsed_time=028.11
    distro=ubuntu-18.04, elapsed_time=028.07
    distro=ubuntu-18.04, elapsed_time=027.75
    distro=ubuntu-18.04, elapsed_time=028.25

    Average: 28.3s

    Ubuntu 20.04

    image from: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
    Date: Mon, 10 Aug 2020 22:19:47 GMT
    Size: 545587200

    distro=ubuntu-20.04, elapsed_time=023.23
    distro=ubuntu-20.04, elapsed_time=022.74
    distro=ubuntu-20.04, elapsed_time=023.20
    distro=ubuntu-20.04, elapsed_time=022.96
    distro=ubuntu-20.04, elapsed_time=024.04
    distro=ubuntu-20.04, elapsed_time=024.06
    distro=ubuntu-20.04, elapsed_time=023.60
    distro=ubuntu-20.04, elapsed_time=023.88
    distro=ubuntu-20.04, elapsed_time=023.24
    distro=ubuntu-20.04, elapsed_time=024.27

    Average: 23.5s

    OpenSUSE Leap 15.2

    image from: https://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.2/images/openSUSE-Leap-15.2-OpenStack.x86_64.qcow2
    Date: Sun, 07 Jun 2020 11:42:01 GMT
    Size: 566047744

    distro=opensuse-leap-15.2, elapsed_time=027.10
    distro=opensuse-leap-15.2, elapsed_time=027.61
    distro=opensuse-leap-15.2, elapsed_time=027.07
    distro=opensuse-leap-15.2, elapsed_time=027.12
    distro=opensuse-leap-15.2, elapsed_time=027.57
    distro=opensuse-leap-15.2, elapsed_time=026.86
    distro=opensuse-leap-15.2, elapsed_time=027.25
    distro=opensuse-leap-15.2, elapsed_time=027.10
    distro=opensuse-leap-15.2, elapsed_time=027.69
    distro=opensuse-leap-15.2, elapsed_time=027.39

    Average: 27.3s

  • How to clean docker up when used on btrfs

    Docker tends to leave a lot of files behind when I just remove the images with rmi. Here is how I clean everything up when /var/lib/docker sits on btrfs:


    # Stop the daemon before touching its data directory
    systemctl stop docker.service
    systemctl stop docker.socket
    # Remove the regular files; the btrfs subvolumes survive this step
    rm -rf /var/lib/docker/
    # List the leftover subvolumes Docker created and delete them
    # (assumes the btrfs filesystem is mounted on /)
    btrfs subvolume list /var/lib/docker|awk '/ID/ {print "/"$9}'|xargs btrfs sub delete

  • Create thumbnail images of a set of pictures

    Take the pictures from the input directory, 4 at a time, and assemble each group into a thumbnail montage:

    #!/usr/bin/env python3

    import glob
    import subprocess

    inputs = glob.glob('input/*.JPG')

    cpt = 0
    while len(inputs) > 0:
        # Take the next 4 pictures; pad with ImageMagick's built-in "logo:"
        # image so the last montage always has 4 tiles
        to_process = inputs[:4] + ['logo:', 'logo:', 'logo:']
        inputs = inputs[4:]
        cpt += 1
        subprocess.call(["montage"] + to_process[:4]
                        + ["-geometry", "800x600+2+2", "final/final_%02d.jpg" % cpt])
  • pbuilder on Debian kFreeBSD Wheezy

    There is a little trick if you want to use pbuilder on kFreeBSD. Add these lines to /etc/pbuilderrc:

    MIRRORSITE=http://cdn.debian.net/debian
    USEPROC=yes
    USEDEVFS=yes
    USEDEVPTS=yes
    BINDMOUNTS="/home/goneri"

  • getting burp to use puppet CA

    I’m a big fan of BURP for my backups. This article explains how to reuse the Puppet master CA for authentication. I use the Debian burp package on Wheezy.

    First, you need to generate the dhfile.pem on both the server and the agent:


    openssl dhparam -outform PEM -out /etc/burp/dhfile.pem 1024

    The server

    The configuration is in /etc/burp/burp-server.conf:


    mode = server
    (...)
    # ca_conf = /etc/burp/CA.cnf
    # ca_name = burpCA
    # ca_server_name = burpserver
    # ca_burp_ca = /usr/sbin/burp_ca
    (...)
    ssl_cert_ca = /var/lib/puppet/ssl/certs/ca.pem
    ssl_cert = /var/lib/puppet/ssl/ca/signed/newpuppet.lebouder.net.pem
    ssl_key = /var/lib/puppet/ssl/private_keys/newpuppet.lebouder.net.pem
    ssl_key_password = password
    ssl_dhfile = /etc/burp/dhfile.pem
    (...)

    The agent

    The configuration file is /etc/burp/burp.conf:

    mode = client
    port = 4971
    server = newpuppet.lebouder.net
    ssl_cert_ca = /var/lib/puppet/ssl/certs/ca.pem
    ssl_cert = /var/lib/puppet/ssl/certs/newclient.lebouder.net.pem
    ssl_key = /var/lib/puppet/ssl/private_keys/newclient.lebouder.net.pem
    ssl_peer_cn = newpuppet.lebouder.net
    (...)

    newpuppet.lebouder.net is the Puppet server.

  • refresh all my git clones

    These are the commands I use to refresh all my git clones, for example when I know I will be offline for the coming hours:


    locate --regex '\.git$'|parallel 'cd {} && cd .. && echo $PWD && git fetch --all'

    The use of GNU parallel is helpful to reduce the sync duration.

  • Druide Antidote 8 on Debian Sid

    I bought the Antidote 8 spell checker and installed it yesterday. The tool is really impressive and pleasant to use.

    Installation on Debian Sid is not supported, but it can be made to work. I still have to see whether I can integrate it with Firefox (Iceweasel) and Thunderbird (Icedove).

    Installation

     
    # apt-get install libx11-6 libxslt1.1 libvorbis0a libxrender1 libgstreamer-plugins-base0.10-0 libpulse0 libpulse-mainloop-glib0 libfreetype6 libfontconfig1 libxext6 libicu48

    # wget http://ftp.fr.debian.org/debian/pool/main/o/openssl/libssl0.9.8_0.9.8o-4squeeze14_amd64.deb
    # dpkg -i libssl0.9.8_0.9.8o-4squeeze14_amd64.deb

    # wget http://ftp.fr.debian.org/debian/pool/main/i/icu/libicu44_4.4.1-8_amd64.deb

    # dpkg -i libicu44_4.4.1-8_amd64.deb

    To avoid a problem with kernels >= 3, a small workaround is needed, as described here: http://www.debian-fr.org/certains-logiciels-dysfonctionnent-en-changeant-de-noyau-t42688.html
    # wget https://mail.gnome.org/archives/evolution-list/2003-December/txtBEWSVk2eft.txt -O /tmp/uname.c
    # (echo '#define _GNU_SOURCE'; cat /tmp/uname.c) > /tmp/uname-gnu.c
    # gcc -shared -fPIC /tmp/uname-gnu.c -o /opt/Druide/Antidote8/Programmes64/fake-uname.so -ldl

    All that’s left is to add the following two lines at the beginning of the /opt/Druide/Antidote8/Programmes64/Antidote8 script:
    export LD_PRELOAD=/opt/Druide/Antidote8/Programmes64/fake-uname.so
    export RELEASE=$(uname -r | sed 's/^\(...\)/\1.0-antidote-fix/g')

  • Finally, a 4096 GnuPG key


    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    I finally generated a 4096 key ( 0x049ED9B94765572E ) which is signed
    by my old key 0x37D9412C. I will revoke this old key in the coming months.

    The old 1024bit key:
    pub 1024D/37D9412C 2004-08-18
    Key fingerprint = D3BC 65BB 48B1 1DA8 BC8A 5C88 B0A4 C5A4 37D9 412C
    uid Gonéri Le Bouder
    uid Gonéri Le Bouder
    uid Gonéri Le Bouder
    uid Gonéri Le Bouder
    uid [jpeg image of size 7650]
    uid Gonéri Le Bouder (Professional address)
    uid [jpeg image of size 4672]
    sub 1024g/E47802B2 2004-08-18
    sub 2048R/F89D348A 2013-06-01

    The new key:
    pub 4096R/4765572E 2013-06-18 [expires: 2023-07-15]
    Key fingerprint = 1FF3 68E8 0199 1373 1705 B8AF 049E D9B9 4765 572E
    uid Gonéri Le Bouder
    uid Gonéri Le Bouder
    uid Gonéri Le Bouder
    uid [jpeg image of size 7650]
    uid Gonéri Le Bouder
    sub 4096R/E496738B 2013-06-18

    -----BEGIN PGP SIGNATURE-----
    Version: GnuPG v1.4.15 (GNU/Linux)

    iEYEARECAAYFAlJlJjsACgkQsKTFpDfZQSxwzwCeLDuJoMOwJ4H2fbQionyejDck
    GX8Anjp0V+rZHJ5fLlLv3yXWbsBt9K5m
    =RHzK
    -----END PGP SIGNATURE-----