
  • Virt-Lightning 2.2.0

Release 2.2.0 of Virt-Lightning, a lightweight CLI for libvirt that can serve as an alternative to Vagrant. It also exposes a stable Python API that you can use to quickly spawn new VMs, much like you would with a cloud provider.
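
As a quick illustration of the CLI workflow, a minimal sketch (the distro name is just an example and the exact subcommand list may vary between releases, see vl --help):

    # Pull a base image into the local image directory
    vl fetch centos-8
    # Boot the guests described in the virt-lightning.yaml file of the current directory
    vl up
    # Open an SSH session on a guest, then destroy the environment when done
    vl ssh
    vl down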

Most of the new features come from contributors, and I'm pretty happy about that.

    Changelog

    • Cosmetic documentation changes
    • Don’t try to fetch an image that already exists
• Add the ability to boot old systems with no virtio support
• Use libvirt's default settings when possible
• Use the VNC display by default
• Add support for Open vSwitch (a.k.a. OVS) bridges
    • vl stop: avoid a Python backtrace if the VM doesn’t exist

  • Ansible’s vmware.vmware_rest collection, Execution Environment and ansible-navigator

Ansible-Navigator is a new terminal-based UI for Ansible. It aims to provide an alternative to the different ansible commands that you are probably already familiar with. To learn more about Ansible-Navigator, a couple of recent posts on the Ansible Blog cover this new tool. The Execution Environment, or just EE, is also a rather recent concept: with an EE, Ansible and all its dependencies are shipped as a single container image. You no longer need to care about the Python version, the virtualenv, the collections or the Python dependencies.

Ansible-Navigator provides an interface inspired by ansible-playbook. In this example, we will see how to use the CLI to run a playbook. Since I pass some credentials through environment variables, we will also see how to expose them properly inside the container.

The first thing is to prepare an ansible-navigator.yml file in your project directory.

    ---
    ansible-navigator:
       execution-environment:
         container-engine: podman
         enabled: True
         image: myregister/ansible-automation-platform-21-ee-supported-rhel8:2.1.0
         pull-policy: never
         environment-variables:
            pass:
              - VMWARE_VALIDATE_CERTS
              - VMWARE_HOST
              - VMWARE_PASSWORD
              - VMWARE_USER
              - ESXI1_PASSWORD
              - ESXI1_HOSTNAME
              - ESXI1_USERNAME
    

The image key points to the container image. I use Fedora, where Podman is the default container engine, and I make sure Navigator uses the right one with the container-engine: podman setting. The environment-variables section lists all the variables that I want to expose inside my container. If you want to read about all the other configuration options, see the documentation at https://ansible-navigator.readthedocs.io/en/latest/.

    If your playbook depends on some roles, for instance within a roles directory, it’s important to call ansible-navigator from the root directory of your project. Otherwise, the roles won’t be reachable from within the container. If your roles are maintained at a different location, you can still expose their directories with the volume-mounts option. This is in my opinion slightly less elegant.

    [goneri@t580 targets]$ ls -lh
    total 416K
    -rw-r--r--. 1 goneri goneri  469 Oct 19 15:09 ansible-navigator.yml
    drwxrwxr-x. 2 goneri goneri    6 Oct 19 15:39 playbooks
    drwxrwxr-x. 2 goneri goneri    6 Oct 19 15:39 roles
    

    Now, I can just run my playbook with: ansible-navigator run --mode stdout playbooks/vcenter_vm_scenario1
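
Since the pass list only forwards variables that are already set in my shell, I export them before calling the CLI. A minimal sketch, with placeholder values:

    # Placeholder credentials, adjust to match your lab
    export VMWARE_HOST=vcenter.example.org
    export VMWARE_USER=administrator@vsphere.local
    export VMWARE_PASSWORD=changeme
    export VMWARE_VALIDATE_CERTS=no
    ansible-navigator run --mode stdout playbooks/vcenter_vm_scenario1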

  • Connect to Zookeeper over TLS/SSL

It’s surprisingly tricky to connect to a ZooKeeper cluster over TLS/SSL using the zkCli.sh command. You’ve got to wrap the command and pass some extra incantations. This is the script I use. Here, my certificates are in /etc/zookeeper/ca; you may need to adjust that to match your local installation.

    #!/bin/bash
    
    ZK_CLIENT_HEAP="${ZK_CLIENT_HEAP:-256}"
    export ZK_CLIENT_SSL="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.ssl.keyStore.location=/etc/zookeeper/ca/keystores/server.pem -Dzookeeper.ssl.trustStore.location=/etc/zookeeper/ca/certs/cacert.pem -Dzookeeper.client.secure=true"
    export CLIENT_JVMFLAGS="-Xmx${ZK_CLIENT_HEAP}m $ZK_CLIENT_SSL $CLIENT_JVMFLAGS"
    /opt/zookeeper/bin/zkCli.sh -server my-host-fqdn:2281
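
The heap size can be overridden through the ZK_CLIENT_HEAP variable; for instance, assuming the script is saved as zk-cli-ssl.sh:

    chmod +x zk-cli-ssl.sh
    # Give the client 512MB of heap instead of the default 256MB
    ZK_CLIENT_HEAP=512 ./zk-cli-ssl.sh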
    
  • Zuul cheat sheet

My team at Ansible uses Zuul CI to develop and release our collections. From time to time, I need to do some basic operations, so I’ve started a cheat sheet. I’m sharing it since it may be helpful for someone else.

    Abort a job:

    $ zuul dequeue --tenant=ansible --pipeline=release --project=ansible-collections/ansible.utils --ref=refs/tags/2.4.2
    

Recreate a job. In this case, we manually recreate a job in the release pipeline. The job was initially triggered by the push of a tag (--newrev).

    $ zuul enqueue-ref --tenant=ansible --trigger=github --pipeline=release --project=ansible-collections/ansible.utils --ref=refs/tags/2.4.2 --newrev=6a0372849bec52672a74168b93b0677d2c6471fc
    

List all the ongoing nodepool requests:

    $ nodepool -s /etc/nodepool/secure.conf request-list
    

Kill a periodic job:

    $ zuul dequeue --tenant=ansible --pipeline=periodic --project=ansible/ansible --ref refs/heads/stable-2.9
    
  • Aborting, target uses selinux but python bindings (libselinux-python) aren’t installed!

    TASK [helm : Copy test chart] **************************************************
    fatal: [localhost]: FAILED! => {"changed": false, "checksum": "8b41aa269bd850134cd95bd27343edf6d4ed2e30", "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"}
    

Ah, this is one of the most irritating messages you can get when you use Ansible in a venv on a SELinux system. The problem should not happen anymore since this commit, which was first released in Ansible core 2.11.5; thanks to David Moreau Simard for the information! If you cannot upgrade, keep reading: this article quickly explains the problem and covers the options.

copy and selinux are two examples of Ansible modules that depend on some system binary libraries. These binary libraries are built and linked against the Python of the system; on RHEL 8 it’s Python 3.6. So if you use the system’s Python 3.6, everything should be all right; you may just need to install python3-selinux if it’s not already installed.
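
A quick way to check that the system interpreter sees the binding (is_selinux_enabled() ships with it):

    # On the target host, the system interpreter can load the binding
    /usr/bin/python3 -c "import selinux; print(selinux.is_selinux_enabled())"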

Things get more difficult if you use a virtualenv, and it’s often a good idea to use one. Here you have two options:

If the Python 3 version of your virtualenv is close enough to the one of the system, you can use the selinux package from PyPI. When an Ansible module tries to interact with the selinux bindings, this package pretends to be them and actually redirects the calls to the Python module of the system. It works well most of the time and is pretty much the only way to do SELinux operations from a venv.
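
Installing it is a one-liner inside the virtualenv (the venv path is just an example):

    source ~/venv/bin/activate
    pip install selinux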

If your Python version is too new compared to the system’s, PyPI’s selinux will raise an error like this one:

    ImportError: cannot import name '_selinux'
    

And in this case, you’ve got a second option. Here we assume that you don’t really care about SELinux; for instance, you use Ansible’s copy module just to duplicate a file once. In that case, the whole SELinux war machine is not necessary. You can use selinux-please-lie-to-me, another PyPI module similar to PyPI’s selinux. The main difference is that this one just tells Ansible that SELinux is off on the system, so Ansible bypasses it.
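
Assuming the PyPI project name matches, the installation looks the same:

    source ~/venv/bin/activate
    pip install selinux-please-lie-to-me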

Oh! There is yet another option: you can override ansible_python_interpreter just for the problematic task.

        - copy:
            src: /etc/fstab
            dest: /tmp/fstab
          vars:
            ansible_python_interpreter: /usr/bin/python3
    

Which one should I use? The ansible_python_interpreter override creates a dependency on the system that is often annoying, so I prefer to avoid this strategy. Overall it’s better to use PyPI’s selinux because it preserves the interaction with SELinux, but sometimes the delta between the Python versions is too big and the system binary module just cannot be loaded. In this case, use selinux-please-lie-to-me as a fallback option. Just remember that this Python module will silently inhibit all the SELinux operations.

  • Ansible Molecule, how to use the local copy of the dependencies

The community.okd collection depends on kubernetes.core. It uses Molecule to run its tests. We can call it through the Makefile with the make molecule command, or install Molecule manually with pip and run it with molecule test.

When I work on community.okd, I often need to also adjust the kubernetes.core collection. By default, Molecule silently fetches the dependencies from the internet, during either the prerun or the dependency step. The mechanism is handy when you focus on a single collection at a time, but in my case it’s a bit annoying: the clean copy of kubernetes.core overwrites my local changes and prevents the testing of my local copy of the dependency.

    This is how I ensure Molecule uses my local copy of the collections. I store them in the default location ~/.ansible/collections/ansible_collections.
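
For instance, a local checkout of kubernetes.core can live directly at that path; the GitHub URL below points to the upstream repository:

    mkdir -p ~/.ansible/collections/ansible_collections/kubernetes
    git clone https://github.com/ansible-collections/kubernetes.core \
        ~/.ansible/collections/ansible_collections/kubernetes/core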

In molecule/default/molecule.yml, I set the following key to turn off the prerun:

    prerun: false
    

    and I set the following block in the dependency: section:

     dependency:
       name: galaxy
       enabled: False
    

This turns off the dependency resolution: Molecule won’t call ansible-galaxy anymore to fetch the roles and collections defined in galaxy.yml.

You’re done! As a bonus, the start-up of Molecule should be slightly faster.

  • Extract data from an Android phone

So, I’ve got an old phone with 35GB of pictures that I want to save. So far, I had tried NextCloud sync, scp copy, and GIO/MTP file transfer; with each of those, it’s a 12h+ operation. The real solution is to enable the developer mode on the phone and go straight to adb:

    $ adb pull /sdcard/DCIM
    /sdcard/DCIM/: 5825 files pulled. 31.9 MB/s (37517869453 bytes in 1122.166s)
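
Before the transfer, adb devices confirms the phone is visible and authorized (the USB debugging prompt must be accepted on the device):

    $ adb devices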
    
  • performance: expanduser with pathlib or os.path

Python 3 provides a new fancy library, pathlib, to manage pretty much all the path-related operations. This is a really welcome improvement, since before that we had to use a long list of unrelated modules.

I recently had to choose between pathlib and os.path to expand a string in the ~/path format to an absolute path. Since performance was important, I took the time to benchmark the two options:

    #!/usr/bin/env python3
    
    import timeit
    
    setup = '''
    from pathlib import PosixPath
    '''
    with_pathlib = timeit.timeit("abs_remote_tmp = str(PosixPath('~/.ansible/tmp').expanduser())", setup=setup)
    
    setup = '''
    from os.path import expanduser
    '''
    
    with_os_path = timeit.timeit("abs_remote_tmp = expanduser('~/.ansible/tmp')", setup=setup)
    
    print(f"with pathlib: {with_pathlib}\nwith os.path: {with_os_path}")
    

os.path is about 4 times faster (over timeit’s default 1,000,000 iterations) for this very specific case. The fact that we need to instantiate a PosixPath object has an impact. Also, once again, we observe a nice performance boost from Python 3.8 onwards.

  • Ansible collections and venv

I work on a large number of collections, and in order to test them properly, I have to switch between Python versions and the associated PyPI dependencies. Nothing special here; this is pretty much the life of all of us who work on Ansible collections.

Initially, I was maintaining a set of clean Python virtual environments, basically one per version of Python, and I was juggling between them. Sadly, it’s easy to lose track of what’s going on. Every time I switched to a different collection, I had to pull a new set of dependencies, and the order was never the same.

I ended up being genuinely frustrated by the time wasted looking at the pip freeze output to understand some oddity. It’s so easy to mess up the whole cathedral. A good example: I use pip install -e git/something a lot to install a local copy of a library, and as a result, any change there can potentially nuke the fragile little creature.

So now, I use another approach. I’ve got a script that spawns a virtual environment on the fly, pulls the right dependencies and initializes the shell. It may sound like a trivial thing, but I actually use it several times every day, and I don’t call pip freeze that much anymore.

    For instance if I need to work with Ansible 2.10 and Python 3.10, I just need to do:

    $ cd .ansible/collections/ansible_collections/vmware/vmware_rest
    $ source ~/bin/ansible-venv.fish 3.10 stable-2.10
    

and I’m ready to run ansible-playbook or ansible-test in my clean environment. When I want to reinitialize the venv, I just have to remove the venv directory.

The script is here and depends on Fish, my favorite shell.
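
For the curious, here is a rough POSIX-shell sketch of the idea; the real implementation is in Fish, and the GitHub tarball URL is just one way to install a given Ansible branch:

    #!/bin/bash
    # Usage: source ansible-venv.sh <python-version> <ansible-branch>
    # e.g.:  source ansible-venv.sh 3.10 stable-2.10
    py_version=$1
    ansible_branch=$2
    venv_dir="$HOME/.venv-${py_version}-${ansible_branch}"
    # Create the venv the first time, reuse it afterwards
    [ -d "$venv_dir" ] || "python${py_version}" -m venv "$venv_dir"
    source "${venv_dir}/bin/activate"
    pip install --upgrade pip
    # Install the requested Ansible branch straight from GitHub
    pip install "https://github.com/ansible/ansible/archive/${ansible_branch}.tar.gz"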

  • Update on BSD Cloud Image

I’ve pushed some new images to https://bsd-cloud-image.org/:

    These images also include a recent fix for non-standard MTU values.