How to speed up your (API client) modules

The slide deck of my presentation for AnsibleFest 2020. It focuses on modules designed to interact with a remote service (REST, SOAP, etc). In general, these modules just wrap an SDK library; the presentation explains how to improve their performance. I actually use this strategy (ansible_turbo.module) with the vmware.vmware_rest collection to speed up the modules.

How we use auto-generated content in the documentation of our Ansible Collection


Most of the content of the vmware.vmware_rest collection is auto-generated. This article focuses on the documentation and explains how we build it.

Auto-generated example blocks

This collection comes with an exhaustive series of functional tests. Technically speaking, these tests are just Ansible playbooks that we run with ansible-playbook. They should exercise all the modules and, ideally, all the potential scenarios (e.g. create, modify, delete). If the playbooks run successfully, the test passes and we assume the modules are in a consistent state.

We can hardly generate the prose of the documentation itself, but these playbooks are an interesting source of inspiration since they cover, and go beyond, all the use-cases that we want to document.

Our strategy is to record all the tasks and their results in a directory, and have our documentation simply point at this content. This provides two interesting benefits:

  • We know our examples work because they are actually the output of the CI.
  • When the format of a result changes, our documentation takes it into account automatically.

We commit these files to our git repository, and git-diff shows us the differences from the previous version. This is an opportunity to spot a regression.

Cooking the collection

How do we collect the tasks and the results?

For this, we use a callback plugin (goneri.utils.collect_task_outputs). The configuration is done with three environment variables:

  • ANSIBLE_CALLBACK_WHITELIST=goneri.utils.collect_task_outputs: asks Ansible to load the callback plugin.
  • COLLECT_TASK_OUTPUTS_COLLECTION=vmware.vmware_rest: specifies the name of the collection.
  • COLLECT_TASK_OUTPUTS_TARGET_DIR=/somewhere: the target directory where the results are written.

When we finally call the ansible-playbook command, the callback plugin is loaded; it records all the interactions of the vmware.vmware_rest modules and stores the results in the target directory.
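To make the file layout concrete, here is a minimal sketch of the recording step such a plugin might perform. The function name, the sanitization rule, and the exact file naming are assumptions for illustration; only the ``<task name>.task.yaml`` / ``<task name>.result.json`` pairing is taken from the documentation examples below.

```python
import json
import re
from pathlib import Path


def record_task_output(target_dir, task_name, task_yaml, result):
    """Write a task and its result side by side, one file pair per task.

    Hypothetical sketch: the file names are derived from the task name so
    that the documentation can later reference them with ``literalinclude``.
    """
    # Turn "Retrieve a list of all the datastores" into a safe file name.
    safe_name = re.sub(r"\W", "_", task_name)
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    # The task itself, as YAML, and its result, as pretty-printed JSON.
    (target / f"{safe_name}.task.yaml").write_text(task_yaml)
    (target / f"{safe_name}.result.json").write_text(json.dumps(result, indent=4))
    return safe_name
```

The real plugin hooks into Ansible's callback API (e.g. the per-task result events) and calls logic like this for every task of the targeted collection.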

The final script looks like this:

#!/usr/bin/env bash
set -eux

export ANSIBLE_CALLBACK_WHITELIST=goneri.utils.collect_task_outputs
export COLLECT_TASK_OUTPUTS_COLLECTION=vmware.vmware_rest
export COLLECT_TASK_OUTPUTS_TARGET_DIR=$(realpath ../../../../docs/source/vmware_rest_scenarios/task_outputs/)
export INVENTORY_PATH=/tmp/inventory-vmware_rest
source ../
exec ansible-playbook -i ${INVENTORY_PATH} playbook.yaml

The documentation

Like a lot of Python projects, Ansible uses reStructuredText for its documentation. To include our samples, we use the ``literalinclude`` directive. The result looks like this; the two ``literalinclude`` directives pull in the recorded task and its result:

Here we use ``vcenter_datastore_info`` to get a list of all the datastores:

.. literalinclude:: task_outputs/Retrieve_a_list_of_all_the_datastores.task.yaml


.. literalinclude:: task_outputs/Retrieve_a_list_of_all_the_datastores.result.json

This is how the final result looks:

And the RETURN blocks?

Each Ansible module is supposed to come with a RETURN block that describes the output of the module. Each key of the module output is documented in this structure.
The RETURN section and the task result above should be consistent. We can actually reformat the result and generate a structure that matches the RETURN block expectation.
Once this is done, we just need to inject the content in the module file.
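The reformatting step can be sketched as follows. This is a hypothetical simplification, not the collection's actual script: it only derives, for each key of a recorded result, the kind of metadata a RETURN block carries (a type and a sample value, recursing into dictionaries).

```python
def result_to_return_block(result):
    """Build a RETURN-style structure from a recorded task result.

    Hypothetical sketch: the real generator in the collection is richer;
    here we only derive types and sample values from the recorded JSON.
    """
    def describe(value):
        entry = {"type": type(value).__name__, "sample": value}
        if isinstance(value, dict):
            # RETURN blocks document nested keys under "contains".
            entry["contains"] = {k: describe(v) for k, v in value.items()}
        return entry

    return {key: describe(value) for key, value in result.items()}
```

Because the input is the CI's own recorded output, the generated structure stays consistent with what the module really returns.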

We reuse the task results in our modules with the following command:

./scripts/ ~/.ansible/collections/ansible_collections/vmware/vmware_rest/docs/source/vmware_rest_scenarios/task_outputs/ ~/git_repos/ansible-collections/vmware_rest/ --config-file config/inject_RETURN.yaml
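The injection itself boils down to replacing the existing RETURN block in the module's source with the freshly generated one. A minimal sketch of that step, assuming the module uses the conventional ``RETURN = r"""..."""`` form (the real script is driven by the config file shown above):

```python
import re


def inject_return_block(module_source, return_body):
    """Replace the RETURN = r\"\"\"...\"\"\" block in a module's source.

    Hypothetical sketch of the injection step; ``return_body`` is the
    YAML text generated from the recorded task results.
    """
    new_block = 'RETURN = r"""\n%s"""' % return_body
    # Match either triple-double or triple-single quoted raw strings.
    return re.sub(
        r'RETURN = r("""|\'\'\').*?\1',
        new_block,
        module_source,
        flags=re.DOTALL,
    )
```

Run against every module of the collection, this keeps all the RETURN sections in sync with the CI output.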