This is the contribution guide to the enough.community infrastructure, which is based on Ansible and OpenStack. If you're a seasoned Free Software contributor looking for a quick start, take a look at the list of bugs and features; otherwise keep reading.
Bugs and features list
Each service under the enough.community domain can be worked on independently and has its own integration tests. There is no need to understand how Weblate is deployed if you're improving Discourse, for instance.
All contributors are organized horizontally:
- People with access to an exclusive resource must register themselves in the team directory.
git clone https://lab.enough.community/main/infrastructure
- Install Docker.
The tests/run-tests.sh script builds a Docker image suitable for running the tests, with all the required dependencies, based on Debian GNU/Linux buster. The following volumes are bind-mounted:
- the root of the infrastructure repository
The working directory, in the container, is the root of the infrastructure repository.
Manually run commands similar to those in playbooks/gitlab/roles/gitlab-ci/tasks/gitlab-ci.yml (this could be turned into a playbook running on localhost with sudo).
Running tests that do not require OpenStack
PYTEST_ADDOPTS='-m "not openstack_integration"' tests/run-tests.sh
Running tests that require OpenStack
The tests running without OpenStack only cover a fraction of what Enough does. To verify that a playbook actually works, it needs to be run on a live host and tests must check that it is working. For instance, the tests for Weblate request that the Weblate server send a mail and verify that it reaches the postfix server.
When modifying a role or a playbook in the playbooks/ABC directory, one is expected to add a test for the new behavior and verify that it runs successfully:
tests/run-tests.sh tox -e ABC
When relevant, integration tests should be created as icinga monitoring checks so they can be run on a regular basis in the production environment to verify that it keeps working.
An SSH authentication key is generated by the tests that require OpenStack. The private key is named infrastructure_key, the public key is named infrastructure_key.pub, and both are located in the .tox/ABC/.pytest_cache/d/dotenough/ABC.test/ directory.
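For instance, assuming the bind service, the key path can be derived and used with ssh. This is a sketch: the debian login matches the session examples later in this guide, and bind-host is the test host name used elsewhere in this document.

```shell
# Build the path of the private key generated for a given test environment
SERVICE=bind
KEY=".tox/$SERVICE/.pytest_cache/d/dotenough/$SERVICE.test/infrastructure_key"
echo "$KEY"
# ssh -i "$KEY" debian@bind-host   # commented out: requires live test hosts
```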
Obtain an API token
Most integration tests need a publicly available DNS server. The https://api.enough.community service provides a publicly available API to delegate a domain to the designated DNS server. Members of the enough group can sign in; others can request access.
The Token: value displayed after signing in at https://api.enough.community must be set as the value of the ENOUGH_API_TOKEN environment variable.
ENOUGH_API_TOKEN=XXXXXXX tests/run-tests.sh tox -e bind
Set the OpenStack credentials using clouds.yml
Assuming you have your own OpenStack tenant or one was provided to you, the clouds.yml file must be copied to tests/clouds.yml.
The openstack_provider variable must be added to the clouds.yml file:
---
openstack_provider: fuga
clouds:
  ...

or

---
openstack_provider: ovh
clouds:
  ...
It must define two cloud environments: production and clone (for backup restoration testing purposes). Here is a complete example:
---
openstack_provider: fuga
clouds:
  production:
    auth:
      auth_url: "https://identity.api.ams.fuga.cloud:443/v3"
      user_id: "6a79dfb7410c4884fceb23031189b"
      password: "qecOSdBAH6ZjE4M2UnZbnnWdsZihe"
      user_domain_id: "99009ec244eebb85827488bb2aed4"
      project_domain_id: "9900e2c244eebb85827488bb2aed4"
      project_id: "203e72ec8a85b9dc808719e452902"
    region_name: "ams"
    interface: "public"
    identity_api_version: 3
  clone:
    auth:
      auth_url: "https://identity.api.ams.fuga.cloud:443/v3"
      user_id: "3b40cf2cb71b4bdc95c009347445f"
      password: "RBX0S2BdXWlBztUKkPWcAfnNFSNNj"
      user_domain_id: "de844dabe43948cb87ed24e2d5c438a9"
      project_domain_id: "de8abe43948cb87ed24e2d5c438a9"
      project_id: "82cb2f62a70f5928e3a4686622e39"
    region_name: "ams"
    interface: "public"
    identity_api_version: 3
tests/run-tests.sh tox -e <service name>
If the command fails, because of a network failure or any other reason, it is safe to run it again. It is idempotent and will re-use the environment from the failed test.
The list of service names (i.e. tox test environments) is in the tox.ini file. It is possible to skip some steps to speed up test debugging:
$ tox -e bind -- --help playbooks
...
custom options:
  --enough-no-create    Do not run the create step
  --enough-no-tests     Do not run the tests step
  --enough-no-destroy   Do not run the destroy step
...
$ tests/run-tests.sh tox -e authorized_keys -- --enough-no-destroy playbooks/authorized_keys/tests
The domain name used for testing is in .pytest_cache/d/dotenough/bind.test/inventory/group_vars/all/domain.yml, where bind must be replaced by the name of the service. It is handy for debugging (e.g. browsing the web interface of a service, or sshing to a machine that failed to run properly).
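For example, the path for the bind service can be built like this. This is a sketch: the file only exists after the create step has run.

```shell
# Locate the domain.yml of a given service's test environment
SERVICE=bind
DOMAIN_FILE=".pytest_cache/d/dotenough/$SERVICE.test/inventory/group_vars/all/domain.yml"
echo "$DOMAIN_FILE"
# cat "$DOMAIN_FILE"   # commented out: only present after the tests created the hosts
```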
To verify that a service (icinga for instance) can be upgraded from a given Enough version (2.1.14 for instance), use:
$ tests/run-upgrade-tests.sh 2.1.14 icinga -- \
    --enough-no-destroy \
    --enough-driver=libvirt playbooks/icinga/tests
...
run-upgrade-tests.sh performs the following steps:
- check out the 2.1.14 version of Enough and run tox -e icinga from the 2.1.14 directory, keeping the hosts because of --enough-no-destroy
- run tox -e icinga from the current version, re-using the hosts with the icinga version installed from 2.1.14
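The two steps above can be sketched as follows. This is only a sketch: the checkout location (the enough-2.1.14 directory name is hypothetical) and the exact options are handled by run-upgrade-tests.sh itself.

```shell
# 1. run the icinga tests from the 2.1.14 checkout, keeping the hosts
#    (enough-2.1.14 is a hypothetical directory containing the 2.1.14 version)
(cd enough-2.1.14 && tests/run-tests.sh tox -e icinga -- --enough-no-destroy playbooks/icinga/tests)
# 2. re-run from the current version, re-using the hosts created in step 1
tests/run-tests.sh tox -e icinga -- --enough-no-destroy playbooks/icinga/tests
```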
SSH to a host under test
If tests/run-tests.sh tox -e chat was run and the hosts have not been destroyed because the --enough-no-destroy option was set, the following can be used to ssh to a host:
$ tests/run-tests.sh tests/ssh chat bind-host
debian@bind-host:~$
$ tests/run-tests.sh tests/ssh chat bind-host hostname
bind-host
To run the tests manually within the test container:
$ tests/run-tests.sh bash
user@6642e3759c43:~/infrastructure$ tox -e flake8
Use the --log-cli-level switch in order to:
- enable log display during test run (live logging)
- control the test log level
$ tests/run-tests.sh tox -e py3 -- --log-cli-level=INFO -s -x tests/enough/common/test_openstack.py
To execute only one test:
tests/run-tests.sh tox -e py3 -- tests/enough/common/test_openstack.py::test_heat_definition
There should not be any leftovers after a test involving OpenStack fails, because the fixtures are supposed to clean up thoroughly. But bugs are to be expected in a test environment and it may be necessary to manually remove leftovers, using the openstack command like so:
tests/run-tests.sh env OS_CLIENT_CONFIG_FILE=tests/clouds.yml openstack --os-cloud production stack list
tests/run-tests.sh env OS_CLIENT_CONFIG_FILE=tests/clouds.yml openstack --os-cloud clone stack list
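For instance, a leftover Heat stack named bind-host (a hypothetical name; use the names reported by stack list) could be deleted with:

```shell
# Delete a leftover stack without confirmation and wait for completion
tests/run-tests.sh env OS_CLIENT_CONFIG_FILE=tests/clouds.yml \
  openstack --os-cloud production stack delete --yes --wait bind-host
```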
In case leftovers are manually deleted using the stack delete command, the following directory must be manually removed:
Execute Ansible on the test infrastructure
Display the content of a file on a host, assuming the icinga test environment is used:
$ tests/run-tests.sh .tox/icinga/bin/ansible bind-host \
    -i .tox/icinga/.pytest_cache/d/dotenough/icinga.test/inventory \
    -m raw -a 'cat /path/to/a/file'
Check the value of an ansible variable:
$ tests/run-tests.sh .tox/icinga/bin/ansible bind-host \
    -i .tox/icinga/.pytest_cache/d/dotenough/icinga.test/inventory \
    -m debug -a var=ansible_host
The Ansible part of the repository groups playbooks and roles in separate directories to reduce the number of files to consider when working on a playbook or a service.
- playbooks/authorized_keys: distribute SSH public keys
- playbooks/backup: daily VM snapshots
- playbooks/bind: DNS server and client
- playbooks/icinga: resources monitoring
- playbooks/infrastructure: VM creation and firewalling
- playbooks/postfix: outgoing mail relay for all VMs
The top-level directory contains the playbook that applies to the enough.community production environment. It imports playbooks found in the playbooks directory.
Managing python dependencies
- adding a new dependency: pipenv install thepackage
- creating the requirements*.txt files needed to create a distribution: pipenv run pipenv_to_requirements -f
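A typical dependency update therefore looks like this (the requests package is a hypothetical example, and committing the regenerated files is an assumption about the workflow):

```shell
pipenv install requests                 # add the new dependency to the Pipfile
pipenv run pipenv_to_requirements -f    # regenerate the requirements*.txt files
git add Pipfile Pipfile.lock requirements*.txt
```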