Test Development

This is an overview of style and rules we follow when automating system functional test cases.

Code style

All Python code in this project must be Python 3 compatible. Moreover, no Python 2 compatibility layers or extensions should be added into the code, with the exception of pytest plugins (code in the plugin directory).

We follow PEP 8 with a single exception regarding the maximum line length: we use 80 characters as a soft limit, so this rule can be broken if readability would otherwise suffer, as long as the line length doesn’t go over 100 characters (the hard limit).

Reading Configuration Values

To access the USM QE configuration, use the UsmConfig object:

from usmqe.usmqeconfig import UsmConfig

CONF = UsmConfig()
username = CONF.config["usmqe"]["username"]

Obviously this assumes that the username option has been specified in a config file which is referenced in the conf/main.yaml file. The minimal yaml file for the previous example to work would look like this:

usmqe:
  username: admin

To access data from the host inventory, use functions provided by the InventoryManager class from the ansible.inventory.manager module. If the inventory_file option is specified correctly in the conf/main.yaml file, then an instance of this class is available under the inventory key after the configuration is loaded, e.g. CONF.inventory.
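
The following is a minimal sketch of reading host data through CONF.inventory using the InventoryManager API (get_hosts() and get_groups_dict()); the gluster_servers group name is only an assumption for illustration and should be replaced by a group actually defined in your inventory file:

from usmqe.usmqeconfig import UsmConfig

CONF = UsmConfig()

# all hosts known to the inventory
all_hosts = [host.name for host in CONF.inventory.get_hosts()]

# hosts grouped by inventory group name
groups = CONF.inventory.get_groups_dict()
gluster_nodes = groups.get("gluster_servers", [])  # assumed group name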

Unit Tests

We have unit tests of the usmqe-tests project itself, which cover some code in the usmqe module and run flake8 checks on this module and the test code. You are encouraged to add unit tests when adding code into the usmqe module and to run the tests before submitting a pull request.

Code related to unit testing:

  • usmqe/unit_tests directory, which contains the pytest configuration (pytest.ini, conftest.py) and the code of the unit tests themselves
  • tox.ini file
  • .travis.yml config for Travis CI integration, which uses tox

Unit test execution

To execute the unit tests, just run the tox command in the root directory of the usmqe-tests repo.

Moreover, the unit tests are executed for each new pull request via Travis CI.

Integration Tests

We have integration tests of the usmqe-tests project itself, which cover some code in the usmqe module. You are encouraged to add integration tests when adding code into the usmqe module and to run the tests before submitting a pull request. These tests may include tests that use the global configuration via UsmConfig.

The usmqe/integration_tests directory contains the code of the integration tests.

To execute the integration tests, just run the pytest command in the usmqe/integration_tests directory.

Structure of Functional Tests

Setup of Gluster trusted storage pool(s) is done prior to test execution and is fully automated via gdeploy config files in usmqe-setup/gdeploy_config/.

No pytest fixture or test case creates or removes Gluster trusted storage pool(s) on its own.

The test cases are implemented as pytest functions, stored in logical chunks in Python source files (Python modules).

Test cases which require an imported cluster (aka trusted storage pool) use the pytest fixture imported_cluster (see the sketch after this list), which:

  • Doesn’t create the cluster, but just checks if the cluster is already imported and tries to import it if it’s not imported already. If the import fails or no suitable cluster is available for import, it raises an error.
  • Identifies a cluster suitable for import using the node defined by the usm_cluster_member option in the usmqe configuration file.
  • Returns information about the imported cluster via the value of the fixture passed to the test function (a cluster object), which includes the cluster name, cluster id and volumes in the cluster.
  • Runs cluster unmanage during teardown if the cluster was imported during the setup phase.
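
A minimal sketch of a test case using this fixture follows; the cluster.name and cluster.volumes attributes are illustrative assumptions, not the authoritative interface of the cluster object:

import pytest


@pytest.mark.happypath
@pytest.mark.testready
def test_cluster_reports_volumes(imported_cluster):
    """
    The fixture hands over a cluster object describing the already
    imported trusted storage pool; no pool is created here.
    """
    cluster = imported_cluster
    assert cluster.name  # hypothetical attribute
    assert len(cluster.volumes) > 0  # hypothetical attribute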

Test cases are tagged with pytest markers:

  • Positive test case: @pytest.mark.happypath
  • Negative test case: @pytest.mark.negative
  • TODO: marker for gluster related tests
  • TODO: marker for volume type
  • TODO: marker for status of gluster profiling
  • TODO: marker for human readable name
  • Marker for a working/stable test: currently @pytest.mark.testready
  • TODO: marker for wip test?
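
A short sketch of how the already defined markers are applied to test functions (the function names and bodies are placeholders for illustration):

import pytest


@pytest.mark.happypath
@pytest.mark.testready
def test_example_positive():
    # positive (happy path) test case which is considered stable
    pass


@pytest.mark.negative
def test_example_negative():
    # negative test case
    pass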

Note

Open questions, enhancements:

  • fixture to read markers and change import accordingly
  • fixture to read markers and check if the cluster matches the requirements (e.g. do we have the specified volume type there?)
  • multiple clusters

Tagging makes it possible to run, for example, only the tests related to a particular volume type which require profiling to be enabled.
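
For example, restricting a pytest run via the markers defined above could look like this (the exact marker expression depends on which markers the selected tests carry):

pytest -m "happypath and testready"
pytest -m negative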

All tests should use a proper pytest fixture for setup and teardown, if setup or teardown is needed. All objects created during testing should be removed after the test run. The same applies to fixtures: if something is created during the setup phase, it should be removed during teardown. There should not be any remains after the test run.
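
A minimal sketch of this setup/teardown pattern with a yield fixture; create_test_volume and delete_test_volume are hypothetical helpers standing in for real resource management:

import pytest


def create_test_volume():
    # hypothetical helper standing in for real object creation
    return {"name": "volume_example"}


def delete_test_volume(volume):
    # hypothetical helper standing in for real object removal
    pass


@pytest.fixture
def example_volume():
    # setup: create the object the test needs
    volume = create_test_volume()
    yield volume
    # teardown: remove everything created during setup,
    # so that no remains are left after the test run
    delete_test_volume(volume)


def test_volume_has_name(example_volume):
    assert example_volume["name"]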

Exceptions

There are only two exceptions to the rules listed above.

Test cases which test the import or unmanage cluster operations themselves should not use the imported_cluster fixture, but should handle the import directly in the code of the test case.

Such cases should be stored in a separate module (Python source file) so that they can be part of separate test runs.

The same would apply to CRUD happy path tests, which would be stored in one Python source file where they share objects created and deleted during testing. These tests should run in the same order as they are written in the file. Such cases would be run at the beginning of testing because they leave created/imported clusters behind for further testing. This exception exists because cluster creation has extremely high resource demands.

Note

Note that we don’t have any CRUD happy path tests and are not going to have them until we need to test day 1 or day 2 operations, which include creating or deleting Gluster clusters, volumes or other cluster components.