Write scenario tests for your charm

See previous: Write unit tests for your charm

This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous chapter's branch:

git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git
cd juju-sdk-tutorial-k8s
git checkout 08_unit_testing
git checkout -b 09_scenario_testing

In the previous chapter we checked the basic functionality of our charm by writing unit tests.

However, there is one more type of test we still need to cover: functional tests.

In the charming world, the current recommendation is to write functional tests using the ‘scenario’ model popularised by the ops-scenario library.

Scenario is a state-transition testing SDK for operator framework charms.
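
In essence, every scenario test arranges an input state, fires a single Juju event against it, and asserts on the state (or action output) that comes out the other side. As a minimal illustration of that pattern, here is a sketch using a hypothetical toy charm (not this tutorial's charm):

import ops
import scenario


class HelloCharm(ops.CharmBase):
    """A hypothetical, minimal charm used only to illustrate the pattern."""

    def __init__(self, framework):
        super().__init__(framework)
        framework.observe(self.on.start, self._on_start)

    def _on_start(self, event):
        self.unit.status = ops.ActiveStatus("hello")


def test_start_sets_active_status():
    # Arrange: declare the charm under test and the state of the world.
    ctx = scenario.Context(HelloCharm, meta={"name": "hello"})
    state_in = scenario.State()
    # Act: emit the start event against that state.
    state_out = ctx.run("start", state_in)
    # Assert: check the state the charm left behind.
    assert state_out.unit_status == ops.ActiveStatus("hello")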

In this chapter you will write a scenario test to check that the get-db-info action that you defined in an earlier chapter behaves as expected.

Contents:

  1. Prepare your test environment
  2. Prepare your test directory
  3. Write your scenario test
  4. Run the test
  5. Review the final code

Prepare your test environment

Install ops-scenario:

pip install ops-scenario

In your project root’s existing tox.ini file, add the following:

...

[testenv:scenario]
description = Run scenario tests
deps =
    pytest
    cosl
    ops-scenario
    coverage[toml]
    -r {tox_root}/requirements.txt
commands =
    coverage run --source={[vars]src_path} \
                 -m pytest \
                 --tb native \
                 -v \
                 -s \
                 {posargs} \
                 {[vars]tests_path}/scenario
    coverage report

Also adjust env_list so that the scenario tests run when you invoke tox without arguments:

env_list = unit, scenario
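
Note that the scenario environment reuses the src_path and tests_path variables defined near the top of the same file. If your tox.ini was generated by charmcraft init, as in the earlier chapters of this series, its [vars] section should already contain something like:

[vars]
src_path = {tox_root}/src
tests_path = {tox_root}/tests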

Prepare your test directory

By convention, scenario tests are kept in a separate directory, tests/scenario. Create it as below:

mkdir -p tests/scenario
cd tests/scenario

Write your scenario test

In your tests/scenario directory, create a new file test_charm.py and add the test below. This test checks the behaviour of the get-db-info action that you set up in a previous chapter. It first sets up the test context with the appropriate metadata, then defines the input state, then runs the action and, finally, checks that the results match the expected values.

import unittest.mock

import scenario

from charm import FastAPIDemoCharm


@unittest.mock.patch("charm.LogProxyConsumer")
@unittest.mock.patch("charm.MetricsEndpointProvider")
@unittest.mock.patch("charm.GrafanaDashboardProvider")
def test_get_db_info_action(*_):
    # Use scenario.Context to declare what charm we are testing.
    ctx = scenario.Context(
        FastAPIDemoCharm,
        meta={
            "name": "demo-api-charm",
            "containers": {"demo-server": {}},
            "peers": {"fastapi-peer": {"interface": "fastapi_demo_peers"}},
            "requires": {
                "database": {
                    "interface": "postgresql_client",
                }
            },
        },
        config={
            "options": {
                "server-port": {
                    "default": 8000,
                }
            }
        },
        actions={
            "get-db-info": {
                "params": {"show-password": {"default": False, "type": "boolean"}}
            }
        },
    )

    # Declare the input state.
    state_in = scenario.State(
        leader=True,
        relations=[
            scenario.Relation(
                endpoint="database",
                interface="postgresql_client",
                remote_app_name="postgresql-k8s",
                local_unit_data={},
                remote_app_data={
                    "endpoints": "127.0.0.1:5432",
                    "username": "foo",
                    "password": "bar",
                },
            ),
        ],
        containers=[
            scenario.Container(name="demo-server", can_connect=True),
        ],
    )

    # Run the action with the defined state and collect the output.
    action = scenario.Action("get-db-info", {"show-password": True})
    action_out = ctx.run_action(action, state_in)

    assert action_out.results == {
        "db-host": "127.0.0.1",
        "db-port": "5432",
        "db-username": "foo",
        "db-password": "bar",
    }
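
As an optional extra, you can exercise the other branch of the action too. Assuming the handler you wrote in the earlier chapter only includes the credentials when show-password is true, a second test along the following lines could be added to the same test_charm.py (which already contains the required imports). If you add it, the test run in the next step will collect two tests instead of one:

@unittest.mock.patch("charm.LogProxyConsumer")
@unittest.mock.patch("charm.MetricsEndpointProvider")
@unittest.mock.patch("charm.GrafanaDashboardProvider")
def test_get_db_info_action_without_password(*_):
    # Same context and input state as in test_get_db_info_action above.
    ctx = scenario.Context(
        FastAPIDemoCharm,
        meta={
            "name": "demo-api-charm",
            "containers": {"demo-server": {}},
            "peers": {"fastapi-peer": {"interface": "fastapi_demo_peers"}},
            "requires": {"database": {"interface": "postgresql_client"}},
        },
        config={"options": {"server-port": {"default": 8000}}},
        actions={
            "get-db-info": {
                "params": {"show-password": {"default": False, "type": "boolean"}}
            }
        },
    )
    state_in = scenario.State(
        leader=True,
        relations=[
            scenario.Relation(
                endpoint="database",
                interface="postgresql_client",
                remote_app_name="postgresql-k8s",
                remote_app_data={
                    "endpoints": "127.0.0.1:5432",
                    "username": "foo",
                    "password": "bar",
                },
            ),
        ],
        containers=[scenario.Container(name="demo-server", can_connect=True)],
    )

    # Run the action with show-password explicitly set to False.
    action = scenario.Action("get-db-info", {"show-password": False})
    action_out = ctx.run_action(action, state_in)

    # The handler only adds the credentials when show-password is true,
    # so only the host and port should come back.
    assert action_out.results == {
        "db-host": "127.0.0.1",
        "db-port": "5432",
    }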

Run the test

In your Multipass Ubuntu VM shell, run your scenario test as below:

ubuntu@charm-dev:~/fastapi-demo$ tox -e scenario     

You should get an output similar to the one below:

scenario: commands[0]> coverage run --source=/home/ubuntu/fastapi-demo/src -m pytest --tb native -v -s /home/ubuntu/fastapi-demo/tests/scenario
=============================================== test session starts ===============================================
platform linux -- Python 3.10.13, pytest-8.0.2, pluggy-1.4.0 -- /home/ubuntu/fastapi-demo/.tox/scenario/bin/python
cachedir: .tox/scenario/.pytest_cache
rootdir: /home/ubuntu/fastapi-demo
collected 1 item

tests/scenario/test_charm.py::test_get_db_info_action PASSED

================================================ 1 passed in 0.21s ================================================
scenario: commands[1]> coverage report
Name           Stmts   Miss  Cover
----------------------------------
src/charm.py     118     51    57%
----------------------------------
TOTAL            118     51    57%
  scenario: OK (0.51=setup[0.03]+cmd[0.40,0.07] seconds)
  congratulations :) (0.54 seconds)

Congratulations, you have written your first scenario test!

Review the final code

For the full code see: 09_scenario_testing

For a comparative view of the code before and after this chapter, see: Comparison

See next: Write integration tests for your charm

Contributors: @bschimke95
