# Developer manual
This document is aimed at Quality-time developers and maintainers and describes how to develop, test, document, release, and maintain Quality-time. To read more about how Quality-time is structured, see the software documentation.
## Developing
### Running Quality-time locally
When developing Quality-time, there are two ways to run it locally: completely in Docker (scenario 1 below) or partly in Docker and partly from shells (scenario 2 below).
If you want to get Quality-time up and running quickly, for example for a demo, we recommend scenario 1. For software development, we recommend scenario 2.
#### Install prerequisites
Prerequisites for both scenarios are Docker and Git. For scenario 2 you also need Python 3.12, uv, and a recent version of Node.js (we currently use Node.js v22).
Clone this repository:

```
git clone git@github.com:ICTU/quality-time.git
cd quality-time
```

If you don’t have a public key in your GitHub account, use:

```
git clone https://github.com/ICTU/quality-time.git
cd quality-time
```
#### Scenario 1: run all components in Docker

To run Quality-time completely in Docker, open a terminal and start all containers with docker compose:

```
docker compose up
```
The advantage of this scenario is that Python and Node.js don’t need to be installed. However, as building the containers can be time-consuming we don’t recommend this for working on the Quality-time source code.
#### Scenario 2: run bespoke components from shells and other components in Docker
In this scenario, we run the bespoke components from shells and the standard components and test components as Docker containers.
The advantage of this scenario is that you don’t need to rebuild the bespoke container images while developing. Also, the API-server component and the frontend component have auto-reload, meaning that when you edit the code, they will restart and run the new code automatically. The collector and notifier components don’t have auto-reload and need to be stopped and started by hand to activate new code.
##### Start standard and test components in Docker

Open a terminal and start the standard containers and test components with docker compose:

```
docker compose up database ldap phpldapadmin mongo-express testdata
```
PHP-LDAP-admin is served at http://localhost:3890 and can be used to inspect and edit the LDAP database. Click login, check the “Anonymous” box, and click “Authenticate” to log in.
Mongo-express is served at http://localhost:8081 and can be used to inspect and edit the database contents.
The test data is served at http://localhost:8000.
There are two users defined in the LDAP database:

- User `Jane Doe` has user id `jadoe` and password `secret`.
- User `John Doe` has user id `jodoe` and password `secret`.
##### Start the API-server

Open another terminal and run the API-server:

```
cd components/api_server
uv venv
. .venv/bin/activate  # on Windows: .venv\Scripts\activate
ci/pip-install.sh
python src/quality_time_server.py
```
The API of the API-server is served at http://localhost:5001, e.g. access http://localhost:5001/api/internal/report to get the available reports combined with their recent measurements.
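For example, assuming the API-server is running locally and you have curl installed, you can retrieve the reports from a shell with:

```
curl http://localhost:5001/api/internal/report
```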
Note: if you’re new to Python virtual environments:

- Creating a virtual environment (`uv venv`) has to be done only once. Only when the Python version changes do you need to recreate the virtual environment.
- Activating the virtual environment (`. .venv/bin/activate`) has to be done every time you open a new shell and want to use the Python installed in the virtual environment.
- Installing the requirements (`ci/pip-install.sh`) has to be repeated when the dependencies, specified in the requirements files, change.

See also:

- See the Python docs for more information on creating virtual environments.
- See this Gist on how to automatically activate and deactivate Python virtual environments when changing directories.
##### Start the collector

Open another terminal and run the collector:

```
cd components/collector
uv venv
. .venv/bin/activate  # on Windows: .venv\Scripts\activate
ci/pip-install.sh
python src/quality_time_collector.py
```
By default, the collector attempts to write a health check time stamp to `/home/collector/health_check.txt` every few seconds. If that fails, you’ll see these messages in the log:

```
ERROR:root:Could not write health check time stamp to /home/collector/health_check.txt: [Errno 2] No such file or directory: '/home/collector/health_check.txt'
```

To prevent the error and the resulting log messages, you can set the `HEALTH_CHECK_FILE` environment variable to a location that can be written on your machine, for example:

```
export HEALTH_CHECK_FILE=/tmp/health_check.txt
```
##### Start the frontend

Open another terminal and run the frontend:

```
cd components/frontend
npm install --ignore-scripts
npm run start
```
The frontend is served at http://localhost:3000.
##### Start the notifier

Optionally, open yet another terminal and run the notifier:

```
cd components/notifier
uv venv
. .venv/bin/activate  # on Windows: .venv\Scripts\activate
ci/pip-install.sh
python src/quality_time_notifier.py
```
##### Running the proxy component

The proxy component is mapped to the `www` service in the docker compose file, which runs on port 80 by default. This container runs an unprivileged version of Nginx, which does not require additional capabilities when run on a higher port, which you can do by specifying `PROXY_PORT`.
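For example, assuming the compose file picks up `PROXY_PORT` from the environment as described above, the proxy can be run on port 8080 with:

```
export PROXY_PORT=8080
docker compose up www
```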
### Coding style
This section contains some notes on coding style used in this project. It’s far from complete, however.
#### Python

Most of the coding standards are enforced by the quality checks.

Methods that can or should be overridden in subclasses have a name with one leading underscore, e.g. `_api_url(self) -> URL`. Methods that should only be used by a class instance itself have a name with two leading underscores, e.g. `__fields(self) -> List[str]`.
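A minimal illustration of this naming convention (with made-up class and method names, not actual Quality-time code):

```python
class ExampleSource:
    """Illustrative class, not part of the Quality-time code base."""

    def _api_url(self) -> str:
        """One leading underscore: subclasses may override this method."""
        return "https://example.org/api"

    def __fields(self) -> list[str]:
        """Two leading underscores: only for use within this class itself."""
        return ["name", "description"]

    def summary(self) -> str:
        """Public method that uses both helpers."""
        return f"{self._api_url()}: {', '.join(self.__fields())}"
```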
Production code and unit tests are organized in parallel hierarchies. Each Python component has a `src` folder with the production code and a `tests` folder with the unit tests. The folder layout of the `tests` hierarchy follows the layout of the `src` hierarchy.
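For example, a hypothetical component could be laid out like this (folder names are only illustrative):

```
components/<component>/
├── src/
│   └── some_package/
│       └── some_module.py
└── tests/
    └── some_package/
        └── test_some_module.py
```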
#### JavaScript

Functional React components are preferred over class-based components.

Production code and unit tests are organized together in one `src` folder hierarchy.
### Adding metrics and sources
Quality-time has been designed with the goal of making it easy to add new metrics and sources. The data model specifies all the details about metrics and sources, like the scale and unit of metrics, and the parameters needed for sources. In general, to add a new metric or source, only the data model and the collector need to be changed.
#### Adding a new metric
To add a new metric you need to make two changes to the data model:
1. Add a specification of the new metric to the data model. See the documentation of the shared data model component for a description of the data model and the different metric fields.
2. Update the `metric_type` parameter of the `quality_time` source in the data model. You need to add the human readable name of the new metric to the `values` list of the `metric_type` parameter and you need to add a key-value pair to the `api_values` mapping of the `metric_type` parameter, where the key is the human readable name of the metric and the value is the metric key (see the sketch below).
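For a hypothetical "Size (LOC)" metric with key `loc` (used as the example below), the change to the `metric_type` parameter would look roughly like this. This is only a sketch: the surrounding parameter definition is elided and the parameter class name is illustrative, not the actual class used in the data model:

```python
metric_type=SomeMultipleChoiceParameter(          # illustrative class name
    ...
    values=[..., "Size (LOC)", ...],              # 1. add the human readable name
    api_values={..., "Size (LOC)": "loc", ...},   # 2. map the human readable name to the metric key
    ...
)
```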
Be sure to run the unit tests of the shared data model component after adding a metric to the data model, to check the integrity of the data model. If you forget to do step 2 above, one of the tests will fail. Other than changing the data model, no code changes are needed to support new metrics.
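The data model is part of the shared code component, so its unit tests can be run in the same way as the other components’ tests (see the Testing section below):

```
cd components/shared_code
ci/unittest.sh
```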
Suppose we want to add a lines of code metric to the data model, to measure the size of software. We would add the metric to the `METRICS` model in `src/shared_data_model/metrics.py`:

```python
"""Data model metrics."""

from .meta.metric import ..., Metric, Tag, Unit

...

METRICS = {
    ...
    "loc": Metric(
        name="Size (LOC)",
        description="The size of the software in lines of code.",
        rationale="The size of software is correlated with the effort it takes to maintain it. Lines of code is "
        "one of the most widely used metrics to measure size of software.",
        unit=Unit.LINES,
        target="30000",
        near_target="35000",
        sources=["manual_number"],
        tags=[Tag.MAINTAINABILITY],
    ),
    ...
}
```
Since we have no (automated) source for the size metric yet, we have added manual number to the list of sources. We also need to add the size metric to the list of metrics that the manual number source supports:
"""Manual number source."""
from ..meta.source import Source
...
from ..parameters import IntegerParameter
MANUAL_NUMBER = Source(
name="Manual number",
description="A number entered manually by a Quality-time user.",
parameters=dict(
number=IntegerParameter(
...
metrics=[
...
"loc", # Add the size metric here
...
],
)
),
)
After restarting the API-server, you should be able to add the new metric to a quality report and select manual number as a source for the new metric.
#### Adding a new source
To add support for a new source, the source (including a logo) needs to be added to the data model. In addition, code to retrieve and parse the source data needs to be added to the collector component, including unit tests of course.
##### Adding a new source to the data model
To add a new source you need to make three changes to the data model:
1. Add a specification of the source to the data model. See the documentation of the shared data model component for a description of the data model and the different source fields.
2. Update the `source_type` parameter of the `quality_time` source in the data model. You need to add the human readable name of the new source to the `values` list of the `source_type` parameter and you need to add a key-value pair to the `api_values` mapping of the `source_type` parameter, where the key is the human readable name of the source and the value is the metric source key (`cloc` in the example below).
3. Add a small PNG file of the logo in `components/shared_code/src/shared_data_model/logos`. Make sure the filename of the logo is `<source_type>.png`. The frontend will use the `api/internal/logo/<source_type>` endpoint to retrieve the logo.
Be sure to run the unit tests of the shared data model component after adding a source to the data model, to check the integrity of the data model. If you forget to do step 2 above, one of the tests will fail.
Suppose we want to add cloc as source for the LOC (size) metric and read the size of source code from the JSON file that cloc can produce. We would add a `cloc.py` to `src/shared_data_model/sources/`:

```python
"""cloc source."""

from ..meta.source import Source
from ..parameters import access_parameters

CLOC = Source(
    name="cloc",
    description="cloc is an open-source tool for counting blank lines, comment lines, and physical lines of source "
    "code in many programming languages.",
    url="https://github.com/AlDanial/cloc",
    parameters=dict(
        **access_parameters(["loc"], source_type="cloc report", source_type_format="JSON")
    ),
)
```
Because cloc can be used to measure the lines of code metric, we need to add the cloc source to the list of sources that can measure lines of code:
```python
METRICS = {
    ...
    "loc": Metric(
        name="Size (LOC)",
        description="The size of the software in lines of code.",
        rationale="The size of software is correlated with the effort it takes to maintain it. Lines of code is "
        "one of the most widely used metrics to measure size of software.",
        unit=Unit.LINES,
        target="30000",
        near_target="35000",
        sources=["cloc", "manual_number"],  # Add cloc here
        tags=[Tag.MAINTAINABILITY],
    ),
    ...
}
```
##### Adding a new source to the collector

To specify how Quality-time can collect data from the source, a new subclass of `SourceCollector` needs to be created.
Add a new Python package to the `source_collectors` folder with the same name as the source type in the data model. For example, if the new source type is `cloc`, the folder name of the collectors is also `cloc`. Next, create a module for each metric that the new source supports. For example, if the new source `cloc` supports the metric LOC (size) and the metric source-up-to-dateness, you would create two modules, each containing a subclass of `SourceCollector`: a `ClocLOC` class in `cloc/loc.py` and a `ClocSourceUpToDateness` class in `cloc/source_up_to_dateness.py`. If code can be shared between these classes, add a `cloc/base.py` file with a `ClocBaseClass`.
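For the cloc example, the resulting package could look like this (a sketch; only `loc.py` is worked out below):

```
src/source_collectors/cloc/
├── __init__.py
├── base.py                   # optional shared ClocBaseClass
├── loc.py                    # ClocLOC
└── source_up_to_dateness.py  # ClocSourceUpToDateness
```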
To reduce duplication, `SourceCollector` has several abstract subclasses. The class hierarchy is currently as follows:

- `SourceCollector`
  - `UnmergedBranchesSourceCollector`: for sources that collect data for the number of unmerged branches metric
  - `TimeCollector`: for sources that collect time since or until a certain moment in time
    - `TimePassedCollector`: for source-up-to-dateness
      - `JenkinsPluginSourceUpToDatenessCollector`: for getting the source-up-to-dateness from Jenkins plugins
    - `TimeRemainingCollector`: for sources that measure the time remaining until a future date
  - `SourceVersionCollector`: for sources that report version numbers
  - `SlowTransactionsCollector`: for sources that report slow performance transactions
  - `JenkinsPluginCollector`: for sources that collect their data from Jenkins plugins
  - `FileSourceCollector`: for sources that parse files
    - `CSVFileSourceCollector`: for sources that parse CSV files
    - `HTMLFileSourceCollector`: for sources that parse HTML files
    - `JSONFileSourceCollector`: for sources that parse JSON files
    - `XMLFileSourceCollector`: for sources that parse XML files
To support cloc as source for the LOC (size) metric we need to read the size of source code from the JSON file that cloc can produce. We add a `cloc/loc.py` file and in `loc.py` we create a `ClocLOC` class with `JSONFileSourceCollector` as super class. The only method that needs to be implemented is `_parse_source_responses()` to get the amount of lines from the cloc JSON file. This could be as simple as:
"""cloc lines of code collector."""
from base_collectors import JSONFileSourceCollector
from model import SourceMeasurement, SourceResponses
class ClocLOC(JSONFileSourceCollector):
"""cloc collector for size/lines of code."""
async def _parse_source_responses(self, responses: SourceResponses) -> SourceMeasurement:
loc = 0
for response in responses:
for key, value in (await response.json()).items():
if key not in ("header", "SUM"):
loc += value["code"]
return SourceMeasurement(value=str(loc))
Most collector classes are a bit more complex than that, because to retrieve the data they have to deal with APIs and while parsing the data they have to take parameters into account. See the collector source code for more examples.
##### Writing and running unit tests

To test the `ClocLOC` collector class, we add unit tests to the collector tests package, for example:
"""Unit tests for the cloc source."""
from ...source_collector_test_case import SourceCollectorTestCase
class ClocLOCTest(SourceCollectorTestCase):
"""Unit tests for the cloc loc collector."""
SOURCE_TYPE = "cloc"
METRIC_TYPE = "loc"
async def test_loc(self):
"""Test that the number of lines is returned."""
cloc_json = {
"header": {}, "SUM": {}, # header and SUM are not used
"Python": {"nFiles": 1, "blank": 5, "comment": 10, "code": 60},
"JavaScript": {"nFiles": 1, "blank": 2, "comment": 0, "code": 30}}
response = await self.collect(get_request_json_return_value=cloc_json)
self.assert_measurement(response, value="90", total="100")
Note that the `ClocLOCTest` class is a subclass of `SourceCollectorTestCase`, which creates a source and metric for us, specified using `SOURCE_TYPE` and `METRIC_TYPE`, and provides us with helper methods to make it easier to mock sources (`SourceCollectorTestCase.collect()`) and check the results (`SourceCollectorTestCase.assert_measurement()`).
In the case of collectors that use files as source, also add an example file to the test data component.
To run the unit tests:
```
cd components/collector
ci/unittest.sh
```
You should get 100% line and branch coverage.
##### Running quality checks

To run the quality checks:

```
cd components/collector
ci/quality.sh
```
Because the source collector classes register themselves (see `SourceCollector.__init_subclass__()`), Vulture will think the new source collector subclass is unused:

```
ci/quality.sh
src/source_collectors/file_source_collectors/cloc.py:26: unused class 'ClocLOC' (60% confidence)
```
Add `Cloc*` to the `NAMES_TO_IGNORE` in `components/collector/ci/quality.sh` to suppress Vulture’s warning.
## Testing

This section assumes you have created a Python virtual environment, activated it, and installed the requirements for each Python component, and that you have installed the requirements for the frontend component, as described above.
### Unit tests
To run the unit tests and measure unit test coverage of the backend components (this assumes you have created a Python virtual environment, activated it, and installed the requirements as described above):
```
cd components/api_server  # or components/collector, components/notifier, components/shared_code, components/frontend
ci/unittest.sh
```
### Quality checks
To run Ruff, mypy, and some other security and quality checks on the backend components, or ESLint and Prettier on the frontend component:
```
cd components/api_server  # or components/collector, components/notifier, components/shared_code, components/frontend
ci/quality.sh
```
### Feature tests

The feature tests currently test all features through the API served by the API-server. They touch all components except the frontend, the collector, and the notifier. To run the feature tests, invoke the script below; it will build and start all the necessary components, run the tests, and gather coverage information:

```
tests/feature_tests/ci/test.sh
```
The `test.sh` shell script will start a server under coverage and then run the feature tests.
It’s also possible to run a subset of the feature tests by passing the feature file as argument:
```
tests/feature_tests/ci/test.sh tests/feature_tests/features/metric.feature
```
### Application tests
The application tests in theory test all components through the frontend, but unfortunately the number of tests is too small to meet that goal. To run the application tests, start all components and then start the tests:
```
docker-compose up -d
docker run -it -w `pwd` -v `pwd`:`pwd` --network=container:qualitytime_www_1 ghcr.io/astral-sh/uv:python3.12-bookworm tests/application_tests/ci/test.sh
```
## Documentation and changelog
The documentation is written in Markdown files and published on Read the Docs.
To generate the documentation locally:
```
cd docs
uv venv
. .venv/bin/activate  # on Windows: .venv\Scripts\activate
ci/pip-install.sh
make html
open build/html/index.html
```
`make html` also generates the `docs/src/reference.md` reference manual, containing an overview of all subjects, metrics, and sources.
To check the correctness of the links:
```
make linkcheck
```
## Releasing

### Preparation
Make sure the release folder is the current directory, and you have the dependencies for the release script installed:
```
cd release
uv venv
. .venv/bin/activate
ci/pip-install.sh
```
Run the release script with `--help` to show help information, including the current release:

```
python release.py --help
```
### Decide the release type

Quality-time adheres to Semantic Versioning, so first you need to decide on the type of release you want to create, in accordance with the release policy:

- Create a major release if an operator needs to make manual changes to the Docker-composition before deploying the next release.
- Create a minor release if the next release contains new or changed functionality.
- Create a patch release if the next release contains only bug fixes.

Before creating a release, it first needs to be tested as a release candidate for a major, minor, or patch release. Aside from testing by the automated pipeline, it can for example be deployed to a test environment or rolled out to early adopters. Before finalizing the release candidate, make sure to update the version overview.
Important: to determine whether a release is major, minor, or patch, compare the changes to the most recent previous release.
### Determine the version bump
Having decided on the release type, there are the following possibilities for the version bump argument that you will be passing to the release script:
- If the current release is a release candidate,
  - and you want to create another release candidate, use `rc`. If the current release is e.g. v3.6.1-rc.0, this will bump the version to v3.6.1-rc.1.
  - and the next release will not be, use `release`. If the current release is e.g. v3.6.1-rc.0, this will bump the version to v3.6.1.
  - and changes have been made since the previous release candidate that impact the release type, use `major`, `minor`, or `patch`. If the current release is e.g. v3.6.1-rc.0, using `minor` will bump the version to v3.7.0-rc.0.
- If the current release is not a release candidate:
  - to create a release candidate, use `major`, `minor`, or `patch`. If the current release is e.g. v3.6.1, using `minor` will bump the version to v3.7.0-rc.0.
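For example, if the current release is v3.6.1-rc.0 and you want to create the next release candidate, you would run (in the release folder, with the virtual environment activated):

```
python release.py rc
```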
### Check the preconditions
The release script will check a number of preconditions before actually creating the release. To check the preconditions without releasing, invoke the release script with the version bump as determined:
```
python release.py --check-preconditions-only <bump>  # Where bump is major, minor, patch, rc, or release
```
If everything is ok, there is no output, and you can proceed creating the release. Otherwise, the release script will list the preconditions that have not been met and need fixing before you can create the release.
### Create the release
To release Quality-time, issue the release command (in the release folder) from an already created release candidate:
```
python release.py release
```
If all preconditions are met, the release script will bump the version numbers, update the change history, commit the changes, push the commit, tag the commit, and push the tag to GitHub. The GitHub Actions release workflow will then build the Docker images and push them to Docker Hub. It will also create a Software Bill of Materials (SBOM) for the release, which can be found under the “Artifacts” header of the workflow run.
The Docker images are `quality-time_database`, `quality-time_renderer`, `quality-time_api_server`, `quality-time_collector`, `quality-time_notifier`, `quality-time_proxy`, `quality-time_testldap`, and `quality-time_frontend`. The images are tagged with the version number. We don’t use the `latest` tag.
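For example, to pull a specific version of the frontend image (assuming the images are published under the `ictu` organization on Docker Hub; the version number is only illustrative):

```
docker pull ictu/quality-time_frontend:v5.13.0
```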
## Maintenance

### Python and JavaScript dependencies
Keeping dependencies up-to-date is an important aspect of software maintenance. Python (pip) and JavaScript (npm) dependencies are kept up-to-date via the Dependabot GitHub action.
For Python, we follow the dependency management practice described by James Bennett, to a large extent.
### Docker images
Base images used in the Docker containers, and additionally installed software, need to be upgraded by hand from time to time. These are:
- API-server: the Python base image.
- Collector: the Python base image.
- Notifier: the Python base image.
- Frontend: the Node base image, the curl version, the npm version, and the serve version.
- Database: the MongoDB base image.
- Proxy: the Nginx base image.
- Renderer: the Node base image, the curl version, the Chromium version, and the npm version.
- Test data: the Python base image.
Container images directly specified in compose files used for development and continuous integration: `mongo-express`, `ldap`, `phpldapadmin`, and `selenium`.