Automated Testing

Tags

  • assigned: fredm, bonfacem, alex
  • priority: medium
  • type: enhancement
  • keywords: testing, CI, CD

Introduction

As part of the continuous integration/continuous deployment (CI/CD) effort, there is a need for automated tests to ensure that the system is working as expected.

This document is meant to track the implementation of the automated tests and possibly the related infrastructure for running the tests.

Genenetwork 3

Unit Tests

There is a collection of unit tests under the *tests/unit* directory in the Genenetwork 3 repository.

Integration

There is (as of 2022-Feb-10) a collection of integration tests in the Genenetwork 3 repository.

The tests there, however, are technically unit tests: each test seems to exercise a single logical unit of the system, e.g. correlations, gemma, etc.

There is no test that seems to check for interactions among the logical units/modules of the system, e.g.

  • authorisation <==> file-upload <==> correlations
  • partial-correlations <==> trait-editing <==> gemma analysis

etc.
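
A test for such interactions would walk a whole flow through several modules. Below is a minimal sketch in pytest style, assuming a local GN3 instance; all endpoint paths, payloads and response shapes are hypothetical placeholders, not real GN3 routes:

  import requests

  GN3_BASE_URL = "http://localhost:8080"  # assumption: a local GN3 instance


  def test_upload_then_correlate():
      """Walk a multi-module flow: authorise, upload a file, then run a
      correlation on the uploaded data."""
      session = requests.Session()
      # Step 1: authorisation (hypothetical endpoint).
      resp = session.post(f"{GN3_BASE_URL}/api/login",
                          json={"user": "test", "password": "test"})
      assert resp.status_code == 200
      # Step 2: file upload (hypothetical endpoint).
      resp = session.post(
          f"{GN3_BASE_URL}/api/upload",
          files={"file": ("traits.csv", b"id,value\n1,2.5\n")})
      assert resp.status_code == 200
      upload_id = resp.json()["id"]  # hypothetical response shape
      # Step 3: correlation on the uploaded data (hypothetical endpoint).
      resp = session.post(f"{GN3_BASE_URL}/api/correlation/sample",
                          json={"dataset": upload_id})
      assert resp.status_code == 200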

API Tests

There is a need for tests to ensure that all expected endpoints are up and running.

Maybe these should even check that the data returned is correct.
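
A minimal sketch of such an endpoint check, using pytest with the requests library and assuming a local GN3 instance (the endpoint list is illustrative; the real list would be derived from the GN3 API routes):

  import pytest
  import requests

  GN3_BASE_URL = "http://localhost:8080"  # assumption: a local GN3 instance

  # Illustrative endpoints -- not a verified list of GN3 routes.
  ENDPOINTS = ["/api/version", "/api/species", "/api/datasets"]


  @pytest.mark.parametrize("endpoint", ENDPOINTS)
  def test_endpoint_is_up(endpoint):
      resp = requests.get(f"{GN3_BASE_URL}{endpoint}", timeout=10)
      assert resp.status_code == 200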

Performance and Responsiveness

There is a need to ensure that the system does not take an unreasonably long time to perform its computations.

There is a single performance-tests module for Genenetwork 3, but it is run manually and mostly tests a very specific query that might or might not still be used in the code.

The performance tests in GN3 should probably be focused on checking the following (among others):

  • Each API endpoint responds within a specified amount of time
  • Select computation-heavy functions respond within a specified amount of time for given data
  • Database-querying functions used in the system respond within a specified amount of time

etc.

This is relevant since GN3 sits behind Nginx, which enforces a request timeout.
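
A minimal sketch of such a timing check, assuming a local GN3 instance, an illustrative endpoint, and an assumed value for the Nginx timeout:

  import time

  import requests

  GN3_BASE_URL = "http://localhost:8080"  # assumption: a local GN3 instance
  PROXY_TIMEOUT_SECONDS = 60              # assumption: Nginx's configured timeout


  def test_endpoint_responds_before_proxy_timeout():
      start = time.monotonic()
      resp = requests.get(f"{GN3_BASE_URL}/api/version",
                          timeout=PROXY_TIMEOUT_SECONDS)
      elapsed = time.monotonic() - start
      assert resp.status_code == 200
      # Leave ample headroom below the proxy timeout.
      assert elapsed < PROXY_TIMEOUT_SECONDS / 2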

Regression Tests

These check that previously working features have not been broken, and can be added as we go along.
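
One possible shape for such a test is a golden-file check that compares a live response against a previously recorded snapshot. A minimal sketch, with an illustrative endpoint and a hypothetical snapshot path:

  import json
  from pathlib import Path

  import requests

  GN3_BASE_URL = "http://localhost:8080"           # assumption: a local GN3 instance
  SNAPSHOT = Path("tests/snapshots/species.json")  # hypothetical snapshot file


  def test_species_list_unchanged():
      resp = requests.get(f"{GN3_BASE_URL}/api/species", timeout=10)
      assert resp.status_code == 200
      # A mismatch here signals a regression (or an intentional data update,
      # in which case the snapshot should be re-recorded).
      assert resp.json() == json.loads(SNAPSHOT.read_text())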

Genenetwork 2

Unit Tests

Unit tests are present in the GN2 repository.

Integration

Genenetwork 2 has a "Mechanical Rob" testing system under construction, whose purpose (as far as I, fredm, can tell) is to "walk" some common paths that involve multiple logical units working together, thus performing a form of integration testing.

The only issue I (fredm) find is that, as it currently stands, it will not be able to test the JavaScript interactions that are crucial to some operations in certain flows.

Performance and Responsiveness

Since GN2 is not meant to handle computations itself, the bigger concern here is responsiveness.

Responsiveness checks might need to be built in.

Regression Tests

These check that previously working features have not been broken, and can be added as we go along.

Notes from Email Correspondence

Selenium (and other browser-automation tools) was said to be too complicated, and is to be avoided as much as possible. It is better to use headless Firefox/Chromium and fetch pages with Mechanical Rob.

Selenium had been previously introduced in GN2 and then swiftly removed.

We should cover both the GN2 and GN3 tests.

For GN2 we should write scripts that test:

  • [ ] main menu selector
  • [ ] search
  • [ ] global search
  • [ ] search with wild cards
  • [ ] select items and add to shopping basket/collections
  • [ ] mapping page
  • [ ] run R/qtl mapping
  • [ ] run GEMMA mapping
  • [ ] run correlations
  • [ ] Run the functions on the shopping basket/collections, such as CTL, network graph

Please don't emulate hitting browser buttons (selenium style). Simply find URL paths that do the job. You may have to set cookies. Mechanical Rob can do that. If using the browser interface is too hard, then create 'back-end' tests that cover the real functionality. I am not too concerned about the browser display - i.e., real Rob catches that quickly enough. I am *really* worried about regressions where search etc. starts giving different results. See what I mean? Pj.
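
A minimal sketch of the suggested approach (fetching URL paths directly and setting cookies rather than driving a browser), assuming a local GN2 instance; the search path, query parameters and cookie name are illustrative approximations, not verified GN2 routes:

  import requests

  GN2_BASE_URL = "http://localhost:5003"  # assumption: a local GN2 instance


  def check_search(session):
      # Hit the search URL directly instead of driving a browser.
      resp = session.get(f"{GN2_BASE_URL}/search",
                         params={"species": "mouse", "search_terms_or": "shh"},
                         timeout=30)
      assert resp.status_code == 200
      assert "shh" in resp.text.lower()


  session = requests.Session()
  # Cookies (e.g. a session token) can be set up front, as suggested above.
  session.cookies.set("session", "test-session-token")  # hypothetical cookie name
  check_search(session)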

Testing interface

Tests in different categories should be grouped into different command-line endpoints. For example, unit tests could be run by "python3 setup.py check", integration tests by "python3 setup.py integration-check", performance tests by "python3 setup.py performance-check", and so on. This way, the CI will have to be configured only once, and committers will be able to add new tests without requesting a CI reconfiguration each time. We won't have to wait on others to respond, and less coordination will be required, leading to smoother work for everyone.
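
A minimal sketch of how such command-line endpoints could be wired up with setuptools, assuming a hypothetical per-category test directory layout (note that setup.py command names cannot contain hyphens, so underscored variants of the names above are used):

  # setup.py (excerpt)
  import unittest

  from setuptools import Command, setup


  class RunTests(Command):
      """Discover and run the tests under `test_dir`."""
      user_options = []
      test_dir = "tests/unit"

      def initialize_options(self):
          pass

      def finalize_options(self):
          pass

      def run(self):
          suite = unittest.TestLoader().discover(self.test_dir)
          result = unittest.TextTestRunner(verbosity=2).run(suite)
          raise SystemExit(0 if result.wasSuccessful() else 1)


  class IntegrationCheck(RunTests):
      test_dir = "tests/integration"  # hypothetical directory layout


  class PerformanceCheck(RunTests):
      test_dir = "tests/performance"  # hypothetical directory layout


  setup(
      name="genenetwork3",
      cmdclass={
          "check": RunTests,                      # python3 setup.py check
          "integration_check": IntegrationCheck,  # python3 setup.py integration_check
          "performance_check": PerformanceCheck,
      },
  )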

Resolution

This has been moved to its own topic in:

Should there be anything actionable to be done, separate issues will be created.
