Update testing.rst

pull/95/head
JulioV 2020-05-18 11:38:39 -04:00 committed by GitHub
parent 3311cf2a02
commit 93d51de811
1 changed file with 12 additions and 9 deletions


@@ -1,7 +1,7 @@
Testing
==========
The following is a simple guide to testing RAPIDS. All files necessary for testing are stored in the ``tests`` directory:
::
@@ -22,30 +22,33 @@ The following is a simple guide to testing RAPIDS. All files necessary for testi
│ └── Snakefile <- The Snakefile for testing only. It contains the rules that you would be testing.
To begin testing RAPIDS, place the input data ``csv`` files in the ``tests/data/raw`` directory. The expected output of RAPIDS for this input data should be placed in ``tests/data/processed``.
Copy all files from the ``tests/data/raw`` directory into the ``data/raw`` directory. The rule(s) to be tested must be added to the ``tests/Snakefile``; the current ``tests/Snakefile`` is a good example of how to define them.
Store your test scripts in the ``tests/scripts`` directory. Next, you can run all rules in the ``tests/Snakefile`` with:
::
snakemake --profile tests/settings
Or run a single rule with:
::
snakemake --profile tests/settings -R sms_features
The above example runs the ``sms_features`` rule defined in the ``tests/Snakefile``. Replace ``sms_features`` with the name of the rule you want to test. The ``--profile`` flag tells Snakemake to use the ``Snakefile`` and ``config.yaml`` stored in ``tests/settings``.
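As a sketch, a rule in ``tests/Snakefile`` might look like the following; the input, output, and script paths here are hypothetical, not taken from the actual repository:
::

    rule sms_features:
        # Hypothetical paths for illustration only; a real rule would point
        # at the raw data copied into data/raw and the feature script under src.
        input:
            "data/raw/sms.csv"
        output:
            "data/processed/sms_features.csv"
        script:
            "../src/features/sms_features.py"

Snakemake compares the timestamps of ``input`` and ``output`` to decide whether the rule needs to run, so rerunning with fresh raw data recomputes the features.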
Once RAPIDS has processed the sample data, the next step is to test the output. Testing is implemented using Python's unittest. To run all the test scripts stored in the ``tests/scripts`` directory, use the following command:
::
python -m unittest discover tests/scripts/ -v
The ``discover`` option finds and runs all test scripts within the ``tests/scripts`` directory whose file names start with ``test_``. The names of the test methods in these scripts must also start with ``test_``; this is how unittest finds the tests to run.
The following is a snippet of the output you should see after running your tests.
::
@@ -56,6 +59,6 @@ The following is a snippet of the output you should see after running your tests
FAIL: test_sensors_features_calculations (test_sensor_features.TestSensorFeatures)
----------------------------------------------------------------------
The results above show that the first test, ``test_sensors_files_exist``, passed while ``test_sensors_features_calculations`` failed. You should also get the traceback of the failure (not shown here). For more information on how to implement test scripts and use unittest, please see the `Unittest Documentation`_.
.. _`Unittest Documentation`: https://docs.python.org/3.7/library/unittest.html#command-line-interface