Just writing tests for your Django codebase is not enough. You also need to check how much of the code your tests actually cover, and there are tools for this. But it is not just the number of lines covered that matters; what matters is whether those lines are the important ones. For that, we have to use our own heads.
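To give a rough idea of what such tools do under the hood, here is a toy sketch of line-coverage tracking using Python's sys.settrace. It is purely illustrative; real tools like coverage.py are far more sophisticated.

```python
import sys

def trace_lines(func, *args):
    """Run func and record which of its line numbers actually execute.

    A toy illustration of how a coverage tool observes execution;
    coverage.py does much more (file-level tracking, reporting, etc.).
    """
    executed = set()

    def tracer(frame, event, arg):
        # Record only line events belonging to the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Calling classify(5) never executes the "negative" branch, so that
# line is absent from the executed set -- exactly the kind of gap a
# coverage report points out.
lines_positive = trace_lines(classify, 5)
lines_negative = trace_lines(classify, -1)
```

Each call exercises two lines of `classify`, but different ones, which is what a per-line report would reveal.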
I was able to plug django-test-coverage into Transifex. It ran for some test cases and failed for others, raising a warning that a module was being imported more than once. Worse, the statistics it generated were misleading: it reported 0% coverage for almost every file, and 100% only for files with no code at all (like __init__.py). I hacked into its code and got it running for the cases where it had failed previously, but the statistics were still misleading. Time was running out, so I decided to revisit its code some time later.
So I resorted to coverage.py. You can find an introduction to coverage.py here. Using coverage boils down to 3 steps:
- Erase previously collected data: $ coverage -e
- Execute the necessary tests and collect data: $ coverage -x <test module>
- Display the results: $ coverage -r -m
If the test module is large, the report generated by coverage is also large. I usually save the report to a file: $ coverage -r -m > report.txt. Then I can use grep to shortlist the report down to the files that concern me at the moment. That's pretty easy.
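To make the grep step concrete, here is a self-contained sketch. The report below is a made-up sample (the module names and numbers are invented for illustration), written to a file and then filtered down to one app:

```shell
# Write a made-up sample coverage report (names and numbers are
# invented; a real one comes from "coverage -r -m > report.txt").
cat > report.txt <<'EOF'
Name                 Stmts   Exec  Cover   Missing
--------------------------------------------------
projects/models         80     60    75%   12-18, 40
projects/views          50     50   100%
languages/models        30     15    50%   5-19
--------------------------------------------------
TOTAL                  160    125    78%
EOF

# Shortlist only the lines that concern us right now,
# e.g. everything under the (hypothetical) projects app.
grep '^projects/' report.txt
```

The same trick works for any substring of the file paths in the report.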
coverage.py gives you very useful information, like the percentage of a file covered and the statements that were missed. A higher coverage percentage is better, but the importance of the covered lines also matters a lot: you could raise the number by covering ten unimportant lines rather than one important line. So coverage statistics help us write tests that cover more code, but they are not a replacement for thinking. The final judgement is up to us.
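The arithmetic behind that point can be made concrete with a hypothetical module (all numbers here are invented): 20 statements, 9 already covered, and a choice between testing ten trivial lines or one critical line.

```python
def coverage_percent(covered, total):
    """Percentage of statements executed by the tests."""
    return 100.0 * covered / total

TOTAL = 20    # hypothetical module: 20 statements
covered = 9   # statements already exercised by existing tests

# Option A: add tests touching 10 trivial lines (say, simple setters).
option_a = coverage_percent(covered + 10, TOTAL)

# Option B: add one test touching the single critical line.
option_b = coverage_percent(covered + 1, TOTAL)

# Option A reaches 95%, option B only 50% -- yet option B may be the
# one that catches a real bug. The metric alone can't tell them apart.
```

This is exactly why the percentage is a guide, not a verdict.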