Start testing Transifex

How do you set up Transifex?

Here is all you need to know to set up Transifex: http://help.transifex.net/server/install.html

http://fosswithme.wordpress.com/2011/10/20/setup-transifex-in-virtualenv/ is another good write-up on how to set up and run Transifex in a virtualenv. I'll be building on top of that to show you how to start testing Transifex using django-nose.

What packages will you need?

django-nose, nose, nose-exclude, coverage

You can install the above packages using

pip install <package_name>
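For example, you can install all four in one go:

pip install django-nose nose nose-exclude coverage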

Configure Transifex settings to enable django-nose Test Runner

cd <transifex source code's root directory>

cd transifex/settings

cp 90-local.conf.sample 90-local.conf

Open 90-local.conf and add the following lines:

INSTALLED_APPS += [
    'django_nose',
]

TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

cd ..

Save the file and you are good to go.
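To confirm that django-nose has been picked up as the test runner, check the help output of the test command; with django-nose installed, it should list the extra nose options (such as --with-coverage):

python manage.py help test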

Start testing

python manage.py test <app_name>

For example:

python manage.py test resources

You can also test a particular test class, as shown below:

python manage.py test resources.tests:TestJavaProperties

You can even run a particular test method:

python manage.py test resources.tests:TestJavaProperties.test_properties_parser
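Since the test labels are handed over to nose, you can also pass more than one label at a time. For example (the second app name here is just an illustration):

python manage.py test resources languages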

Run coverage on Transifex tests

django-nose has a plugin for coverage. So, you can run the above tests and collect coverage data.

For example:

python manage.py test resources.tests:TestJavaProperties.test_properties_parser --with-coverage --cover-package=resources.tests.formats
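The --cover-package option accepts any package path, so you can just as well collect coverage for a whole app while running its full test suite. For example:

python manage.py test resources --with-coverage --cover-package=resources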

All the coverage data is saved in a .coverage file, by default in the current directory. Running the tests with django-nose's coverage plugin prints the coverage results anyway, but you can also view them afterwards in the usual way:

coverage -r -m

You can also use grep along with the above command to filter the results displayed.
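For example, to show only the rows for the resources app (the package name is just an illustration):

coverage -r -m | grep resources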

Well, that’s all you need to know to start testing Transifex.

What’s next?

Start testing Transifex. If a test fails, try to find out why. Read the traceback carefully and locate where the error occurred. There are various reasons why a test may fail:

  • The test was not updated to match changes in the code
  • A bug in the code
  • A wrong test case

and others…

You can report any issues or bugs you find at http://trac.transifex.org/newticket. Feel free to submit a patch that fixes the issue. The patch will be reviewed by the Transifex upstream and, if it is OK, it will be merged into Transifex's code at http://code.indifex.com/transifex/.

Keep hacking 🙂


Testing coverage of your Django code

Just writing tests for your Django codebase is not enough. You also need to check how much of the code your tests cover, and there are tools available for this. Again, it is not just the number of lines of code tested that matters; what matters is "Are these lines important?". For that, we have to use our head.

coverage.py is a tool for checking the test coverage of Python applications. django-test-coverage was built on top of coverage.py to meet the requirements of Django tests.

I was able to plug django-test-coverage into Transifex. It ran for some test cases and failed for others, raising a warning that a module was being imported more than once. Worse, the stats it generated were misleading: it reported 0% code coverage for every file, and 100% only for files with 0 lines of code (like __init__.py). I hacked into its code and got it to run for the cases where it had failed previously, but the statistics were still misleading. Time was running out, so I decided to revisit its code some time later.

I resorted to using coverage.py. You can find an introduction to coverage.py here. Using coverage boils down to 3 steps:

  1. Erase previously collected data: $ coverage -e
  2. Execute the necessary tests and collect data: $ coverage -x <test module>
  3. Display the results: $ coverage -r -m

If the test module is large, the report generated by coverage is also large. I usually save the report to a file: $ coverage -r -m > report.txt. Then I can use grep to narrow the report down to the files that concern me at the moment. That's pretty easy.
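Putting the three steps together, a typical session looks like this (the app name is a placeholder, and under the old-style coverage command line -x runs a script under coverage):

coverage -e
coverage -x manage.py test resources
coverage -r -m > report.txt
grep resources report.txt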

coverage.py gives you very useful information like the percentage of a file covered, its missing statements, etc. Although a higher percentage of code coverage is better, the importance of the covered lines also matters a lot. You could increase the code coverage by covering 10 not-so-important lines rather than 1 important line. So, code coverage statistics help us write tests that cover more code, but they are not a replacement for thinking. The final judgement is ours to make.