Roundup Tracker - Issues

Issue 2550894

classification
Replace existing run_tests.py script with pytest
Type: rfe Severity: normal
Components: None Versions: devel
process
Status: closed Resolution: accepted
Superseder:
Assigned To: jerrykan Nosy List: ber, jerrykan, schlatterbeck, techtonik
Priority: Keywords: patch

Created on 2015-08-21 08:21 by jerrykan, last changed 2016-01-12 12:55 by jerrykan.

Files
File name Uploaded Description Edit Remove
0001-Update-tests-to-work-with-py.test.patch jerrykan, 2015-08-21 08:21
0002-Update-test-test_actions.py-to-work-with-py.test.patch jerrykan, 2015-08-21 08:21
0003-Rename-TestMessage-to-ExampleMessage.patch jerrykan, 2015-08-21 08:21
0004-Replace-existing-run_tests.py-script-with-a-pytest-s.patch jerrykan, 2015-08-21 08:22
0005-Remove-unneeded-TestSuite-code-from-tests.patch jerrykan, 2015-08-21 08:22
unnamed techtonik, 2015-08-21 15:59
unnamed techtonik, 2015-08-22 07:42
unnamed techtonik, 2015-09-03 15:52
unnamed techtonik, 2015-09-07 08:20
Messages
msg5358 Author: [hidden] (jerrykan) Date: 2015-08-21 08:21
The existing run_tests.py script is quite old (un-maintained?) and
doesn't always behave as documented.

The pytest testing tool has been around for quite a while, it is mature,
well documented, maintained and used by a large number of other python
projects.

I am proposing that we replace the existing run_tests.py script with a
pytest standalone script.

A proposed patch set will be attached shortly. Though you can also view
the changes in the 'pytest' branch on my github clone:

  https://github.com/jerrykan/roundup/commits/pytest
msg5359 Author: [hidden] (jerrykan) Date: 2015-08-21 08:27
A couple of things to note:

- pytest requires python v2.6+ so these changes would have to wait until
after Roundup v1.5.1 is released (the last version to support python v2.5)
before they can be pushed to the repository

- the new pytest run_tests.py script will automatically search for tests
to run, so if you have a virtualenv directory in the local repository
clone directory you should probably invoke the script using:

  ./run_tests.py test

to specify that you only want pytest to look in the test/ directory for
tests.
msg5362 Author: [hidden] (techtonik) Date: 2015-08-21 15:59
The argumentation "it's old - let's update it" doesn't seem too
exciting. Right now you don't need anything to run tests, and
if you want to add external dependency just because it is "new",
I am -1.

Do you have an example of real problems that you've
encountered with existing test suite?
msg5363 Author: [hidden] (jerrykan) Date: 2015-08-22 05:08
On 22/08/15 01:58, anatoly techtonik wrote:
> The argumentation "it's old - let's update it" doesn't seem too
> exciting. Right now you don't need anything to run tests, and
> if you want to add external dependency just because it is "new",
> I am -1.

A self-contained pytest script can be, and has been, generated to 
replace the existing script (see part 4 of the patchset[1]). So while 
the pytest generated script is larger than the old script there are no 
external dependencies required to run the tests.

>
> Do you have an example of real problems that you've
> encountered with existing test suite?
>

The existing run_tests.py script is a non-starter with python3, so if we 
want to even look at adding python3 support to Roundup the run_tests.py 
script would need to be updated to add python3 support. This could be 
done, but also begs the question of why are we maintaining our own test 
runner script? Why not just push that maintenance burden off to some 
other project (ie. pytest)?

As best as I can tell the existing script originates from the Zope 
project. I found a few dead links to what I assume was the original 
script, but it seems as though the Zope project now uses the 
zope.testing package. So updating the existing script from upstream also 
doesn't seem like an option.

I personally find the documentation for the existing script 
('run_tests.py -h') a bit confusing, I'm not sure if I have ever figured 
out how to run a single test case using the existing script, and some 
options don't behave as expected (ie. 'run_tests.py --dir test' works as 
one might expect, but 'run_tests.py --dir test/' doesn't).

If there are others like me who have problems with the existing script 
where do they go for more information or examples on how to use the 
script? Doing a search for "pytest" is likely to turn up more useful 
results than searching for "run_tests.py" or "zope test.py" (if they can 
figure out its origins). The pytest tool is widely used and well 
documented[2], which means there is already lots of information out 
there and people coming to Roundup are more likely to be already 
familiar with the tool.

There are also a number of benefits that adopting pytest could provide 
(less boilerplate, fixtures, using 'assert', etc.) if we were to fully 
embrace it, but the one nice benefit I have included in the current 
patchset is the support for test skipping decorators it brings to 
python2.6 (not available in unittest before python2.7). When running 
tests you now know how many tests have been skipped (and why), instead of 
just being told some tests won't be run.
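As a sketch of what that skip support looks like (the decorator and the `-rs` reporting flag are real pytest API; the test name and the availability check here are invented for illustration):

```python
import sys

import pytest

# Hypothetical condition: pretend the MySQL bindings are unavailable.
MYSQL_MISSING = "MySQLdb" not in sys.modules

@pytest.mark.skipif(MYSQL_MISSING, reason="MySQL bindings not installed")
def test_mysql_roundtrip():
    # When the condition holds, py.test reports this test as skipped
    # together with the reason string, instead of silently dropping it.
    assert 1 + 1 == 2
```

Running `py.test -rs` then prints a summary line for every skipped test with its reason.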

I'm not trying to do a "hard sell" for pytest, but I do think we need to 
move away from the existing run_tests.py script, which is why I thought it 
would be useful to put some patches together that would replace it with 
what I think is the best tool for the job.

If the consensus is to use another tool, then I am OK with that.

SeeYa,
John.

[1] 
http://issues.roundup-tracker.org/file1583/0004-Replace-existing-run_tests.py-script-with-a-pytest-s.patch

[2] http://pytest.org/latest/
msg5364 Author: [hidden] (techtonik) Date: 2015-08-22 07:42
>
> The existing run_tests.py script is a non-starter with python3, so if we
> want to even look at adding python3 support to Roundup the run_tests.py
> script would need to be updated to add python3 support. This could be done,
> but also begs the question of why are we maintaining our own test runner
> script? Why not just push that maintenance burden off to some other project
> (ie. pytest)?
>

Seems fair. As long as we maintain our own *short* user-level usage
documentation. Stating that we use pytest will redirect people to the
comprehensive pytest docs, which most people doing testing don't really
have time to read. And I also would like to avoid a ton of options that we
don't need. Some testing utilities frighten me away when I try to
read through their options.

> I personally find the documentation for the existing script ('run_tests.py
> -h') a bit confusing, I'm not sure if I have ever figured out how to run a
> single test case using the existing script, and some options don't behave
> as expected (ie. 'run_tests.py --dir test' works as one might expect, but
> 'run_tests.py --dir test/' doesn't).
>

Yes. I remember I was doing this part, but I am not sure if it was SCons or
Roundup, so my requirements for a test utility are:
1. list tests with quick filter (preview test selection)
2. run tests filtered by list
3. run single test
4. run tests that match substring

In an ideal world I'd wrap pytest or similar project and strip all the
options that are not used in a project unless the user specifies -v -h
(--verbose --help).

> There are also a number of benefits that adopting pytest could provide
> (less boilerplate, fixtures, using 'assert', etc.) if we were to fully
> embrace it, but the one nice benefit I have included in the current
> patchset is the support for test skipping decorators it brings to python2.6
> (not available in unittest before python2.7). When running tests you now
> know how many tests have been skipped (and why), instead of just being told
> some tests won't be run.
>

This is a much better argumentation. Thanks. =) Do the tests become hard-
dependent on pytest, or is there some general practice for how to skip stuff?

> If the consensus is to use another tool, then I am OK with that.
>

I don't have a preference, but it would be nice to keep tests in some
format that is independent of the tool, and for this specific problem to
choose a tool that is not huge. =)

Also, it looks like we don't have a continuous testing setup. There are
plenty of services, and we can add badges from http://shields.io/ to the
home page (and link to PyPI).
msg5365 Author: [hidden] (jerrykan) Date: 2015-08-22 14:02
On 22/08/15 17:42, anatoly techtonik wrote:
>     I personally find the documentation for the existing script
>     ('run_tests.py -h') a bit confusing, I'm not sure if I have ever
>     figured out how to run a single test case using the existing script,
>     and some options don't behave as expected (ie. 'run_tests.py --dir
>     test' works as one might expect, but 'run_tests.py --dir test/'
>     doesn't).
>
>
> Yes. I remember I was doing this part, but I am not sure if it was SCons
> or Roundup, so my requirement for test utility are:
> 1. list tests with quick filter (preview test selection)
> 2. run tests filtered by list
> 3. run single test
> 4. run tests that match substring

Here are a few examples - let me know if they don't cover what you are 
asking for (note: the new 'run_tests.py' script could be substituted for 
'py.test'):

Search for and run all tests in current working directory (recursively):

   py.test

Search for and run all tests in a directory/file (recursively):

  py.test test/test_dates.py

Search for and run tests that match a substring (classes or functions):

  py.test -k SessionTest
or
  py.test test/test_sqlite.py -k SessionTest

Run a specific test:

  py.test test/test_dates.py::DateTestCase::testSorting

To list the collected/matching tests, just add the '--collect-only' option.

>
> In an ideal world I'd wrap pytest or similar project and strip all the
> options that are not used in a project unless the user specifies -v -h
> (--verbose --help).
>

I'm not opposed to this, but I'm not sure if it is really worth the 
effort to maintain something that just obscures the options. Most 
"users" will probably just run the test script without any options to 
see if all the tests pass. The "developers" are the ones most likely to 
be playing around with options, in which case we just document the few 
options I've mentioned above that cover 90% of the use-cases. It is 
then up to the developers if they want to dig into the options further 
by using the '-h' option.

>     There are also a number of benefits that adopting pytest could
>     provide (less boilerplate, fixtures, using 'assert', etc.) if we
>     were to fully embrace it, but the one nice benefit I have included
>     in the current patchset is the support for test skipping decorators
>     it brings to python2.6 (not available in unittest before python2.7).
>     When running tests you now know how many tests have been skipped (and
>     why), instead of just being told some tests won't be run.
>
>
> This is a much better argumentation. Thanks. =) Are the tests become
> hard depending on pytest, or it is some general practice how to skip stuff?

From python2.7 the unittest module has decorators for skipping tests[1], 
but for the foreseeable future Roundup also supports python2.6. pytest 
has its own test skipping decorators[2] which have support for 
python2.6, so if we were to take advantage of that we would be tied to 
using pytest. If we drop support for python2.6 then we could use the 
unittest skip decorators instead of the pytest ones.

>
>     If the consensus is to use another tool, then I am OK with that.
>
>
> I don't have a preference, but it would be nice to keep tests in some
> format that is independent of the tool, and for this specific problem
> choose tool that is not huge. =)
>

This just comes down to how much we want to embrace the pytest way of 
doing things. We could stick to using only what is available in unittest 
and keep things fairly portable, or take advantage of the nice extra 
features that pytest provides. Swings and roundabouts[3]

[1] 
https://docs.python.org/2/library/unittest.html#skipping-tests-and-expected-failures
[2] 
http://pytest.org/latest/skipping.html#marking-a-test-function-to-be-skipped
[3] http://idioms.thefreedictionary.com/it%27s+swings+and+roundabouts
msg5366 Author: [hidden] (jerrykan) Date: 2015-08-22 14:15
Hmmm... seems I didn't test the skipif() decorators correctly. They
don't always behave as expected due to an upstream bug:

  https://github.com/pytest-dev/pytest/issues/891

If a test class using a skipif() decorator shares a parent class with
other test classes, then it is possible that all these test classes
could be skipped.
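A minimal sketch of the failing pattern (class names and the skip condition are hypothetical; on pytest versions affected by issue #891, the mark applied to one subclass could leak via the shared base, so the unmarked sibling's tests were skipped as well):

```python
import pytest

class _SharedSetup(object):
    # Helper base shared by several test classes, much as the
    # backend test classes share bases in the Roundup test suite.
    def test_common_behaviour(self):
        assert True

@pytest.mark.skipif(True, reason="backend not installed")
class TestMarkedBackend(_SharedSetup):
    pass

class TestOtherBackend(_SharedSetup):
    # Carries no skipif() mark of its own, yet under the upstream
    # bug its inherited tests could end up skipped too.
    pass
```

In fixed pytest versions only TestMarkedBackend carries the mark; the affected versions effectively attached it to the shared base.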
msg5367 Author: [hidden] (techtonik) Date: 2015-09-03 15:52
> 1. list tests with quick filter (preview test selection)

> To list the collected/matching tests, just add the '--collect-only' option.

I'd prefer a better command line shortcut, because I use this pretty often.

> I'm not opposed to this, but I'm not sure if it is really worth the effort
> to maintain something that just obscures the options. Most "users" will
> probably just run the test script without any options to see if all the
> tests pass. The "developers" are the ones most likely to be playing around
> with options, in which case we just document the few options I've mentioned
> above that cover 90% of the use-cases. It is then up to the developers if
> they want to dig into the options further by using the '-h' option.
>
>
Ok. It is not too bloated after all.

> From python2.7 the unittest module has decorators for skipping test[1],
> but for the foreseeable future Roundup also supports python2.6. The pytest
> has its own test skipping decorators[2] which has support for python2.6, so
> if we were to take advantage of that we would be tied to using pytest. If
> we drop support for python2.6 then we could use the unittest skip
> decorators instead of the pytest ones.

Are there other ways to skip tests instead of decorators?

> I don't have a preference, but it would be nice to keep tests in some
> format that is independent of the tool, and for this specific problem
> choose tool that is not huge. =)
>
> This just comes down to how much we want to embrace the pytest way of
> doing things. We could stick to using only what is available in unittest
> and keep things fairly portable, or take advantage of the nice extra
> features that pytest provides. Swings and roundabouts[3]
>

I am not convinced by the py.test-dependent features, but this stuff seems
nice:

https://pytest.org/latest/goodpractises.html#create-a-pytest-standalone-script

So I'd give it a go. It looks like there could be more support in IDEs for
that kind of stuff. Maybe it can also support mounting temp dirs on
memory-only disks to save SSD wear and speed up execution.
msg5369 Author: [hidden] (jerrykan) Date: 2015-09-07 02:47
On 04/09/15 01:51, anatoly techtonik wrote:
>         1. list tests with quick filter (preview test selection)
>
>     To list the collected/matching tests, just add the '--collect-only'
>     option.
>
>
> I'd prefer a better command line shortcut, because I use this pretty often.

Maybe create an alias?

>      From python2.7 the unittest module has decorators for skipping
>     test[1], but for the foreseeable future Roundup also supports
>     python2.6. The pytest has its own test skipping decorators[2] which
>     has support for python2.6, so if we were to take advantage of that
>     we would be tied to using pytest. If we drop support for python2.6
>     then we could use the unittest skip decorators instead of the pytest
>     ones.
>
>
> Are there other way to skip tests instead of decorators?

Not in a way that I am aware of that will display to the user which 
tests have been skipped.

It might be possible to hide the tests behind an if clause, but that 
seems like an ugly hack and wouldn't convey to the users that any tests 
have been skipped (or why).
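The difference can be sketched with the stdlib decorators (python2.7+ unittest API; the availability flag and test names are invented for the example):

```python
import unittest

HAVE_BACKEND = False  # pretend an optional dependency is missing

class GuardedTests(unittest.TestCase):
    if HAVE_BACKEND:
        # The "if clause" hack: when the backend is missing, the test
        # is simply never defined, so nothing records its existence.
        def test_backend(self):
            self.assertTrue(True)

class SkippedTests(unittest.TestCase):
    @unittest.skipUnless(HAVE_BACKEND, "backend not installed")
    def test_backend(self):
        self.assertTrue(True)

loader = unittest.TestLoader()
runner = unittest.TextTestRunner(verbosity=0)
guarded = runner.run(loader.loadTestsFromTestCase(GuardedTests))
skipped = runner.run(loader.loadTestsFromTestCase(SkippedTests))
print(guarded.testsRun, len(skipped.skipped))  # prints: 0 1
```

The decorated version surfaces one skipped test with its reason in the result; the guarded version leaves no trace that the test ever existed.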
msg5370 Author: [hidden] (techtonik) Date: 2015-09-07 08:20
So, you need a declarative way of skipping tests depending on
conditions, such as platform, availability of DB backend, Python
version, etc., and also need to record a reason for users WHY a test
was skipped.

It seems to me like a job for a "test skipping framework". I am not
deeply involved in this case, so I'd try to find some spec that
describes the needed behaviour in detail and forward it to the py.test devs.

And as for the subject of this ticket, I think we can try to move to
py.test. Just don't forget to include the argumentation for those who
will jump in later.
msg5371 Author: [hidden] (schlatterbeck) Date: 2015-09-07 08:44
On Mon, Sep 07, 2015 at 08:20:45AM +0000, anatoly techtonik wrote:
> And as for the subject of this ticket, I think we can try to move to
> py.test. Just don't forget to include the argumentation for those who
> will jump in later.

I also followed this ticket and also think we can move to a new
framework. Note that currently we're skipping tests if some package is
not installed (e.g. mysql, postgres) and report about skipped tests in
that case. So it should still be possible to run all tests for which the
necessary dependencies exist and skip the rest.

Ralf
msg5372 Author: [hidden] (jerrykan) Date: 2015-09-07 13:00
I think there may have been some miscommunication about the skipping of 
tests.

On 07/09/15 18:20, anatoly techtonik wrote:
> So, you need a declarative way of skipping tests depending on the
> conditions, such as platform, availability of DB backend, Python
> version etc. and also need to record a reason for users WHY a test
> was skipped.

What you have described is essentially available in unittest from python2.7+:

https://docs.python.org/2/library/unittest.html#skipping-tests-and-expected-failures

The issue I meant to point out is that we will still be supporting 
Roundup on python2.6, which doesn't have support for this sort of test 
skipping in unittest[1]. The pytest tool can provide this sort of test 
skipping support for python2.6 (ignoring the bug mentioned earlier in 
this issue[2]):

   http://pytest.org/latest/skipping.html

Using the pytest.skip() or pytest.mark.skipif() decorators would mean 
that the test suite would be dependent on pytest - though in a fairly 
superficial way.

> And as for the subject of this ticket, I think we can try to move to
> py.test. Just don't forget to include the argumentation for those who
> will jump in later.

I'm not sure I understand the term "argumentation". Do you mean just 
adding a note in the CHANGES.txt file? Some more detailed document? Or 
something else?

SeeYa,
John.

[1] technically unittest2 could provide the skipping functionality 
back-ported to python2.6, but that would be an external dependency.

[2] I believe I have a workaround that should work until the issue is 
resolved upstream.
msg5373 Author: [hidden] (jerrykan) Date: 2015-09-07 13:08
On 07/09/15 18:44, Ralf Schlatterbeck wrote:
>
> Ralf Schlatterbeck added the comment:
>
> On Mon, Sep 07, 2015 at 08:20:45AM +0000, anatoly techtonik wrote:
>> And as for the subject of this ticket, I think we can try to move to
>> py.test. Just don't forget to include the argumentation for those who
>> will jump in later.
>
> I also followed this ticket and also think we can move to a new
> framework. Note that currently we're skipping tests if some package is
> not installed (e.g. mysql, postgres) and report about skipped tests in
> that case. So it should still be possible to run all tests for which the
> necessary dependencies exist and skip the rest.

The one issue with how the test skipping currently works is that the 
user is informed that a class of tests is being skipped, but is not 
given any indication of why (though that may be somewhat obvious) or how 
many tests are being skipped. The proposed patch set performs the same 
checks and skips tests (using the pytest.mark.skipif() decorator) that 
have missing packages/components.

Given that there doesn't seem to be any objections to switching to 
pytest, I'll look at fixing a few issues I've noticed with the initial 
patch set and have a new patch set ready to be applied once Roundup 
v1.5.1 has been released and python2.5 no longer needs to be supported.
msg5401 Author: [hidden] (ber) Date: 2016-01-05 21:35
One problem I have with run_tests.py is that using the pychecker version
barfs on me. E.g. with hg5009:3766e0ca8e7a :

 python run_tests.py -c 
Running unit tests at level 1
Running unit tests from /home/bernhard/hacking/roundup/roundup-811/.
warning: couldn't find real module for class <class 'ssl.SSLError'> (module name: ssl)
warning: couldn't find real module for class <type 'ssl.SSLContext'> (module name: ssl)
warning: couldn't find real module for class <class 'ssl.SSLError'> (module name: ssl)
Traceback (most recent call last):
  File "run_tests.py", line 885, in <module>
    process_args()
  File "run_tests.py", line 875, in process_args
    bad = main(module_filter, test_filter, libdir)
  File "run_tests.py", line 637, in main
    import logging.config
  File "/usr/lib/python2.7/site-packages/pychecker/checker.py", line 391, in __import__
    pymodule = _orig__import__(name, globals, locals, fromlist)
  File "/usr/lib64/python2.7/logging/config.py", line 27, in <module>
    import sys, logging, logging.handlers, socket, struct, os, traceback, re
  File "/usr/lib/python2.7/site-packages/pychecker/checker.py", line 391, in __import__
    pymodule = _orig__import__(name, globals, locals, fromlist)
  File "/usr/lib64/python2.7/logging/handlers.py", line 26, in <module>
    import errno, logging, socket, os, cPickle, struct, time, re
  File "/usr/lib/python2.7/site-packages/pychecker/checker.py", line 391, in __import__
    pymodule = _orig__import__(name, globals, locals, fromlist)
TypeError: __import__() takes at most 4 arguments (5 given)
msg5402 Author: [hidden] (ber) Date: 2016-01-05 21:37
So I'm all for moving to a better testing framework.
As for python3: if we manage to scramble resources to do a python3
"port" I think we should technically rebuild from scratch to keep the spirit
and some template compatibility for roundup3. :)
msg5403 Author: [hidden] (jerrykan) Date: 2016-01-06 00:51
On 06/01/16 08:35, Bernhard Reiter wrote:
> One problem I have with run_tests.py is that using the pychecker version
> barfs on my. E.g. with hg5009:3766e0ca8e7a :

Just to clarify, this is an issue with the existing run_tests.py script 
and not the new py.test script, and this is one of the reasons you are 
happy to move to a better framework?

On 06/01/16 08:37, Bernhard Reiter wrote:
> So I'm all for moving to a better testing framework.

Once v1.5.1 is released and we can drop python2.5 I'll rebase the 
patches, clean them up a bit and apply them.

> As for the python3. If we manage to scramble resources to do a python3
> "port" I thing we should technically rebuild from scratch to keep the spirit
> and some template compatibility for roundup3.

I've done some experiments on a python3 port, and had some other 
thoughts around some of these things, but I'll leave that discussion for 
another thread. I don't want to distract from a v1.5.1 release, so once 
that is out maybe I'll write up an email to start discussing some of 
these things... which should probably involve the bugs.python.org 
maintainers as well.
msg5410 Author: [hidden] (ber) Date: 2016-01-06 09:19
On Wednesday 06 January 2016 at 01:51:11, John Kristensen wrote:
> this is an issue with the existing run_tests.py script

Correct.
Someone had asked for specific examples of run_tests.py problems,
and this is one.

[python3 port]
Of course this issue is the wrong place, I suggest to add a wiki-page
now to write down your thoughts.
msg5457 Author: [hidden] (jerrykan) Date: 2016-01-12 12:53
I have updated the run_tests.py script to use py.test v2.7.3, rebased
the patch set against the current tip, cleaned up a few things, and
pushed the changes (hg:63c79c0992ae .. hg:c977f3530944).

I'll now close this issue, but if anyone notices any issues with the
tests now we are using py.test, please create a new issue and I'll have
a look.
msg5458 Author: [hidden] (jerrykan) Date: 2016-01-12 12:55
(apparently hg:<hash> isn't enough to automatically link to the repo)

hg:5033:63c79c0992ae .. hg:5038:c977f3530944

Happy testing.
History
Date User Action Args
2016-01-12 12:55:22jerrykansetmessages: + msg5458
2016-01-12 12:53:31jerrykansetstatus: new -> closed
resolution: accepted
messages: + msg5457
2016-01-06 09:19:49bersetmessages: + msg5410
2016-01-06 00:51:11jerrykansetmessages: + msg5403
2016-01-05 21:37:00bersetmessages: + msg5402
2016-01-05 21:35:21bersetnosy: + ber
messages: + msg5401
2015-09-07 13:08:37jerrykansetmessages: + msg5373
2015-09-07 13:00:47jerrykansetmessages: + msg5372
2015-09-07 08:44:10schlatterbecksetmessages: + msg5371
2015-09-07 08:20:45techtoniksetfiles: + unnamed
messages: + msg5370
2015-09-07 02:47:57jerrykansetmessages: + msg5369
2015-09-03 15:52:11techtoniksetfiles: + unnamed
messages: + msg5367
2015-08-22 14:15:34jerrykansetmessages: + msg5366
2015-08-22 14:02:04jerrykansetmessages: + msg5365
2015-08-22 07:42:34techtoniksetfiles: + unnamed
messages: + msg5364
2015-08-22 05:08:35jerrykansetmessages: + msg5363
2015-08-21 15:59:03techtoniksetfiles: + unnamed
nosy: + techtonik
messages: + msg5362
2015-08-21 08:40:27schlatterbecksetnosy: + schlatterbeck
2015-08-21 08:27:36jerrykansetmessages: + msg5359
2015-08-21 08:22:23jerrykansetfiles: + 0005-Remove-unneeded-TestSuite-code-from-tests.patch
2015-08-21 08:22:12jerrykansetfiles: + 0004-Replace-existing-run_tests.py-script-with-a-pytest-s.patch
2015-08-21 08:21:52jerrykansetfiles: + 0003-Rename-TestMessage-to-ExampleMessage.patch
2015-08-21 08:21:42jerrykansetfiles: + 0002-Update-test-test_actions.py-to-work-with-py.test.patch
2015-08-21 08:21:30jerrykansetkeywords: + patch
files: + 0001-Update-tests-to-work-with-py.test.patch
2015-08-21 08:21:07jerrykancreate