Chapter 9. Testing

Until now, we have reviewed the different steps involved in building the photoblog application but we have not tested our design and implementation. This chapter will introduce some testing techniques such as unit, functional, and load testing using open-source products such as unittest, CherryPy webtest, FunkLoad, and Selenium. By the end of this chapter, you should have a good understanding of how to use these tools in their context and improve the test suite for your applications.

Why Testing

Why testing, some might wonder? Does it bring any value to the application? You may believe that if a problem exists in your code, it will be reported and eventually fixed, and you may therefore argue that testing is fairly irrelevant and time consuming. If you do believe this, then with the help of this chapter we will try to show you that testing is not just the cherry on the cake but actually part of the recipe for success.

Testing is a process during which the application is audited from different perspectives in order to:

  • Find bugs

  • Find differences between expected and actual results, outputs, states, etc.

  • Understand how complete the implementation is

  • Exercise the application in realistic situations before its release

The goal of testing is not to put the developer at fault but to provide tools to estimate the health of the application at a given time. Testing measures the quality of an application.

Testing is, therefore, not just a part of the application life cycle but is actually the true barometer of where the application stands in that cycle. Lines of code are meaningless on their own; test summaries and test reports are the reference points that the different members of a project can relate to in order to understand what has been achieved, what still needs to be achieved, and how to plan it.

Planning a Test

From the previous section we can say that since testing is so critical to a project, everything should be tested and reviewed. This is true, but it does not mean the same amount of resources and efforts should be allocated to every part of the system under test.

First of all, it depends on the position of the project in its life cycle. For instance, there is little need for performance testing right at the beginning of the project, and there might not be a need for capacity testing at all if the application does not require many hardware or network resources. That being said, some tests will be carried out all along the life cycle of the project, built up by successive iterations that strengthen the test suite each time.

To summarize, testing needs to be planned in advance in order to define:

  • Goals: What is relevant to test, and for what purpose?

  • Scope: What is in the scope of the test? What is not?

  • Requirements: What will the test involve in terms of resources (human, software, hardware, etc.)?

  • Risks: What are the risks related to that test if it does not pass? What will be the mitigation and action taken? Will it stop the project? What is the impact?

These are just a few points to be kept in mind while planning a test.

Another important point is that testing does not end once the application is released; it carries on afterwards to ensure that the production release keeps meeting the defined requirements. In any case, since testing draws together so many different aspects, it should be seen as a long, continuous process.

Common Testing Approach

Testing is a generic term for a range of aspects to be validated on a system or application. Here is a brief list of the common ones:

  • Unit testing: Usually carried out by the developers themselves. Unit tests aim at checking whether a unit of code works as expected.

  • Usability testing: Developers often forget that they are writing an application for end users who do not have intimate knowledge of the system, and they may end up making it unusable. Functional and usability tests provide a way to make sure that the application will fulfill user expectations.

  • Functional/Acceptance testing: While usability testing checks whether the application or system is usable, functional testing ensures that every specified functionality is implemented.

  • Load and performance testing: Once an application or system has reached a certain level of completeness, it may require load and performance tests to be conducted in order to understand whether the system can cope with its expected peak load and to find potential bottlenecks. This can lead to changing hardware, optimizing SQL queries, etc.

  • Regression testing: Regression testing verifies that successive releases of a product do not break any of the previously working functionalities. Unit testing can be considered as a part of regression testing in some ways.

  • Reliability and resilience testing: Some applications or systems cannot afford to break at any time. Reliability and resilience tests can validate how the system or application copes with the breakdown of one or several components.

The previous list is far from being exhaustive and each system or application environment may require specific types of testing to be defined.

Unit Testing

Our photoblog application will extensively use unit tests in order to constantly check the following:

  • New functionalities work correctly and as expected.

  • Existing functionalities are not broken by new code releases.

  • Defects are fixed and remain fixed.

Python comes with a standard unittest module and also provides a doctest module, which offers a different approach to unit testing, as we will explain later on.

unittest

unittest is rooted in JUnit, a Java unit test package developed by Kent Beck and Erich Gamma, which in turn came from a Smalltalk testing framework developed by Kent Beck. Let's now review a basic example of this module.

Unit tests can often work on mock objects, so called because they support the same interface as the domain objects of the application but do not actually perform any work: they simply return defined data. Mock objects therefore allow testing against an interface of our design without having to rely on the overall application being deployed, for instance. They also provide a way to run tests in isolation from other tests.

First let's define a dummy class as follows:

class Dummy:
    def __init__(self, start=0, left_boundary=-10, right_boundary=10,
                 allow_positive=True, allow_negative=False):
        self.current = start
        self.left_boundary = left_boundary
        self.right_boundary = right_boundary
        self.allow_positive = allow_positive
        self.allow_negative = allow_negative

    def forward(self):
        next = self.current + 1
        if (next > 0) and (not self.allow_positive):
            raise ValueError, "Positive values are not allowed"
        if next > self.right_boundary:
            raise ValueError, "Right boundary reached"
        self.current = next
        return self.current

    def backward(self):
        prev = self.current - 1
        if (prev < 0) and (not self.allow_negative):
            raise ValueError, "Negative values are not allowed"
        if prev < self.left_boundary:
            raise ValueError, "Left boundary reached"
        self.current = prev
        return self.current

    def __str__(self):
        return str(self.current)

    def __repr__(self):
        return "Dummy object at %s" % hex(id(self))

This class provides an interface to get the next or previous value within a range defined by the left and right boundaries. We could imagine it as a mock object of a more complex class but providing dummy data.

A simple usage of this class is as follows:

>>> from dummy import Dummy
>>> dummy = Dummy()
>>> dummy.forward()
1
>>> dummy.forward()
2
>>> dummy.backward()
1
>>> dummy.backward()
0
>>> dummy.backward()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "dummy.py", line 27, in backward
raise ValueError, "Negative values are not allowed"
ValueError: Negative values are not allowed

Let's imagine we wish to unit test this exciting module to make sure that the code is correct.

import unittest

class DummyTest(unittest.TestCase):
    def test_01_forward(self):
        dummy = Dummy(right_boundary=3)
        self.assertEqual(dummy.forward(), 1)
        self.assertEqual(dummy.forward(), 2)
        self.assertEqual(dummy.forward(), 3)
        self.assertRaises(ValueError, dummy.forward)

    def test_02_backward(self):
        dummy = Dummy(left_boundary=-3, allow_negative=True)
        self.assertEqual(dummy.backward(), -1)
        self.assertEqual(dummy.backward(), -2)
        self.assertEqual(dummy.backward(), -3)
        self.assertRaises(ValueError, dummy.backward)

    def test_03_boundaries(self):
        dummy = Dummy(right_boundary=3, left_boundary=-3,
                      allow_negative=True)
        self.assertEqual(dummy.backward(), -1)
        self.assertEqual(dummy.backward(), -2)
        self.assertEqual(dummy.forward(), -1)
        self.assertEqual(dummy.backward(), -2)
        self.assertEqual(dummy.backward(), -3)
        self.assertRaises(ValueError, dummy.backward)
        self.assertEqual(dummy.forward(), -2)
        self.assertEqual(dummy.forward(), -1)
        self.assertEqual(dummy.forward(), 0)
        self.assertEqual(dummy.backward(), -1)
        self.assertEqual(dummy.forward(), 0)
        self.assertEqual(dummy.forward(), 1)
        self.assertEqual(dummy.forward(), 2)

Let's explain this code step by step:

  1. To provide unit test capabilities using the unittest standard module, you only need to import that specific module.

  2. Create a class that subclasses unittest.TestCase, which is the interface providing unit test functionality to our code. This class is referred to as a test case.

  3. Create methods whose names start with the word test. Each such method will be called by the unittest internal handler. Notice that the methods this class defines also use a two-digit pattern. This is not required by unittest, but it allows us to force methods to be called in the order we wish. Indeed, unittest calls methods in alphanumeric order, which can sometimes lead to unexpected results. Providing digits like this is a good way to work around that limitation.

  4. Call the different assert/fail methods provided by the TestCase class to check values, exceptions, outputs, etc.

The next step is to run this test case as follows:

if __name__ == '__main__':
    unittest.main()

This assumes that the call to main() is done from within the same module containing the TestCase class. The result of this test looks like the following:

...
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK

It is common to make the output a little more verbose as follows:

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(verbosity=2))

This will produce the following output:

test_01_forward (__main__.DummyTest) ... ok
test_02_backward (__main__.DummyTest) ... ok
test_03_boundaries (__main__.DummyTest) ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK

Now let's provoke an error so that one of the tests fails. In test_01_forward replace the first assertEqual with the following:

self.assertEqual(dummy.forward(), 0)

Then while running the test again you should get the following output:

test_01_forward (__main__.DummyTest) ... FAIL
test_02_backward (__main__.DummyTest) ... ok
test_03_boundaries (__main__.DummyTest) ... ok
======================================================================
FAIL: test_01_forward (__main__.DummyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "dummy.py", line 54, in test_01_forward
self.assertEqual(dummy.forward(), 0)
AssertionError: 1 != 0
----------------------------------------------------------------------
Ran 3 tests in 0.001s
FAILED (failures=1)

As you can see, the unittest module does not stop processing the remaining tests when one of them fails. Instead, it displays the traceback of the raised assertion error. Here the test itself is wrong, but had the assertion been a valid one, the failure would have pointed to a defect in your application.
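When chasing a single failure such as this one, it can be convenient to run only the offending test method rather than the whole test case. unittest allows this by instantiating the test case with the method name, as in the following quick sketch (not part of the chapter's code; it assumes DummyTest is importable):

import unittest

# run only test_01_forward from the DummyTest test case
suite = unittest.TestSuite([DummyTest('test_01_forward')])
unittest.TextTestRunner(verbosity=2).run(suite)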

Let's now assume that we write a test for constructing a Dummy object whose starting point lies beyond the right boundary. We assume that the documentation tells us that the constructor should raise an exception expressing the fact that the class has rejected this case.

Let's create test_00_construct accordingly:

def test_00_construct(self):
    self.assertRaises(ValueError, Dummy, start=34)

Let's run the test now:

test_00_construct (__main__.DummyTest) ... FAIL
test_01_forward (__main__.DummyTest) ... ok
test_02_backward (__main__.DummyTest) ... ok
test_03_boundaries (__main__.DummyTest) ... ok
======================================================================
FAIL: test_00_construct (__main__.DummyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "dummy.py", line 50, in test_00_construct
self.assertRaises(ValueError, Dummy, start=34)
AssertionError: ValueError not raised
----------------------------------------------------------------------
Ran 4 tests in 0.003s
FAILED (failures=1)

As you can see, the test case does fail on the new test we have included. The reason is that the Dummy.__init__() method does not contain any error handling for this case, contrary to what the documentation told us. Let's fix this by adding the following code at the bottom of the __init__() method:

if (start > right_boundary) or (start < left_boundary):
    raise ValueError, "Start point must belong to the boundaries"

Let's now re-run the test:

test_00_construct (__main__.DummyTest) ... ok
test_01_forward (__main__.DummyTest) ... ok
test_02_backward (__main__.DummyTest) ... ok
test_03_boundaries (__main__.DummyTest) ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.000s
OK

The previous example shows that it is sometimes desirable to write the test before implementing the functionality itself, in order to avoid designing the test to match the code's behavior. This is often called test-driven development. Another way to achieve this is to provide the API of the application or library to a third party, who will write the test case based on that API in a neutral fashion. Either way, the previous example demonstrates that unit testing is only relevant when the tests are coherent with the design and genuinely exercise the implementation.

Now that we have introduced the unittest module, let's present the doctest module.

doctest

The doctest module supports running Python code inlined within an object's docstring. The advantage of this technique is that test cases sit close to the code they test. The drawback is that some complex tests can be difficult to express this way. Let's see an example based on the class we defined earlier.

class Dummy:
    def __init__(self, start=0, left_boundary=-10, right_boundary=10,
                 allow_positive=True, allow_negative=False):
        """
        >>> dummy = Dummy(start=27)
        Traceback (most recent call last):
        ...
        raise ValueError, "Start point must belong to the boundaries"
        ValueError: Start point must belong to the boundaries
        >>> dummy = Dummy()
        >>> dummy.backward()
        Traceback (most recent call last):
        ...
        raise ValueError, "Negative values are not allowed"
        ValueError: Negative values are not allowed
        """
        self.current = start
        self.left_boundary = left_boundary
        self.right_boundary = right_boundary
        self.allow_positive = allow_positive
        self.allow_negative = allow_negative
        if (start > right_boundary) or (start < left_boundary):
            raise ValueError, "Start point must belong to the boundaries"

    def forward(self):
        """
        >>> dummy = Dummy(right_boundary=3)
        >>> dummy.forward()
        1
        >>> dummy.forward()
        2
        >>> dummy.forward()
        3
        >>> dummy.forward()
        Traceback (most recent call last):
        ...
        raise ValueError, "Right boundary reached"
        ValueError: Right boundary reached
        """
        next = self.current + 1
        if (next > 0) and (not self.allow_positive):
            raise ValueError, "Positive values are not allowed"
        if next > self.right_boundary:
            raise ValueError, "Right boundary reached"
        self.current = next
        return self.current

    def backward(self):
        """
        >>> dummy = Dummy(left_boundary=-3, allow_negative=True)
        >>> dummy.forward()
        1
        >>> dummy.backward()
        0
        >>> dummy.backward()
        -1
        >>> dummy.backward()
        -2
        >>> dummy.backward()
        -3
        >>> dummy.backward()
        Traceback (most recent call last):
        ...
        raise ValueError, "Left boundary reached"
        ValueError: Left boundary reached
        """
        prev = self.current - 1
        if (prev < 0) and (not self.allow_negative):
            raise ValueError, "Negative values are not allowed"
        if prev < self.left_boundary:
            raise ValueError, "Left boundary reached"
        self.current = prev
        return self.current

    def __str__(self):
        return str(self.current)

    def __repr__(self):
        return "Dummy object at %s" % hex(id(self))

As you can see, each method you wish to test must have a docstring containing use cases that will be run as-is by the doctest module.

Then you can run the test as follows:

import doctest

if __name__ == '__main__':
    doctest.testmod()

Running the module with the -v flag then shows each example being checked:

sylvain@[test]$ python dummy.py -v
Trying:
dummy = Dummy(start=27)
Expecting:
Traceback (most recent call last):
...
raise ValueError, "Start point must belong to the boundaries"
ValueError: Start point must belong to the boundaries
ok
Trying:
dummy = Dummy()
Expecting nothing
ok
Trying:
dummy.backward()
Expecting:
Traceback (most recent call last):
...
raise ValueError, "Negative values are not allowed"
ValueError: Negative values are not allowed
ok
Trying:
dummy = Dummy(left_boundary=-3, allow_negative=True)
Expecting nothing
ok
Trying:
dummy.forward()
Expecting:
1
ok

We do not reproduce the complete result trace as it is too long for the purpose of this chapter. You may consider that mixing code and documentation reduces the efficiency of both, making the documentation harder to read. This concern is actually raised by the doctest module documentation itself, which sensibly advises handling docstring examples with care. Indeed, since the examples belong to the docstring, they are displayed when viewing the documentation:

>>> from dummy import Dummy
>>> help(Dummy.forward)
Help on method forward in module dummy:

forward(self) unbound dummy.Dummy method
    >>> dummy = Dummy(right_boundary=3)
    >>> dummy.forward()
    1
    >>> dummy.forward()
    2
    >>> dummy.forward()
    3
    >>> dummy.forward()
    Traceback (most recent call last):
    ...
    raise ValueError, "Right boundary reached"
    ValueError: Right boundary reached

In such cases the examples either genuinely belong to the documentation, or they are too complex and end up making the documentation unusable.

In a nutshell, both the unittest and doctest modules deserve to be reviewed against your requirements, and it is common to find both used in a single project to provide a strong unit-test suite. In any case, we recommend that you read the documentation of both modules, which will show that there is much more to them than the brief introduction given in this chapter. In addition, a very informative mailing list is available at http://lists.idyll.org/listinfo/testing-in-python.
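One convenient way to combine the two approaches is to wrap the docstring examples into a regular unittest suite with doctest.DocTestSuite, so that both kinds of tests run through the same runner. The following is a brief sketch that assumes the Dummy class with its docstrings lives in a module named dummy:

import doctest
import unittest

import dummy  # the module whose docstrings carry the examples

# collect the doctest examples as unittest test cases
suite = unittest.TestSuite()
suite.addTest(doctest.DocTestSuite(dummy))
# regular TestCase classes could be added to the same suite here
unittest.TextTestRunner(verbosity=2).run(suite)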

Unit Testing Web Applications

In the previous section, we presented two standard modules to perform unit testing on Python applications and packages. Unfortunately, as they stand, they lack some features that help in specific contexts such as web applications. The Python community has come up with solutions, and there are several good extensions to unittest, as well as completely distinct test packages, to help us.

We will use an extension to unittest, provided by CherryPy, called webtest and developed by Robert Brewer.

This module provides transparent integration with CherryPy and a command-line helper to test different server configurations. It allows a test to be stopped when a failure occurs, offers access to the HTTP stack when an error is raised, supports code coverage and profiling, etc. In a nutshell, this module automatically starts a CherryPy server on which each test case mounts the CherryPy applications it needs for the test run and against which it performs HTTP requests.

This section will not show all the different test cases of our photoblog application; you will find them within the source code of the application. Based on what we have explained in the previous section, we design our test cases as follows:

class TestServicesREST(PhotoblogTest):
    def test_00_REST(self):
        self.getPage("/services/rest/")
        self.assertStatus(404)
        self.getPage("/services/rest/album/", method="XYU")
        self.assertStatus(405)

    def test_02_REST_GET(self):
        # missing the ID
        self.getPage("/services/rest/album/")
        self.assertStatus(400)
        # missing the Accept header
        self.getPage("/services/rest/album/2")
        self.assertStatus(406)
        # wrong ID type
        self.getPage("/services/rest/album/st",
                     headers=[("Accept", "application/json")])
        self.assertStatus(404)
        self.getPage("/services/rest/album/2",
                     headers=[("Accept", "application/json")])
        self.assertStatus(200)
        self.assertHeader('Content-Type', 'application/json')
        self.assertHeader('Allow', 'DELETE, GET, HEAD, POST, PUT')
        self.getPage("/services/rest/album?album_id=2",
                     headers=[("Accept", "application/json")])
        self.assertStatus(200)
        self.assertHeader('Content-Type', 'application/json')
        self.assertHeader('Allow', 'DELETE, GET, HEAD, POST, PUT')

    def test_03_REST_POST(self):
        blog = self.photoblog
        params = {'title': 'Test2',
                  'author': 'Test demo', 'description': 'blah blah',
                  'content': 'more blah blah bluh',
                  'blog_id': str(blog.ID)}
        # let's transform the param dictionary
        # into a valid query string
        query_string = urllib.urlencode(params)
        self.getPage("/services/rest/album/", method="POST",
                     body=query_string,
                     headers=[("Accept", "application/json")])
        self.assertStatus(201)
        self.assertHeader('Content-Type', 'application/json')
        # here we miss the Accept header
        self.getPage("/services/rest/album/", method="POST",
                     body=query_string)
        self.assertStatus(406)

    def test_04_REST_PUT(self):
        blog = self.photoblog
        params = {'title': 'Test2',
                  'author': 'Test demo', 'description': 'blah blah',
                  'content': 'meh ehe eh', 'blog_id': str(blog.ID)}
        query_string = urllib.urlencode(params)
        # at this stage we don't have yet an album with that ID
        self.getPage("/services/rest/album/23", method="PUT",
                     body=query_string,
                     headers=[("Accept", "application/json")])
        self.assertStatus(404)
        self.getPage("/services/rest/album/4", method="PUT",
                     body=query_string,
                     headers=[("Accept", "application/json")])
        self.assertStatus(200)
        self.assertHeader('Content-Type', 'application/json')

    def test_06_REST_DELETE(self):
        self.getPage("/services/rest/album/4", method="DELETE")
        self.assertStatus(200)
        # DELETE is idempotent and should always return 200 in case
        # of success
        self.getPage("/services/rest/album/4", method="DELETE")
        self.assertStatus(200)

    def test_05_REST_Collection_GET(self):
        self.getPage("/services/rest/albums/3")
        self.assertStatus(400, 'Invalid range')
        self.getPage("/services/rest/albums/a")
        self.assertStatus(400, 'Invalid range')
        self.getPage("/services/rest/albums/0-")
        self.assertStatus(400, 'Invalid range')
        self.getPage("/services/rest/albums/a+3")
        self.assertStatus(400, 'Invalid range')
        self.getPage("/services/rest/albums/3-a")
        self.assertStatus(400, 'Invalid range')
        self.getPage("/services/rest/albums/0+3")
        self.assertStatus(400, 'Invalid range')
        # valid range but missing Accept header
        self.getPage("/services/rest/albums/0-3")
        self.assertStatus(406)
        self.getPage("/services/rest/albums/0-3",
                     headers=[("Accept", "application/json")])
        self.assertStatus(200)
        self.assertHeader('Content-Type', 'application/json')
        json = simplejson.loads(self.body)
        self.failUnless(isinstance(json, list))
        self.failUnlessEqual(len(json), 3)

The test case above is only an example of the different tests we can conduct against our application; in reality, more tests would be required to ensure that the application works as expected and to perform regression testing.

As you can see, our test case performs HTTP requests and validates the content of the responses as well as their headers. The simplicity of these validations comes from the unit testing extension provided by the webtest module. Let's now see in detail how to set up that module to run the test case shown earlier.

First let's create a test.py module containing the following code:

import os.path
import sys

# Tell Python where to find our application's modules.
sys.path.append(os.path.abspath('..'))

# CherryPy main test module
from cherrypy.test import test as cptest

# load the global application settings
current_dir = os.path.abspath(os.path.dirname(__file__))
conf.from_ini(os.path.join(current_dir, 'application.conf'))

from models import Photoblog, Album, Film, Photo

# dejavu main arena object
arena = storage.arena
# register our models with dejavu
storage.setup()

def initialize():
    for cls in (Photoblog, Album, Film, Photo):
        arena.create_storage(cls)

def shutdown():
    for cls in (Photoblog, Album, Film, Photo):
        if arena.has_storage(cls):
            arena.drop_storage(cls)

def run():
    """
    entry point to the test suite
    """
    try:
        initialize()
        # module names without the trailing .py
        # that this test will run. They must belong
        # to the same directory as test.py
        test_list = ['test_models', 'test_services']
        cptest.CommandLineParser(test_list).run()
    finally:
        shutdown()
        print
        raw_input('hit enter to terminate the test')

if __name__ == '__main__':
    run()

Let's inspect what the test.py module can achieve:

sylvain@[test]$ python test.py --help
CherryPy Test Program
Usage:
test.py --server=* --host=127.0.0.1 --port=8080 --1.0 --cover
--basedir=path --profile --validate --conquer --dumb --tests**
* servers:
--server=modpygw: modpygw
--server=wsgi: cherrypy._cpwsgi.CPWSGIServer (default)
--server=cpmodpy: cpmodpy
--host=<name or IP addr>: use a host other than the default
(127.0.0.1).
Not yet available with mod_python servers.
--port=<int>: use a port other than the default (8080)
--1.0: use HTTP/1.0 servers instead of default HTTP/1.1
--cover: turn on code-coverage tool
--basedir=path: display coverage stats for some path other than
cherrypy.
--profile: turn on profiling tool
--validate: use wsgiref.validate (builtin in Python 2.5).
--conquer: use wsgiconq (which uses pyconquer) to trace calls.
--dumb: turn off the interactive output features.
** tests:
--test_models
--test_services

As you can see, our test supports a handful of options allowing us to run our tests in different configurations, such as using the built-in HTTP server or a mod_python handler, as we will explain in Chapter 10.
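For instance, based on the options listed above, we could run only the services tests against the default built-in server, or run the whole suite through the mod_python handler with code coverage enabled. The following invocations are illustrative; the option names are those printed by the help screen:

sylvain@[test]$ python test.py --test_services
sylvain@[test]$ python test.py --server=cpmodpy --cover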

Next we create a PhotoblogTest class, which will be the base class of our test cases. In a module called blogtest.py we add the following code:

from cherrypy.test import helper

# default blog name for the test suite
blog_name = u"photoblog"

from models import Photoblog

class PhotoblogTest(helper.CPWebCase):
    def photoblog(self):
        blog = Photoblog.find_by_name(blog_name)
        if not blog:
            self.fail("Could not find blog '%s'" % blog_name)
        return blog
    photoblog = property(photoblog,
                         doc="Returns a blog object to work against")

The PhotoblogTest class inherits from the CherryPy CPWebCase class, which provides a set of assertion methods to check the responses obtained during a web test. For instance, the CPWebCase class defines the following:

  • assertStatus(status) to verify the status of the last response

  • assertHeader(name, value=None) to verify whether a header is present as well as ensure that the value, if not None, is the one provided

  • assertBody(value) to check the returned body is the one we expected

  • assertInBody(value) to verify that the returned content contains a given value

This class also comes with the getPage(uri, method, headers, body) method to issue an HTTP request.

Our PhotoblogTest class defines the photoblog property so that tests can easily get a reference to the blog we create by default throughout the life of the test.
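Before diving into the photoblog-specific helpers, here is a minimal, self-contained sketch of what a CPWebCase-based test looks like on its own. The Root application and the assertions are purely illustrative; the module would be run through the same CommandLineParser driver shown in test.py, which calls setup_server() before running the tests:

import cherrypy
from cherrypy.test import helper

class Root:
    @cherrypy.expose
    def index(self):
        return "hello"

def setup_server():
    # called by the CherryPy test driver before the tests run
    cherrypy.tree.mount(Root(), '/')

class MinimalTest(helper.CPWebCase):
    def test_index(self):
        self.getPage("/")
        self.assertStatus(200)
        self.assertBody("hello")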

The blogtest.py module also contains the following functions used to set up the server for the life cycle of a test:

from lib import storage
import services
from models import Album, Film, Photo

def populate_storage():
    photoblog = Photoblog()
    photoblog.create(blog_name, u'Yeah')
    a1 = Album()
    a1.create(photoblog, "Test album",
              "Test", "blah blah", "more blah blah")

def reset_storage():
    # here we simply remove every object a test has left
    # in the storage so that we have a clean
    # storage for the next test case run
    photoblog = Photoblog.find_by_name(blog_name)
    photoblog.delete()

def setup_photoblog_server():
    # Update the CherryPy global configuration
    cherrypy.config.update(os.path.join(current_dir, 'http.conf'))
    # fill the storage with default values for the purpose of the test
    populate_storage()
    # Construct the published trees
    services_app = services.construct_app()
    # Mount the application on the '/services' prefix
    engine_conf_path = os.path.join(current_dir, 'engine.conf')
    service_app = cherrypy.tree.mount(services_app, '/services',
                                      config=engine_conf_path)
    service_app.merge(services.services_conf)

def teardown_photoblog_server():
    reset_storage()

The setup_photoblog_server() function is responsible for setting up the photoblog application and loading the different configuration settings, which must be part of the test directory. For instance, we could point the storage at a different database name so that we do not run the tests against a production database.
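As an illustration, a test-specific http.conf could simply pin the server to a local address and keep logging quiet while the suite runs; the storage settings pointing at a throwaway database would live in the application's own configuration files in the same spirit. The snippet below is a hypothetical sketch, not the file shipped with the photoblog source:

[global]
server.socket_host = "127.0.0.1"
server.socket_port = 8080
log.screen = False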

Finally, we define our test cases in a module named test_services.py as follows:

import httplib
import os.path
import urllib

import cherrypy
import simplejson

from models import Photoblog, Album, Film, Photo
from blogtest import PhotoblogTest, blog_name, \
     setup_photoblog_server, teardown_photoblog_server

current_dir = os.path.abspath(os.path.dirname(__file__))

def setup_server():
    setup_photoblog_server()

def teardown_server():
    teardown_photoblog_server()

# Here we insert the TestServicesREST class definition
# that we have seen at the beginning of this section

Let's explain how this module is constructed:

  1. We must import a bunch of modules to perform specific tasks for our tests.

  2. Our test case subclasses the PhotoblogTest class that we described earlier.

  3. We need to define two functions, setup_server() and teardown_server(), which will be called automatically by the CherryPy test module each time it starts and finishes running a test module. This allows us to initialize our photoblog application for the test case.

  4. Finally, we add the TestServicesREST class as our test case.

Let's now run the entire test suite:

sylvain@[test]$ python test.py
Python version used to run this test script: 2.5
CherryPy version 3.0.0
HTTP server version HTTP/1.1
Running tests: cherrypy._cpwsgi.CPWSGIServer
No handlers could be found for logger "cherrypy.error"
test_00_Photoblog_unit (test_models.TestModels) ... ok
test_01_Photoblog_create (test_models.TestModels) ... ok
test_02_Photoblog_retrieve_by_name (test_models.TestModels) ... ok
test_03_Photoblog_retrieve_by_unknown_name (test_models.TestModels)
... ok
test_04_Photoblog_retrieve_by_unsupported_id_type
(test_models.TestModels) ... ok
test_05_Photoblog_update (test_models.TestModels) ... ok
test_06_Photoblog_populate (test_models.TestModels) ... ok
test_10_Album_unit (test_models.TestModels) ... ok
test_99_Photoblog_delete (test_models.TestModels) ... ok
test_00_REST (test_services.TestServicesREST) ... ok
test_01_REST_HEAD (test_services.TestServicesREST) ... ok
test_02_REST_GET (test_services.TestServicesREST) ... ok
test_03_REST_POST (test_services.TestServicesREST) ... ok
test_04_REST_PUT (test_services.TestServicesREST) ... ok
test_05_REST_Collection_GET (test_services.TestServicesREST) ... ok
test_06_REST_DELETE (test_services.TestServicesREST) ... ok

If on the other hand you wish to run only one module:

sylvain@[test]$ python test.py --models
Python version used to run this test script: 2.5
CherryPy version 3.0.0
HTTP server version HTTP/1.1
Running tests: cherrypy._cpwsgi.CPWSGIServer
No handlers could be found for logger "cherrypy.error"
test_00_Photoblog_unit (test_models.TestModels) ... ok
test_01_Photoblog_create (test_models.TestModels) ... ok
test_02_Photoblog_retrieve_by_name (test_models.TestModels) ... ok
test_03_Photoblog_retrieve_by_unknown_name (test_models.TestModels)
... ok
test_04_Photoblog_retrieve_by_unsupported_id_type (test_models.
TestModels) ... ok
test_05_Photoblog_update (test_models.TestModels) ... ok
test_06_Photoblog_populate (test_models.TestModels) ... ok
test_10_Album_unit (test_models.TestModels) ... ok
test_99_Photoblog_delete (test_models.TestModels) ... ok

As you can see, writing unit tests using the CherryPy test module makes testing a CherryPy-based application an easy task, because CherryPy takes care of many common burdens, allowing the tester to focus on what really matters.

Performance and Load Testing

Depending on the application you are writing and your expectations in terms of volume, you may need to run load and performance tests in order to detect potential bottlenecks that prevent the application from reaching a certain level of performance.

This section will not detail how to conduct a performance or load test, as that is out of its scope, but we will review one Python solution: FunkLoad, provided by Nuxeo, a French company specializing in free software written in Python. You can install FunkLoad via the easy_install command. FunkLoad is available at http://funkload.nuxeo.org/.

FunkLoad is an extension to the webunit module, a Python module oriented towards unit testing web applications. FunkLoad comes with a fairly extensive API and a set of tools that take care of extracting metrics from a load test and eventually generate test reports with nice-looking charts.

Let's see an extremely basic example of using FunkLoad.

from funkload.FunkLoadTestCase import FunkLoadTestCase

class LoadHomePage(FunkLoadTestCase):
    def test_homepage(self):
        server_url = self.conf_get('main', 'url')
        nb_time = self.conf_getInt('test_homepage', 'nb_time')
        home_page = "%s/" % server_url
        for i in range(nb_time):
            self.logd('Try %i' % i)
            self.get(home_page, description='Get home page')

if __name__ in ('main', '__main__'):
    import unittest
    unittest.main()

Let's understand this example in detail:

  1. Your test case must inherit from the FunkLoadTestCase class so that FunkLoad can do its internal job of tracking what happens during the test.

  2. Your class name is important, as FunkLoad will look for a configuration file named after it, in our case LoadHomePage.conf, in the test directory.

  3. Your test has direct access to the configuration file and gets values as follows:

    • conf_get(section, key) returns the value as a string.

    • conf_getInt(section, key) returns the value as an integer.

    • conf_getFloat(section, key) returns the value as a float.

    • conf_getList(section, key) returns a colon-separated value as a list of strings.

  4. You then simply call the get() or post() method to issue a request against the server and retrieve the response returned by these methods. A short sketch combining these helpers follows this list.
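As an illustration of these helpers, a test that walks through a list of pages taken from the configuration could look roughly like the following sketch (the test_pages section, its pages key, and the class name are hypothetical, not part of the photoblog code):

from funkload.FunkLoadTestCase import FunkLoadTestCase

class LoadPages(FunkLoadTestCase):
    def test_pages(self):
        server_url = self.conf_get('main', 'url')
        # hypothetical colon-separated key, e.g. pages=/:/album:/photo
        pages = self.conf_getList('test_pages', 'pages')
        for page in pages:
            self.get("%s%s" % (server_url, page),
                     description='Get %s' % page)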

Internally, FunkLoad creates a set of metrics for the test and saves them in an XML file that can be processed later.

Let's analyze the LoadHomePage.conf settings:

[main]
title=Photoblog home page
description=Access the photoblog home page
url=http://localhost:8080
[test_homepage]
description=Access %(nb_time)s times the following pages: %(pages)s.
nb_time=3
pages=/
[ftest]
log_to = console file
log_path = logs/load_home_page.log
result_path = logs/load_home_page.xml
sleep_time_min = 0
sleep_time_max = 2

The main section contains global settings for the test, whereas the test_homepage section contains values specific to the test_homepage() method of our test case. The ftest section is used by FunkLoad for internal processing.

After starting an instance of the photoblog application server, we run the test:

sylvain@[test]$ python test_load_home_page.py
test_homepage: Starting -----------------------------------
Access 3 times the following pages: /.
test_homepage: Try 0
test_homepage: GET: http://localhost:8080/
Page 1: Get home page ...
test_homepage: Done in 0.039s
test_homepage: Load css and images...
test_homepage: Done in 0.044s
test_homepage: Try 1
test_homepage: GET: http://localhost:8080/
Page 2: Get home page ...
test_homepage: Done in 0.041s
test_homepage: Load css and images...
test_homepage: Done in 0.000s
test_homepage: Try 2
test_homepage: GET: http://localhost:8080/
Page 3: Get home page ...
test_homepage: Done in 0.051s
test_homepage: Load css and images...
test_homepage: Done in 0.000s
.
----------------------------------------------------------------------
Ran 1 test in 2.149s
OK

The previous test is not really a load test yet. To use it as a load or performance test, we need to use a FunkLoad tool called fl-run-bench. This command-line tool will run a benchmark using a test like the one we have just created.

A benchmark simulates virtual users running concurrently to exercise the server realistically. For instance, if we want to benchmark three cycles of 5, 10, and 20 virtual users, each lasting 30 seconds, we would do the following.

First add the following sections to the configuration file:

[bench]
cycles = 5:10:20
duration = 30
startup_delay = 0.05
sleep_time = 1
cycle_time = 1
log_to = file
log_path = logs/load_home_page.log
result_path = logs/load_home_page.xml
sleep_time_min = 0
sleep_time_max = 0.6

Then launch the benchmark:

sylvain@[test]$ fl-run-bench test_load_home_page.py \
LoadHomePage.test_homepage
=======================================
Benching LoadHomePage.test_homepage
=======================================
Access 3 times the following pages: /.
------------------------------------------------------------------------
Configuration
=============
* Current time: 2007-02-28T13:43:22.376339
* Configuration file: load/LoadHomePage.conf
* Log xml: logs/load_home_page.xml
* Server: http://localhost:8080
* Cycles: [5, 10, 20]
* Cycle duration: 30s
* Sleeptime between request: from 0.0s to 0.6s
* Sleeptime between test case: 1.0s
* Startup delay between thread: 0.05s
Benching
========
Cycle #0 with 5 virtual users
-----------------------------
* Current time: 2007-02-28T13:43:22.380481
* Starting threads: ..... done.
* Logging for 30s (until 2007-02-28T13:43:52.669762): .... done.
* Waiting end of threads: ..... done.
* Waiting cycle sleeptime 1s: ... done.
* End of cycle, 33.46s elapsed.
* Cycle result: **SUCCESSFUL**, 76 success, 0 failure, 0 errors.
Cycle #1 with 10 virtual users
------------------------------
* Current time: 2007-02-28T13:43:55.837831
* Starting threads: .... done.
* Logging for 30s (until 2007-02-28T13:44:26.681356): .... done.
* Waiting end of threads: .......... done.
* Waiting cycle sleeptime 1s: ... done.
* End of cycle, 34.02s elapsed.
* Cycle result: **SUCCESSFUL**, 145 success, 0 failure, 0 errors.
Cycle #2 with 20 virtual users
------------------------------
* Current time: 2007-02-28T13:44:29.859868
* Starting threads: ....... done.
* Logging for 30s (until 2007-02-28T13:45:01.191106):
* Waiting end of threads: .................... done.
* Waiting cycle sleeptime 1s: ... done.
* End of cycle, 35.59s elapsed.
* Cycle result: **SUCCESSFUL**, 203 success, 0 failure, 0 errors.
Result
======
* Success: 424
* Failures: 0
* Errors: 0
Bench status: **SUCCESSFUL**

Now that we have run our benchmark we can create a report using the fl-build-report command-line tool as follows:

sylvain@[test]$ fl-build-report --html -o reports
logs/load_home_page.xml
Creating html report: ...done:
reports/test_homepage-2007-02-28T13-43-22/index.html

This will produce an HTML page with statistics gathered from the benchmark as shown in the following figure:

In addition to these modules, FunkLoad offers tools to test XML-RPC servers or to record tests directly from a browser, allowing complex tests to be developed easily. Please refer to the FunkLoad documentation for more details about these features.

Overall, FunkLoad is a powerful yet flexible and simple-to-use tool, providing a comprehensive load and performance-testing environment for Python web applications.

Functional Testing

Once your application's functionalities start taking shape, you may want to conduct a set of functional tests to validate your application's correctness against the specification. For a web application, this means going through the application from a browser, for example. However, since the tests have to be automated, this requires the use of third-party products such as Selenium (available at http://www.openqa.org/selenium/).

Selenium is a JavaScript-based open-source product, developed and maintained by the OpenQA team to perform functional and acceptance testing. It works directly within the browser it targets, helping to ensure the portability of the client-side code of the application.

Selenium comes in several packages:

  • Core: The core package allows a tester to design and run tests directly from the browser using pure HTML and JavaScript.

  • Remote Control: This package allows performing tests using common programming languages such as Python, Perl, Ruby, Java, or C#. Scripts written in these languages drive a browser to automate actions to be performed during the test.

  • IDE: The Selenium IDE is available as a Firefox extension to help the creation of tests by recording actions carried out via the browser itself. The tests can then be exported to be used by the Core and Remote Control packages.

Application under Test

Before we explain how the Selenium components work, we must introduce an example application. This application simply provides one web page with two links: the first one replaces the current page with a new one, while the second fetches data using Ajax. We use this example rather than our photoblog application for the sake of simplicity. The code of the application is as follows:

import datetime
import os.path

import cherrypy
import simplejson

_header = """<html>
<head><title>Selenium test</title></head>
<script type="application/javascript" src="MochiKit/MochiKit.js">
</script>
<script type="application/javascript" src="MochiKit/New.js">
</script>
<script type="application/javascript">
var fetchReport = function() {
    var xmlHttpReq = getXMLHttpRequest();
    xmlHttpReq.open("GET", "/fetch_report", true);
    xmlHttpReq.setRequestHeader('Accept', 'application/json');
    var d = sendXMLHttpRequest(xmlHttpReq);
    d.addCallback(function (data) {
        var reportData = evalJSONRequest(data);
        swapDOM($('reportName'),
                SPAN({'id': 'reportName'}, reportData['name']));
        swapDOM($('reportAuthor'),
                SPAN({'id': 'reportAuthor'}, reportData['author']));
        swapDOM($('reportUpdated'),
                SPAN({'id': 'reportUpdated'}, reportData['updated']));
    });
}
</script>
<body>
<div>
<a href="javascript:void(0);" onclick="fetchReport();">Get report via Ajax</a>
<br />
<a href="report">Get report</a>
</div>
<br />
"""

_footer = """
</body>
</html>
"""

class Dummy:
    @cherrypy.expose
    def index(self):
        return """%s
<div id="report">
<span>Name:</span>
<span id="reportName"></span>
<br />
<span>Author:</span>
<span id="reportAuthor"></span>
<br />
<span>Updated:</span>
<span id="reportUpdated"></span>
</div>%s""" % (_header, _footer)

    @cherrypy.expose
    def report(self):
        now = datetime.datetime.now().strftime("%d %b. %Y, %H:%M:%S")
        return """%s
<div id="report">
<span>Name:</span>
<span id="reportName">Music report (HTML)</span>
<br />
<span>Author:</span>
<span id="reportAuthor">Jon Doe</span>
<br />
<span>Updated:</span>
<span id="reportUpdated">%s</span>
</div>%s""" % (_header, now, _footer)

    @cherrypy.expose
    def fetch_report(self):
        now = datetime.datetime.now().strftime("%d %b. %Y, %H:%M:%S")
        cherrypy.response.headers['Content-Type'] = 'application/json'
        return simplejson.dumps({'name': 'Music report (Ajax)',
                                 'author': 'Jon Doe',
                                 'updated': now})

if __name__ == '__main__':
    current_dir = os.path.abspath(os.path.dirname(__file__))
    conf = {'/test': {'tools.staticdir.on': True,
                      'tools.staticdir.dir': "test",
                      'tools.staticdir.root': current_dir},
            '/MochiKit': {'tools.staticdir.on': True,
                          'tools.staticdir.dir': "MochiKit",
                          'tools.staticdir.root': current_dir},
            '/selenium': {'tools.staticdir.on': True,
                          'tools.staticdir.dir': "selenium",
                          'tools.staticdir.root': current_dir}}
    cherrypy.quickstart(Dummy(), config=conf)

We define three paths to be served as static directories. The first one carries our Selenium test suite and test cases that will be detailed later. The second one contains the MochiKit JavaScript toolkit and the last one contains the Selenium Core package. Indeed, Selenium Core must be served by the same server under which the tests are conducted.

The application will look like the following in the browser:

When clicking on the first link, the fetchReport() JavaScript function will be triggered to fetch the report data via XMLHttpRequest. The result will look like the following:

When clicking on the second link, the current page will be replaced by a new page containing the report such as the following:

As you can see, this application does not do anything fancy, but it provides us with common use cases found in modern web applications. In the following sections we will therefore describe two test cases, one for each link of our application.

Selenium Core

Selenium tests are described via HTML tables of three columns and as many rows as needed with each row describing an action to be performed by Selenium. The three columns are as follows:

  • Name of the Selenium action to be performed.

  • Target to be looked for by Selenium within the document object model of the page. It can be the identifier of an element or an XPath statement leading to an element.

  • Value. A value to compare to or to be used by the action.

Let's describe for example the following test:

  1. Fetch the home page.

  2. Click on the Get report link and wait for the returned page.

  3. Verify that we can find the HTML string in the new page.

This would translate into the following (save it as test/test_html.html):

<html>
<head />
<body>
<table>
<thead>
<tr><td rowspan="1" colspan="3">HTML Test</td></tr>
</thead>
<tbody>
<tr>
<td>open</td>
<td>/</td>
<td></td>
</tr>
<tr>
<td>clickAndWait</td>
<td>link=Get report</td>
<td></td>
</tr>
<tr>
<td>verifyTextPresent</td>
<td></td>
<td>HTML</td>
</tr>
</tbody>
</table>
</body>
</html>

Let's describe now our second use case to test our Ajax code:

  1. 1. Fetch the home page.

  2. 2. Click on the Get Report via Ajax link.

  3. 3. Pause for a few seconds.

  4. 4. Verify that we can find the Ajax string in the new page.

The third step is compulsory because when performing an XMLHttpRequest, Selenium does not wait for the response. In such a case, you must pause Selenium's execution so that it gives time for the response to come back and update the document object model of the page. The previous use case will translate into (save this in test/test_ajax.html ):

<html>
<head />
<body>
<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">Test Ajax</td></tr>
</thead>
<tbody>
<tr>
<td>open</td>
<td>/</td>
<td></td>
</tr>
<tr>
<td>click</td>
<td>link=Get report via Ajax</td>
<td></td>
</tr>
<tr>
<td>pause</td>
<td>300</td>
<td></td>
</tr>
<tr>
<td>verifyTextPresent</td>
<td></td>
<td>Ajax</td>
</tr>
</tbody>
</table>
</body>
</html>

Now that we have our test cases in our test directory, we can create a test suite as follows (save it as test/testsuite.html):

<html>
<head>
<link rel="stylesheet" type="text/css"
href="/selenium/core/selenium.css" />
</head>
<body>
<table class="selenium">
<tbody>
<tr><td><b>Test Suite</b></td></tr>
<tr><td><a href="test_html.html">Test HTML</a></td></tr>
<tr><td><a href="test_ajax.html">Test Ajax</a></td></tr>
</tbody>
</table>
</body>
</html>

We now have everything we need to run a test. To do so, we will use the test runner provided by the Selenium Core package. In a browser open the following page:

http://localhost:8080/selenium/core/TestRunner.html

This will display a page like the following:

We can now load our test suite and get to the next screen by entering the path ../../test/testsuite.html in the TestSuite input box at the top left of the page.

As you can see, the left pane lists all our test cases, the central pane displays the currently selected test case, and the right pane shows Selenium's controls and results. Finally, the bottom of the page displays the resulting web page of each test case.

The next step is to run these tests by clicking the All button, which will generate the following screen:

Selenium TestRunner will use color codes to inform you of how test cases have performed. Green means things were fine, yellow means the step is not finished, and red shows errors during the test.

Selenium IDE

In the previous section we wrote our test cases directly in a text editor, which can become a little tedious with long use cases. Thankfully, the OpenQA team provides an integrated development environment for Selenium, available as an extension for the Mozilla Firefox browser. The advantages of this IDE are:

  • No need to install Selenium core package on the server

  • Ability to record actions by following the business process in the browser

  • Ability to manually amend any generated test

  • Step-by-step debugging of test cases

  • Recorded test cases can be exported to HTML or any of the supported languages of the Selenium Remote Control package

To record a test case, you first need to provide the base URL of your server, http://localhost:8080 in our case, in the following window:

Since the IDE runs in recording mode by default when started, you can now go to the browser and follow your business process. Each step will be recorded automatically by the Selenium IDE. For instance, by clicking on Get report, the clickAndWait step will be generated. To verify the presence of a given text, you must highlight the targeted text, right-click to open the pop-up menu, and select verifyTextPresent.

Your IDE will then look like the following:

Now that we have a recorded test we can run it by clicking the green triangle.

As you can see, the steps to create a script are much simpler using the IDE. Moreover, thanks to its flexibility, you can insert new steps or remove and modify existing ones, for instance if the IDE failed to record an action. You can also load tests created manually into the IDE and run them from there.

Finally, you can export your recorded test so that you can run it via the Test Runner or via the Selenium Remote Control package, as we will see in the next section.

Selenium Remote Control

The Selenium Remote Control (RC) package offers the possibility of driving a browser from several programming languages using a recorded test. This is extremely interesting because your tests can then be run as regular unit tests.

You first need to get the Python modules from the Selenium RC package. Once they can be found in your PYTHONPATH, you should be able to do the following: from selenium import selenium.

The next step is to export the previously recorded test to Python. The resulting script will look like the following:

from selenium import selenium
import unittest, time, re

class TestHTML(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []
        self.selenium = selenium("localhost", 4444, "*firefox",
                                 "http://localhost:8080")
        self.selenium.start()

    def test_TestHTML(self):
        # Get a reference to our selenium object
        sl = self.selenium
        sl.open("/")
        sl.click("link=Get report")
        sl.wait_for_page_to_load("5000")
        try:
            self.failUnless(sl.is_text_present("HTML"))
        except AssertionError, e:
            self.verificationErrors.append(str(e))

    def tearDown(self):
        self.selenium.stop()
        self.assertEqual([], self.verificationErrors)

if __name__ == "__main__":
    unittest.main()

As you can see, this is a pure test case from the unittest standard module.

Let's see what the script does:

  1. The setUp() method, called before each test method, initializes a Selenium object indicating the host and the port of the Selenium proxy as well as which kind of browser should be used during the test.

  2. The test_TestHTML() method performs the actual steps of our test case.

  3. The tearDown() method, called after each test method, stops this instance of the Selenium object.

Before running the test, you must start the Selenium proxy, which will handle the startup of the chosen browser as well as run the test. It will then return all the results to our test case.

The Selenium RC package comes with a default proxy server written in Java, which is the one we will use in our example. However, nothing prevents anyone from writing a proxy in a different language of course. To start the server, you must go to the Selenium RC package directory and issue the following command, assuming you have a Java virtual machine 1.4.2 or above installed on your machine:

sylvain@[selenium]$ java -jar server/selenium-server.jar

Once the server is started, you must start your application server and then you can run the test as follows:

python test_html.py
.
----------------------------------------------------------------------
Ran 1 test in 6.877s
OK

If you look at the Selenium proxy server logs, you should see something like the following:

queryString =
cmd=getNewBrowserSession&1=%2Afirefox&2=http%3A%2F%2Flocalhost%3A8080
Preparing Firefox profile...
Launching Firefox...
3 oct. 2006 17:35:10 org.mortbay.util.Container start
INFO: Started HttpContext[/,/]
Got result: OK,1159893304958
queryString = cmd=open&1=%2F&sessionId=1159893304958
Got result: OK
queryString = cmd=click&1=link%3DGet+report&sessionId=1159893304958
Got result: OK
queryString = cmd=waitForPageToLoad&1=5000&sessionId=1159893304958
Got result: OK
queryString = cmd=isTextPresent&1=HTML&sessionId=1159893304958
Got result: OK,true
queryString = cmd=testComplete&sessionId=1159893304958
Killing Firefox...
Got result: OK

This will launch a Firefox instance, run the test, and pass back the results to your test case as normal input.

In this section, we have presented an open-source solution, Selenium, to perform acceptance and functional testing in order to validate the correctness of our application. Although this solution is not the only one, it has gained a lot of support from the community. Its flexibility and large feature set offer the tester a broad palette to build tests on.

Summary

Throughout this chapter we have presented different aspects of testing an application. Although this is not a comprehensive list of what can be achieved, it should provide a good starting point for understanding how an application can and should be tested. It is important to note that testing should not happen only at the last stage of the application's development; it should instead be part of building the application from as early as possible.