Automation Toolsmithing
and Maintenance in Python
(For Acceptance Tests and Beyond)
Praveen Shirali
Test Architect, RiptideIO
BangPypers MeetUp - August 20th 2016, Bangalore, India.
This presentation IS NOT about:
- How to write tests in Python
- Which test methodology to use
- Which framework to use
This presentation IS about:
- Building general-purpose tools for automation
- Extending them to suit your product under test
- Maintaining them as your product grows
- Adding more such tools to your 'suite'
Layers of testing:
- Unit tests
- Acceptance tests
- End-to-end tests
- Performance tests
- Scalability tests
- ... and beyond!
In this talk we'll look at
Acceptance tests
and beyond...
Ground rules
- Separate 'automation' from 'testing':
  automation = how you exercise functionality in an automated way
  testing = how you verify results
- Automation code should go into a test library.
- A test case should consist of test logic only. It calls automation code, unaware of how it is performed (see the example below).
- Test code and automation code command the same respect as product code.
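For example, a test case might read like this (names are hypothetical; how start/stop actually happen lives in the test library):

def test_product_restarts_cleanly(product):
    # test logic only -- the test is unaware of how the product is run
    product.stop()
    assert not product.is_alive()
    product.start()
    assert product.is_alive()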
Where to begin automation?
- Identify the fastest way for your product to get to an isolated, test-ready state.
  (process? VM? container?)
- Identify interactions.
  (humans? machines?)
- Identify interfaces.
  (CLI? web API? UI? other I/O devices?)
- Brainstorm use-cases and tests for the best Return-On-Investment (ROI).
Your first wrapper library
- Choose a library or a tool as a starting point.
  Example: for product test deployment and management (subprocess? VM SDKs? docker? ...)
- Write a class with basic methods to wrap this library or tool for generic use.
class Product(object):

    def __init__(self, **config):
        ...  # consumes config based on which the product is launched

    def start(self):
        ...  # starts the product; may be subprocess.Popen, may be docker run ...

    def is_alive(self):
        ...  # returns a bool on whether the product is running

    def stop(self):
        ...  # stops the product
- You've just built your wrapper library. Version it. Test it. Maintain it.
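As a concrete illustration, here is a minimal sketch of the wrapper above, assuming the product can be launched as a local process (the 'command' config key is a hypothetical example):

import subprocess

class Product(object):

    def __init__(self, **config):
        # consumes config based on which the product is launched
        self.config = config
        self._process = None

    def start(self):
        # launches the product as a child process
        self._process = subprocess.Popen(self.config["command"])

    def is_alive(self):
        # poll() returns None while the child process is still running
        return self._process is not None and self._process.poll() is None

    def stop(self):
        # terminates the product and waits for it to exit
        if self.is_alive():
            self._process.terminate()
            self._process.wait()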
Extending the wrapper library
- Extend the wrapper to suit 'your' product by adding additional methods.
  Example: your product stores logs and depends on a DB?
class MyProduct(Product, MyDatabase):    # mixin

    def __init__(self, **config): ...
    def start(self): ...
    def is_alive(self): ...
    def stop(self): ...
    ...
    def delete_log_files(self): ...      # implemented by MyProduct
    def reset_database(self): ...        # inherited from MyDatabase
    def cleanup(self): ...               # calls the above two methods
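A possible concrete sketch of the two new methods (the 'log_dir' config key and the MyDatabase mixin providing reset_database() are assumptions for illustration):

import glob
import os

class MyProduct(Product, MyDatabase):

    def delete_log_files(self):
        # removes log files left behind by a previous run
        # (assumes a 'log_dir' entry in the product config)
        for path in glob.glob(os.path.join(self.config["log_dir"], "*.log")):
            os.remove(path)

    def cleanup(self):
        # restores a clean, test-ready state between tests
        self.delete_log_files()
        self.reset_database()   # provided by the MyDatabase mixin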
Mashing functionality based on common usage patterns
- Build wrappers around functionality that can work together.
  Example: a mashup of an HTTP client and a response validator.
  Given a route to an endpoint, the HTTP client knows which validator to invoke on the result.
  When a difference is found, it generates a diff of the expected and actual responses (a sketch follows below).
- The API is simplified.
- Only one place needs a change if different functionality is desired.
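One possible shape for such a mashup, as a minimal sketch (assuming the requests library; the route-to-validator mapping and the diff format are illustrative assumptions):

import difflib
import json
import requests   # assumed HTTP client

class ValidatingClient(object):

    def __init__(self, base_url, validators):
        self.base_url = base_url
        self.validators = validators        # maps route -> expected response

    def get(self, route):
        # fetch, validate against the expected response for this route,
        # and raise with a readable diff on mismatch
        actual = requests.get(self.base_url + route).json()
        expected = self.validators[route]
        if actual != expected:
            diff = difflib.unified_diff(
                json.dumps(expected, indent=2, sort_keys=True).splitlines(),
                json.dumps(actual, indent=2, sort_keys=True).splitlines(),
                fromfile="expected", tofile="actual", lineterm="")
            raise AssertionError("\n".join(diff))
        return actual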
APIs should always be simple -- hidden costs of APIs
- When an API changes, every call site needs modification. Prone to mistakes.
- Changes to APIs are hard to track mentally. Unlearning and relearning APIs is a pain.
- API documentation: simpler APIs speak for themselves and need very little documentation.
Choose your test framework wisely
- Will your test code be Python code or plain text? (Gherkin? a DSL?)
- You may be limited by the features and bugs of whatever the test framework provides.
- The test framework also decides the interface between the test library and the tests.
Your test-library is king
- Automation is not purely for tests alone. Think of simulations, long-running soak tests etc.
- Consider an interactive Python console into your test-library APIs for quick hacks.
- Handle CLI argument parsing through your test library, not through the test framework.
- Launch the test framework through your own script (a sketch follows below).
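A minimal sketch of such a launcher script, assuming pytest as the framework (the --product-config option is a hypothetical example of an argument owned by the test library):

# run_tests.py
import argparse
import sys

import pytest   # assumed test framework

def main():
    parser = argparse.ArgumentParser(description="launch acceptance tests")
    parser.add_argument("--product-config", default="config.json",
                        help="config consumed by the test library")
    parser.add_argument("tests", nargs="*", default=["tests/"],
                        help="test files or directories to run")
    args = parser.parse_args()
    # ... hand args.product_config to the test library here ...
    sys.exit(pytest.main(args.tests))

if __name__ == "__main__":
    main()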
Avoid fixture hell
- Fixture hell = mentally losing track of what a given fixture does.
- Keep fixtures close to your test code.
- Pack common setup and teardown patterns into the test library (see the sketch below).
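For example, a start/stop pattern packed into the test library as a context manager might look like this (a sketch, reusing the MyProduct wrapper from the earlier slides):

import contextlib

@contextlib.contextmanager
def running_product(**config):
    # common setup/teardown lives here, not in per-test fixtures
    product = MyProduct(**config)
    product.start()
    try:
        yield product
    finally:
        product.stop()
        product.cleanup()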
Build a namespace of objects
- You only need to deal with one object, which is a nested attribute tree of other objects.
- You could use dependency-injection frameworks like SpringPython, or just subclass dict!
- Test cases get immunity from changes to the test library.
Namespace example using dict -- Implementation
import json

class Dict(dict):

    def __init__(self, *args, **kwargs):
        super(Dict, self).__init__(*args, **kwargs)

    def __setattr__(self, attr, value):
        self[attr] = value

    def __getattr__(self, attr):
        try:
            return self[attr]
        except KeyError:
            raise AttributeError(attr)   # keeps hasattr() working for missing keys

    def __repr__(self):
        return json.dumps(self, indent=4, sort_keys=True, default=repr)
Namespace example using dict -- Usage
d = Dict(one="ONE", two="TWO")
d.three = "THREE"
d.four = Dict(nested_four="NESTED_FOUR")
>>> print d
{
    "four": {
        "nested_four": "NESTED_FOUR"
    },
    "one": "ONE",
    "three": "THREE",
    "two": "TWO"
}
>>> print d.four.nested_four
NESTED_FOUR
Namespace example using dict -- Extended
n = Namespace()
n.set("parent.child.grandchild", MyClass())
>>> n.parent
{
    "child": {
        "grandchild": {{instance of MyClass}}
    }
}
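The slides only show set() being used; one possible implementation, building on the Dict class above (an assumption for illustration, not the original code):

class Namespace(Dict):

    def set(self, dotted_path, value):
        # creates intermediate Namespace nodes along "parent.child.grandchild"
        keys = dotted_path.split(".")
        node = self
        for key in keys[:-1]:
            node = node.setdefault(key, Namespace())
        node[keys[-1]] = value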
Rely on logging
- Python's logging module is insanely awesome. Use it as much as possible.
- Override __repr__ in the classes you define. Let the instance tell you its data (see the sketch below).
- Testing is meant to be debug-friendly. Make every attempt to keep things readable.
- Quick turnaround time matters a lot with in-field issues. Prepare for it in advance.
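For instance, a class with a data-bearing __repr__ keeps log lines readable (Endpoint and the logger name are hypothetical examples):

import logging

logger = logging.getLogger("testlib.product")

class Endpoint(object):

    def __init__(self, host, port):
        self.host, self.port = host, port

    def __repr__(self):
        # the instance tells you its data wherever it is logged
        return "Endpoint(host=%r, port=%r)" % (self.host, self.port)

logging.basicConfig(level=logging.DEBUG)
logger.debug("Polling %r until it responds", Endpoint("10.0.0.5", 8080))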
Automated data generation
- Use a library to generate planned or random test data.
  Example: hypothesis (a sketch follows below)
- When you know your data models, it's easy to build equivalence classes and generate test data.
- Parametrize data. Implement the code to generate data in your test library.
- Test with insanely large values, wrong datatypes, magic strings etc.
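A minimal sketch using hypothesis (create_item and the data model are hypothetical; the strategies encode the equivalence classes, including very large values and awkward strings):

from hypothesis import given, strategies as st

@given(name=st.text(max_size=10000),
       count=st.integers(min_value=-2**63, max_value=2**63 - 1))
def test_item_roundtrip(name, count):
    item = create_item(name=name, count=count)   # hypothetical test-library call
    assert item.name == name
    assert item.count == count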
And, always plan for the following ...
- Parallelization of test execution (with replicated test environments)
- Replacement of the underlying libraries you wrapped initially
- Maintenance of the test library across multiple versions of your product
... cool stuff you could consider doing
- Use tagging. Build reports on features covered, issues fixed, verified etc.
- Automatically scale test runs horizontally with additional resources.
- Tag defect IDs. Pull defect status and automatically regress fixed issues.
Some essential reading
- How to design a good API and why it matters
  Google Tech Talks, January 24, 2007
  https://www.youtube.com/watch?v=aAb7hSCtvGw
- Zen and the art of automated test suite maintenance
  John Ferguson Smart, June 13, 2013
  http://www.slideshare.net/wakaleo/zen-and-the-art-of-automated-acceptance-test-suite-maintenance
- Automated Testing Patterns and Smells
  Google Tech Talks, March 6, 2008
  https://www.youtube.com/watch?v=Pq6LHFM4JvE
Q&A, feedback...
Praveen Shirali
praveengshirali@gmail.com
https://pshirali.github.io/automation_toolsmithing/