Tandem Testing with TestPlus
TestPlus is a new dependency injection test framework that I created to help bring debuggability to dependency injection testing. TestPlus takes in all the information about the factories that create resources, the tests, and the resources those tests ask for, and then code-generates the test sequence. Because code generation flattens the walk of the resource dependency tree, we can do some pretty cool things. We can even customize the code generation that is used to execute tests.
One of the cool things I was able to do with TestPlus is modify the code generator so it runs "Validator" tests in parallel with a test that is controlling the logical flow of the test run. If we want our test results to be deterministic, it is very important that we only allow one "Test/Actor" to orchestrate changes made to the test environment. If more than one test/actor instigates change without coordination, then there is no assurance we will be able to achieve the same result, because we cannot predict the sequence of changes over time.
TestPlus Function
A TestPlus test is simply a function placed in the test hierarchy that is prefixed with "test_", as seen below.
import time

def test_no_parameters():
    testplus.logger.info("test_no_parameters: was run.")
    time.sleep(10)
    return
TestPlus Parameter
To create a parameter and pass it to a test, we create a parameter factory.
from typing import Generator

@testplus.resource()
def yield_odd_integers_to_10() -> Generator[int, None, None]:
    for ni in range(1, 10):
        if ni % 2 > 0:
            yield ni
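Called directly (assuming the resource decorator leaves the factory callable as a plain function), the factory simply yields the odd integers below 10:

for val in yield_odd_integers_to_10():
    print(val)   # 1, 3, 5, 7, 9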
Parameter Originations
Parameters can originate in the scope of a module in the test tree.
testplus.originate_parameter(yield_odd_integers_to_10, identifier="nxtint")
or on the test itself.
@testplus.param(yield_odd_integers_to_10, identifier="nxtint")
def test_odd_integers(nxtint: int):
    print(f"Next Integer: {nxtint}")
    return
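Conceptually, the flattened sequence that TestPlus generates for this parameterized test boils down to a plain loop. This is only a sketch of the idea, not the actual generated code, which also manages scopes and result reporting:

# Sketch only: what the generated sequence conceptually does for the
# parameterized test above.
for nxtint in yield_odd_integers_to_10():
    test_odd_integers(nxtint)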
Validators
TestPlus supports tandem testing out of the box by using patterns similar to parameter resource creation, but with validators. Validators are hung directly on the test that controls the flow they are validating.
A validator records data as a test proceeds and then validates the test data it collected when it is finalized after the test has completed. When its 'validate' method is called, a validator raises an AssertionError for a "Failure" or any other exception for an "Error".
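For example, a validator might collect observations while the test runs and then assert over them in 'validate'. The sketch below is hypothetical; the 'record' helper and the '_observed' list are illustrative and not part of TestPlus:

class RecordingValidator(Validator):

    def __init__(self):
        super().__init__()
        self._observed = []   # illustrative: data collected while the test runs
        return

    def record(self, value):
        # Hypothetical helper used to hand the validator data to check later.
        self._observed.append(value)
        return

    def validate(self):
        # An AssertionError raised here is reported as a "Failure".
        if len(self._observed) == 0:
            raise AssertionError("No data was recorded during the test.")
        # Any other exception raised here is reported as an "Error".
        return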
There are different types of validators you can inherit from right out of the box.
Simple
class TestValidator(Validator):

    def __init__(self):
        super().__init__()
        return

    def validate(self):
        self._logger.info("TestValidator 'validate' called ...")
        return
Looping
class TestLoopingValidator(LoopingValidator):

    def do_work(self):
        self._logger.info("TestLoopingValidator 'do_work' called ...")
        return True

    def validate(self):
        self._logger.info("TestLoopingValidator 'validate' called ...")
        return
Time Interval
import time

class TestTimeIntervalValidator(TimeIntervalValidator):

    def tick(self):
        now = time.time()
        self._logger.info(f"TestTimeIntervalValidator 'tick' called ... now={now}")
        return

    def validate(self):
        self._logger.info("TestTimeIntervalValidator 'validate' called ...")
        return
Validator Factory
Just like with a resource, you create validators using a validator factory.
@testplus.validator()
def create_validator() -> TestValidator:
    validator = TestValidator()
    validator.initialize()
    return validator

@testplus.validator()
def create_looping_validator() -> TestLoopingValidator:
    validator = TestLoopingValidator()
    validator.initialize()
    return validator

@testplus.validator()
def create_time_interval_validator() -> TestTimeIntervalValidator:
    validator = TestTimeIntervalValidator(interval=1)
    validator.initialize()
    return validator
Tandem Test Case
In order to perform a tandem test, you declare the validators as being attached to a test and its scope.
@testplus.validate(create_validator, suffix="vcheck", identifier="nvalidator")
@testplus.validate(create_looping_validator, suffix="vlcheck", identifier="lvalidator")
@testplus.validate(create_time_interval_validator, suffix="vticheck", identifier="tivalidator")
def test_no_parameters():
    testplus.logger.info("test_no_parameters: was run.")
    time.sleep(10)
    return
Because a validator is an object that will exist on the stack at the time the test is run, you can even pass the validator instances into the test by simply asking for them, by identifier name, in the test method.
@testplus.validate(create_validator, suffix="vcheck", identifier="nvalidator")
@testplus.validate(create_looping_validator, suffix="vlcheck", identifier="lvalidator")
@testplus.validate(create_time_interval_validator, suffix="vticheck", identifier="tivalidator")
def test_using_validators(nvalidator: TestValidator, lvalidator: TestLoopingValidator, tivalidator: TestTimeIntervalValidator):
    nvalidator.do_something()
    lvalidator.do_something_else()
    tivalidator.do_another_thing()
    return
TestPlus will use these declarations and generate the appropriate code in the test sequence document to run the validators in parallel with the test.
def scope_mojo_tests_testplus_local_test_validator_injection(sequencer):
    """
    This is the entry point for the 'mojo_tests_testplus_local_test_validator_injection' test scope.
    """
    with sequencer.enter_module_scope_context("mojo.tests.testplus.local.test_validator_injection") as msc:

        # ================ Test Scope: mojo.tests.testplus.local.test_validator_injection#test_no_parameters ================
        test_scope_name = "mojo.tests.testplus.local.test_validator_injection#test_no_parameters"
        with sequencer.enter_test_scope_context(test_scope_name) as tsc:
            try:
                nvalidator = create_validator()
                nvalidator.attach_to_test(tsc, 'vcheck')

                lvalidator = create_looping_validator()
                lvalidator.attach_to_test(tsc, 'vlcheck')

                tivalidator = create_time_interval_validator()
                tivalidator.attach_to_test(tsc, 'vticheck')

                from mojo.tests.testplus.local.test_validator_injection import test_no_parameters
                test_no_parameters()
            finally:
                tivalidator.finalize()
                lvalidator.finalize()
                nvalidator.finalize()

    return
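Notice that the generated code finalizes the validators in a 'finally' block, in the reverse order of their attachment, so validation still runs even if the test body raises.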
Test Results
The main test case, which is controlling flow, and each parallel validator all get their own test result entry in the test results output.
This way, we don't need to fail the main test flow for any one validation failure; we can have multiple results from a single test flow. This can be very useful for tests where we need to verify the main flow functionality and also check different things as we go.
Final Thoughts
TestPlus makes it super easy to set up tandem testing scenarios and provides a very powerful way to get answers to more than one question from a single test.