The other day I was writing some throwaway code, and I ran into a situation I've seen before: some of my test code was (correctly) throwing an exception. My immediate reaction was to wrap it in a try block. But that only led to more try blocks, and some pretty smelly code. So I whipped up a template that would take a lambda as an argument, and would crash the program if the lambda didn't emit an exception when called.
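That first version might have looked something like this (a reconstruction, since the original snippet isn't shown here -- in particular, "crashing the program" by throwing an exception of my own is my guess at the mechanism):

```cpp
#include <stdexcept>

// Minimal sketch of the first version: run the lambda and, if it returns
// without throwing, throw an exception ourselves -- which, unhandled,
// crashes the program.
template <typename Func>
void expect_exception(Func f)
{
    try {
        f();
    } catch (...) {
        return;  // the lambda threw, as expected
    }
    throw std::logic_error("expected an exception, but none was thrown");
}
```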
After I had that first version working, it was obvious that the template as it stood made for ugly code: a few of the tests were wrapped in a call to my expect_exception() template, but most weren't. So I added a boolean parameter specifying whether the call was expected to emit an exception:
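A sketch of that two-argument version (again a reconstruction; parameter names are my assumptions):

```cpp
#include <stdexcept>

// Two-argument version: expect_to_throw says whether the lambda is
// supposed to throw; any mismatch between that flag and what actually
// happened is reported by throwing from expect_exception itself.
template <typename Func>
void expect_exception(Func f, bool expect_to_throw)
{
    bool threw = false;
    try {
        f();
    } catch (...) {
        threw = true;
    }
    if (threw != expect_to_throw)
        throw std::logic_error(expect_to_throw
            ? "expected an exception, but none was thrown"
            : "unexpected exception thrown");
}
```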
This was still pretty limited in what it could handle. The real issue was its return value: it didn't allow you to report anything about the results of the test, aside from whether or not it threw. This meant that all the test reporting had to be wrapped up in the lambda that was passed in -- and that made it difficult to come up with a clean structure for the test code. Additionally, reasonable reporting was only possible when the lambda did not emit an exception.
So I set about remedying these defects. My first thought was to increase the number of expect_exception's arguments to three: the lambda to execute, the expect_to_throw flag, and a return value. That return value was used only in the case of an expected exception; when the call returned normally, the lambda's actual return value was returned as the value of expect_exception.
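That three-argument variant might be sketched like this (my reconstruction; it assumes the lambda's return type is default-constructible, which is one reason this shape is limited):

```cpp
#include <stdexcept>
#include <string>

// Three-argument variant: on a normal return, the lambda's own result is
// passed through; on an expected exception, the caller-supplied
// on_throw_value is returned instead.
template <typename Func, typename Ret>
Ret expect_exception(Func f, bool expect_to_throw, Ret on_throw_value)
{
    bool threw = false;
    decltype(f()) result{};  // assumes a default-constructible return type
    try {
        result = f();
    } catch (...) {
        threw = true;
    }
    if (threw != expect_to_throw)
        throw std::logic_error(expect_to_throw
            ? "expected an exception, but none was thrown"
            : "unexpected exception thrown");
    return threw ? on_throw_value : result;
}
```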
I quickly realized the limitations of that API once I began to test things that weren't returning strings. I struck upon a version that calls the original lambda and sets a flag indicating whether or not that call threw. If that flag fails to match the expect_to_throw flag, then expect_exception itself throws an exception, just like before. What happens next is different: if the call succeeded, then a caller-specified lambda is called and is passed the return value of the original lambda, so the caller can specify that the results are logged to a file, or printed in 3D, or whatever. If the original lambda threw, then a different caller-specified lambda is called. The caller can specify what action to take in either case. The restrictions on these lambdas: both must return the same type, the no-throw lambda must take a single argument of the type returned by the original call, and the on-throw lambda must take no arguments.
So, yeah, you're passing in three lambdas (and there's another version where you actually pass in four lambdas, for when you need to delay evaluation of expect_to_throw). Here's the basic three-lambda version:
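A reconstruction consistent with the description above (the exact signature and names may differ from the original, which is linked at the end; note the default-constructible assumption again):

```cpp
#include <stdexcept>

// Three-lambda version: test_lambda is the code under test;
// no_throw_lambda receives its result when it returns normally;
// throw_lambda (no arguments) runs when it throws. Both handlers must
// return the same type, which becomes expect_exception's own return type.
template <typename Test, typename NoThrow, typename OnThrow>
auto expect_exception(Test test_lambda, bool expect_to_throw,
                      NoThrow no_throw_lambda, OnThrow throw_lambda)
    -> decltype(throw_lambda())
{
    bool threw = false;
    decltype(test_lambda()) result{};  // assumes default-constructible
    try {
        result = test_lambda();
    } catch (...) {
        threw = true;
    }
    if (threw != expect_to_throw)
        throw std::logic_error(expect_to_throw
            ? "expected an exception, but none was thrown"
            : "unexpected exception thrown");
    return threw ? throw_lambda() : no_throw_lambda(result);
}
```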
Let's look at some test code using this template. This code uses lambdas on lambdas (and I've actually pulled one of them out into a function once it grew -- for a minute there, it felt kind of like writing LISP). In this code, I'll be testing several different graph algorithms. Let's define a graph_tests function that will run a series of tests with a set of parameters. We will pass in the graph itself, the source and destination node names, and a set<string> which represents the algorithms that should throw exceptions with the given input data.
This definition of graph_tests is pretty clean. We define throw_return, the callback to execute when an algorithm throws an expected exception. It is the same for all algorithms, so it can be global within graph_tests. Then we define tst, which runs the test for one algorithm. expect_throw tells us whether this algorithm is expected to throw an exception. normal_return is the function to run when the test doesn't throw, and we don't expect it to; its argument, retval, receives the value that was returned by the tested algorithm -- in this case, we simply echo that return value to cout. Finally, the call to expect_exception actually runs the test. Here, we have chosen to ignore the value returned by expect_exception.
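A sketch of the overall shape of graph_tests, with a single stubbed-out algorithm standing in for the real ones (everything here except the structure is my invention -- the actual code runs several real graph algorithms):

```cpp
#include <iostream>
#include <set>
#include <stdexcept>
#include <string>

// Three-lambda expect_exception, repeated so this sketch is self-contained.
template <typename Test, typename NoThrow, typename OnThrow>
auto expect_exception(Test test_lambda, bool expect_to_throw,
                      NoThrow no_throw_lambda, OnThrow throw_lambda)
    -> decltype(throw_lambda())
{
    bool threw = false;
    decltype(test_lambda()) result{};
    try { result = test_lambda(); }
    catch (...) { threw = true; }
    if (threw != expect_to_throw)
        throw std::logic_error("expect_exception: throw/no-throw mismatch");
    return threw ? throw_lambda() : no_throw_lambda(result);
}

// Hypothetical stand-ins for the real graph type and algorithms.
struct graph { bool connected; };

std::string shortest_path(const graph& g, char src, char dst)
{
    if (!g.connected) throw std::runtime_error("no path");
    return std::string{src} + " -> " + dst;
}

void graph_tests(const graph& g, char src, char dst,
                 const std::set<std::string>& should_throw)
{
    // Callback for an expected exception -- the same for every algorithm.
    auto throw_return = []{ return std::string("threw, as expected"); };

    // Run the test for one algorithm.
    auto tst = [&](const std::string& name, auto algorithm) {
        bool expect_throw = should_throw.count(name) != 0;
        auto normal_return = [](std::string retval) {
            std::cout << retval << '\n';   // echo the algorithm's result
            return retval;
        };
        expect_exception([&]{ return algorithm(g, src, dst); },
                         expect_throw, normal_return, throw_return);
    };

    tst("ShortestPath", shortest_path);
}
```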
The main function just defines the test graphs and then calls graph_tests repeatedly. Running it, we see the expected test results.
The next step is to verify that expect_exception isn't trapping all exceptions, whether they are expected or not (test code needs to be tested, too). The first attempt was simply to add a call to graph_tests(x, 'A', 'D', none) and a call to graph_tests(z, 'A', 'D', set<string>{"Topsort"}), and verify that these crashed the program with unhandled exceptions. Of course, we had to comment out the first call to verify that the second call crashed the program.
We've now got a way around that, though: recursive calls to expect_exception.
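The idea, roughly: wrap the call that should blow up in an outer expect_exception with expect_to_throw set to true, so the escaping exception is itself checked instead of killing the run. A minimal sketch, with a trivial inner call standing in for graph_tests:

```cpp
#include <stdexcept>

// Three-lambda expect_exception, repeated for self-containment.
template <typename Test, typename NoThrow, typename OnThrow>
auto expect_exception(Test test_lambda, bool expect_to_throw,
                      NoThrow no_throw_lambda, OnThrow throw_lambda)
    -> decltype(throw_lambda())
{
    bool threw = false;
    decltype(test_lambda()) result{};
    try { result = test_lambda(); }
    catch (...) { threw = true; }
    if (threw != expect_to_throw)
        throw std::logic_error("expect_exception: throw/no-throw mismatch");
    return threw ? throw_lambda() : no_throw_lambda(result);
}

// The inner call mis-expects a throw, so expect_exception itself throws;
// the outer call expects exactly that and traps it. Returns 0 when the
// inner mismatch exception propagated out as it should.
int check_propagation()
{
    return expect_exception(
        []{ return expect_exception([]{ return 1; }, /*expect_to_throw=*/true,
                                    [](int v){ return v; },
                                    []{ return -1; }); },
        /*expect_to_throw=*/true,
        [](int v){ return v; },
        []{ return 0; });
}
```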
Because there are five tests in graph_tests, and not all of them will complete if expect_exception is working as designed, we will be able to tell from the program's output whether it is trapping all exceptions. If it is, then we will see five lines of output for each of these tests. If it is working correctly, then we will see only three lines from each.
This is awfully close to a complete, lightweight test framework. If we had a way to pass in expected values to expect_exception, it could validate both whether or not the function threw an exception, and whether or not it returned the expected value.
It doesn't take much modification to expect_exception to make that happen. The big question in adding to the API is whether you want expect_exception to validate the actual return value of its lambda, or the return value of the no_throw_lambda. I chose the latter, as it eliminates a special case: you can compare the value returned by throw_lambda the same way. It looks like this:
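A sketch of that validating variant (my reconstruction; the real signature is in the linked code, and the parameter names here are assumptions):

```cpp
#include <stdexcept>

// Validating variant: after the usual throw/no-throw check, compare the
// value produced by whichever handler lambda ran against expected, and
// report a mismatch by throwing.
template <typename Test, typename NoThrow, typename OnThrow, typename Ret>
Ret expect_exception(Test test_lambda, bool expect_to_throw,
                     NoThrow no_throw_lambda, OnThrow throw_lambda,
                     Ret expected)
{
    bool threw = false;
    decltype(test_lambda()) result{};
    try { result = test_lambda(); }
    catch (...) { threw = true; }
    if (threw != expect_to_throw)
        throw std::logic_error("expect_exception: throw/no-throw mismatch");
    Ret actual = threw ? throw_lambda() : no_throw_lambda(result);
    if (actual != expected)
        throw std::logic_error("expect_exception: unexpected return value");
    return actual;
}
```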
and the modified tests look like this:
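The modified tests might look roughly like this (a hypothetical, simplified stand-in for the real graph tests: each call now carries the result we expect the handler to produce, so the framework validates values as well as exception behavior):

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Validating expect_exception, repeated for self-containment.
template <typename Test, typename NoThrow, typename OnThrow, typename Ret>
Ret expect_exception(Test test_lambda, bool expect_to_throw,
                     NoThrow no_throw_lambda, OnThrow throw_lambda,
                     Ret expected)
{
    bool threw = false;
    decltype(test_lambda()) result{};
    try { result = test_lambda(); }
    catch (...) { threw = true; }
    if (threw != expect_to_throw)
        throw std::logic_error("expect_exception: throw/no-throw mismatch");
    Ret actual = threw ? throw_lambda() : no_throw_lambda(result);
    if (actual != expected)
        throw std::logic_error("expect_exception: unexpected return value");
    return actual;
}

// One hypothetical test: the stubbed "algorithm" either returns a path
// string or throws, and the expected handler value is passed in.
std::string run_one(bool expect_throw, const std::string& expected)
{
    auto algorithm = [&]() -> std::string {
        if (expect_throw) throw std::runtime_error("no path");
        return "A -> D";
    };
    auto normal_return = [](std::string retval) {
        std::cout << retval << '\n';
        return retval;
    };
    auto throw_return = []{ return std::string("threw, as expected"); };
    return expect_exception(algorithm, expect_throw,
                            normal_return, throw_return, expected);
}
```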
The full code is here on GitHub.