Petr Schreiber
12-11-2015, 18:57
Hi Eros,
there are multiple approaches to writing tests, and actually multiple types of tests as well. Without going into too much depth, think of them as a system for "good sleep": it takes time to write the tests, but it pays off in the future, because you have an army of little guardians checking the correctness of your functionality.
Testing is such a broad topic... and I am no test engineer really, so here is just a brief overview of two kinds of tests which can help us now:
- "command test" -> you basically simulate all test cases for functionality via multiple tests. This is what I did with ARRAY EXTRACT. The results are verified via the ut_assert* functions. In case the condition fails, you are notified
- "regression test" -> you remember my example where different order of CASEs caused false positive error? Fixed for now, but what to do to prevent it from returning in future? The best is to write test function, which does execute previously problematic code. This way, in case it breaks again (because of optimization, human mistake, ...), you are notified by failing test again.
The regression test set should be run before each release to minimize the risk of breaking anything.
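
To make the first kind more concrete, here is a minimal command test sketch. Please take it only as an illustration: ut_assertEqual and its parameters are a stand-in I define directly in the script (the real suite uses the ut_assert* helpers from the test include), and AddTwoNumbers is just a made-up function so the example can run on its own:

' Sketch only: ut_assertEqual is a local stand-in for the real ut_assert* helpers
Uses "Console"

Function ut_assertEqual(ByVal haveValue As Long, ByVal wantValue As Long, ByVal message As String) As Long
  ' Report only when the expectation is broken
  If haveValue <> wantValue Then PrintL "FAILED: " + message + " (expected " + Str$(wantValue) + ", got " + Str$(haveValue) + ")"
End Function

' The functionality under test, trivial on purpose
Function AddTwoNumbers(ByVal a As Long, ByVal b As Long) As Long
  Function = a + b
End Function

' Each call below is one test case, together they cover the behaviour we care about
ut_assertEqual(AddTwoNumbers( 2,  3),  5, "2 + 3 should give 5")
ut_assertEqual(AddTwoNumbers(-2, -3), -5, "-2 + -3 should give -5")
ut_assertEqual(AddTwoNumbers( 0,  0),  0, "0 + 0 should give 0")

PrintL "Command test sketch finished, press any key..."
WaitKey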
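
And the same kind of sketch for a regression test. The Classify function and its CASE order are only invented placeholders for the previously problematic construct; the point is that the test replays the exact code which once broke and asserts on the expected results:

' Sketch only: Classify stands in for the code which once misbehaved,
' ut_assertEqual is the same local stand-in as in the previous sketch
Uses "Console"

Function ut_assertEqual(ByVal haveValue As Long, ByVal wantValue As Long, ByVal message As String) As Long
  If haveValue <> wantValue Then PrintL "FAILED: " + message
End Function

Function Classify(ByVal n As Long) As Long
  ' The CASE order is kept exactly as in the once problematic construct,
  ' replaying it is the whole point of a regression test
  Select Case n
    Case 10
      Function = 1
    Case 5
      Function = 2
    Case Else
      Function = 0
  End Select
End Function

Function test_Regression_CaseOrder() As Long
  ut_assertEqual(Classify(10), 1, "n = 10 must hit the first branch")
  ut_assertEqual(Classify( 5), 2, "n = 5 must hit the second branch")
  ut_assertEqual(Classify( 7), 0, "any other value must fall to CASE ELSE")
End Function

test_Regression_CaseOrder()
PrintL "Regression test sketch finished, press any key..."
WaitKey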