Maybe you can explain it to them this way - if they know what a function is. If you have a function that does something, foo, and it takes one boolean parameter, that's two test cases. If you add a second boolean parameter, now you have at least four total test cases. And so on.
how is writing tests scalable? how do you decide what to test? three parameters, maybe three equivalence classes per parameter (-1, 0, +1)... you now have 3^3 = 27 test cases just for one function.
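The combinatorial growth can be sketched in a few lines of Python. `classify` is a made-up three-parameter function, and the three classes per parameter (-1, 0, +1) follow the comment above:

```python
# Why exhaustive-input testing explodes: every combination of the
# per-parameter equivalence classes is a distinct case.
from itertools import product

def classify(a: int, b: int, c: int) -> int:
    """Toy function under test: counts how many arguments are positive."""
    return sum(1 for v in (a, b, c) if v > 0)

# Every combination of the three classes across three parameters:
cases = list(product((-1, 0, 1), repeat=3))
print(len(cases))  # 3^3 = 27 cases for a single three-parameter function

for a, b, c in cases:
    assert 0 <= classify(a, b, c) <= 3
```

With a fourth parameter the same enumeration jumps to 81 cases, which is the scaling problem the comment is pointing at.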
This is the wrong way to think about it, though. A function encapsulates some behaviour, regardless of how short or long it is. You don't test each line (directly); you test each function's mode of operation. So a function with one if statement in it potentially needs two (happy path) tests.
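A minimal sketch of "one if statement, two happy-path tests" — `shipping_cost` is a hypothetical example, not from the thread:

```python
# A function with one branch has two modes of operation,
# so two happy-path tests cover its behaviour.
def shipping_cost(order_total: float) -> float:
    if order_total >= 50.0:   # mode 1: free shipping over the threshold
        return 0.0
    return 4.99               # mode 2: flat fee otherwise

# One test per mode of operation, not per line:
assert shipping_cost(60.0) == 0.0
assert shipping_cost(20.0) == 4.99
```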
A common occurrence is something like a pool of 10 actions where a bunch of tests each do 3 to 7 of them. This is very hard to abstract with a function call.
One thing to consider here, though, is that oftentimes when you realize splitting up a function will make it easier to test, it's because your implementation sucks: it's doing too much and is too tightly coupled. I mean, realistically, how would splitting up a function make testing it easier unless the function is already complex and performing multiple tasks?
You can certainly argue that some of the clean code folks do a lot of needless abstraction that makes it harder to work on code, and I think that's true at times. But at the same time, a 200 line method doing 19 different things is also quite hard to understand and modify, and the reason testers want to split that method up is because it's really hard to understand and has too many possible outcomes.
I don't like to overly abstract things and I try to strike a balance here, but I can say without a doubt that I've never found it harder to understand and work on a single class with 20 methods that each do one thing (with descriptive method names) than I have a method with 200 lines of code doing the same 20 things. And the former is much easier to test as well.
They have a check clause which is detached from the functions themselves. This is where they would want you to do tests with multiple functions interacting.
A simple example: if I wrote a function, add(x, y), I would want to test that it behaved as expected. So, I'd write a test (among others) to assert that add(4,5) returned 9. I would also test things like add(null, 5) to make sure that my code gracefully handled error conditions.
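The add() example above, sketched in Python (with `None` standing in for null; the graceful-handling strategy here, raising a clear exception, is one choice among several):

```python
# A small function plus the tests the comment describes.
def add(x, y):
    if x is None or y is None:
        raise ValueError("add() requires two non-null operands")
    return x + y

# Happy path: add(4, 5) returns 9.
assert add(4, 5) == 9

# Error condition: add(None, 5) fails loudly instead of misbehaving.
try:
    add(None, 5)
except ValueError as e:
    assert "non-null" in str(e)
else:
    raise AssertionError("expected add(None, 5) to raise")
```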
Idea: since abstractions like functions exist in order to help you prevent bugs in sufficiently complex systems being manipulated iteratively, is the problem not that testing for correctness is bad form, but that the assignment wasn't sufficiently realistic for them to need the abstractions?
I also like the fact that you can put unit tests for helper functions that are inside of other functions. This means you can use the inputs to your outer function as part of the tests for an inner function, which has been hard to do in the past.
I do like C#-style test casing better, where the test takes n+1 params: n being the function's n params, with the last value being the expected return value. Very clean:
#[test_case(-2, -4 => 8 ; "when both operands are negative")]
#[test_case(2, 4 => 8 ; "when both operands are positive")]
#[test_case(4, -2 => -8 ; "when one operand is negative")]
fn multiplication_tests(x: i8, y: i8) -> i8 {
    x * y
}
They may be easier in some cases to reason about - which is great until you start writing tests.
Once you start writing tests you find that the size of test functions will reflect the size of the function you're testing. The test functions start to have huge set-ups in order to test a small bit of logic, and you end up spending a lot of time fixing tests every time you make a change to that function.
This is why it's often better to find ways to make functions smaller.
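One way this plays out in practice, using a hypothetical order-processing example: extracting the pure logic out of a function that also does I/O removes the test set-up entirely.

```python
# Before: testing a process_order() that queries a database, charges a
# payment gateway, AND applies a discount would need fakes for all of
# them just to check one pricing rule.

# After: the rule lives in a small pure function that tests call directly.
def discounted_total(subtotal: float, is_member: bool) -> float:
    """Members get 10% off; everyone else pays the subtotal."""
    return subtotal * 0.9 if is_member else subtotal

# No set-up at all -- each test is one line:
assert discounted_total(100.0, is_member=True) == 90.0
assert discounted_total(100.0, is_member=False) == 100.0
```

When the discount rule changes, only these one-line tests change; the I/O-heavy tests around the outer function are untouched.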
There is a difference however of people refactoring code into smaller classes / functions vs what I'd call "hiding" code in classes / functions. If all someone has done is broken a large function into a chain of functions then of course that is not good.
When people refer to small functions / classes they refer to breaking up the concepts that a class / function represents into smaller concepts that build upon one another. This also increases the reusability of code.
So in short - I agree, longer pieces of code can be better for reasoning about, but I disagree that that necessarily makes them better.
That stops working quickly - namely as soon as you want to test a function A that uses two other functions B and C both of which have some output that is being used.
For example: a function B that sends an email to a user through a 3rd-party system and returns an indication of whether the request to send the email was successful; a function C that stores in the database that a notification was sent successfully; and now a function A that calls B and, if it fails, retries a few times, then calls C and, if it fails, retries a few times, and otherwise fails itself.
This "do X, then depending on the output do Y or Z, and depending on their output do ..." can't be tested in the way you describe.
You WILL end up using a form of "mocking", for example passing the functions B and C as arguments to A and then, under test, not passing the real B and C but different functions that allow you to make assertions in the test. That is still mocking.
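A sketch of the A/B/C scenario in Python: B and C are injected into A, and the test passes stand-ins that record calls. The names and retry count are illustrative, not from the thread:

```python
# Function A with its collaborators B and C injected as parameters.
def notify_user(send_email, record_notification, retries: int = 3) -> bool:
    """Retry B up to `retries` times, then retry C; fail if either never succeeds."""
    if not any(send_email() for _ in range(retries)):
        return False
    return any(record_notification() for _ in range(retries))

# Under test: "B" fails twice then succeeds, "C" always succeeds.
email_attempts = []

def fake_send_email():
    email_attempts.append(1)
    return len(email_attempts) >= 3  # fail, fail, then succeed

assert notify_user(fake_send_email, lambda: True) is True
assert len(email_attempts) == 3  # the retry loop really ran three times
```

The stand-ins are ordinary functions rather than a mocking framework, but as the comment says, they are still a form of mocking: they exist so the test can observe and control the interaction.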