An Engineering Room episode about testing with James Bach would be a blast. He is releasing a book on Rapid Software Testing this year.
Feels like you read my mind. I have been using acceptance tests without integration tests for years when developing web service APIs.
One minor quibble - in your video you say that acceptance tests work best when they do not refer to any implementation details, but the acceptance tests you show as examples are written in Java using JUnit. The language used is a pretty massive implementation detail. The advantage of using something like Gherkin is that it is language agnostic, so you can write acceptance tests before the programming language has even been chosen, reuse them if you need to convert to a different language, and have a common test language if you are using multiple programming languages for the front and back ends. Gherkin is also easier for non-programmers to understand.
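For illustration, a minimal sketch assuming Cucumber's Java bindings (the feature text, step class, and domain types are all hypothetical): the .feature file stays the same no matter which language the glue code is written in.

```java
// A hypothetical feature file (checkout.feature) stays language-agnostic:
//
//   Scenario: Paying for a basket
//     Given a basket containing a book priced at 10.0
//     When the customer checks out
//     Then the order total is 10.0
//
// Only this glue code is Java-specific; the same scenario could be bound
// to step definitions in any language Cucumber supports.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class CheckoutSteps {
    private final Basket basket = new Basket(); // hypothetical domain classes
    private Order order;

    @Given("a basket containing a book priced at {double}")
    public void aBasketContainingABook(double price) {
        basket.add(new Book(price));
    }

    @When("the customer checks out")
    public void theCustomerChecksOut() {
        order = basket.checkout();
    }

    @Then("the order total is {double}")
    public void theOrderTotalIs(double expected) {
        assertEquals(expected, order.total(), 0.001);
    }
}
```

Swapping the back end to another language would only mean rewriting this glue, not the scenarios themselves.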
Great video! Using acceptance tests to drive unit tests has worked well for me for a long time now. Could you expand more on how to effectively use approval tests, particularly in the context of avoiding the common mistake of adding tests after writing the code?
Hi Trisha and Dave, when people discuss testing I rarely hear any mention of ISTQB, which is a good body of knowledge on the subject. I would be glad to hear your take on it 😀
Really love your thorough explanation! This will be very good material for giving a proper understanding of test activities to both the team and management, who don't currently grasp them properly.
This was a really good video, thanks Dave. You've talked about all of these things before, but it was definitely worth going over them again; this was really well packaged and presented.
Pentester etc. here - I describe my work in application layer pentesting as "acceptance testing in reverse". I guess this is sort of like how antimatter is matter, in a broad sense. Instead of looking to see that an application (or API, etc.) does what it should, I focus on whether or not it does what it shouldn't. I thus agree largely with this taxonomy. I would like, however, to see whether TDD reduces confirmation bias. Done right - without a focus on implementation details etc. - I would hazard a guess that it does. I mention this because I have long wondered whether or not unit testing (however useful) might provoke confirmation bias.
Brilliant, Dave!! I love this!
Manual testing has its place, certainly if you are developing anything that will be subjected to humans in production. If you don't let humans test then the customer will do that for you. With predictable outcome...
From my experience:
* Unit testing offers the most value when someone later modifies the code. I've never understood or mastered TDD, because it feels like it works against agility and requires a lot of planning.
* A different/better way of understanding acceptance tests. Interesting remarks as well. Usually it's the customer who signs off, but I like this one more.
* From my experience, the only automated tests that matter are unit and smoke tests. Smoke tests cover a lot of other tests, including performance. I also like stress tests to validate and measure the tolerances of the architecture.
* Most niche products can't do automated tests, unfortunately.
* I've always valued the idea that QA is not done by devs, certainly for manual testing. With automation it gets a bit blurry, because now the tester thinks like a programmer and brings those biases in, while the QA person represents the power user.
This rocked and loved how you hit on the 2 most important tests at the end with solid reasoning.
Great video as usual 🙂
Unit testing is best done at the boundary of the domain model, so as not to tie the tests to implementation details of the model.
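A minimal sketch of what I mean (the Order class and its methods are hypothetical): the test drives the model only through its public boundary, so internal refactorings don't break it.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The test exercises the model only through its public API (addLine/total),
// so changing internal data structures or helpers doesn't break it.
class OrderTest {

    @Test
    void totalsTheLinesItContains() {
        Order order = new Order();        // hypothetical domain class
        order.addLine("book", 2, 10.00);  // public behaviour, not internals
        order.addLine("pen", 3, 1.50);

        assertEquals(24.50, order.total(), 0.001);
    }
}
```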
Thank you for this nice overview and explanation. I also like healthchecks which perform expensive consistency tests on stored data and program behavior. A great deal of problems can be discovered before corrupted data spreads throughout the system. Secondly, I am fond of defensive programming with assertions and contracts. Contracts and assertions add so much clarity to the code. They allow errors to be discovered after release, and they allow (some) integration tests to become more like end-to-end tests. They are a first step toward provable correctness, and they bring their own complexity challenges (sometimes an assertion simply cannot be checked in a proportionate amount of time).
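For example, a rough sketch of contract-style checks in plain Java (the Account class is just an illustration):

```java
// Contract-style checks: preconditions validate the caller's input, and a
// postcondition (plain Java assert, enabled with -ea) validates the result.
public final class Account {
    private long balanceInCents;

    public void withdraw(long amountInCents) {
        // Preconditions: part of the method's contract with its callers.
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        if (amountInCents > balanceInCents) {
            throw new IllegalStateException("insufficient funds");
        }

        balanceInCents -= amountInCents;

        // Postcondition: documents and checks what the method guarantees.
        assert balanceInCents >= 0 : "balance must never go negative";
    }
}
```

The cheap precondition checks stay on in production, while the assert can be switched off if it ever becomes too expensive.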
One way I use integration tests is to learn about dependencies that are external to the app, such as a REST API. Here I can add tests to ensure that when my app calls that API I get an expected response. It helps me isolate and quickly troubleshoot when a response I'm expecting does not come back from that API.
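A rough sketch of what such a test can look like, assuming JUnit 5 and Java's built-in HttpClient (the endpoint and expected fields are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Narrow integration test: checks only our assumptions about the external
// API's contract (status code, response shape), not our application logic.
class CustomerApiIntegrationTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void returnsACustomerAsJson() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/customers/42")) // hypothetical endpoint
                .header("Accept", "application/json")
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"id\":42")); // hypothetical field
    }
}
```

When this test fails, I know the dependency changed (or my understanding of it was wrong) before I go digging through my own code.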
Dave, this was an excellent video and very encouraging to me, because it has very similar content to what I preach almost daily at my work. I've been labeled the automated testing expert in my department - which feels like being labeled an expert in breathing. I really like your list and plan on updating my testing evangelist slide decks with your ideas. One thing I'd like to point out, which I often start with in my slides, is that I feel there are many ways to define what a "type" of test is. E.g., black box vs. white box are types by methodology, while Marick's quadrants are types based on function and yours are based on purpose. Perhaps I'm off course about this, but I'm interested to hear your thoughts on the idea. I've been experimenting lately with in-memory test doubles as part of my acceptance tests. So far I've found that they give me a lot more confidence, even though they don't add much code coverage and don't really exercise more of my code than if I use stubs. I'd be interested to hear more of what you think about heavier use of test doubles.
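For illustration, a minimal sketch of the kind of in-memory double I mean (the OrderRepository port and Order type are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// In-memory fake: implements the same port as the real repository, so an
// acceptance test can run a whole use case without a database.
interface OrderRepository {                 // hypothetical port
    void save(Order order);
    Optional<Order> findById(String id);
}

class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();

    @Override
    public void save(Order order) {
        store.put(order.id(), order);       // hypothetical Order with an id() accessor
    }

    @Override
    public Optional<Order> findById(String id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

The acceptance test then wires the use case up with this fake instead of the real repository, which is what seems to give the extra confidence compared with per-call stubs.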
Thanks!
But Dave, aren't integration tests useful for testing integration points such as gateway classes and repositories? I always think of integration tests as testing my "understanding and usage" of external dependencies.
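A rough sketch of that idea, using the JDK's built-in HttpServer as a local stand-in for the dependency (PriceGateway is a hypothetical class under test):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Integration test of a gateway class: verifies that *our* code builds the
// request and interprets the response correctly, against a local stub server.
class PriceGatewayTest {

    @Test
    void parsesThePriceReturnedByTheDependency() throws Exception {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/prices/book", exchange -> {
            byte[] body = "{\"price\": 10.00}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stub.start();
        try {
            String baseUrl = "http://localhost:" + stub.getAddress().getPort();
            PriceGateway gateway = new PriceGateway(baseUrl); // hypothetical class under test

            assertEquals(10.00, gateway.priceOf("book"), 0.001);
        } finally {
            stub.stop(0);
        }
    }
}
```

That covers my "understanding and usage" of the dependency's contract, separate from whether the real service is up.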