My approach to Test-Driven Development has changed a few times during my career. As a testing newbie, I started out writing high-level integration tests, which made me realize my designs weren't easily testable. So I started writing tests first at a finer granularity, using mock objects extensively for collaborators. Then my designs became hard to change, because the tests tightly verified all the interactions between objects. I learned that I was mocking the individual components of an algorithm, and should instead use more of the "real" production objects in my tests. Next I started writing integration tests first, focused on the end state of a scenario, then writing unit tests to explore the underlying object details and edge cases. Part of this pinball effect has come from learning other languages and tools, notably Ruby on Rails after working in Java with WebWork.
I'm interested in the following issues:
Evolving your style and improving your game
How have other developers changed their design and testing style? What has caused the most influential changes? Working with a colleague? Learning other languages and tools? Pain?
Automated end-to-end testing web applications with a browser: Sirens or Guardians?
Tools like Selenium and Watir can navigate a web application just like a real user, which makes them good tools for end-to-end tests. But these tests are brittle and the most expensive to maintain. After getting burned several times, I've learned to write shorter tests that serve more as smoke tests, but even these are slow to run, and failures are painful to diagnose. I've coached a few teams adopting TDD, and everyone is mesmerized the first time they see a browser autonomously clicking through a web application. These tools are admittedly cool, but their pitfalls make them feel like the Sirens of testing. Do these tests catch enough bugs and provide enough feedback to be worth their cost? Is there a better alternative?
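To make the smoke-test idea concrete, here is a minimal sketch in Python. Rather than scripting a browser through whole workflows, it just checks that a handful of key pages respond at all. The paths and the stub application are hypothetical stand-ins, not anything from a real project; a browser-driving tool like Selenium or Watir would replace the plain HTTP calls in practice.

```python
import http.server
import threading
import urllib.request

class StubApp(http.server.BaseHTTPRequestHandler):
    """Stand-in for the web application under test (hypothetical)."""
    def do_GET(self):
        # Only a few "key" pages exist in this stub.
        self.send_response(200 if self.path in ("/", "/login", "/search") else 404)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

def smoke_test(base_url, paths):
    """Return the paths that failed to respond with HTTP 200."""
    failures = []
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                if resp.status != 200:
                    failures.append(path)
        except Exception:
            # Connection errors and non-2xx responses both count as failures.
            failures.append(path)
    return failures

# Start the stub app on a free local port in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), StubApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

failures = smoke_test(base, ["/", "/login", "/search"])
server.shutdown()
```

The point of keeping the checks this shallow is that each one has a single, obvious failure meaning ("the login page is down"), which sidesteps much of the diagnosis pain of long clicking scripts.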
Evolving system designs over time with multiple teams
One project I've worked on has been around for several years and has seen several team changes. The design and testing strategies have evolved as the team changed and as the team learned new techniques. Now development is done concurrently by two teams in two different cities, which has introduced more variation in the system design. How do other teams keep their design and testing strategies unified over long-running projects with several teams?
I've also had some experience with customer test/specification tools like Fit, Concordion, and RSpec. I'm interested in learning how other people use them, though I don't have any crystallized goals on these topics.