Published: May 09 2014

Episode 3 is the shortened transcript of an interview I did with James Shore via Skype; that's why it's a bit longer than the previous ones.


James Shore is a long-time practitioner of Extreme Programming and a thought leader in the Agile software development community. He is author of the seminal book "The Art of Agile Development". James is also founder of the Agile Fluency™ project with Diana Larsen. Learn more at project.agilefluency.com or read about the fluency model at martinfowler.com.

In his latest screencasting project, Let's Code JavaScript, he immerses himself in the intricacies of test-driven JavaScript. You can find more information about James and how to contact him at www.jamesshore.com.

Q: When was your first contact with TDD and what did you think about it at the time?

JS: I first heard about XP from a colleague of mine in 1999. We were working on a small, four-programmer project. I was the technical lead and one of the people on the project said: "Have you heard about the C2-wiki, Ward Cunningham's wiki?" I said: "No". He said: "It's really cool, you should check it out. They are talking about this thing called Extreme Programming (XP) – we should do it". I was like: "Yeah, right. We're doing this other thing called 'Feature-Driven Development'. Let's do that instead." So we didn't do XP, we didn't do test-driven development, but it turned me onto Ward's wiki and I thought: 'This is ridiculous! These are the dumbest ideas ever.' What is this pair-programming stuff, test-driven development? But I was really into software process; and the XP stuff looked really weird but also fascinating. So after that contract ended I had an opportunity for a little tiny project to do web scraping with Perl and I thought: 'Hey, let's try TDD' and it worked out really well. You know, TDD for regular expressions is a match made in heaven. So when that same company brought me back to continue working on the same project, the one they wanted to do with XP, I said: Let's try this XP thing. And we tried it and we did everything, TDD, we did pair-programming, we did all the XP practices – huge success. The team really jelled, we really wrote excellent code and I was hooked.

Q: And what eventually convinced you that TDD wasn't just something that worked in this one instance but a worthwhile approach overall?

JS: I have always been pretty willing to experiment, especially with things that look ridiculous. And so when I tried it on that Perl project I gave it a real shot. I was lucky in that TDD is a very good fit for testing regular expressions and it worked really well. That was all it took.

Q: What has changed in the way you practice and teach TDD since those early days?

JS: That is a difficult question to answer. In the early days, you know, on the very first project I did, I was doing a lot of reading about TDD, and I would say that one major difference is that on that first project we were doing a web application that was basically the kind of thing that would be a perfect fit for Rails today. It was basically a CRUD app that gave a web interface to a database. Doing TDD with that wasn't particularly easy because there was no application logic. There was just a lot of data moving around, a lot of database access, and yet we used TDD heavily. We didn't use purely evolutionary design for that project; we used some upfront design and then evolutionary design. The upfront design we came up with was overly complex. It was a five-layer architecture; between each layer we used mock objects to isolate each layer for testing. And that was not pleasant, it really didn't work well. The rest of the team, which wasn't very proficient with object-oriented programming, found it extremely confusing. We found it to be brittle. Of course, at that point there weren't a lot of good mocking tools or anything like that either. So one thing that I took away from that was that mock objects are a seductive idea that is probably indicative of design flaws. So since those early days I have moved more and more towards seeing mock objects as an easy way to get started with TDD and as a sign of a flaw that needs to eventually be worked out with evolutionary design. That's a crusade I've been on for, I guess, 14 years now, and I'm continually refining my understanding of how to do that well.

Q: So would you say that holds true for the kind of mock approach that Steve Freeman and Nat Pryce are proposing in their book?

JS: I don't know. I have not read their book 'Growing Object-Oriented Software, Guided by Tests'. I really want to, it's on my list. I want to actually sit down and study it, and write blogs about it and stuff like that, like really go through it. And I haven't had the opportunity to sit down with them and talk with them in person about it, although I wish I could. I do know that when we introduced mocks in 2000, it was based on the mock object paper that they had a part in, you know, the first mock paper. It was done to the best of our understanding of how mock objects work. We weren't mocking out third-party dependencies, we were mocking out objects we controlled, which, I understand, is part of what they care about, and we were designing our code through interfaces. But I don't know for a fact that we were doing it the way they would have.

Q: Are there situations in which you consider TDD not to be the right approach for developing software?

JS: Absolutely. TDD has a learning curve. It's not appropriate if you're dealing with software that is a prototype, that is going to be thrown away or where there's immense time pressure and there's no or only minimal consequences for increased costs down the road. And those costs are higher if you haven't done TDD before or if you're trying to do TDD in an entirely new problem domain. TDD is a lot harder to do in environments that haven't been created with TDD in mind. So, for projects that are going to live less than six weeks I would definitely not use TDD if I didn't already know how to do it in that environment. For projects that are going to live more than three months I almost certainly would, depending on the costs. And for projects that are going to live more than a year I unquestionably would.

Q: So it's just a matter of how long your project is going to live?

JS: Oh yeah. There's no technical reason not to use TDD. I haven't found a single type of thing I couldn't figure out how to TDD eventually, and the mental rigor that TDD brings is so powerful, and the sense of confidence that comes from being able to refactor is so important, that I will use it whenever it makes financial sense or whenever the investment makes sense.

Q: What do you think is TDD's relevance in today's world of lean startups, functional and concurrent programming, continuous delivery, and mobile everywhere?

JS: Well, that is a huge list of things. I think that TDD is relevant anywhere that you are trying to create code you want to change. And we can go into specific cases from there if you like.

Q: Let's dive into the Lean Startup context, where many people argue that - because requirements will change so often - it's not worth fixing them using automated tests.

JS: If someone is saying that TDD makes your software hard to change, then they are doing TDD wrong. This goes back to the mock-objects approach. I've seen many people who misunderstand mock objects and use TDD to basically test-drive the lines of code they are writing and thereby lock their design in concrete; and, yes, that makes code that is incredibly difficult to change. But in my mind, TDD is about enabling refactoring and enabling evolutionary design. So if your code is changing rapidly then it's even more valuable to have TDD on it, so you can change it easily without worrying about breaking things. Now that said, in a Lean Startup environment you want to create validated learning. You want to understand what your market is, you want to create experiments and learn from those experiments rapidly and in some cases those experiments are entirely throw-away. But if you are doing something you're going to keep and change, I would use TDD.

Q: James, you've recently been doing a lot of TDD in the JavaScript space – do you think that test-driving software that runs, at least to a considerable degree, in a browser is a new game or is it just the same old techniques applied to a different thing?

JS: It's absolutely the same techniques. I've been doing this screencast for two years now – and doing TDD with JavaScript is just like doing TDD in any other language. The design principles and TDD techniques are the same, but some things are unique to JavaScript; specifically, that not all browsers behave the same way. So it's really important in JavaScript to do cross-browser testing, which we do in the screencast with Karma. And JavaScript is a dynamic language rather than a static language, so there are some issues there that are different than doing TDD with, say, Java. I would say that the biggest difference with JavaScript, once you get away from the whole browser issue, is that, because JavaScript is such a loose language, in the sense that it doesn't lock things down, you can go in and basically change any function anywhere. This makes it a lot easier to do things like spies, and to really poke in and get stuff done when you need to, even if it's not great form; that can actually be really, really useful when you are getting started trying to figure out how to test or how to program something. Just don't leave it in; be sure you take that mess out.

Q: I seem to remember that you're not a big fan of acceptance-test-driven development. Why is that, and how do you tackle the problem of verifying the customer's business requirements?

JS: So ATDD really came out of Ward Cunningham's work on Fit, which was inspired by some work he did at WyCash, a financial company. I talked a lot with Ward about it, and my understanding is that - while he was at WyCash - Ward wanted to make sure that the software they were creating behaved in the way the stock analysts needed it to; and the way the stock analysts worked was with spreadsheets. So he would ask them to give examples of behavior in a spreadsheet. He took those spreadsheets and actually hooked them up to the WyCash program, so that WyCash would display in the spreadsheet the answer it was getting. That turned out to be a fantastic way for the analysts to communicate with the development team about how the software was working. That type of communication between the business experts and the programming team is absolutely crucial. It's one of the foundational concepts of Agile that we have close-knit communication between our development team and the rest of the business.

And so Fit was born in trying to recreate that; and Selenium came along and Cucumber came along. These later tools have kind of forgotten the original purpose, which is that the WyCash spreadsheets were created to create communication with business stakeholders. When I look at what people are doing with those tools, I don't see that kind of communication. What I see is a rigid desire that people write a script for the program to follow. Now, when I first got involved with Fit, I was excited about this idea of having business people and programming teams collaborating more closely, and I went along with the project, doing consulting on it. What I saw in the field was really demoralizing for me. What I saw was that you go to these business folks and say "Give us these examples and you have to write them like a script so the program can follow them", and invariably I would get two answers: the first answer was 'I don't want to do this. Give it to the test department', and the second answer was: 'How do I know this is actually really working? How do I know this is working and you're not just programming to give me the answer I want?'. So what was happening with Fit was that, first, it was being seen as an added burden for these people and, second, it was seen as a way for the programming team to get out of doing the work they were supposed to do. There was a lot of distrust. What it wasn't being used for, at all, was any sort of really solid communication between the programmers and the business folks about how the program worked and how it was supposed to work.

Q: So, do you recommend replacing the usage of the tool with just more meetings, more talking?

JS: No, I never recommend just more meetings; meetings are pointless. But if you are having really good communication and that happens to be in a meeting, that's great. It's about the communication and the collaboration. Examples on a whiteboard are ultimately what we got out of Fit, and that's where all the value was. Actually codifying those examples into a funky, not-quite-human programming language did not have a whole lot of value. The reason this worked for Ward back at WyCash was that he didn't go to the business team and say 'I've got this tool and you're going to use it, and I'm going to tell you the format to use, so give me what I want'. No, he went to the business folks and said 'I see that you already created these examples in a spreadsheet – let me do something cool'. That's very collaborative, it's very helpful, and that's powerful. He was doing the business folks a favor; he was asking them, 'How can I take what you're already doing and make it better?'. If you can do that, then ATDD is great. But that's not what I see people doing with Cucumber and similar tools. What I see people doing is pushing their needs on the rest of the organization. And as a result, what they are doing is being ignored. And so it's turning into yet another way of writing tests, just clunkier than using a programming tool.

Q: That's about my list of questions. Is there anything you would like to add as an old-timer?

JS: Yeah, I do have two things I would add. First is: Don't give up. Because TDD takes moments to learn and a lifetime to master. I've been doing it for 14 years and I'm still learning new things. And there are some things that are really, really easy to do with TDD, and that's what you tend to learn in the classroom. But once you start getting into the real world and having to deal with databases and third-party systems and multiple interacting classes, it becomes a lot more challenging, and that's what takes a lifetime to learn. If doing TDD in that environment doesn't cause you to discover new things about how to do design, then you're probably not trying hard enough.

And the second thing is, for beginners especially, the key to learning TDD is minimizing the time to your next green bar. As beginners learn TDD, I see over and over again that they take steps that are too big. My recommendation for people who are starting out is to put a stopwatch on your desk - not every day, but as an exercise - and time how long it takes you to get from green bar back to green bar. See if you can get that time down really small. See if you can get the amount of code you write down to just two or three lines between each step. That's a really valuable exercise and you'll learn a lot from doing that.

Q: How short should the steps be?

JS: There is no right answer. What I find is that I write just a few lines of code when I know what I'm doing, and then I can write just a few lines of code between each step. But there might be a lot of thinking involved in getting to those few lines of code. So it's not so much about the time; it's about not doing a huge amount between each step. If you really understand the problem space, if you really are good at TDD, you will be able to take really small steps, where the number of lines you write is maybe one line of test and one line of production code, or two or three. I definitely aim for less than five lines of test before I get my red bar and less than five lines of production code before I get my green bar.

Many thanks, James, for answering my questions!

