Excellence doesn’t come from Cost/Benefit analysis

Many organisations strive for excellence, talking about and planning for becoming a Center of Excellence. However, many of those organisations seem to be stuck in a culture of Cost/Benefit analysis.

But no one has become an expert, a top-level athlete or a world-class artist by calculating the return on investment of reading a book on a subject, practicing six hours a day or painting picture after picture.

We do these things because, in our hearts, we believe it is the right thing to do.

I believe it is the same with excellence: we can only become excellent if we truly believe in the actions we take. As a simple example, there would probably never be a framework for automated acceptance testing if you did a cost/benefit analysis before implementing it. It is just too hard to calculate the benefit, the return on that investment. And particularly so if you only consider the current project…

Excellence comes from a burning conviction to become better!

Behavioural Science and Agile

In this blog entry Mike Griffiths summarises some discussions he had with Tony Parrottino, who is a Behavioural Scientist. It is an interesting blog post and Tony’s answers to some of the questions are important clues on how to progress the agile thinking and ways of working.

I have often said that I think “the three questions” (what did I do, what am I going to do, is anything blocking me) are not the best ones. In my view the focus of the daily meeting should be continuous, visible progress, so maybe the most important question is the one about blockage. If we focus on that we can grow a helping culture, which is one important component of a fully self-organising, top-notch team.

I get some support from Tony Parrottino, since he says the three questions are “sub-optimal”. I think Tony is answering the wrong question, though, because the questions are not put to the team by a manager who wants them to perform. Instead they are the things other team members need to hear to be able to help out.

Tony is very much focused on the idea that the one thing we need to manage, to increase the performance of individuals and teams, is behaviour. One of his comments was:

trying to remove what you don’t want will not ensure you will get what you do want

But that is in the context of team behaviour, not in terms of obstacles or difficulties. He goes on to talk about “pinpointing”, which seems to be a term for being precise in expressing what you expect your colleagues to do and how to behave. To this I can only agree. Focus is one of the most important factors in performance, and you can only focus if you know exactly what is expected.

Statistical Large Scale Planning

In larger organisations it seems more difficult to get acceptance for agile estimation and planning. By agile estimation and planning I mean measuring the velocity of a team and using burndowns or other methods to forecast finished stories or functions at the time of the deadline. There are many good places, blogs and books to read about this.

This reluctance is probably related to the larger number of people focused on the number crunching of traditional resource allocation and planning, but also to the fact that the features are bigger.

Traditionally we have always hoped there was a way to find the “true” and absolute cost of some development early. Usually the thinking goes along the lines of planning really long and hard. That will never work… and we know that more thinking will not help. Those that get their initial estimates fairly right are using statistics to adjust them over time to match historical observable data. Some companies actually collect data about the relation between some initial estimate and the “real” number of worked hours. Let’s call this the estimation factor.

This factor is based on statistical, empirical evidence, and includes any and all effects that influence the “real” outcome: de-scoping, requirements creep, features not realized up-front, difference in skills etc. All which are normal effects in any development and should be expected. In any particular case the factor may be off by a mile, but overall it will even itself out. That’s what statistics is about.
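As a back-of-the-envelope sketch of how such a factor could be computed and applied (all numbers and names here are invented for illustration, not from any real project):

```python
# Toy sketch of an estimation factor: the historical ratio between
# actual worked hours and the initial estimate, averaged over past items.

def estimation_factor(history):
    """history: list of (initial_estimate_hours, actual_hours) pairs."""
    factors = [actual / estimate for estimate, actual in history]
    return sum(factors) / len(factors)

def forecast(initial_estimate_hours, factor):
    """Adjust a fresh initial estimate by the empirical factor."""
    return initial_estimate_hours * factor

# Invented historical data: (initial estimate, actual hours) per feature
history = [(100, 180), (250, 400), (80, 170), (120, 220)]
factor = estimation_factor(history)   # roughly 1.84 on this data
print(forecast(200, factor))          # a new 200-hour estimate becomes ~368 hours
```

Any single item may be far off, but averaged over many items the factor folds in de-scoping, requirements creep and skill differences, exactly as the statistics argument above suggests.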

However, statistics only work if you measure the same way every time. If you re-estimate in the middle of development, you can’t reliably use the same estimation factor, because you are not in the same position as when you first estimated. You have more knowledge about the actual required functionality, its complexity and some of the work has already been done, possibly with de-scoping and just realized required additions thrown in.

So if you mix and match these numbers and throw in some “converted agile estimate” too, you will get a number which will tell you nothing.

Actually, the estimation factor, if used correctly, is exactly what the team velocity is: a factor between some guesstimated effort (points) and some empirical data (points per iteration) that can help you forecast future outcomes. The estimation factor applies to large objects like features and whole departments; the team velocity applies to stories and a single team.

I am not a believer in converting team velocity to hours, and I am sure that planning and follow-up could be extremely simplified (and de-mystified 😉) if some agile techniques were applied. Agile is about making things simpler, not more complicated.

Recommended Reading

Repeatedly I get the question “What books should I read about Agile?”. Here is a list I have compiled. There are probably more good books, and if you find one, don’t hesitate to recommend it to me.


  • Lean Software Development: An Agile Toolkit, by Tom & Mary Poppendieck.
  • Agile Software Development, by Alistair Cockburn.
  • Agile Software Development Ecosystems, by Jim Highsmith.
  • Scrum and XP from the Trenches, by Henrik Kniberg.
  • Agile Software Development with Scrum, by Ken Schwaber & Mike Beedle.

  • Extreme Programming Explained: Embrace Change, by Kent Beck.
    The first edition is more practical; the second is rewritten extensively to show how the values fit together with the techniques and practices.
  • Planning Extreme Programming, by Kent Beck & Martin Fowler.

Management & Leadership

  • Agile and Iterative Development—A Manager’s Guide, by Craig Larman.
  • Collaboration Explained: Facilitation Skills for Software Project Leaders, by Jean Tabaka.
  • Managing The Design Factory, by Donald Reinertsen.
  • Agile Retrospectives: Making Good Teams Great, by Esther Derby & Diana Larsen.
  • Peopleware: Productive Projects and Teams, by Tom DeMarco & Tim Lister.

Requirement Management & Modelling

  • User Stories Applied: For Agile Software Development, by Mike Cohn.
  • Agile Modeling: Effective Practices for Extreme Programming and the Unified Process, by Scott Ambler.

  • Agile Estimating and Planning, by Mike Cohn.

  • Agile Testing: A Practical Guide for Testers and Agile Teams, by Lisa Crispin.
  • Test Driven: TDD and Acceptance TDD for Java Developers, by Lasse Koskela.
  • Test-Driven Development By Example, by Kent Beck.

Programming & Refactoring

  • The Pragmatic Programmer: From Journeyman to Master, by Andrew Hunt & David Thomas.
  • Working Effectively with Legacy Code, by Michael Feathers.
  • Refactoring: Improving the Design of Existing Code, by Martin Fowler.
  • Refactoring to Patterns, by Joshua Kerievsky.

  • Implementing Lean Software Development: From Concept to Cash, by Mary & Tom Poppendieck.

Plan Well

I don’t consider myself a Toyota/Lean expert, but there is an interesting saying that I have heard is used within Lean, “Plan Well to Execute Fast”. And that might sound very much like BD/PUF (Big Design/Planning Up-Front, if you didn’t get that…).

[Image: Empire State Building steel delivery schedule]

But isn’t this exactly what the Agile and Lean techniques are trying to help us with? We plan the day’s work carefully in the daily meeting; in the iteration planning we plan the content and do the design necessary to execute the stories fast.

I think most people embrace the view that you should plan before you do. If you do this you can focus on smaller pieces at a time and become more focused and efficient. The parallel with personal productivity techniques like Pomodoro and GTD is, to me, obvious.

“Waterfall” is a failing attempt to do the same thing. But it is not always easy (or good) to break down a complete project into tasks small enough that people can focus on them, particularly if you are not knowledgeable in the area (as is usually the case for a specialist in project planning). So “Plan Well” has become one big plan/design/do-anything-as-long-as-it-is-not-real-work phase before the real work can start. And in some areas it might still be necessary, if you are building a new factory plant for example.

But the trend seems to be clearly visible: in many areas, the insight is spreading that it is possible, and often even very beneficial, to plan, design and execute a much smaller part of the work at a time. More opportunities to do it present themselves if you focus on a small change in the “product” that you are changing, instead of on the task and the “human resource” allocation. There are examples in building construction (all the way back to the early skyscrapers; the picture shows the steel delivery schedule of the Empire State Building, with floors on the vertical axis), road and bridge construction, and budgeting, as well as in systems and software development.

Companies and people that do not get this shift will be stuck in “waterfall”, “command and control” and “human resource allocation”, miss out on the financial survival advantages, and, like the dinosaurs, soon become extinct.

XP2009 Coaching Workshop Summary

XP2009 was a nice experience. The Flamingo Hotel outside Pula in Sardinia might not have been on par with a normal Scandinavian hotel when it comes to Internet access and hot water, and definitely not on par with Italian restaurants when it comes to food. But as usual the people attending (139) were exceptionally experienced, nice and willing to share and contribute.

My workshop ran on Monday afternoon as planned, with a dozen-plus people. The workshop was presented in the program with its initial title and description, so my acting on the workshop committee’s feedback never made it into the actual program. But Andreas and I ran it staying on the topic listed in the program, “Coaching Agile in Large Scale Development”, as well as on the ideas and discussion topics of the participants. Jutta Eckstein participated and contributed much of her extensive experience.

Jutta proposed a discussion around “Creating an Agile Community”, supposedly as a means to inject Agile thinking and in that way create or sustain an Agile Transition. This actually became the most important topic, and after extensive discussions about what a community actually is and how to create a special one on Agile topics, we concluded two important things:

First, don’t try to create a new Agile community to initiate change. A new community is hard to start, and you will get limited leverage from talking to believers. Instead, find the communities that already exist; there probably are a lot already. A community is a set of people with some relation to each other, for example a shared interest: the development team, the lunch buddies, the Java experts, the sci-fi readers, the art club. Not every one of these can be leveraged for an agile transition, but you could probably find something from the agile toolset or value base that you (or someone) can inject into most of them. But you probably shouldn’t advocate agile as such in each and every one; stay on the topic of the network, for example talk about JUnit in the Java expert community. And of course we talked about using the techniques from “Fearless Change” by Linda Rising and Mary Lynn Manns.

Secondly, do create a new Agile community, but not to initiate change. Use it to make sure that the believers, the coaches and the change agents have somewhere to draw strength and inspiration from. Change isn’t easy, so the people who try to sustain change need every piece of support they can get.

Coin Sorting with a Twist

[Image: pile of Swedish coins]

I went to Budapest last week to do a workshop with the local management team of one of my customers. The goal was to talk to them about how they could move into the agile way of working. This was triggered by a move of some substantial product development from Sweden to Hungary. In Linköping, Sweden, I have been coaching a dozen teams in a transition to Agile, and I was asked to help the management ensure that the transfer retains as much as possible of the Agile way of working that we have established.

I decided to select a few exercises, and add a set of scenarios that we could analyze together. I was hoping that the exercise would give them first-hand experience of the values of Agile, and that this and the scenario analysis would spawn good discussions. And it did. Actually, I only used one of the exercises I had planned.

The exercise that I used was from Tasty Cupcakes and is about sorting coins. We use that exercise extensively in our Agile training, so I knew it would work, but I decided to add a few twists.

The exercise is to ask the group, preferably divided into teams, to estimate how long it would take them to sort a set of coins. The team with the lowest estimate actually gets to do the sorting. This raises the level of competitiveness in the group.

I decided to add a “Requirements Specification” to the game. Instead of telling the team what it was about, I had prepared a document stating that they would get a number of coins of various values, how high the piles could be, how far apart they were allowed or required to be placed, and some other facts. (An unplanned complication was that I had stated distances in millimetres and a couple of people in the group were actually Irish…)

So after a “pre-study” period the bidding commenced. As often happens one group had a low bid and the others said “Let them do it!”

The team was actually done on time, which is kind of a bonus. Because the kicker is that the coins are almost always sorted according to value, although that has never been specified.

I think adding the written “Requirements” shows how easily we are thrown off track by something written. It’s not just the fact that written communication is narrow-band; it is also very often misleading. If someone has spent so much time writing the document, specifically if it also has been reviewed and approved, the truth must actually be in there somewhere. And everything in it must be equally important, right?

Of course, the $10,000 question that someone should ask is how you actually want them sorted. (Occasionally this question is actually put during the bidding, and then you can either try to be indecisive or misleading, or you can just take it from there.)

But the billion-dollar question that I really want to hear is “Why do you want them sorted?” I had prepared an answer: I want to be able to sell piles of coins as birthday presents, so all the coins should be from the recipient’s birth year. I hope that I will someday do this exercise and have to use this answer.

By the way, at the end of the workshop one of the managers said “So I think we cannot wait for the transfer to be done and then start becoming Agile, we must do this now”.

Workshop on XP2009

I will host a workshop at XP2009 in Sardinia, Italy, titled Agile in Large-Scale Development Workshop: Coaching, Transitioning and Practicing. At least that is the title in the proceedings; in the conference program it still goes under the proposed name Coaching Agile in Large Organisations. The format will be lightning speeches and Open Space to enable listening, sharing and networking early on in the conference. The Agile Café of XP2007 was a great idea, perhaps we could build on that.

The Story on Velocity

Chances are that you are calculating your Velocity the wrong way! The Velocity is the speed with which a team can deliver new, valuable, finished and tested features to stakeholders. This blog is about why the common way to handle estimates and planning does not give you that. And how applying a Lean view on the matter can show your real Velocity.


Agile Estimation and Planning relies heavily on Velocity and Relative Estimates. They are all based on the concept of Story Points, a relative estimate of the effort it takes to implement the functionality of a Story.

The Velocity is the rate at which a development team delivers working, tested, finished functionality to stakeholders. In the best of worlds this is the customer or user directly; however, in many cases there are still some separate testing activities carried out before this actually happens.

To calculate Velocity we cannot estimate in hours or days, because that would make Velocity a unitless factor (hours divided by hours). It is common to estimate in Story Points, where User Stories are compared to each other and to some known story used as the prototype. The idea is to decouple effort from time and instead base future commitments on empirical measurements of past effort executed per iteration.
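A minimal sketch of that decoupling, with invented numbers: Velocity is simply measured points per iteration, and forecasting is a division.

```python
import math

# Points of finished, tested stories per past iteration (invented data).
completed_points = [21, 18, 24, 19]

# Velocity: empirical points per iteration, measured, not guessed.
velocity = sum(completed_points) / len(completed_points)   # 20.5 points/iteration

# Forecast: how many iterations for the remaining prioritized backlog?
backlog_remaining = 164   # points left in the backlog
iterations_left = math.ceil(backlog_remaining / velocity)
print(iterations_left)    # 8 iterations
```

Note that the points never have to be converted to hours; the empirical measurement carries all the calibration.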

The Stories are found in the prioritized list of Stories to be implemented (what Scrum calls the Product Backlog). The Product Owner should prioritize in which order the wanted features should be implemented.

Signs of Debt

Technical Debt is a term, alluding to the financial world, describing a situation where shortcuts have been made in earlier development. At some future point this will cost extra effort, like bug fixes, rewrites or implementation of automated test cases. In situations where an existing organisation is transitioning to Agile there is usually a substantial Technical Debt that needs to be repaid. This is often signalled by:

  • the team struggles to find ways to handle trouble reports
  • rewrites, often incorrectly labelled “refactorings”, are listed in the Product Backlog

The Trouble with Trouble Reports

Of course Trouble Reports need to be addressed as soon as possible. But in a legacy situation many Trouble Reports are found in the late stages of integration and verification, which can be months after the development was “finished”. The development team has moved on or even been replaced. So many of the Trouble Reports must first be analysed before a correction can even be attempted. This makes them impossible to estimate, so they can only be put in the Product Backlog after the solution, and an estimate, is known.

A technique I recommend is to reserve a fixed amount of resources for this analysis. This limits the work on Trouble Report analysis to a known amount. The method of reservation can vary, from a number of named persons per week/day, to a specific TR-day in the sprint, or just the gut feeling of the team. The point is that the team now knows how much of their resources can go into working off the backlog. And adjustments can easily be made after a retrospective, based on empirical data.
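As a toy illustration of the reservation (both numbers are invented and would be tuned in retrospectives against empirical data):

```python
# Reserve a fixed share of team capacity for Trouble Report analysis,
# so the remainder can safely be committed to backlog stories.
team_capacity_points = 25   # what the team historically finishes per sprint
tr_reservation = 0.2        # 20% reserved for TR analysis (adjust empirically)

story_commitment = team_capacity_points * (1 - tr_reservation)
print(story_commitment)     # 20.0 points available for backlog stories
```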


This technique has a number of merits:

  • known and maximized decrease in resources
  • only estimatable items go in the backlog
  • all developers get to see where Trouble Reports come from
  • team members are relieved from the stress caused by split focus (“fix TRs while really trying to get Stories done”)


Once a solution has been found the Trouble Report can be put in the backlog for estimation and sprint planning. (But the fix is often simple, once found, so a team might opt to not put the fix in the Product Backlog. Instead the fix itself can also be made as part of the reserved Trouble Report resource box. This avoids the extra administration and lead time.)

But, if the Trouble Reports are put on the Backlog and estimated as normal stories, we give Velocity points to non-value-adding work items!

Rewrites are Pointless

A system with high technical debt might need a rewrite of some parts. More often than not, these have been known for some time but always get low priority from project/product managers. And rightly so: they do not add any business value.

They still need to be done, though. The technical debt needs to be repaid. So, what to do?

First I usually ask the developers whether the suggested rewrite really is the smallest step toward improving the situation. Often I find that if I push sufficiently hard, they can back down a bit and a smaller improvement presents itself. This is good news, since a smaller improvement is easier to get acceptance for. And once they are small enough, they can be kept below the radar of the PO.

If the rewrites are indeed put on the Backlog and estimated as normal stories, we give Velocity points to non-value-adding work items!

The Point of the Story

And this brings me to the core of this blog entry. Trouble reports and rewrites are things that need to be done, but a product owner should never need to see them. The team should be able to address these issues by themselves. They should be delivering working, tested, valuable software. The Product Owner should prioritize the new features.

This indicates that neither rewrites nor Trouble Reports should go into the Product Backlog.

How would a team know what to commit to, then? Well, the (smaller) rewrites and the Trouble Reports (if put in the backlog) need to be estimated. We could do that the same way as before. But there should be no points earned for these items.

What would this strategy mean then?

  • The Product Backlog should not contain Trouble Reports or rewrite “stories”
  • The Team should include a limited amount of necessary non-value-adding items in the iteration
  • The Team needs to estimate non-value-adding items
  • The Team commits to a number of points including non-value-adding items
  • The Team Speed is calculated from all Done Done items
  • The Velocity is calculated only from value-adding stories

With this technique you will probably see an increase in Velocity as your Technical Debt is repaid. Something which I find very Lean.
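The bookkeeping in the list above could be sketched like this (the items, names and flags are invented for illustration):

```python
# Done Done items from one iteration; 'value_adding' marks real stories,
# as opposed to rewrites and Trouble Report fixes.
done_items = [
    {"name": "new feature A",  "points": 8, "value_adding": True},
    {"name": "new feature B",  "points": 5, "value_adding": True},
    {"name": "rewrite module", "points": 3, "value_adding": False},
    {"name": "TR fix",         "points": 2, "value_adding": False},
]

# Team Speed: all finished points -- what the team can commit to next time.
team_speed = sum(i["points"] for i in done_items)                       # 18

# Velocity: only value-adding stories -- what forecasts business value.
velocity = sum(i["points"] for i in done_items if i["value_adding"])    # 13
```

As the debt is repaid, the non-value-adding share shrinks and Velocity climbs toward Team Speed.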

The Real Velocity

The only Velocity that is really worth measuring is the speed at which a team can add business value. Then we need to distinguish between value-adding work and extra work disguised as such.

Names of tests

At my new customer there are a number of various types of testing: BT, BIT, FIT, EST, FT, ST etc. You can imagine how hard it can be to get a grip on what all those acronyms actually mean. And more than that: even if you can decipher the acronyms, what do they really mean? Is Basic Integration Test “basic” with respect to what is integrated, or does it imply that only some basic tests are run? System Test is testing which system?

As we are introducing Agile techniques here, many people are asking me what is the difference between “acceptance test” and xT? (Where x can be almost any combination of letters…)

In my mind there are only two levels of tests: unit tests and functional tests.

We all know what a Unit Test is: a test verifying that the implementation of a small software unit, usually a class, matches the expected behaviour. And it should run fast. This means that unit tests should decouple the unit from almost every other unit. We may define a unit to be some small set of classes that interact, but a unit could never include a database or the network. (As we all know, in the “real world” we need to make compromises all the time…)
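As an illustration of that decoupling (all class and method names here are made up), a unit test can replace the database with a fake passed in through the constructor:

```python
# The unit under test depends on a repository abstraction, not on a database.
class AccountService:
    def __init__(self, repository):
        self.repository = repository   # injected, so the test needs no database

    def balance(self, account_id):
        return sum(self.repository.transactions(account_id))

# A fake standing in for the database-backed repository.
class FakeRepository:
    def transactions(self, account_id):
        return [100, -30, 45]          # canned data instead of a query

def test_balance_sums_transactions():
    service = AccountService(FakeRepository())
    assert service.balance("any-id") == 115

test_balance_sums_transactions()       # runs fast: no database, no network
```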

A Functional Test is a test that verifies some function of the product from a user/customer perspective. It should demonstrate some Customer Benefit. (This is probably not the only definition of “Functional Test” that exists, but I reused it instead of trying to invent another name for tests…)

But the funny thing is that Functional Tests are Acceptance Tests. At least until we are confident that the functionality is actually implemented; then they become Regression Tests. And running such tests for functionality that requires other applications makes them Integration Tests? And if we run the same tests in an environment simulating a real environment, then they become System Tests!

So there are at least two dimensions when we are talking about Tests. One is granularity: units, or the whole as perceived by some user. The other is the environment in which, and the purpose for which, we are running the tests. I find it helpful to talk about Tests when I talk about test cases, on either unit or functional level, and Testing when I talk about the environment and purpose of running some tests.

So in Implementation Testing we usually run all Unit Tests and all Automated Functional Tests, typically using Continuous Integration in a development environment. The purpose is to catch errors in logic, setup and data.

System Testing is about running Functional Tests in an environment as close to a real environment as possible. Usually your application is not the only one which is exercised at the same time.

Performance Testing is about running many Functional Tests at the same time to find performance bottlenecks and errors caused by limited resources, etc.

It’s all in the name.