Statistical Large Scale Planning

In larger organisations it seems more difficult to get acceptance for agile estimation and planning. By agile estimation and planning I mean measuring a team's velocity and using burndowns or other methods to forecast which stories or functions will be finished by the deadline. There are many good places, blogs and books to read about this.

This reluctance is probably related to the larger number of people focused on the number crunching of traditional resource allocation and planning, but also to the fact that the features are bigger.

Traditionally we have always hoped there was a way to find the “true” and absolute cost of some development early. Usually this amounts to thinking and planning really long and hard. That will never happen… and we know that more thinking will not help. Those that get their initial estimates fairly right are using statistics to adjust them over time to match historically observed data. Some companies actually collect data about the relation between an initial estimate and the “real” number of worked hours. Let’s call this the estimation factor.

This factor is based on statistical, empirical evidence, and includes any and all effects that influence the “real” outcome: de-scoping, requirements creep, features not realized up-front, differences in skills etc. All of these are normal effects in any development and should be expected. In any particular case the factor may be off by a mile, but overall it will even itself out. That’s what statistics is about.
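To make the idea concrete, here is a minimal sketch of how such a factor could be derived and applied. All feature data and numbers below are invented for illustration; real companies would of course pull this from their own records.

```python
# Hypothetical historical data: (initial estimate in hours, actual worked hours)
# for a handful of past features. The numbers are made up for illustration.
history = [
    (100, 180),
    (250, 400),
    (80, 150),
    (300, 510),
]

total_estimated = sum(est for est, _ in history)
total_actual = sum(act for _, act in history)

# The estimation factor: how much actuals exceed initial estimates overall.
# It silently includes de-scoping, creep, skill differences and so on.
estimation_factor = total_actual / total_estimated
print(f"estimation factor: {estimation_factor:.2f}")

# Adjust a new initial estimate with the historical factor.
new_estimate = 120  # hours, initial guess for a new feature
forecast = new_estimate * estimation_factor
print(f"forecast: {forecast:.0f} hours")
```

Note that a single overall ratio is used rather than averaging per-feature factors; either works, as long as you measure the same way every time.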

However, statistics only work if you measure the same way every time. If you re-estimate in the middle of development, you can’t reliably use the same estimation factor, because you are not in the same position as when you first estimated. You have more knowledge about the actual required functionality, its complexity and some of the work has already been done, possibly with de-scoping and just realized required additions thrown in.

So if you mix and match these numbers and throw in some “converted agile estimate” too, you will get a number which will tell you nothing.

Actually, the estimation factor, if used correctly, is exactly what the team velocity is: a factor between some guesstimated effort (points) and some empirical data (points per iteration) that can, if used correctly, help you to forecast future outcomes. The estimation factor operates on large objects like features and whole departments; team velocity operates on stories and a single team.
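The velocity version of the same calculation looks like this. Again, the velocity history and backlog size are invented numbers, just to show the parallel: guesstimated points divided by empirically observed points per iteration gives a forecast.

```python
import math

# Hypothetical observed velocity over the last few iterations (points).
points_per_iteration = [21, 18, 25, 20]
velocity = sum(points_per_iteration) / len(points_per_iteration)

# Remaining guesstimated effort in the backlog (points, invented number).
backlog_points = 160

# Forecast: how many iterations until the backlog is done.
iterations_left = math.ceil(backlog_points / velocity)
print(f"velocity: {velocity:.1f} points/iteration")
print(f"forecast: {iterations_left} iterations to finish the backlog")
```

Exactly like the estimation factor, this only stays meaningful if the points are estimated the same way every time and no hour conversion is smuggled in.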

I am not a believer in converting team velocity to hours, and I am sure that planning and follow-up could be extremely simplified (and de-mystified 😉) if some agile techniques were applied. Agile is about making things simpler, not more complicated.
