Workshop at XP2009

I will host a workshop at XP2009 in Sardinia, Italy, titled Agile in Large-Scale Development Workshop: Coaching, Transitioning and Practicing. At least that is the title in the proceedings; in the conference program it still goes under the proposed name Coaching Agile in Large Organisations. The format will be lightning speeches and Open Space to enable listening, sharing and networking early in the conference. The Agile Café of XP2007 was a great idea; perhaps we could build on that.

The Story on Velocity

Chances are that you are calculating your Velocity the wrong way! Velocity is the speed with which a team can deliver new, valuable, finished and tested features to stakeholders. This post is about why the common way to handle estimates and planning does not give you that, and how applying a Lean view on the matter can reveal your real Velocity.

Background

Agile Estimation and Planning relies heavily on Velocity and Relative Estimates. They are both based on the concept of Story Points, a relative estimate of the effort it takes to implement the functionality of a Story.

Velocity is the rate at which a development team delivers working, tested, finished functionality to stakeholders. In the best of worlds this is the customer or user directly; in many cases, however, there are still some separate testing activities carried out before this actually happens.

To calculate Velocity we cannot estimate in hours or days, because that would make Velocity a unitless factor (hours divided by hours). It is common to estimate in Story Points, where User Stories are compared to each other and to some known story that serves as the prototype. The idea is to decouple effort from time and instead base future commitments on empirical measurements of past effort spent per time/iteration.
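As a minimal sketch of the arithmetic, with invented numbers and a simple average (nothing here is prescribed by the method itself):

```python
# Minimal sketch: deriving Velocity from past iterations (numbers invented).
completed_points = [18, 22, 20]  # Story Points finished in the last sprints

# Velocity is the empirical average of finished Story Points per iteration.
velocity = sum(completed_points) / len(completed_points)

backlog_points = 120  # estimated Story Points remaining in the backlog
print(f"Velocity: {velocity:.1f} points/iteration, "
      f"forecast: {backlog_points / velocity:.1f} iterations left")
```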

The Stories are found in the prioritized list of Stories to be implemented (what Scrum calls the Product Backlog). The Product Owner prioritizes the order in which the wanted features should be implemented.

Signs of Debt

Technical Debt is a term, borrowed from the financial world, describing a situation where shortcuts have previously been taken in development. At some future point this will cost extra effort, such as bug fixes, rewrites or implementation of automated test cases. In situations where an existing organisation is transitioning to Agile there is usually a substantial Technical Debt that needs to be repaid. This is often signalled by

  • the team struggling to find ways to handle Trouble Reports
  • rewrites, often incorrectly labelled “refactorings”, being listed in the Product Backlog

The Trouble with Trouble Reports

Of course Trouble Reports need to be addressed as soon as possible. But in a legacy situation many Trouble Reports are found in the late stages of integration and verification, which can be months after the development was “finished”. The development team has moved on or even been replaced. Many of the Trouble Reports must therefore first be analysed before a correction can even be attempted, which makes them impossible to estimate up front. So putting them in the Product Backlog can only be done once the solution, and thereby an estimate, is known.

A technique I recommend is to reserve a fixed amount of resources for this analysis. This limits the work spent on Trouble Report analysis to a known amount. The method of reservation can vary: a number of named persons per week/day, a specific TR day in the sprint, or just the gut feeling of the team. The point is that the team now knows how much of their capacity can go into working off the backlog. Adjustments can easily be made after a retrospective, based on empirical data.
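A rough sketch of the arithmetic, with invented numbers and a percentage-based reservation (any of the reservation methods above would do):

```python
# Minimal sketch: reserving a fixed Trouble Report budget (numbers invented).
team_capacity_hours = 5 * 8 * 8   # 5 people, 8 working days, 8 hours/day
tr_share = 0.20                   # 20% reserved for Trouble Report analysis

tr_budget = team_capacity_hours * tr_share
story_budget = team_capacity_hours - tr_budget
print(f"Reserved for TR analysis: {tr_budget:.0f} h, "
      f"available for backlog work: {story_budget:.0f} h")
```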

This technique has a number of merits:

  • a known, capped reduction of the capacity available for Stories
  • only estimable items go in the backlog
  • all developers get to see where Trouble Reports come from
  • team members are relieved from the stress caused by split focus (“fix TRs while really trying to get Stories done”)

Once a solution has been found, the Trouble Report can be put in the backlog for estimation and sprint planning. (But the fix is often simple once found, so a team might opt not to put it in the Product Backlog. Instead the fix itself can be made within the reserved Trouble Report budget, which avoids the extra administration and lead time.)

But if the Trouble Reports are put on the Backlog and estimated as normal stories, we give Velocity points to non-value-adding work items!

Rewrites are Pointless

A system with high technical debt might need a rewrite of some parts. More often than not, these rewrites have been known for some time but always get low priority from project/product managers. And rightly so: they do not add any business value.

They still need to be done, though. The technical debt needs to be repaid. So, what to do?

First I usually ask the developers whether the suggested rewrite really is the smallest step towards improving the situation. Often I find that if I push sufficiently hard, they can back down a bit and a smaller improvement presents itself. This is good news, since a smaller improvement is easier to get acceptance for. And once the improvements are small enough, they can be kept below the radar of the Product Owner.

If the rewrites are indeed put on the Backlog and estimated as normal stories, we give Velocity points to non-value-adding work items!

The Point of the Story

And this brings me to the core of this blog entry. Trouble Reports and rewrites are things that need to be done, but a Product Owner should never need to see them. The team should be able to address these issues by themselves while delivering working, tested, valuable software. The Product Owner should prioritize the new features.

This indicates that neither rewrites nor Trouble Reports should go into the Product Backlog.

How would a team know what to commit to, then? Well, the (smaller) rewrites and the Trouble Reports still need to be estimated, and we can do that the same way as before. But there should be no points earned for these items; a sketch of the resulting bookkeeping follows the list below.

What would this strategy mean then?

  • The Product Backlog should not contain Trouble Reports or rewrite “stories”
  • The Team should include a limited number of necessary non-value-adding items in the iteration
  • The Team needs to estimate these non-value-adding items
  • The Team commits to a number of points, including the non-value-adding items
  • The Team Speed is calculated from all Done Done items
  • The Velocity is calculated only from value-adding stories
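A minimal sketch of that bookkeeping, with item names and numbers invented for illustration:

```python
# Minimal sketch: Team Speed vs. Velocity (items and numbers invented).
finished_items = [
    ("new feature Story", 5, True),    # (name, points, value-adding?)
    ("new feature Story", 3, True),
    ("rewrite to repay Technical Debt", 8, False),
    ("Trouble Report fix", 2, False),
]

# Team Speed: all Done Done items, used when committing to an iteration.
team_speed = sum(points for _, points, _ in finished_items)

# Velocity: only value-adding stories, used for forecasting delivered value.
velocity = sum(points for _, points, adds_value in finished_items if adds_value)

print(f"Team Speed: {team_speed} points, Velocity: {velocity} points")
```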

With this technique you will probably see an increase in Velocity as your Technical Debt is repaid, something I find very Lean.

The Real Velocity

The only Velocity that is really worth measuring is the speed at which a team can add business value. So we need to distinguish between value-adding work and extra work disguised as such.

Names of tests

At my new customer there are a number of different types of testing: BT, BIT, FIT, EST, FT, ST, and so on. You can imagine how hard it can be to get a grip on what all those acronyms stand for. And more than that, even if you can decipher the acronyms, what do they really mean? Is Basic Integration Test “basic” with respect to what is integrated, or does it imply that only some basic tests are run? And System Test is testing which system?

As we are introducing Agile techniques here, many people ask me what the difference is between “acceptance test” and xT (where x can be almost any combination of letters…).

In my mind there are only two levels of tests: unit tests and functional tests.

We all know what a Unit Test is: a test verifying that the implementation of a small software unit, usually a class, matches the expected behaviour. And it should run fast. This means that unit tests should decouple the unit from almost every other unit. We may define a unit to be some small set of interacting classes, but a unit should never include a database or the network. (As we all know, in the “real world” we need to make compromises all the time…)
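A minimal sketch of what I mean, with an invented class under test:

```python
import unittest

class PriceCalculator:
    """The small unit under test: applies a percentage discount."""
    def discounted(self, price: float, percent: float) -> float:
        return price * (1 - percent / 100)

class PriceCalculatorTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # No database, no network: the unit is exercised in isolation, fast.
        self.assertAlmostEqual(PriceCalculator().discounted(200.0, 10), 180.0)

if __name__ == "__main__":
    unittest.main()
```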

A Functional Test is a test that verifies some function of the product from a user/customer perspective. It should demonstrate some Customer Benefit. (This is probably not the only definition of “Functional Test” in existence, but I reused it instead of trying to invent yet another name for tests…)
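By contrast, a functional test is phrased in terms of what the customer can do. Here is a sketch against an invented user-level entry point; in a real product this might be a web UI driver or a service API:

```python
import unittest

class ShopApplication:
    """Invented user-level entry point, standing in for the real product."""
    def __init__(self):
        self.cart = []
    def add_to_cart(self, item: str, price: float):
        self.cart.append((item, price))
    def checkout_total(self) -> float:
        return sum(price for _, price in self.cart)

class CheckoutFunctionalTest(unittest.TestCase):
    def test_customer_can_buy_two_items(self):
        # Expressed as Customer Benefit, not in terms of internal classes.
        shop = ShopApplication()
        shop.add_to_cart("book", 120.0)
        shop.add_to_cart("pen", 30.0)
        self.assertEqual(shop.checkout_total(), 150.0)

if __name__ == "__main__":
    unittest.main()
```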

But the funny thing is that Functional Tests are Acceptance Tests, at least until we are confident that the functionality is actually implemented. Then they become Regression Tests. And running such tests for functionality that requires other applications makes them Integration Tests? And if we run the same tests in an environment simulating a real environment, they become System Tests!

So there are at least two dimensions when we are talking about Tests. One is granularity: units, or the whole as perceived by some user. The other is the environment in which, and the purpose for which, we are running the tests. I find it helpful to talk about Tests when I talk about test cases, on either unit or functional level, and Testing when I talk about the environment and purpose of running some tests.

So in Implementation Testing we usually run all Unit Tests and all Automated Functional Tests, typically using Continuous Integration in a development environment. The purpose is to catch errors in logic, setup and data.

System Testing is about running Functional Tests in an environment as close to a real environment as possible. Usually your application is not the only one which is exercised at the same time.

Performance Testing is about running many Functional Tests at the same time to find performance bottlenecks, errors caused by limited resources, etc.
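A minimal sketch of that idea: hammering the same functional scenario with many concurrent “users” (the scenario itself is invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def checkout_scenario() -> float:
    """Invented functional scenario; returns its response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stands in for driving the real application
    return time.perf_counter() - start

# Run the scenario 500 times with 50 concurrent workers to expose bottlenecks.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(lambda _: checkout_scenario(), range(500)))

print(f"worst response time: {max(timings):.3f} s")
```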

It’s all in the name.

Measuring Business Value

Measurement is a double-edged sword. On the one hand, you cannot see and compare something that you have not measured. On the other hand, there is evidence that it generates dysfunction: consciously or unconsciously, we tweak our ways to get good numbers. We all know and love the story about the development organisation that was to be measured on the number of comment lines in production code. The tool to insert the correct number of comment lines was available just a few days later.

We need to measure, but most organizations seem to measure the wrong things. I think Systems Thinking and Lean can teach us a bit about how to measure in a meaningful manner. Systems Thinking (and Gödel, for that matter) keeps pointing “one level up”: don’t measure inside the system, measure the system.

Lean (and Scrum and XP) indicates in various ways that we should prioritize according to business value, which seems like a good proposition. So I try to convince managers in my client organizations to measure the flow of value through the development organization and, as Lean puts it, measure the whole. (Which can be hard in itself, but at least it is not only the development team!)

However, the notion of business value is a fluffy one. It is usually very hard to get anyone to assign dollars and cents to a customer request in the general case. (Unless you’re a contract developer, in which case it’s kind of easy, but probably wrong!)

So what I am trying now is to use Business Value Points. In the same way as Story Points work for effort estimation, Business Value Points can be used to let marketing people, business analysts etc. assign relative values to requirements or features.

One complexity still left to address is the fact that at early stages requirements are fairly large, and are later broken down and re-prioritized. This means that a complete “requirement” is seldom delivered, so the Business Value Points somehow need to be split when a requirement is split into Epics and Stories.
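One conceivable splitting rule, which is my assumption rather than an established practice, is to split the value in proportion to the effort estimates of the resulting Stories:

```python
# Sketch: splitting a requirement's Business Value Points across its Stories
# in proportion to their Story Point estimates (my assumption, not a rule).
def split_value(total_value: float, story_points: list[int]) -> list[float]:
    total_effort = sum(story_points)
    return [total_value * points / total_effort for points in story_points]

print(split_value(100.0, [5, 3, 2]))  # -> [50.0, 30.0, 20.0]
```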

But if we can measure the flow of value, we can also optimize the whole and avoid both sub-optimization and the tweaking of measured values.

Agile Anti-Pattern: Over-Generalisation

This is the first in a series of posts in which I will try to collect my view of problems in introducing Agile techniques in an organisation, forces that work against Agile and traps that are just waiting to catch the unwary agile practitioner.

Slogan

Generalization makes useless

Applies to

With the obvious risk of falling into the Agile Anti-Pattern: Over-Generalisation itself, this Anti-Pattern can apply to many different types of items:

  • Goals
  • Definitions of terms
  • Requirements
  • Plans
  • Stories
  • Definition of Done
  • Tasks
  • Implementation

Examples

Programmers often feel a need to write robust source code. If the exact circumstances in which the code will be executed are not explicitly stated, it is tempting to generalise, second-guessing future and currently unknown situations.

Exactly the same thing might happen with requirements. A stakeholder might be very broad in his description of a need, but it must be broken down into more and more concrete examples. We know that “The system must react fast to any user input…” is not a requirement usable for development. It might be usable for validation in system test, though, because it is a value articulated by the users, and as such can be subjectively validated. But it does not give the concrete information needed to know when it is actually done.

Over-Generalisation in definitions of terms is also common. I have often seen it in the planning of business change programs, particularly in large organisations where there are many different types of products, situations and projects, and where definitions of practices easily become too generalised to be useful.

For example, in a transition to Agile, the practice “Continuous Integration” might be taken to mean

  • all tests and integrations are automated
  • there are no manual tests
  • system tests are done automatically

… and more. It is not that these goals are bad; in fact they are excellent. But at the core, “Continuous Integration” probably means something like “giving programmers concrete feedback on every change as soon as possible, by building as large a part of the whole product as possible and running as large a set of tests as possible, as soon as possible after every change to any part of the source code”. As with any change that is to be implemented, we need to know exactly what we are trying to implement.
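Stripped to that core, a continuous integration loop can be sketched in a few lines. The build and test commands below are invented stand-ins, and a real setup would use a CI server triggered by the version control system rather than polling:

```python
import subprocess
import time

def latest_revision() -> str:
    """Current revision of the integrated source (assumes a git checkout)."""
    result = subprocess.run(["git", "rev-parse", "HEAD"],
                            capture_output=True, text=True)
    return result.stdout.strip()

seen = None
while True:
    subprocess.run(["git", "pull", "--quiet"])     # pick up every change
    revision = latest_revision()
    if revision != seen:
        build = subprocess.run(["make", "build"])  # invented build command
        tests = subprocess.run(["make", "test"])   # invented test command
        ok = build.returncode == 0 and tests.returncode == 0
        print(f"{revision[:8]}: {'OK' if ok else 'BROKEN'}")  # feedback per change
        seen = revision
    time.sleep(60)  # poll; "as soon as possible" would use a commit hook
```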

Often people want to include their particular situation, need or interpretation in a definition. This mostly generates bloat, and it might also be one of the causes of “requirements creep”.

If a definition is taken to encompass everything related, the definition becomes useless.

Remedy

The best way to handle over-generalisation is of course to break things down into concrete, or at least less general, items. For source code this is already addressed in Agile software development techniques. As the saying goes, don’t implement anything that the current set of tests doesn’t require. This makes us do the simplest thing that could possibly work, which is good advice in any circumstance and at the core of Agile.

However, sometimes this is not accepted by some of the people involved or affected. A common cause of this resistance to simplification is that it can be perceived as putting one person’s particular definition or need out of scope. One technique here is to separate that particular part of the definition out into an item of its own. It is important not to let this come across as removing it, or as implying that it has less value. This way it is usually possible to reach consensus on a set of tighter, more concrete items.

Once a set of more concrete items has been agreed upon, prioritisation can commence. It is important to have this prioritisation step when handling all types of items, since it separates the assignment of importance from the concretisation.

The Fifth Element

Uncle Bob has recently published a new book, “Clean Code”. In it, as I understand it, and in his energetic dinner keynote at the Agile 2008 conference in Toronto, he talks a lot about Craftsmanship. He proposed a fifth element for the Agile Manifesto, “Craftsmanship over Crap”, or as he revised it on his blog, “Craftsmanship over Execution”.

I fully agree that our work requires craftsmanship, ethics and a lot of other things to be carried out with quality. However, I do not agree that there is a need for a fifth element; I think that if we examine what we mean by “Working Software”, it is not needed.

“Working Software” does not mean “it’s done, since it is running”. To me it means that it actually works, in the sense that it fulfills *all* of its purposes, “fitness for purpose” if you will. This means that not only does it function, it also gives a positive user experience, is fit for future change, has enough quality not to surprise, and so on. I strongly believe that the term quality does include all dimensions of such fitness.

Usually “Quality” is used in a more restricted sense, which takes away from its broad spectrum of value. Again, by taking “Working Software” to include this dimension we don’t need any specific craftsmanship element. If anything, I think that David Andersson’s suggestion of a fifth element has more merit. He proposed a fifth element focusing on Continuous Improvement, which I think is at the heart of Agile, and unfortunately missing from the Manifesto.

Blogs about my Toronto presentation

Ronica, a coach from Rally, has blogged about my “GTD + Kanban + Round Robin for Product Owners” presentation. It is actually a good and correct summary of the content, so go read it!

In a summary of pull system presentations at Agile 2008, Corey Ladas mentions my presentation. He doesn’t indicate whether he went to see it or not 😉 Jim at Modus specifically says he won’t be going, but points to Kanban and Pull Systems presentations at Agile 2008 in this blog. Karl Scotland recommended my presentation as one of the ones you should attend. Again, it is unknown if he went.

Toronto presentation

[Image: view from my Toronto hotel room]

I have just finished doing my presentation at Agile 2008 in Toronto. The presentation was titled “GTD + Kanban + Round Robin for Product Owners” and was an extended version of one of my lightning talks from “Agila Sverige” in July. I will get back and blog a bit about how the idea was born and how we actually used it in my latest project. Meanwhile I will provide the presentation here in Flash and PDF format.

The Theory of X and Y (and Z)

I am sitting here at Schiphol airport, Amsterdam, waiting for my flight to Toronto. (A 4.5-hour transit, so I have not much else to do but write a new blog post…) I had a quick look at the books in the bookstore and my eyes fell on a couple of small books with titles like 50 ideas you really need to know about: 50 concepts within a specific subject, each described in just a page or two. One of them was about management ideas, written by Edward Russell-Walling. I flipped through it; most of the concepts were of course known to me already, but there was one I hadn’t heard of before, or maybe forgotten: the Theory of X and Y. Douglas McGregor described these two theories in his 1960 book The Human Side of Enterprise, arguing that the management style of a leader reflects his or her view of human nature. The idea is that they are two contrasting ideas about how people act and should be treated.

Theory X says that people are lazy and self-centered and must be kept under control with hard rules and firm, detailed management. These people can only be controlled, and only work if their basic lower-level needs (physiological and safety, according to Maslow) are at stake. Theory Y is of course the opposite: people take responsibility, are full of initiative and will find their own solutions to any problem if they are treated as mature adults, drawing on the higher Maslow levels.

Theory Z is a later mix devised by the Hawaiian-born William Ouchi, aimed at combining American and Japanese practices.

So, is the management view of people, or at least of the developers in your organisation, shifting towards Theory Y? Maybe that is a trend in many areas, but it is interesting to see such an explicit correlation made between American (lower-level needs, threats, control) and Japanese (higher-level needs, inspiration, self-direction) management styles.


Single-point, Multi-point

There are many things that are different in the “new world of development” compared to “the old”. One of the more subtle, but perhaps most fundamental, is the way that decisions and communication have switched characteristics.

When a developer was developing code “in the old days”, he or she had a task defined by some designer, project manager or architect. This was usually the only point of communication available for discussing the features. But frequently sales people or other customer contacts would show up at the developer’s desk and ask about features, estimates or changes that could be made “since doing them at the same time would not cost anything”. This single-point communication with multi-point control has been one of the primary sources of unpredictability in the “old” way. Although detailed plans were made and project managers tried everything possible to keep them, the urge from business to rush and push unplanned things into the workload of development ensured failure to meet the plans, no matter how strict the control applied.

In agile development, one of the most important techniques is to enable developers to communicate with stakeholders of various kinds: users, customers, business people. This multi-point communication is essential to ensure that valuable features are developed. Decisions, on the other hand, are focused around the Product Owner and his power of prioritization, which is exactly enough to work as a single point of decision. In conjunction with the multi-point communication it gives “right-sized” decisions.

So removing single-point communication with multi-point control, and replacing it with single-point control and multi-point communication, gives us a controlled flow of tasks and predictability in development.