TDD Series Part 1 – First write a test? Not.

I’m an avid TDD’er. I just love it. I teach it a lot. To those who think they know it and to beginners. And it is not so easy to grok. And maybe it’s an acquired taste. I feel it is an essential tool in a programmer’s bag of tricks, and am amazed that so few actually know it well enough. And that there are still those who actively denounce it. It is not universally applicable, true, but in my mind, if you consider yourself a programmer, you must know how to do it fluently. Period.

In a few installments I plan on showing the way I do it. And possibly allow a few others to acquire the taste, and some proficiency. Maybe even some more seasoned coders out there will pick up some tricks. Or teach me some…

For me TDD is very much about three words: focus, confidence and pace.

TDD done right allows, and forces, you to focus. We all know that focus is key to progress. But also that you usually only manage to focus on smaller, clear things. Big, fuzzy things are hard to pinpoint, to focus on, and, of course, to get any progress on. So the first step to TDD is partitioning into smaller chunks.

Let’s get started. You have probably read, or heard, that the first thing that you should do is write a test.

Wrong.

Continue reading “TDD Series Part 1 – First write a test? Not.”

Texas Hold’em with Cgreen

I’ve been involved with the development of Cgreen for a few years, so when Software Craftsmanship Linköping asked me to do a TDD session for them I obviously chose that as my basis.

Cgreen is nice for allowing modern TDDing in C (and C++) using a fluent API, mocks and the rest.
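
To give a flavour, here is a minimal sketch of what a Cgreen test can look like. The deck function is a stub I invented to make the example self-contained, not code from the session:

    #include <cgreen/cgreen.h>

    /* Hypothetical production code, stubbed here so the example
       compiles on its own; in the kata it grows test by test. */
    static int deck_size(void) { return 52; }

    Describe(Deck);
    BeforeEach(Deck) {}
    AfterEach(Deck) {}

    Ensure(Deck, a_new_deck_has_52_cards) {
        assert_that(deck_size(), is_equal_to(52));
    }

    int main(void) {
        TestSuite *suite = create_test_suite();
        add_test_with_context(suite, Deck, a_new_deck_has_52_cards);
        return run_test_suite(suite, create_text_reporter());
    }

Link with -lcgreen and the text reporter prints a readable run, failures and all.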

I talked and we coded. I selected the Texas Hold’em kata, which is interesting because of the multitude of dimensions that need to be covered. It is also a good kata to retry, experimenting with different orderings of the tests. (Actually, I did it from memory and got it wrong: players have 2 private cards, and community cards are dealt until a player folds. So the tests below are inaccurate.) Continue reading “Texas Hold’em with Cgreen”

Xrefactory to c-xref – refactoring C with Emacs

For a long time, probably around a decade, I have been using a refactoring tool built by the Slovak researcher Marián Vittek. It was probably one of the first refactoring tools to cross the “Refactoring Rubicon“.

It is an Emacs plugin that adds refactoring, navigation, completion and cross-reference functionality for the C language. There is also some Java support, and they built a commercial C++ version.

Mostly it just works. Of course it has some trouble with heavy macro usage, and it’s missing a few basic refactorings; e.g. it doesn’t correctly extract an expression into a function returning a value, so you need to edit the result. I hadn’t really thought much about it until I started developing on a new computer and just took a quick look for a new version. I knew the project was kind of hibernating, so I hadn’t been up to date with events.

To my sadness the xref-tech site was no more. After some googling I found that the C version had survived as a SourceForge project that Marián had created back in 2009.

This post is very much a payback for the good service Xrefactory has given me over many years. And a strong recommendation to you to look into c-xref if you are into Emacs and C programming.

Do you know of any similar tools for C?

From CVS to Git, the short story

I decided, finally, to move my main hobby project from CVS to Git. I wasn’t new to Git but I hadn’t worked with it for real. So I thought it was a good idea to start doing that and learning the ropes.

Of course there were two parts to that: first migrating the repository, which this blog post will not talk about at all. My only tip is to do that on a genuine Linux system; on Cygwin I ran into a lot of problems.

The second part was to start learning to use Git on a day-to-day basis. So here’s a very short tutorial on Git for CVS users. Continue reading “From CVS to Git, the short story”

Debugging memory leaks with Valgrind and GDB

While debugging memory leaks in one of my private projects, I discovered that GDB and Valgrind can actually operate together in a very nice fashion.

GDB is capable of debugging remote programs, as in embedded software development, by using a remote protocol to communicate with a proxy within the device.

Valgrind is an almost necessary tool if you are working in an environment of dynamically allocated and returned memory. It follows each allocation in your program and tracks it to see if it is returned properly, continues to be referenced, or is lost in space, which is a ‘memory leak’. And as with any leak, given enough time you will drown; in this case the program requires more and more memory, until either your program is eating up your whole computer, or you run out of memory. Continue reading “Debugging memory leaks with Valgrind and GDB”
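
As a minimal sketch of how the two hook together (the leaky toy program is invented for illustration; the Valgrind and GDB commands are the real ones):

    #include <stdlib.h>
    #include <string.h>

    /* A deliberately leaky program. */
    int main(void) {
        for (int i = 0; i < 10; i++) {
            char *buffer = malloc(100);   /* allocated... */
            strcpy(buffer, "some data");  /* ...used... */
        }                                 /* ...but never freed */
        return 0;
    }

    /* Run it under Valgrind's built-in gdbserver:
     *
     *   valgrind --vgdb=yes --vgdb-error=0 ./leaky
     *
     * and attach GDB from another terminal through the vgdb proxy:
     *
     *   gdb ./leaky
     *   (gdb) target remote | vgdb
     *   (gdb) monitor leak_check full reachable any
     *
     * Now you can set breakpoints and step as usual, and ask
     * Valgrind about leaks at any point, with the full program
     * state available for inspection. */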

XP2012

I’m just back from XP2012, which was held at Malmömässan in Malmö, Sweden. So when it’s “at home” you just can’t miss out. I stayed almost the whole week and felt injected with a lot of inspiration, which is exhausting.

While, in my eyes, the program didn’t look quite on the same level as some of the previous XPs, I had some very inspiring sessions, including an information-packed keynote by Dave Snowden, a promising workshop with Tobias Anderberg and Ola Ellnestam, a presentation on agile contracts by lawyer Lars Ahrred which brought hope and tips for future cooperation, and a workshop with Ivana Gancheva and Bent Myllerup on the coaching/teaching balance and the new concept “Validated Influencing”.

The Open Space session about “Deliberate Practice” that I initiated became an extreme success thanks to Willem Larsen, David Campey and Markku Åhman. Thanks guys.

Willem also did a language/fluency hunting session which was inspiring to see. I’ll take away the “Live” tool (as a participant: being engaged, inspired, finding forms that make us get into that operating mode).

The conference dinner prepared by Jan Boris-Möller (an engineer turned master chef, challenging almost every cooking preconception) was so much more interesting after his presentation and the following conversation with him on the height of chefs’ hats, the chemistry and physics of cooking, and waiters selling what works (rather than what the customer thinks he wants).

My own presentation, a remake of my “Agile Analysis” now titled “Continuous Analysis, or Kanban for Product Owners”, also got some traction.

Increments and iterations

Describing the difference, and the similarities, between the two words iteration and increment has been very hard for most of us. Using paintings has never really “clicked” with me…

But now there is a nice and clarifying description by Eivind Nordby. With the help of some well-known guys, and maybe someone not so well known, Eivind takes a step forward in understanding the concepts and explains that both can be applied in the process and the product dimensions, but might still mean either “adding” (incremental) or “reworking” (iterative).

And I suppose it’s the fact that both addition and re-work can be applied in both the process and the product dimension that makes it so hard to pinpoint and describe.

In a “true” agile sense we really like re-work in the process dimension (repeating the activities, so that we can get good at them). But we dislike re-work in the product dimension (it could be considered “unknown amount of work left to do”) because we want the functionality to be Done Done. In real life, though, mostly we aren’t really, really Done Done. Sometimes because of misunderstandings, time constraints and what not, but also not seldom because of the “systems implementation uncertainty principle”: the fact that implementing a system changes both the perception of that system and the needs it should fulfill.

So I guess that we should continue to strive for pure incrementality in the product dimension, but sometimes accept a “failure” and then iterate a bit. Particularly to get the early feedback that is so essential for delivering the functionality and properties that are really needed, and not the perceived needs.

The (im)possibility of planning development

When I talk to project managers, or managers in general, one of their main concerns is the precision, or lack thereof, of their planning. It is still common for development projects to overrun their deadlines, resulting in frustration, loss of money and trust, and causing a lot of extra work in re-planning dependent activities. So many managers look to Agile for a solution to this problem.

But very few seem to realize the inherent problem in planning development work. It is not uncommon for managers of large projects to think of planning as a simple process of converting required functionality to man-hours and then allocating enough people to do the hours. It seems to work when planning other types of projects, so why shouldn’t it work for development?

Well, first, does it really work for other types of projects? Software people have always been blamed for being the worst when it comes to planning; road work and house building are always on time. Well, no, they’re not. At least not most of them.

What is development, really? Some people view software development as a production process: from the requirements, manufacture the software. Sometimes it can be, but usually we then quickly create a tool that can do that repetitive work for us. So what’s left? Only the parts that are not repeatable. The ones that require engineering and design. That means that development work is a creative process. Or rather, it is problem solving. Constantly solving new problems is what development is. At its core it’s like solving a continuous flow of crossword puzzles. And as with crossword puzzles, some things are easy, some are as hard as we thought, and some are much, much harder. Did you ever give up on a crossword puzzle?

So can you tell me how long it’s going to take you to solve the Sunday crossword puzzle? Of course you can’t. You don’t know what the questions will be, what problems you will have to solve. So development work is inherently unplannable. Period.

How can we then promise anything at all about when something will be ready? Agile planning uses statistical methods to get planning to work. And statistical methods need multiple values to work; you can’t use a single value as a basis for statistics. And you don’t do statistics by guessing, you measure. And the more data points you have, the better your statistics will be.

Doing the Sunday crosswords for a year will give you 52 data points, giving you reasonable confidence in your guess for the next one. It still won’t give you any guarantees for how long the next one will take, but on average you will know. If you wanted to know the total time for the next 52, you’d have a pretty good guess.

If you do various sizes and types of crossword puzzles, you could probably find some statistical correlation between the number of squares or questions in a puzzle and the time it took. This adds to your statistical samples, maybe up to a few hundred squares over a year, increasing the statistical confidence of your future projections.

If you want to have good statistically based projections you need many actual samples, and many planned samples. And how does Agile planning help us with that?

By

  • breaking down functionality into small parts
  • always including everything required to keep quality
  • measuring average development speed

Because development work is problem solving, we need statistical support for our planning, and because we need statistical support for our planning, we need many samples. The agile techniques to do that are small stories, done criteria, velocity and story points. And as many of them as we can get.
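
As a back-of-the-envelope sketch (all numbers invented), the arithmetic behind a velocity-based projection is no more than this:

    #include <stdio.h>

    int main(void) {
        /* Invented sample: story points completed in past sprints. */
        const int completed[] = { 13, 8, 15, 11, 12, 9, 14, 10 };
        const int sprints = sizeof(completed) / sizeof(completed[0]);
        const int backlog = 120;   /* invented: story points left to do */

        /* Velocity is the measured average, not a guess. */
        int total = 0;
        for (int i = 0; i < sprints; i++)
            total += completed[i];
        double velocity = (double)total / sprints;

        printf("velocity: %.1f points/sprint\n", velocity);
        printf("projected sprints left: %.1f\n", backlog / velocity);
        return 0;
    }

The more measured samples behind that average, the more you can trust the projection, which is exactly why small stories, and many of them, matter.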