Monday, February 22, 2010

The way we plan

In our Agile team we have one-week iterations, and we plan our work every Monday morning. Our customer comes to our office, we gather around a table, and we go through the project backlog to choose the new stories to work on in the current week.
For most of the User Stories we already have an estimate, made in Story Points at the beginning of the project. These estimates are sometimes quite inaccurate, because when the project started some features looked very different to our eyes; so we have to re-estimate those stories. For this we use another unit of measure, the Pomodoro (look here if you don't know what I'm talking about).
After these estimates, we decide with our customer which user stories to work on in the current iteration, trying to balance business value against story cost. We plan stories to fit our work capacity for the iteration (e.g., 3 developer pairs each working 10 pomodori per day gives 50 pomodori per pair, so in a one-week iteration we can plan 150 pomodori in total).
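Just to make the arithmetic explicit, here's a minimal sketch of the capacity calculation, using the numbers from the example above (the constants are just our current team setup, not anything universal):

```python
# Iteration capacity, using the numbers from the example above.
PAIRS = 3               # developer pairs working in parallel
POMODORI_PER_DAY = 10   # pomodori one pair completes in a day
DAYS_PER_ITERATION = 5  # one-week iteration, Monday to Friday

capacity_per_pair = POMODORI_PER_DAY * DAYS_PER_ITERATION  # 50 pomodori
total_capacity = capacity_per_pair * PAIRS                 # 150 pomodori

print(f"Capacity per pair: {capacity_per_pair} pomodori")
print(f"Total iteration capacity: {total_capacity} pomodori")
```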
Of course, while we discuss features with the customer, some new stories may be introduced, some may get split, and others may be deferred to future releases.

There's something wrong with this. First of all, estimating in Pomodori requires a lot of deep analysis to produce an accurate estimate. This slows the estimating process down a lot, because we end up talking about implementation details. Of course our customer gets bored soon, since he's not technical. We introduced this way of estimating after the first iterations of the project, because we were estimating in Story Points and our accuracy was very low; we never tried to switch back to Story Points afterwards, even though our accuracy may have improved by now. Just to mess things up further, we also use a "fake" Story Point measure, obtained by simply dividing pomodori by 10: we estimate 10 pomodori, and we write down 1 story point. Using real Story Points could speed up our planning a lot, because we could step up to a coarser granularity and estimate user stories by comparing them with the ones we have already completed.
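To show how hollow the "fake" unit is, here's a tiny sketch of the conversion we currently apply (the function name is mine, just for illustration):

```python
def pomodori_to_fake_points(pomodori):
    """Our current 'fake' Story Points: 10 pomodori become 1 point."""
    return pomodori / 10

# A story estimated at 10 pomodori goes on the board as 1 story point...
assert pomodori_to_fake_points(10) == 1
# ...so these 'points' are just pomodori with the decimal point shifted,
# not relative estimates obtained by comparing stories to each other.
```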

Another problem we're having is that we're splitting stories too much. We're following a rule of thumb like "no stories bigger than 2 points". The rule was voted in during a retrospective, after a couple of weekly failures: we had some big stories (4-point stories, say) rejected by our customer because of minor issues, so we decided to split them up, to minimize the impact of a rejected user story on the iteration score. What's wrong with this approach is that we're just changing the way we measure the system in order to get better-looking numbers. Think about it: an iteration ends, and you discover a bug during the demo. Of course you'll have to fix it in the following iteration, and of course that will cost you some additional work. Whether the iteration score reads 2-10 or 5-7, nothing changes in the system: you still need to do the additional work. The only thing that changes is a couple of numbers on a spreadsheet. Moreover, different iteration scores affect only the short-term velocity, and have no effect on the long-term velocity. The short-term measure gives little confidence and matters much less than the long-term one, so there's no point in chasing better short-term results. This "hack" on measurement, in our case, is also introducing new problems: we spend time figuring out ways to split stories correctly, we introduce unneeded dependencies between stories, and we end up with stories that are really difficult to demo to the customer.
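A quick numeric sketch of that last point, with entirely hypothetical scores: whether a rejected story costs the iteration 5 points or only 2, the rework carries over to the next week, and the velocity averaged over several iterations comes out the same:

```python
# Hypothetical scores: points accepted per one-week iteration.
# Scenario A: a big 5-point story is rejected whole in week 1
# (7 of 12 points accepted), then fixed and accepted in week 2.
unsplit = [7, 12 + 5, 12, 12]

# Scenario B: the story was split, so only a 2-point slice is rejected
# in week 1; the fixed slice is accepted in week 2.
split = [10, 12 + 2, 12, 12]

weeks = len(unsplit)
print(sum(unsplit) / weeks)  # 12.0 points/week long-term velocity
print(sum(split) / weeks)    # 12.0 points/week, exactly the same
```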