Hitting a Blurry Target
One of the lessons I'm learning about working well in an Agile environment is that vague tasks are fine, and large tasks are fine, but vague large tasks make problems. Vague large tasks that are interlinked with other vague large tasks make large problems. I ended up spending the entire sprint on a task I had estimated would be done within two days. The task got done, at nearly the last minute and with a large assist from one of my teammates, but it was a close call and made for a stressful tail end of the sprint.
What Went Wrong?
As is often the case in programming: state is the enemy. In this case, not mapping out all of the potential state combinations led to underestimating the complexity, because the sheer breadth of possible states wasn't obvious when looking at the task during estimation.
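The breadth problem is easy to see once you write the state axes down: independent dimensions multiply. A minimal sketch, using entirely hypothetical state names (none of these come from the actual task):

```python
from itertools import product

# Hypothetical state dimensions for a single view. Each axis looks
# small on its own, which is why the total isn't obvious at a glance.
auth = ["logged_out", "logged_in", "session_expired"]
network = ["online", "offline", "retrying"]
data = ["empty", "loading", "loaded", "error"]
edit_mode = [False, True]

# The cartesian product is every state the view could be in.
combinations = list(product(auth, network, data, edit_mode))
print(len(combinations))  # 3 * 3 * 4 * 2 = 72 states to reason about
```

Four axes that each seem trivial still yield dozens of combinations, and a two-day estimate rarely survives that.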
How to Make It Not Happen Again?
Partially, by breaking tasks down into more bite-sized chunks before taking them on. Breaking a view into subviews, building each of those in isolation, and integrating at the end where possible would have kept the story more manageable. Some tasks are large or risky and can't be broken into pieces, but large tasks need more scrutiny up front to make that call.
After that, there's the question of how to divide the work among interlinked tasks, and there isn't a right answer to generalize there. The tighter the coupling between tasks, the more likely they should be a single unit of work rather than two tasks with separate owners, but it's a call that needs to be made per item.
The big answer on how to make it work better is probably building tests. That's hard to do for complex state, but having a representation of all of my states ready for review before anyone else tested my code would have saved time and helped avoid regressions. It also would have spared test cases that required checking multiple layers for issues that could be making the view behave badly before even checking my own view code.
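One way to build that representation is a table-driven test: enumerate every state combination and the expected behavior, so the full state table is reviewable before anyone tests the view by hand. A sketch, with a hypothetical view-model function and state names (not the code from the actual task):

```python
# Hypothetical view-model logic: which banner the view should show
# for each (network, data) state pair.
def banner_for(network: str, data: str) -> str:
    if network == "offline":
        return "offline_banner"
    if data == "error":
        return "retry_banner"
    if data == "loading":
        return "spinner"
    return "none"

# The full state table, written out explicitly. A reviewer can spot a
# missing or wrong case here without running the app.
expected = {
    ("online", "loaded"): "none",
    ("online", "loading"): "spinner",
    ("online", "error"): "retry_banner",
    ("offline", "loaded"): "offline_banner",
    ("offline", "loading"): "offline_banner",
    ("offline", "error"): "offline_banner",
}

for (network, data), want in expected.items():
    got = banner_for(network, data)
    assert got == want, f"{(network, data)}: got {got}, want {want}"
print("all state combinations covered")
```

The table itself is the artifact worth reviewing: if a combination is missing from it, that's a conversation to have during estimation, not during the last days of the sprint.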