Defects: What is the point of estimating them?


If you ask a dozen agile practitioners "what’s the point of estimating defects?" you are likely to get more than a dozen answers.

For most agile teams, one answer is simple and straightforward: if a defect is found that needs to be addressed, and there is no plan to fix it in the current iteration or release, it should go into the team’s backlog and be estimated and prioritized just like any other backlog item.

Even this seemingly simple scenario can be complicated and controversial in some agile circles.

Some see it simply as a way to manage a team’s capacity, maintain visibility and transparency, and, most importantly, keep tabs on potential technical debt.

How to deal with defects of this kind (and perhaps enhancement requests as well) is one of many choices teams should make on their own or in conjunction with their programs or release trains. How defects are managed is less important than ensuring consistency across the organization.

A much more controversial question is "what about defects that are found during an iteration?"

Here there are two basic types:

1.  Defects found by the developer/creator during regular unit testing before a merge. Most would agree that these types of defects should not be estimated but instead should be addressed as part of normal development work.

2.  Defects found by Quality Services after the merge to the main line, but before the end of the iteration and with enough time remaining to work on them. In these cases the Product Owner needs to determine the defect’s severity and whether it should be prioritized over already planned work that could be impacted by the “new work” added to the iteration backlog. If the decision is to defer the fix and the PO is willing to accept the delivered functionality as-is, then the backlog scenario above should probably be followed. Here is where things get muddled, though.

Some teams will apply a universal “estimate” to a defect and add it to the iteration backlog as if it were just another story. This has the effect of inflating velocity when originally planned work is deferred to accommodate the “new work”: velocity will appear to remain stable even though the originally planned work has been pushed out. Over time, if this pattern repeats itself, the original estimate to completion will not be met without some intervention or reprioritization. More importantly, the fact that there may be a problem on the team will not be visible, which can delay a plan to resolve the issue.
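The arithmetic behind that inflation is easy to see with a small sketch. The story names and point values below are hypothetical, chosen only to illustrate how counting defect points as delivered work masks the deferred story:

```python
# Hypothetical iteration: 16 points planned, but one 8-point story is
# deferred to make room for a defect estimated at 8 points.
planned_stories = {"story A": 5, "story B": 3, "story C": 8}
completed_stories = {"story A": 5, "story B": 3}   # story C pushed out
defect_points = 8                                  # defect "completed" instead

# If defect points count toward velocity, the team appears to deliver
# the full 16 points even though half the planned scope slipped.
reported_velocity = sum(completed_stories.values()) + defect_points
planned_work_velocity = sum(completed_stories.values())

print(reported_velocity)      # 16 -- looks like a stable, on-plan sprint
print(planned_work_velocity)  # 8  -- the signal that planned work slipped
```

Release forecasts built on the reported number will quietly drift, because the burn rate against the original backlog is half of what the velocity chart suggests.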

Other teams will add a task (or tasks) to the original story to account for the extra time needed by a team member. This will cause the burndown chart to show a spike and the capacity estimates vs. actuals to indicate a change.

Let’s face it: defects happen. Even in the best of circumstances there will always be issues with software. How teams and organizations deal with them is the real secret sauce. In agile we don’t use defects as sticks to beat our teams with, but as opportunities to review the process with an eye toward improvement. We do root cause analysis to discover why things happen and then adjust the process to make incremental improvements.

We try our best to fail fast, and we don’t punish for it; we understand it, we embrace it, we look to fix it.

What is the right answer? Like many agile operational guidelines, the answer is: it depends.

Are your teams new or mature? The same question applies to the organization as well.

What is the point of estimating defects? Maybe it actually is pointless after all.

Decide, and whatever you do, be consistent and transparent, and you won’t go wrong.