Tuesday, July 17, 2012

Software Deliverability and Predictability


The process of software quality does not need to be unpredictable. By that I mean: I've walked into software-producing shops big and small, all operating on the idea that QA is successful by providing visibility and feedback on the number of bugs, the severity of bugs, the number of builds rejected, and so on (Quality Analysis reporting, awesome for testing airliners).

The better shops require that people provide an assessment of the impact (based on where the code broke in the grand scheme of the architecture, so we understand the rippling effect when a change is made, and whether it impacts the deadline). So we spend a lot of time on tools that uncover bugs quickly, and on tools that report and track how many bugs there are when a feature is "code complete" or "dev complete". And we give status on quality by reporting the number of defects uncovered in the release candidate, and perhaps the speed at which they were uncovered.
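To make the kind of "Quality Analysis reporting" described above concrete, here is a minimal sketch of tallying defects by severity at a code-complete checkpoint. The defect records and severity labels are invented for illustration; a real shop would pull these from its bug tracker.

```python
# Illustrative only: count defects by severity at a "code complete" checkpoint.
# The records below are made-up sample data, not a real tracker export.
from collections import Counter

defects = [
    {"id": 101, "severity": "critical"},
    {"id": 102, "severity": "major"},
    {"id": 103, "severity": "minor"},
    {"id": 104, "severity": "major"},
]

# Tally how many defects fall under each severity label.
by_severity = Counter(d["severity"] for d in defects)
print(dict(by_severity))  # {'critical': 1, 'major': 2, 'minor': 1}
```

This is exactly the sort of after-the-fact reporting the rest of this post argues is insufficient on its own: it describes what happened, but says nothing about whether the product is shippable.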

This makes the software quality process unpredictable and unmanageable as a timely deliverable. At the end of the delivery cycle, someone still needs to make the thumbs-up-or-down decision based on the risk accepted (and acceptable), judged from a sampling of the bugs uncovered in such a short amount of time.

The release/quality process is unpredictable because the team makes it unpredictable. If your team lays out the criteria up front for what makes the release acceptable (I'm not talking just about TDD, which might work well in a small shop with no QA, but not in a massively integrated ecosystem with lots of testers); builds and puts in place the systems to measure those milestones along the way; sets the agreement on what risk tolerance is acceptable (not just the number or types of bugs); and defines every criterion of a shippable product, backed by an infrastructure that can continuously measure it, then you will have a quality product that can be contained, not within a pre-set boundary of time, but in the fastest manner possible.
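The idea of defining shippability criteria up front and measuring them continuously can be sketched as a release gate. Everything here is a hypothetical example: the metric names, thresholds, and the `evaluate` function are assumptions chosen to illustrate the approach, not a prescription.

```python
# Hypothetical sketch: encode release criteria agreed up front as
# machine-checkable gates, so "shippable" is measured continuously
# rather than judged at the end. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class BuildMetrics:
    open_blockers: int            # defects that block release outright
    regression_pass_rate: float   # 0.0-1.0, from the automated suite
    code_coverage: float          # 0.0-1.0, on changed modules
    perf_regression_pct: float    # % slowdown vs. previous release

@dataclass
class ReleaseCriteria:
    max_open_blockers: int = 0
    min_regression_pass_rate: float = 0.98
    min_code_coverage: float = 0.80
    max_perf_regression_pct: float = 5.0

def evaluate(metrics: BuildMetrics, criteria: ReleaseCriteria) -> list:
    """Return the list of unmet criteria; an empty list means shippable."""
    failures = []
    if metrics.open_blockers > criteria.max_open_blockers:
        failures.append(f"{metrics.open_blockers} open blocker(s)")
    if metrics.regression_pass_rate < criteria.min_regression_pass_rate:
        failures.append(f"regression pass rate {metrics.regression_pass_rate:.0%}")
    if metrics.code_coverage < criteria.min_code_coverage:
        failures.append(f"coverage {metrics.code_coverage:.0%}")
    if metrics.perf_regression_pct > criteria.max_perf_regression_pct:
        failures.append(f"perf regression {metrics.perf_regression_pct}%")
    return failures

# Run the gate against a (made-up) nightly build.
nightly = BuildMetrics(open_blockers=0, regression_pass_rate=0.99,
                       code_coverage=0.85, perf_regression_pct=2.0)
print(evaluate(nightly, ReleaseCriteria()))  # [] -> shippable
```

The point of the sketch is that the thumbs-up/thumbs-down decision becomes a continuously computed answer rather than a last-minute judgment call: any build, at any time, either meets the agreed criteria or reports exactly which ones it misses.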

We often look to fixed Sprints in Scrum, which define this pre-set boundary of time that requires us to ship a deliverable. The QA process typically still trails along at the tail end of the development cycle. Developers forgo testing when they discover this pre-set boundary of time is actually not enough, and dip into testing time. Instead of developers doing any sort of testing or quality analysis, we begin to assume the mentality that, "I have a QA team; that's what they're paid to do anyway." And now we have a completely unpredictable product, where the pressure is placed on QA to decide whether it is shippable. Yikes! To further add to the scariness, the development team not only stops checking the quality of what it just wrote, it forgoes all efforts to add hooks into the software for the quality analysis tools to even do their jobs, putting the automation team into a state of panic where they resort to manual testing. Double yikes!!
