Sequel: the processes behind Software Quality Assurance

Tools, methods and principles such as those presented in our previous article are a sound basis for creating a solid product, but they need to be embedded into a good process. So let’s dive into today’s topic:

The processes

Designing a good process is hard. Fortunately, standard process frameworks are readily available: CMMI and Scrum are two famous standards, the first very detailed and very broad in scope, the second agile with a focus on software development in single teams. Knowing and understanding such standards is already an asset, even if you are not aiming for 100% compliance – you will not need every tool in the box, but you should at least be familiar with them. In general, we strive to continuously improve our processes – not as an explicit procedure but more as a habit. For instance, regular lessons learned and retrospectives help us to identify potential problems and improvements, and we act accordingly.

For the development of PTV xServer, we use Scrum as a basis and adapt and extend it as we see fit. We usually combine classic approaches such as requirements, change, configuration and risk management as well as roadmap planning with agile approaches such as daily standup meetings, task planning and tracking on the wall, retrospectives, public sprint reviews and time-boxed iterations. This mixed approach is fairly pragmatic and tends to work very well.

Requirements management

is arguably the process with the greatest potential impact. In Scrum, you keep a backlog and write stories. It is essential to clarify the motivation behind a requirement so that the engineers assigned to implement it can start thinking on their own, which helps both to save time and to reach a better solution. In some cases, generalizing a specific requirement might even be easier than carrying out the initial request. Requirements are regularly discussed with stakeholders, evaluated and refined by technical experts, and finally prioritized and scheduled. Since multiple teams are usually involved, requirements must first travel top-down from the originator to the impacted teams and their specialists, and then bottom-up again with feedback and proposals from the specialists to the originator and product managers.

We endeavor to maintain traceability not only from high-level requirements (product management level) to low-level requirements (team level, stories), but also to work packages (stories or task notes), tests and reviews – and all of this across departments.

Once requirements make it into the backlogs, we can start the design and implementation.

This is mostly driven by the methods and tools already discussed. However, processes may define standards for the use of tools and methods, which is important because it allows engineers from other teams to help out. Although every department has a lot of freedom, things such as coding standards should be company-wide. And they are at PTV. We have a dedicated “code quality” expert group, staffed from different departments, that defines in-house coding standards and ensures their use through reviews and tool configurations.

The implementation is followed by the validation and verification phases. Of course, these are only logically identifiable as phases – you carry out validation and verification on multiple levels, from individual changes up to full releases, usually heavily intermixed with other activities. The only “phase” that can easily be identified is the one just before a release, where verification is the primary activity.

However, the key to good quality lies in frontloading – start your V&V as early as feasible:

  • when you finish your work on a procedure / module, write and run automated unit tests (see the sketch after this list),
  • when you finish your change, write and run automated integration tests,
  • have tests run during continuous integration,
  • at regular intervals, during development, perform regression tests (did former bugs come back, maybe during merges?), performance tests (did response times degrade?), endurance tests (are there memory leaks?), stress tests (is the system still robust under heavy load?),
  • after major technological changes, and before releases, perform installation tests (does the system still start and run in the intended environments?).
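
To make the first point concrete, here is a minimal sketch of such an automated unit test. The TravelTime module is purely hypothetical and only serves as an illustration, and we assume JUnit 5 as the test framework – any xUnit-style framework would do just as well:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    // A deliberately tiny, hypothetical module as it might exist inside a routing component.
    final class TravelTime {

        /** Travel time in seconds for a distance in metres at a speed in km/h. */
        static double seconds(double distanceMetres, double speedKmh) {
            if (speedKmh <= 0) {
                throw new IllegalArgumentException("speed must be positive");
            }
            return distanceMetres / (speedKmh / 3.6);
        }
    }

    // Unit tests written as soon as the module itself is finished.
    class TravelTimeTest {

        @Test
        void tenKilometresAtHundredKmhTakeSixMinutes() {
            assertEquals(360.0, TravelTime.seconds(10_000, 100.0), 1e-9);
        }

        @Test
        void zeroOrNegativeSpeedIsRejected() {
            assertThrows(IllegalArgumentException.class, () -> TravelTime.seconds(10_000, 0.0));
        }
    }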

You may have to run manual tests as well, which will require a test plan. Also, code reviews are a valuable verification method; one advantage of pair programming is the built-in peer review. At PTV, we sometimes also apply Tom Gilb-style inspections for critical concepts, designs and code: these are elaborate forms of review and can be highly effective if done right.

Regular code walkthroughs and refactoring sessions

help to keep the architecture and the general code structure healthy. We perform them with manageable effort, both with and without concrete triggers.

Bugs that slipped past our various test suites lead to new tests being added to those suites as part of the bug fixing process. This is important to prevent regressions: code merges and other modifications can easily reintroduce bugs that had already been fixed.
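
To illustrate, here is a minimal sketch of such a regression test, reusing the hypothetical TravelTime module from the sketch above; the ticket number and the described defect are made up:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Regression test added while fixing a (hypothetical) ticket: a refactoring had changed
    // the formula so that a distance of exactly zero produced NaN instead of 0 seconds.
    // Keeping the test in the suite prevents the bug from silently coming back,
    // e.g. through a later merge.
    class TravelTimeRegressionTest {

        @Test
        void bug4711_zeroDistanceTakesZeroSecondsAgain() {
            assertEquals(0.0, TravelTime.seconds(0.0, 50.0), 1e-9);
        }
    }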

As a general rule, bugs should be fixed as soon as feasible. A severe bug can and should interrupt current development work until its cause and a solution have been found. In many cases, hunts for less severe bugs can be postponed until a feature is completed, but you should always keep an eye on the bug count. A heap of open minor bugs can easily become a major problem in its own right. To counteract a continuous ramp-up of bugs, we reserve time before major releases to fix as many minor bugs as we can.

We not only keep track of bug counts but also collect other measurements. In Scrum, you track your progress, speed and estimation accuracy. Software metrics can help you to find good spots for improvement. Test coverage measurement helps you to judge how thorough your tests are. There are hundreds of potentially useful measurements, but all of them take time and effort, so we only measure what we actually evaluate and only evaluate what we find useful for decision making.

The logically final step of the development cycle is the validation of the results.

As with verification, one should validate as early as feasible in order to avoid building the wrong product. Our iterative, incremental development process supports this fairly well: every sprint is followed by a presentation of the results, which are regularly inspected by internal customers and upper management. We also have several internal and external feedback channels and have started to provide internal alpha and external beta releases for early adopters.

But, in the end, even the best process is worthless if people are not working to keep it alive. Therefore, in our next article we will take a closer look at the people who are probably the most influential factor in product quality.
