Saturday 21 September 2013

Creating a decent submission gate

In continuous integration (CI), a well-functioning submission gate is crucial (and it matters in non-CI processes too). A submission gate is the set of criteria a change has to pass in order to be accepted into the common codebase. Note the difference: this is about accepting single changes into a baseline, while the definition of done is about accepting a feature. A feature is typically composed of several changes.

Key characteristics of good submission gate criteria:

  1.  It should prevent the breakage of the common codebase
  2.  It should be fluent and swift to use
  3.  It should be reliable


To prevent breakage, we need good enough coverage. However, coverage is limited by the need to keep passage through the gate quick. Therefore, we need to pick an optimal set of activities; a sketch of a minimal gate runner follows the list. These should include:

  • Static code analysis for detecting problems such as possible memory leaks
  • Tests for frequently used features, whose breakage would block many users of the baseline
  • Tests for areas where breakage could prevent a major part of further, more expensive testing
  • Tests for easily broken features
  • Unit testing done beforehand
  • Code review
  • Creation of change metadata
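
To make this concrete, here is a minimal sketch of a gate runner in Python. The specific check commands (flake8, pytest, the directory layout) are illustrative assumptions, not part of any particular project; substitute whatever static analyser and test runner your codebase actually uses. The ordering principle is the point: cheap, fast checks run first so a broken change fails the gate as early as possible.

```python
#!/usr/bin/env python3
"""Minimal submission-gate runner: run each check in order, stop at the
first failure. The commands below are placeholders for illustration."""
import subprocess
import sys
import time

# Ordered fast-first: cheap static checks before slower test suites.
GATE_CHECKS = [
    ("static analysis", ["flake8", "src/"]),              # placeholder tool
    ("unit tests",      ["pytest", "tests/unit", "-q"]),  # done beforehand, re-verified here
    ("smoke tests",     ["pytest", "tests/smoke", "-q"]), # frequently used features
]

def run_gate():
    for name, cmd in GATE_CHECKS:
        start = time.monotonic()
        result = subprocess.run(cmd)
        elapsed = time.monotonic() - start
        if result.returncode != 0:
            print(f"GATE FAILED at '{name}' after {elapsed:.1f}s")
            return 1
        print(f"{name}: OK ({elapsed:.1f}s)")
    print("GATE PASSED - change may be merged")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```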


Quite a set, and all of it needs to be fast! For static code analysis there are good tools available, and they usually provide reports that enable quick fixing of the problems they surface, so they are very useful. Code review is very important and, when organized properly, an inexpensive way to discover problems that testing typically can't find.

Change metadata refers to all the formalities related to change management, e.g. creating a ticket in the change-management tool. This is often a heavy part of the process and should be optimized so that creating changes stays fluent while management still gets enough information.
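
One way to lighten this step is to generate most of the metadata automatically from version control instead of having the submitter retype it. A sketch, assuming a git repository (the JSON output and its eventual destination are assumptions; the change-management tool's API is project-specific):

```python
import json
import subprocess

def git(*args):
    """Run a git command and return its stripped stdout."""
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

def collect_change_metadata():
    # Pull the formal details from version control instead of
    # asking the submitter to retype them into a ticket.
    return {
        "commit":  git("rev-parse", "HEAD"),
        "author":  git("log", "-1", "--pretty=%an <%ae>"),
        "subject": git("log", "-1", "--pretty=%s"),
        "files":   git("diff", "--name-only", "HEAD~1").splitlines(),
    }

if __name__ == "__main__":
    # This JSON could then be posted to the change-management tool.
    print(json.dumps(collect_change_metadata(), indent=2))
```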

Tests need to be selected based on the above criteria, automated (for fluent use), and quick. But we must remember that the tests also need to be reliable! That is a major challenge, as many things can fail: bugs in the test scripts, failures in the software or hardware environment, and so on. We will never have 100% reliable tests (unless we test only very simple things), so we need to be prepared for random failures. What should we do when a test fails?

  • Discard the change that seemed to break the test, if we trust the test and our analysis of the results supports that view.
  • Run the test again a few times to check whether the failure is random (a sketch of such a retry-and-classify policy follows this list). How many times is enough? Do we have time and resources for retesting? A random failure may also be caused by the change at hand, so we should run further tests against older software stacks in our codebase as well. We may also classify the failure as random if it has already appeared in earlier test runs.
  • If the same failure has already appeared occasionally before, report an error and get a high priority for fixing it. Perhaps we should even drop the test until we receive a fix? Running a test that is known to fail is not sensible; it only breeds irritation. On the other hand, the problem should be fixed quickly, because while the test is out of use or failing randomly, new errors that would also fail the test may enter our codebase without us noticing.
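
Here is a small sketch of such a policy in Python. The retry budget and the known-flaky bookkeeping are assumptions for illustration; the post does not prescribe specific numbers:

```python
import subprocess

MAX_RETRIES = 3  # assumed budget; tune to available time and resources

def run_test(test_cmd):
    """Run one test command; True means it passed."""
    return subprocess.run(test_cmd).returncode == 0

def classify_failure(test_cmd, known_flaky):
    """Decide whether a failing gate test should block the change.

    Returns one of: "pass", "flaky", "blocked".
    """
    if run_test(test_cmd):
        return "pass"
    # Already seen failing intermittently on older baselines?
    # Then report it for high-priority fixing instead of blocking.
    if tuple(test_cmd) in known_flaky:
        return "flaky"
    # Re-run a few times to separate random failures from real breakage.
    for _ in range(MAX_RETRIES):
        if run_test(test_cmd):
            return "flaky"
    # Consistent failure: discard (reject) the change.
    return "blocked"
```

A "flaky" verdict would let the change through but file a high-priority report, while "blocked" rejects the change outright.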


There are a lot of things we should be doing when designing and operating a submission gate. It will never be perfect; we will always sacrifice some speed, coverage, or reliability. So we should aim to make it decent. The most important aspect of a submission gate, however, is always fast feedback, because thorough coverage is more a requirement for further testing.
