If you’re going to build a website or app, you need to face facts: it’s going to have bugs. Bugs have existed in software since Grace Murray Hopper’s team famously found a moth trapped in a relay of the Harvard Mark II computer.
Bugs are a fact of software life, because humans are fallible. We make mistakes, we misunderstand specs, we write algorithms incorrectly, or we use the wrong variable. We believe that instead of trying to write perfect code, we should spend just as much effort building a good trap for the bugs we will inevitably create. Testing, therefore, is the saviour of poorly-written code.
But there is a huge difference in the approaches agencies take to testing. These range from highly-formalised processes to a more laissez-faire approach.
Some agencies follow a practice called Test Driven Development (TDD), where developers write tests before they write any code. This makes testing a very formal part of the development process.
Automated testing, where developers write code to test the system automatically, is often an important part of TDD, but it can be used in more traditional testing regimes too. Examples include unit testing, which exercises the individual parts of a system in isolation, as well as performance testing. The great advantage of automated tests is that they are cheap to run and can execute automatically every time your software is built. Unfortunately, the flip side is that developing these tests can be costly.
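As a rough illustration of what a unit test looks like in practice, here is a minimal sketch using Python's built-in unittest module. The `apply_discount` function is a hypothetical stand-in for a small piece of application code; the test class exercises it in isolation, exactly the kind of check that can run on every build:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by the given percentage (hypothetical example function)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Unit tests exercising one small part of the system in isolation."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percentage_is_rejected(self):
        # A good trap catches bad inputs as well as bad arithmetic.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Tests like these run in milliseconds (for example via `python -m unittest`), which is why a build server can execute the whole suite on every commit at essentially no cost — the expense is in writing and maintaining them, not in running them.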
At the loosest end of the spectrum, devs are given free rein to undertake testing without any clear test plans or evidence of testing completed. This can reduce the upfront development costs of a project, but it brings with it the risk of incomplete or patchy testing, or a system that goes live with bugs remaining.
Testing is an important aspect of software development, and as a buyer, you should ensure that you understand how your agency carries this out. As with all procurement decisions, you’re balancing risk against cost - thorough cross-browser testing of every browser that’s ever existed may be beneficial in an ideal world, for example, but you may simply not have the budget for this.
At Carbon Six Digital, we tend to follow a very formal testing process where the test strategy is agreed at the start of a project. Detailed test plans are produced in advance, which can then be cross-referenced back to the customer requirements in our Traceability Matrix, with testing logs updated as testing progresses. This enables us to be in control of all aspects of testing, ensuring that we have good test coverage, as well as giving us a tool to effectively measure progress and track any issues that arise.
Next time you commission a new website, talk to us about how we manage testing on Umbraco and Mobile App projects to ensure that only a quality system goes live. And take a look at our ‘How to acceptance test an Umbraco project’ blog for more on this, too.
Lastly, if you are suffering the impact of a poorly-tested Umbraco site, don’t despair - our Health Check service can help you quickly get things back under control.
When starting out on a new Umbraco project, it's really important to know and communicate what features and other requirements you have.
There are many ways of communicating these requirements, but here at Carbon Six Digital, we use a spreadsheet called the Prioritised Requirements and Traceability Matrix. Long name, but easy idea.
We’ve previously talked about producing a Prioritised Requirements List, which is a crucial first step in commissioning a new Umbraco site. In this post, we look at the other side of this template, the Traceability Matrix.
The purpose of this side of the template is to track the…