Conversion Rate Optimisation

20th Jul 2016

6 min

Quality Assurance (QA) is an essential step in any production environment (both online and off), and previously I have been spoilt by having a designated testing team do the research for me. Removing myself from the matrix of code to research and define a QA process has certainly been a fun (interesting and frustrating) experience, and while there are tools and services out there that can do this, we were looking for something more tailored, flexible and easy for our clients to access.

QA for A/B testing should not be overlooked. Whilst our process may seem tedious, we want to ensure that our clients reach an accurate conclusion based on users’ experiences that have not been hindered by CSS discrepancies and broken JavaScript. Different browsers and devices can cause CSS and JavaScript rendering issues, and in my own experience, where this has been a ‘blocker’ I have instantly left that site. That’s not going to help your conversion, at all.

What browsers and devices should I choose? Initially, I created a list of the most common web browsers and devices collated from gs.statcounter.com. These statistics only give an indication of overall usage and are therefore not indicative of our clients’ customers’ browser and device choices, so that list was obviously short lived. This information, however, is easily accessible in Google Analytics (a.k.a. GA), and now each of our clients has their own tailored Media QA document which outlines the devices and browsers that receive the largest proportion of traffic, based on their GA statistics.

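As a rough illustration of how that tailored list can come straight out of GA, here is a minimal sketch (TypeScript on Node) that reads a browser report exported from GA as a CSV and keeps adding browsers until roughly 90% of sessions are covered. The file name, the column layout and the coverage threshold are illustrative assumptions rather than part of our actual tooling.

```ts
// A minimal sketch, assuming a GA browser report exported as CSV with
// "Browser,Sessions" columns. The file name and the 90% coverage threshold
// are illustrative assumptions.
import { readFileSync } from "fs";

interface BrowserShare {
  browser: string;
  sessions: number;
}

const rows: BrowserShare[] = readFileSync("ga-browser-report.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // skip the header row
  .map((line) => {
    const [browser, sessions] = line.split(",");
    return { browser, sessions: Number(sessions) };
  });

const totalSessions = rows.reduce((sum, row) => sum + row.sessions, 0);

// Work down from the most used browser until ~90% of traffic is covered;
// the browsers collected so far become the QA target list for this client.
const qaTargets: BrowserShare[] = [];
let covered = 0;
for (const row of [...rows].sort((a, b) => b.sessions - a.sessions)) {
  qaTargets.push(row);
  covered += row.sessions;
  if (covered / totalSessions >= 0.9) break;
}

console.table(qaTargets);
```
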
At first glance, the list looks quite small. However, it is wise to remember that you are not quality assurance testing a site rebuild. At this moment in time, you want to target the media where the bulk of traffic comes from, which will allow you to prove or disprove some test hypotheses. Perfecting a test for every browser and device would not be a productive use of development time, and building a test suite to go further would be costly. Tools like BrowserStack boast 100% accuracy for cross-browser functionality, and combined with our own personal devices this is ample, and so far proving accurate.

I would advise that QA tests should be conducted across the oldest and latest media versions documented. Testing across the lowest common denominator version of media will give you an accurate representation of how the test should run across modern media, as newer versions tend to be backwards compatible; to be sure of this, as I mentioned above, you should still carry out QA testing across the latest version too. Lower traffic media sources should by no means be disregarded. They can be excluded from the experiment to protect users from exposure to any rendering issues you may be unaware of that could be detrimental to them, and to protect your data from being skewed.

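How that exclusion happens depends on your testing tool (audience and targeting settings are usually the right place for it), but a cheap safety net is a guard at the top of the variation code itself. The sketch below is hypothetical: the specific feature check and user-agent pattern are assumptions, and applyVariation() stands in for the test’s own code.

```ts
// A sketch of a defensive guard in the variation code: if the browser lacks a
// feature the variation relies on, or matches a version we have chosen not to
// QA, bail out and leave the control experience untouched. The checks here
// are illustrative assumptions, not a definitive list.
declare function applyVariation(): void; // the test's own code, assumed to exist elsewhere

function shouldRunVariation(): boolean {
  // Prefer feature detection over user-agent sniffing where possible.
  const supportsFlexbox =
    typeof CSS !== "undefined" && CSS.supports("display", "flex");
  const isLegacyIE = /MSIE [6-8]\./.test(navigator.userAgent);
  return supportsFlexbox && !isLegacyIE;
}

if (shouldRunVariation()) {
  applyVariation();
}
```
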
Your A/B test may be running, for example, within the header section of your site (or in a few locations), and you want to make sure that any parent and sibling elements are not affected by the variation code (a rough, automatable check for this is sketched after the list below). It would be useful to create a document like I have done, consisting of:

  • A (not exhaustive) list of common elements that you find across your client’s sites, from the header to the footer and anything in between, across multiple pages: for example, navigation cosmetics, navigation functionality, banners, carousels and login forms.
  • A configuration sheet which contains information about audiences, targeting, goals, code optimisation, linting etc.
  • A user flow to map out the user journey and all the actions they are able to carry out along this journey.
  • Status – where the client can submit their approval (hopefully)

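As mentioned above, part of step one can be roughed out in code: snapshot a handful of common elements that sit outside the test area before the variation runs, then compare afterwards. The sketch below is only an illustration of the idea; the selectors, the properties compared and applyVariation() are all assumptions, and it complements rather than replaces eyeballing each browser and device.

```ts
// A rough sketch of a sanity check for elements outside the test area:
// record each element's position and size before the variation is applied,
// then flag anything that has moved or resized. Selectors are assumptions.
declare function applyVariation(): void; // the test's own code, assumed to exist elsewhere

const watchedSelectors = ["header nav", ".site-banner", "#login-form", "footer"];

type Snapshot = Record<string, DOMRect | null>;

function snapshot(): Snapshot {
  const result: Snapshot = {};
  for (const selector of watchedSelectors) {
    const el = document.querySelector(selector);
    result[selector] = el ? el.getBoundingClientRect() : null;
  }
  return result;
}

function changedSelectors(before: Snapshot, after: Snapshot): string[] {
  return watchedSelectors.filter((selector) => {
    const a = before[selector];
    const b = after[selector];
    if (!a || !b) return a !== b; // element appeared or disappeared
    return (
      a.top !== b.top || a.left !== b.left ||
      a.width !== b.width || a.height !== b.height
    );
  });
}

const before = snapshot();
applyVariation();
console.warn("Elements shifted by the variation:", changedSelectors(before, snapshot()));
```
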
Steps one to three should be checked per outlined media and marked off as complete. The document can then be shared with the client for them to duplicate steps one to three, in the hope that they return it with a ‘pass’ status. Like I mentioned at the beginning of this post, this may seem a bit tedious, however being thorough will help to eliminate data invalidity caused by test discrepancies, which is crucial. That’s not to say issues never arise during QA, or even while a test is running (we’re only human). We make mistakes and can overlook things, but having a process like this can reduce that chance significantly.

Once the test is live, I would highly recommend that a monitoring process takes place over the first day or so. We choose to monitor screen recordings, which give us some early insight into how users are finding the test, as well as highlighting any issues that may have been overlooked during the QA stage. Additionally, we inspect GA to confirm our data is feeding through and goals are being tracked.

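For the goal-tracking side of that check, it helps to know exactly what hit the variation is supposed to send, so you can look for it in GA’s real-time reports. Below is a minimal sketch using the classic analytics.js event syntax; the category, action and label values are illustrative assumptions and would need to match the goals actually configured in GA.

```ts
// A minimal sketch of firing (and confirming) a GA event for the test, using
// the classic analytics.js ga() queue. The event fields are illustrative
// assumptions; in practice they mirror the goals configured in GA.
declare const ga: (...args: unknown[]) => void; // provided by analytics.js

function trackVariationGoal(variation: string): void {
  ga("send", "event", {
    eventCategory: "AB Test - Header",
    eventAction: "CTA Click",
    eventLabel: variation,
    hitCallback: () => console.log("GA hit sent for", variation),
  });
}

trackVariationGoal("variation-1");
```
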
To summarise the QA process and mention a few points not included above:

  • Choose high-traffic browsers and devices from GA, noting down the oldest and latest versions
  • Run your code through a linter and through the W3C validator
  • DRY (do not repeat yourself)
  • Check the test doesn’t affect page loading time (a quick way to measure this is sketched after this list)
  • Outline all the elements that your test could affect and check each of them, per outlined browser and device
  • Create the user flow, outlining the actions the user should be able to carry out during the test, and take this journey in each outlined browser and device
  • Check all goals are firing to GA and to your testing tool (should you have set these up)
  • Share with your office and get them to carry out the user journey
  • Once the client has provided approval, re-check audiences, targeting and finally go live

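On the page loading time point, a quick, rough measurement can be taken in the browser console with the Navigation Timing API, comparing the variation against a baseline recorded on the control. This is a sketch only; what counts as an acceptable difference is down to your own judgement.

```ts
// A rough sketch of checking load time with the variation applied, using the
// Navigation Timing API. Compare the figure against a baseline measured on
// the control experience before going live.
window.addEventListener("load", () => {
  // loadEventEnd is only populated once the load handler has finished,
  // so defer the read to the next tick.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;
    const loadTimeMs = nav.loadEventEnd - nav.startTime;
    console.log(`Page load with variation: ${Math.round(loadTimeMs)} ms`);
  }, 0);
});
```
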
As an extension to this process, we want to measure the QA and live test success rate, logging each test’s approved/failed status as well as any issues (or the absence of them) noticed while the test was live. Hopefully, by doing so, we can achieve a 95% QA and live test success rate that will only instil further confidence in our processes.