Global App Testing (GAT) is a crowd-testing software QA platform that relied on manual workflows to complete much of the testing requested by customers. This limited the efficiency and scalability of our scaleup. I designed a system of rules and conditions that automates some tasks and assigns the rest to human test managers.
🏆 GAT's test manager to customer ratio improved from 1:2 to 1:6
🏆 The amount of test manager time taken up by manual interventions decreased from 90%+ to <40%
🏆 We generate structured data that highlights opportunities for automations and optimisations
🏆 We operate tests with a system-first approach that will enhance the adoption of future improvements
GAT as a business relied heavily on manual processes and human effort to complete the testing activities requested by customers. This manual dependency posed a risk for GAT, as it limited the scalability of the business and reduced the profitability of our tests. We needed to improve how GAT runs tests through automation and efficiency gains.
Succeeding in this project required a deep understanding of the existing processes and requirements for tests to be managed to completion. I interviewed numerous stakeholders from Ops to understand exactly what goes into running a test.
As soon as I began user research, it became clear that there was a deeper root-cause problem that needed to be addressed. Much of the work being done by test managers was invisible: it was either being done outside of the GAT platform, or wasn't generating structured data that we could use as a trigger for automation and standardisation.
How might we reveal and generate structured data for all tasks completed by test managers?
Based on countless user interviews and shadowing sessions, my team and I began constructing a map of the test manager process. We used this to uncover the tasks and triggers involved in running a test. I then used it to highlight visually where in the process the system had visibility, where it didn't, and where there were opportunities for automation.
I took a critical view of the process, looking for opportunities to remove tasks or change them, rather than simply re-creating the existing process.
With an understanding of the as-is process, I began generating ideas with my product trio and external stakeholders for ways we might give the system visibility of the manual actions taken during the course of a test.
The direction I chose to pursue was to avoid individual features and instead develop a system of tasks and triggers. Actions that can be automated are automated; for those that can't, we define conditions which, when met, generate a task for a human to complete (for example, conditions indicating that a tester requires support).
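A minimal sketch of how such a tasks-and-triggers system might work. All names, fields, and thresholds here are illustrative assumptions, not GAT's actual implementation; the point is the shape: each rule is a condition plus either an automated action or a human task.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Test:
    """Illustrative test record holding the state a condition might inspect."""
    id: str
    tester_idle_hours: float = 0.0
    results_submitted: bool = False

@dataclass
class Rule:
    """A condition plus what to do when it is met."""
    name: str
    condition: Callable[[Test], bool]
    automated_action: Optional[Callable[[Test], None]] = None  # None → needs a human

def run_rules(test: Test, rules: list) -> list:
    """Evaluate each rule: automate where possible, otherwise queue a human task."""
    human_tasks = []
    for rule in rules:
        if rule.condition(test):
            if rule.automated_action:
                rule.automated_action(test)    # the system handles it
            else:
                human_tasks.append(rule.name)  # structured task for a test manager
    return human_tasks

# Example: a tester who has gone quiet may need support from a human.
rules = [
    Rule("Check on tester", lambda t: t.tester_idle_hours > 24),
]
print(run_rules(Test(id="T-1", tester_idle_hours=30), rules))
# prints ['Check on tester']
```

Because every human task is generated from a named condition, each completion produces structured data the system can later use to automate that task away.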
This approach is impactful for two key reasons. Firstly, the test management process is driven by the system, which increases the impact and adoption of any process changes. Secondly, and most importantly, it allows the system to be wrong and fail. These failures can be used to improve the system, without humans making invisible allowances for an imperfect design.
To assist adoption of this new system, I leveraged the existing internal management tool used by GAT. This means less context switching for users and a UI they're already familiar with. I also opted for manual completion of tasks, so users receive the satisfaction of ticking tasks off, further reinforcing the process.
When designing conditions for the system, I took a highly iterative approach to reduce the time to learning and to account for what would inevitably be a high adoption barrier. I first designed holistic conditions indicating that tasks needed to be completed in a general area of the process, e.g. set up a test, check on progress, deliver results. As these tasks began to be completed, I would design additional tasks that were smaller in scope.
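The broad-to-narrow iteration described above can be sketched as two generations of rules. The rule names, fields, and thresholds are hypothetical examples of how a broad "check on progress" condition could later be decomposed into narrower, more specific ones:

```python
# Iteration 1: a single broad condition — "this test needs a progress check".
broad_rules = {
    "Check on progress": lambda t: t["started"] and not t["results_submitted"],
}

# Later iterations: once the broad task is routinely completed and understood,
# it is replaced by narrower conditions that pinpoint *why* a check is needed.
narrow_rules = {
    "Chase inactive tester": lambda t: t["tester_idle_hours"] > 24,
    "Review flagged bug reports": lambda t: t["flagged_reports"] > 0,
    "Extend test window": lambda t: t["hours_remaining"] < 4 and t["coverage"] < 0.8,
}

# An in-flight test with an idle tester and low coverage near the deadline.
test = {"started": True, "results_submitted": False,
        "tester_idle_hours": 30, "flagged_reports": 0,
        "hours_remaining": 2, "coverage": 0.5}

broad_tasks = [name for name, cond in broad_rules.items() if cond(test)]
narrow_tasks = [name for name, cond in narrow_rules.items() if cond(test)]
print(broad_tasks)   # prints ['Check on progress']
print(narrow_tasks)  # prints ['Chase inactive tester', 'Extend test window']
```

Each generation of rules produces more specific structured data than the last, which is what makes the next round of narrowing (and eventual automation) possible.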
This system-led way of working became the standard for GAT and continues to be developed as additional opportunities for process improvement are identified.