Flight Centre's transformation program, Copernicus, aimed to revamp and unify the booking systems used by its travel agents around the world. As we released new features in different regions, it was important to track how the system was used in a scalable and efficient way. Learning fast and continuously to identify issues enabled us to iterate and improve, creating value for our users. We put together a framework to learn how agents used the system as the program progressed, and to make data-informed decisions about solving our next most important problems.
QUALITATIVE AND QUANTITATIVE INSIGHTS
Understanding why users do something (their scenarios, intentions, expectations, and so on) and its context is as important as seeing what they're doing with the product. With this in mind, we ran in-person usability tests to understand 'the why', and used a UX analytics tool, Fullstory, to help us 'see' what agents were doing. We aimed for the best of both worlds: rich qualitative insights from conversations with our users, and the scalability of gathering data from around the world.
With our in-person usability testing, we started by defining our goal for the research. This clarified the purpose and aligned the team in a clear direction. Next, we noted down our assumptions and hypotheses to validate, and any questions we wanted answered. Asking 'what is the next most important thing to learn?', and knowing what resources we had available, formed the scope of each session. The Usability Test Plan Canvas helped us document all of this clearly and efficiently.
Practicality was key, so we tested with up to five users per session to maximise our insight yield with minimal effort. Small tests let us learn fast and succeed quickly.
For the UX analytics piece, Fullstory was our chosen tool for insights into our users' engagement and usage. Through its session replays, we watched the events and interactions of agents around the world, enabling us to cross-reference what we observed and heard in the in-person sessions. This added reliability and further validated our findings.
With the vast amount of insights gathered, how did we know which ones were the most important to work on right now? By rating the amount of friction something caused for a user, then seeing how often it happened. This focused our efforts on the problems causing the biggest issues that happened the most frequently. This is how the Friction-Frequency matrix was born.
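As a rough illustration (not Flight Centre's actual tooling), the Friction-Frequency rating described above can be sketched as a simple quadrant classification. The `Insight` structure, the 1-5 scales, and the quadrant names are all assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    summary: str
    friction: int   # 1 (minor annoyance) to 5 (blocks the agent's task); assumed scale
    frequency: int  # 1 (rare) to 5 (happens constantly); assumed scale

def quadrant(insight: Insight) -> str:
    """Place an insight into a Friction-Frequency quadrant (hypothetical labels)."""
    high_friction = insight.friction >= 3
    high_frequency = insight.frequency >= 3
    if high_friction and high_frequency:
        return "act now"        # big pain, happens often
    if high_friction:
        return "monitor"        # painful but rare
    if high_frequency:
        return "quick win"      # common but low pain
    return "backlog"

# Example ratings (invented for illustration):
for i in [
    Insight("Fare rules hidden behind extra clicks", friction=4, frequency=5),
    Insight("Date picker defaults to wrong month", friction=2, frequency=4),
    Insight("Crash when printing itinerary", friction=5, frequency=1),
]:
    print(f"{i.summary}: {quadrant(i)}")
```

The high-friction, high-frequency quadrant is where the team's attention goes first; the other quadrants stay visible without demanding immediate action.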
Identifying the most important problems to solve now triggered ideation and sketching activities, creating design prototypes, reviewing the solutions, then estimating their feasibility with the engineers and solution architects. One caveat we found when rating friction was that it can be quite subjective. We overcame this by going back to our notes and artefacts about our users, and digging up evidence from previous user interviews and research sessions.
PUTTING IT ALL TOGETHER
How does it all tie together? As new insights are gathered, whether from Fullstory or from in-person usability testing, they are added to the Insights Bucket and clustered around specific themes. Those insights are then rated in the Friction-Frequency matrix and mapped relative to what already exists. The matrix becomes a continuous, living artefact that shifts and changes as new insights are added. Once the top five high-friction, high-frequency problems are identified, they are ready to be actioned. We use a Kanban board to track progress on each problem we're solving.
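The flow above (add to the Insights Bucket, cluster by theme, rate, then surface the top five problems for the Kanban board) can be sketched in a few lines. The bucket structure, theme names, and the combined friction-times-frequency score are illustrative assumptions, not the actual Copernicus tooling:

```python
# A minimal sketch of the living Insights Bucket and Friction-Frequency ranking.
insights_bucket = []

def add_insight(summary, theme, friction, frequency):
    """Cluster a new insight under a theme and rate it as it arrives."""
    insights_bucket.append({
        "summary": summary,
        "theme": theme,
        "friction": friction,    # assumed 1-5 scale
        "frequency": frequency,  # assumed 1-5 scale
    })

def top_problems(n=5):
    """The n highest friction-frequency problems, ready to be actioned."""
    return sorted(
        insights_bucket,
        key=lambda i: i["friction"] * i["frequency"],
        reverse=True,
    )[:n]

# Example insights (invented for illustration):
add_insight("Fare quote takes too many steps", "quoting", friction=5, frequency=5)
add_insight("Seat map loads slowly", "booking", friction=3, frequency=2)
add_insight("Search filters reset on back navigation", "search", friction=4, frequency=4)

# Print the highest-scoring problems to pull onto the Kanban board.
for problem in top_problems(2):
    print(problem["theme"], "-", problem["summary"])
```

Because the ranking is recomputed over the whole bucket each time, the artefact stays "living": a newly added insight can displace an older one from the top five without any manual re-sorting.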
Aim to get both qualitative and quantitative insights from your research through small in-person usability tests (5 users) and UX analytics tools like Fullstory.
Use the Usability Test Plan canvas to align the team together on the purpose of the test and what the next most important thing to learn is.
The Friction-Frequency matrix will help focus efforts on solving the biggest problems for users that happen the most often.