Heuristica
An AI-driven SaaS platform designed to accelerate the UX audit process, based on the heuristic analysis metrics defined by Jakob Nielsen.
About the Hackathon
This challenge was hosted by NanoGiants, with the aim of developing MVPs over the weekend. The goal was to deliver functional software that can measure and evaluate the quality of user experience utilising AI-driven algorithms.
Role
Team Lead, Lead UX Designer
My contributions
• Led a team of 5 UX/UI designers over the course of the hackathon
• Led overall visual design of the final product
• Established an agile method to optimise workflow
• Produced a high-fidelity prototype within 54 hours and earned 1st place
Process
Research, Ideation, Sketching, Prototyping and Evaluation
Scope
54 hours, March 5th 2021 - March 7th 2021
Tools
Figma, Miro, Paper & Pencil
Day 1 - Ideation
We started by throwing around conceptual ideas for how we could approach this challenge with AI technology before diving into research. Some possible directions were:
AI for data protection
AI for design system
AI for wireframing automation
AI for content auditing
AI for heuristics analysis
In order to move forward, we asked ourselves the problem statements below before making the final decision and agreeing on AI for heuristics analysis as the most promising solution.
How can we simplify the UX research process while saving time and money?
What is the problem with the website?
Design errors and mistakes related to UX
How do we do the analysis? Can AI analyse the problem?
What tools do we need to give support on the analysis and process?
How do we look for consistency?
Where do users spend a lot of time? (tracking)
After establishing what problems we were solving, we discussed what our solution would look like.
A heuristic analysis is often used to identify a product's common usability issues. Experts use established heuristics (e.g. Nielsen and Molich's) to reveal insights that can help design teams enhance product usability early in development, or combine the method with usability testing during the development loop. The Interaction Design Foundation lists the pros and cons of this method as follows.
Evaluators have to go through a checklist of criteria to find flaws that design teams overlooked, the whole process costs considerable time and money in labour, and the results still risk being subjective and biased across different evaluators. For small-scale companies, or entrepreneurs who run their business single-handedly, this can be a huge burden.
Our goal was to build a product that utilises AI technology to train a machine learning model on Jakob Nielsen's 10 heuristic principles, accomplishing the evaluation at a lower cost in labour, time and money during the development process.
This offers a solution for two-sided targets:
For end customers, AI-driven software that evaluates websites to help improve usability.
For UX experts, an AI-driven tool that helps speed up the process of heuristics analysis.
Day 2 - Definition
What do we use to train the machine? Can AI analyse the problem?
We started early on the second day to set standards for each heuristic point and to research whether AI could mimic the human testing process. Our findings were limited by the lack of AI experts on our team, but we managed to use the insights to establish the foundation for our solution going forward.
What AI can do is analyse images to detect patterns in structure, rephrase sentences to create personalised copy, scan a website's structure to verify a clear hierarchy, and identify functional issues to optimise website efficiency.
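To make the "scan website structure" idea concrete, the sketch below shows what one such automated check could look like: flagging skipped heading levels, a common signal of unclear hierarchy. It is a minimal sketch in Python (3.9+), assuming the requests and beautifulsoup4 libraries; the function name and the skipped-level rule are illustrative assumptions, not part of our design-only prototype.

# Minimal sketch: flag skipped heading levels (e.g. an <h3> directly
# under an <h1>), a common signal of unclear visual hierarchy.
# Assumptions: Python 3.9+, `pip install requests beautifulsoup4`.
import requests
from bs4 import BeautifulSoup

def check_heading_hierarchy(url: str) -> list[str]:
    """Return warnings for headings that skip a level."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    previous_level = 0
    for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
        level = int(tag.name[1])  # "h3" -> 3
        if previous_level and level > previous_level + 1:
            text = tag.get_text(strip=True)[:40]
            issues.append(f"<{tag.name}> '{text}' skips a level after <h{previous_level}>")
        previous_level = level
    return issues

if __name__ == "__main__":
    for warning in check_heading_hierarchy("https://example.com"):
        print(warning)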
How can we simplify the heuristic analysis? Do people find the fundamental structure clear?
At this point, we decided to step back and look at the standardised 10 heuristic points. We realised that, for the machine to understand the dataset we provide, we needed to simplify the 10 principles and eliminate repetitive points, as well as points that are impossible for a machine to evaluate.
We took the heuristic evaluation checklist compiled by the UMKC University Libraries Usability Team as a reference and simplified the 10 principles into 4 sections: Aesthetics, Content, Navigation and Efficiency.
Furthermore, we defined the metrics using Jakob Nielsen's severity ratings to help us build the final scoring system.
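For reference, Nielsen's severity ratings run from 0 (not a usability problem) to 4 (usability catastrophe). The sketch below shows how such ratings could roll up into per-section scores for our four categories. It is a minimal illustration in Python; the penalty weights and the 0-100 scale are assumptions for the example, not the exact system we defined during the hackathon.

# Sketch: aggregate severity-rated findings into per-section scores.
# Nielsen's severity scale: 0 = no problem ... 4 = usability catastrophe.
# The penalty weights and 0-100 scale below are illustrative assumptions.
from dataclasses import dataclass

SECTIONS = ["Aesthetics", "Content", "Navigation", "Efficiency"]
SEVERITY_PENALTY = {0: 0, 1: 2, 2: 5, 3: 10, 4: 20}  # assumed weights

@dataclass
class Finding:
    section: str   # one of SECTIONS
    severity: int  # 0-4, per Nielsen's severity ratings
    note: str

def section_scores(findings):
    """Start each section at 100 and subtract a penalty per finding."""
    scores = {section: 100 for section in SECTIONS}
    for f in findings:
        scores[f.section] = max(0, scores[f.section] - SEVERITY_PENALTY[f.severity])
    return scores

# Example with two hypothetical findings:
findings = [
    Finding("Navigation", 3, "No breadcrumb trail on deep pages"),
    Finding("Content", 1, "Inconsistent button labels"),
]
print(section_scores(findings))
# {'Aesthetics': 100, 'Content': 98, 'Navigation': 90, 'Efficiency': 100}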
After establishing the core standard on how to train the machine to evaluate a website, we moved on to draft out the user flow to make sure the product’s information structure was clear enough.
We then continued to draw a few sketches of the key screens we would implement in the product.
We discussed which sketches could best represent our product and what each of us was good at, in order to assign the low- to high-fidelity wireframes. Here is a sneak preview of the low-fidelity wireframes.
We also adjusted our user flow at this point: after receiving feedback from NanoGiants, we realised a new opportunity to add features that would let users keep using our product.
Day 3 - Prototyping
We spent the whole third day finishing the high-fidelity prototype and preparing for the presentation. As our team was the only one without a developer, we aimed to deliver a well-crafted prototype that could simulate the real final software to compensate for our weakness.
I also created a moodboard and a manageable style guide so each team member could apply the same style to the key screens they were responsible for. This also helped us overcome the difficulty of not being able to co-design in the same Figma project, since none of us had a premium account.
I chose orange as the core colour to brighten up the whole product while maintaining a sophisticated tone to keep it professional. Instead of pure black and white, which can cause eye strain when reading long paragraphs, I picked dark purple and light orange as the base text and background colours.
And here is our final prototype for Heuristica, which we completed in 54 hours!
*The 3D image on the landing page takes some time to load. Please bear with it for a moment. :)
What I’ve Learned
Teamwork
My teammates (Kabelo, Koraljka, Raissa, Silvana) made me realise how meaningful a project could be, even without what we thought we lacked the most: developers and AI experts. It was a pity that NanoGiants didn't manage to balance out the participants' roles, which left us with 5 designers on the same team. Despite that dilemma, all of us contributed 110% of our effort and time to build the final product. I'm truly thankful for what we accomplished and for how this event brought us together.
Leadership
Since this was my first time leading a team for a UX project, and also my first time attending a hackathon, I was extremely intent on each team member's level of active participation. I wanted to make sure everyone's voice was heard and that they understood all opinions were welcome. I also wanted all members to be on the same page, knowing what we were building and staying aware of the ongoing process, so I wore myself out a bit keeping everyone up to date through private messages.
Next time, I would make sure communication runs through a single channel, which would be the most efficient way to work as a team throughout the process.
Where to Improve
Validation and User Testing
We didn't have enough time to conduct user interviews or surveys to validate our ideas. We finalised the product based on our own assumptions, so it wasn't a fully user-centric product, as no real users were involved in the development process.
Ideation Methods
Our ideation method wasn't the most effective. Ideally, we should have brainstormed through a mind map and worked through the major solutions for each idea. The same goes for our definition phase, when we decided to simplify the 10 heuristics into 4 sections: a decision like that should be made through in-depth discussion rather than purely individual research.
Future Possibilities
I found the feedback from the NanoGiants teams really insightful. They advised that we could develop the product in additional directions moving forward, such as building a Chrome extension plug-in on top of the original software and adding a function to review past analysis reports.