Overview
ACE Insights was created to help users better understand their spending behaviour and the value they receive from Airtime, using open banking and transaction data. The challenge was not a lack of data, but deciding how to surface insight in a way that felt useful, trustworthy, and actionable rather than overwhelming or intrusive.
This project focused on transforming raw financial data into insights that users could quickly understand and act on, while respecting sensitivity, trust, and regulatory constraints.
The problem
Through user feedback and behavioural analysis, we identified a core issue:
users were generating large volumes of transaction data but struggled to clearly understand what it meant for them.
Key problems included:
• Users could see rewards accumulating, but didn’t fully understand why or where value was coming from
• Transaction and spending data felt abstract rather than meaningful
• There was a risk of overwhelming users with charts, numbers or unnecessary detail
• Insight surfaces risked feeling intrusive or “surveillance-like” if not handled carefully
The challenge was to design an insight experience that felt helpful and empowering, not complex or uncomfortable.
What does success look like
Success for ACE Insights was deliberately not measured by frequency of use. Instead, it was about influencing users' spending habits while providing reassurance, understanding and perceived value without overwhelming them. In short, trust was the primary measure of success, with influence as a secondary metric.
The data we used
ACE Insights was grounded in both qualitative and quantitative signals:
Quantitative data:
• Open banking transaction data across connected accounts
• Reward earning behaviour and redemption patterns
• Engagement data showing how often users viewed rewards and wallet surfaces
Qualitative insight:
• User interviews highlighting confusion around how rewards were earned
• Feedback indicating users wanted reassurance that Airtime was “working in the background”
• Observations that users valued clarity and reassurance over deep financial analytics
This combination made it clear that the problem wasn’t access to data, but interpretation and prioritisation.
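To make that concrete, here is a purely illustrative sketch of what "prioritisation over access" could mean in practice: rather than surfacing everything we could compute, candidate insights are scored and capped. The fields, weighting and cap below are assumptions for illustration, not the shipped logic.

```typescript
// A purely illustrative sketch of "prioritisation over access": score each
// candidate insight and keep only the top few, rather than showing everything.
// The fields, weighting and cap are assumptions, not the shipped logic.
interface Candidate {
  summary: string;
  relevance: number;   // 0..1, how relevant to this user
  confidence: number;  // 0..1, how sure we are it's correct
}

function prioritise(candidates: Candidate[], cap = 3): Candidate[] {
  return [...candidates]
    .sort((a, b) => b.relevance * b.confidence - a.relevance * a.confidence)
    .slice(0, cap); // a few dependable insights beat exhaustive detail
}
```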
AI
AI was used to prototype different ways insights could be framed, summarised and explained, allowing me to test how varying levels of confidence, language and specificity felt before exposing anything to users. This helped ensure insights felt supportive rather than judgemental, and informative rather than invasive.
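As a simple illustration of that confidence and specificity testing, the sketch below shows how insight copy might step down from a specific claim to a softer one, and finally to silence, as confidence drops. The thresholds and wording are hypothetical, not the production rules.

```typescript
// Hypothetical example of stepping copy down as confidence drops: a specific
// claim at high confidence, a softer one mid-range, and nothing at all below
// the floor. Thresholds and wording are illustrative, not the production rules.
type Insight = { merchant: string; estimatedMonthlySpend: number; confidence: number };

function frameInsight(i: Insight): string | null {
  if (i.confidence >= 0.9) {
    return `You spend around £${i.estimatedMonthlySpend.toFixed(0)} a month at ${i.merchant}.`;
  }
  if (i.confidence >= 0.7) {
    return `It looks like ${i.merchant} is one of your regular spots.`;
  }
  return null; // better to stay quiet than make a wrong, specific claim
}
```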
During internal testing, AI supported rapid analysis of qualitative feedback, making it easier to surface recurring concerns around relevance, privacy and interpretation. This allowed the team to refine the MVP by removing or softening insights that were not consistently accurate or helpful.
Design
At this point I moved into ideation, exploring a wide range of ideas and possible solutions. We ran brainstorming sessions and collaborative workshops to make sure we came out of the design process aligned on the most promising concepts.
Internal testing
Before exposing ACE Insights to any users, we ran company-wide internal testing.
The purpose of this phase was to:
• Validate data accuracy across varied real spending patterns
• Identify incorrect assumptions in categorisation and recommendations
• Surface edge cases such as credit card usage, subscriptions, or irregular spending
• Sense-check tone and framing before risking user trust
This allowed us to remove or soften insights that looked intelligent but were not consistently accurate.
Only once the feature felt dependable internally did we move to external user testing. However, internal testing (dogfooding) did produce mixed feedback, which we worked to address.
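To give a flavour of the edge cases this phase surfaced, here is a hypothetical check for one of them: distinguishing subscriptions from one-off spending by looking for recurring charges with a stable amount at a roughly monthly cadence. The field names and tolerances are assumptions for illustration only.

```typescript
// Hypothetical edge-case check in the spirit of internal testing: flag
// merchants whose charges recur with a stable amount at a roughly monthly
// cadence, so they can be treated as subscriptions rather than one-off
// spending. Field names and tolerances are assumptions for illustration only.
interface Txn { merchant: string; amount: number; date: Date }

function looksLikeSubscription(txns: Txn[]): boolean {
  if (txns.length < 3) return false;
  const sorted = [...txns].sort((a, b) => a.date.getTime() - b.date.getTime());
  const amounts = sorted.map(t => t.amount);
  const stableAmount = Math.max(...amounts) - Math.min(...amounts) < 1; // within £1
  const gapDays = sorted
    .slice(1)
    .map((t, i) => (t.date.getTime() - sorted[i].date.getTime()) / 86_400_000);
  return stableAmount && gapDays.every(g => g >= 26 && g <= 35); // ~monthly
}
```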
User pain point
One thing to call out with this feature is users' opinions on how their data will be used. This could turn out to be a significant deterrent for some users. We have been explicit from the beginning about how we use their data through open banking, but now they can see first-hand that we know where they have made purchases, and that might feel inherently invasive to some users.
Iterations
We decided to build two versions of Insights so we could test with our users whether they preferred more transparency around their banking data or a more gamified experience. On the left is a version that shows 1-1 direct comparisons between places they have shopped previously and retailers on the Airtime platform where they could earn rewards. On the right is a persona-based score that improves the more users shop with retailers on our platform. We tested both of these to gather feedback and align on a direction.
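As a sketch of the logic behind the first version's 1-1 comparisons, the snippet below matches a user's existing merchants against platform retailers and estimates the reward they could have earned. The retailer names, rates and exact-name matching are illustrative assumptions; a real implementation would need fuzzier merchant matching.

```typescript
// Illustrative sketch of the 1-1 comparison concept: match merchants a user
// already shops with against retailers on the platform, and estimate the
// reward they could be earning. Retailer names, rates and the exact-name
// match are assumptions; a real system would need fuzzier merchant matching.
interface Retailer { name: string; rewardRate: number } // rate as a fraction

const platformRetailers: Retailer[] = [
  { name: 'costa', rewardRate: 0.05 }, // hypothetical examples
  { name: 'boots', rewardRate: 0.03 },
];

function comparisons(spendByMerchant: Map<string, number>) {
  const matches: { merchant: string; potentialReward: number }[] = [];
  for (const [merchant, spend] of spendByMerchant) {
    const match = platformRetailers.find(r => r.name === merchant.toLowerCase());
    if (match) matches.push({ merchant, potentialReward: spend * match.rewardRate });
  }
  return matches;
}
```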
What external testing taught us
External testing confirmed many of our concerns and helped streamline the direction.
Relevance and impact:
• Recommendations were generally seen as moderately relevant, not transformative
• Users appreciated awareness of alternatives, but convenience and habit often outweighed switching
• Location, budget and intent were critical context that recommendations sometimes missed
This reinforced that ACE Insights should inform decisions, not try to force behaviour change.
Improvements from testing
Both internal and external testing led to clear refinement opportunities.
Key improvements included:
• Reducing the number of found insights to avoid noise
• Prioritising summaries over granular data
• Improving categorisation accuracy and transparency
• Avoiding recommendations where confidence was low
• Being explicit when data coverage was limited
• Considering an opt-in model rather than automatic placement
These changes focused on protecting trust first, even at the cost of feature breadth. They also taught us that we may have been going too deep with this feature: users want to understand how their open banking data is being used, but this level of detail risked feeling too invasive.
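Several of these improvements can be summarised in one small gating rule: an insight only surfaces if categorisation confidence clears a bar and there is enough data coverage to stand behind it. The sketch below illustrates the idea; the thresholds are assumptions, not the values we shipped.

```typescript
// A minimal sketch of the "protect trust first" gating described above: an
// insight is only surfaced when categorisation confidence clears a bar AND
// there is enough data coverage to stand behind it. Thresholds are
// assumptions, not the values we shipped.
interface CandidateInsight {
  summary: string;
  confidence: number;   // 0..1 from categorisation
  coverageDays: number; // days of connected-account history behind it
}

function surfaceable(c: CandidateInsight): boolean {
  const MIN_CONFIDENCE = 0.8;   // assumed threshold
  const MIN_COVERAGE_DAYS = 60; // assumed: roughly two months of history
  return c.confidence >= MIN_CONFIDENCE && c.coverageDays >= MIN_COVERAGE_DAYS;
}
```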
Outcome and reflection
ACE Insights demonstrates how thoughtful UX can turn complex, sensitive data into meaningful product value. The project required restraint as much as creativity, with a focus on deciding what not to show and how to communicate insight without overwhelming or alarming users.
This work highlights my approach to designing data-led experiences: grounding decisions in user understanding, balancing transparency with simplicity and aligning user value with business outcomes.
Thank you