Overview
ACE Insights was created to help users better understand their spending behaviour and the value they receive from Airtime, using open banking and transaction data. The challenge was not a lack of data, but deciding how to surface insight in a way that felt useful, trustworthy, and actionable rather than overwhelming or intrusive.
This project focused on transforming raw financial data into insights that users could quickly understand and act on, while respecting sensitivity, trust, and regulatory constraints.
The problem
Through user feedback and behavioural analysis, we identified a core issue:
users were generating large volumes of transaction data, but struggled to clearly understand what it meant for them.
Key problems included:
Users could see rewards accumulating, but didn’t fully understand why or where value was coming from
Transaction and spending data felt abstract rather than meaningful
There was a risk of overwhelming users with charts, numbers, or unnecessary detail
Insight surfaces risked feeling intrusive or “surveillance-like” if not handled carefully
The challenge was to design an insight experience that felt helpful and empowering, not complex or uncomfortable.
What does success look like
Success for ACE Insights was purposely defined as:
• Users understanding insights without explanation
• Insights feeling relevant, not invasive
• No increase in discomfort around data usage
• Reinforced confidence that Airtime works in the background
• No erosion of trust due to incorrect, misleading, or over-confident recommendations
• Influence on changing shopping patterns
• Exploration of new or additional retailers on the platform
In short, success would be measured by trust first, with influence as a secondary metric.
The data we used
ACE Insights was grounded in both qualitative and quantitative signals:
Quantitative data:
• Open banking transaction data across connected accounts
• Reward earning behaviour and redemption patterns
• Engagement data showing how often users viewed rewards and wallet surfaces
Qualitative insight:
• User interviews highlighting confusion around how rewards were earned
• Feedback indicating users wanted reassurance that Airtime was “working in the background”
• Observations that users valued clarity and reassurance over deep financial analytics
This combination made it clear that the problem wasn’t access to data, but interpretation and prioritisation.
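To make that concrete, here is a minimal sketch of what interpretation and prioritisation mean in practice. The Insight structure, field names, and thresholds are illustrative assumptions, not the production ACE Insights pipeline.

```python
from dataclasses import dataclass

# Illustrative sketch only: fields and thresholds are assumptions,
# not the production ACE Insights data model.
@dataclass
class Insight:
    summary: str       # one-line, plain-language takeaway
    confidence: float  # 0..1: how sure we are the pattern is real
    relevance: float   # 0..1: how much it matters to this user

def prioritise(insights: list[Insight], max_shown: int = 3) -> list[Insight]:
    """Surface only a few high-confidence, high-relevance insights.

    Most candidates are filtered out rather than shown, because the
    problem was prioritisation and interpretation, not access to data.
    """
    confident = [i for i in insights if i.confidence >= 0.8]
    ranked = sorted(confident, key=lambda i: i.relevance, reverse=True)
    return ranked[:max_shown]
```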
AI
AI was used to prototype different ways insights could be framed, summarised, and explained, allowing me to test how varying levels of confidence, language, and specificity felt before exposing anything to users. This helped ensure insights felt supportive rather than judgemental, and informative rather than invasive.
During internal testing, AI supported rapid analysis of qualitative feedback, making it easier to surface recurring concerns around relevance, privacy, and interpretation. This allowed the team to refine the MVP by removing or softening insights that were not consistently accurate or helpful.
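As one example of the framing variations tested, the sketch below varies the language with confidence. The tiers, thresholds, and phrasings are hypothetical examples used to explore tone, not the shipped copy.

```python
def frame_insight(merchant: str, confidence: float) -> str:
    """Soften the language as confidence drops; say nothing below a floor.

    Hypothetical tiers and copy, for exploring tone only.
    """
    if confidence >= 0.9:
        return f"You earn most of your rewards at {merchant}."
    if confidence >= 0.7:
        return f"It looks like most of your rewards come from {merchant}."
    # Below the floor, stay silent rather than risk a wrong claim.
    return ""
```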
Design
Internal testing
Before exposing ACE Insights to any users, we ran company-wide internal testing.
The purpose of this phase was to:
• Validate data accuracy across varied real spending patterns
• Identify incorrect assumptions in categorisation and recommendations
• Surface edge cases such as credit card usage, subscriptions, or irregular spending (see the sketch after this list)
• Sense-check tone and framing before risking user trust
This allowed us to remove or soften insights that looked intelligent but were not consistently accurate.
Only once the feature felt dependable internally did we move to external user testing.
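As an example of the edge cases this phase surfaced, the sketch below is a naive recurring-payment check of the kind used to flag subscriptions. The tolerances and data shapes are assumptions for illustration, not production values.

```python
from datetime import date

def looks_recurring(amounts: list[float], dates: list[date],
                    tolerance: float = 0.05) -> bool:
    """Naive subscription check: similar amounts at roughly monthly gaps.

    Assumes positive charge amounts and dates sorted ascending.
    Tolerances are illustrative only.
    """
    if len(amounts) < 3:
        return False
    mean = sum(amounts) / len(amounts)
    similar = all(abs(a - mean) <= tolerance * mean for a in amounts)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    monthly = all(25 <= gap <= 35 for gap in gaps)
    return similar and monthly
```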
What external testing taught us
External testing confirmed many of our concerns and helped streamline the direction.
Relevance and impact:
• Recommendations were generally seen as moderately relevant, not transformative
• Users appreciated awareness of alternatives, but convenience and habit often outweighed switching
• Location, budget, and intent were critical context that recommendations sometimes missed
This reinforced that ACE Insights should inform decisions, not try to force behaviour change.
Improvements from testing
Both internal and external testing led to clear refinement opportunities.
Key improvements included:
• Reducing the number of insights surfaced, to avoid noise
• Prioritising summaries over granular data
• Improving categorisation accuracy and transparency
• Avoiding recommendations where confidence was low (see the sketch below)
• Being explicit when data coverage was limited
• Making the feature opt-in rather than placing it automatically
These changes focused on protecting trust first, even at the cost of feature breadth.
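To show what confidence gating and coverage caveats look like in practice, here is a minimal sketch. The thresholds and the caveat copy are illustrative assumptions, not the rules we shipped.

```python
def should_show(confidence: float, days_of_data: int,
                min_confidence: float = 0.8,
                min_days: int = 60) -> tuple[bool, str]:
    """Gate a recommendation on confidence and data coverage.

    Thresholds and wording are illustrative. The rule is: suppress
    rather than over-claim, and add a caveat when coverage is thin.
    """
    if confidence < min_confidence:
        return False, ""  # low confidence: show nothing at all
    if days_of_data < min_days:
        return True, "Based on a limited view of your spending so far."
    return True, ""
```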
Outcome and reflection
ACE Insights demonstrates how thoughtful UX can turn complex, sensitive data into meaningful product value. The project required restraint as much as creativity, with a focus on deciding what not to show and how to communicate insight without overwhelming or alarming users.
This work highlights my approach to designing data-led experiences: grounding decisions in user understanding, balancing transparency with simplicity, and aligning user value with business outcomes.
Thank you