01 / My role
What I owned in the V2 sprint.
- Leading the ideation workshop to narrow a long list of possible fixes to a set of high-confidence improvements.
- Making the key design decisions on Fast Track communication, selection pattern, and page length.
- Designing across three intersecting teams: Booking Process, Ancillary Track, and Post-Book.
- Presenting the V2 rationale and designs to stakeholders and engineering for alignment.
- Defining the experiment hypothesis and the metrics that would tell us whether V2 had worked.
02 / Picking up where V1 left off
A 1.3% attach rate proved demand. A -0.9% conversion drop showed the cost.
V1 proved customers wanted Fast Track: a 1.3% attach rate meant roughly 22 purchases per day, even with a broken experience. But V1 also cost us. Flight conversion dropped 0.9%, translating to about 23 lost bookings per day and roughly €270,000 over the 5-week experiment.
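These figures can be sanity-checked with quick arithmetic. A minimal sketch in Python, where the implied booking volume and average booking value are inferences from the quoted numbers (and the 23 lost bookings are read as a daily figure), not measured inputs:

```python
# Back-of-envelope check of the V1 figures quoted above.
ATTACH_RATE = 0.013            # 1.3% of eligible bookings added Fast Track
PURCHASES_PER_DAY = 22         # quoted Fast Track purchases per day
LOST_BOOKINGS_PER_DAY = 23     # quoted lost bookings, read as per day
EXPERIMENT_DAYS = 35           # 5-week experiment
REVENUE_LOST_EUR = 270_000     # quoted loss over the experiment

# Implied daily bookings that saw the Fast Track offer
implied_eligible = PURCHASES_PER_DAY / ATTACH_RATE

# Implied average value of a lost booking
lost_total = LOST_BOOKINGS_PER_DAY * EXPERIMENT_DAYS
implied_value = REVENUE_LOST_EUR / lost_total

print(f"Implied eligible bookings/day: {implied_eligible:.0f}")  # ~1692
print(f"Implied avg booking value: EUR {implied_value:.0f}")     # ~335
```

The implied averages are plausible for flight bookings, which is a useful cross-check that the quoted daily and total figures are mutually consistent.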
My diagnosis identified three root causes:
- Decision fatigue from the radio button pattern.
- Page length pushing the Next button below the fold.
- Unclear value messaging that mixed benefits with redemption instructions.
Leadership approved a 2-month sprint. The brief was clear: fix the UX without changing the product. Same price, same placement in the funnel, same airports. The problems were all designable.
| Metric | V1 result |
|---|---|
| Flight conversion | -0.9% |
| Attach rate | 1.3% |
| Funnel drop-off after Fast Track screen | +1% |
03 / Change 1: Selection pattern
From a forced yes-or-no to a quietly optional add-on.
The problem
V1 used radio buttons that forced a yes-or-no decision about Fast Track alongside three other ancillary choices. Users had to actively reject Fast Track to continue. This created decision fatigue and was the largest single contributor to the 0.9% conversion drop.
The fix
V1 Extras page: radio-button pattern. Yes/No for Fast Track alongside baggage, insurance, and flexibility.
V2 Extras page: checkbox pattern. Fast Track as an optional add-on, default unchecked.
I moved from radio buttons to a checkbox pattern. Instead of forcing users to make a decision, V2 let them scroll past Fast Track if they were not interested. The default state was unchecked, so not selecting Fast Track required zero effort. That removed the forced decision point that was causing users to abandon.
04 / Change 2: Page structure
Keeping the Next button visible.
The problem
V1 stacked baggage, insurance, flexibility, and Fast Track on one Extras page. The combined height pushed the Next button below the fold. Users had to scroll past content they had already decided on just to proceed.
The fix
V1 Extras page at actual scroll depth: Next button below the fold.
V2 shortened Extras page: baggage moved out, Next button above the fold.
I moved baggage selection to its own dedicated screen earlier in the funnel. That reduced the Extras page to insurance, flexibility, and Fast Track, cutting scroll depth by roughly 40%. The Next button stayed above or near the fold for most screen sizes.
05 / Change 3: Value communication
Making the price worth it.
The problem
V1 mixed the value proposition with redemption instructions. The 'How Fast Track Works' section tried to explain both why Fast Track was valuable and how to use the QR code at the airport. Users saw the €23 price without understanding what they were getting. Six of 16 usability participants said they were not sure the price was worth it.
The fix
V1 'How Fast Track Works' content: value and redemption mixed in one block.
V2 separated approach: benefit-led value first, redemption details collapsible below.
I separated value messaging from redemption instructions. V2 leads with the benefit: skip the security queue at the airport. The price is presented with context: €23 covers all travelers in the booking, not per person. Redemption details (the QR code, where to find it, what to do at the airport) move to a collapsible section that users can open if they want them.
06 / What I deliberately kept
V2 was a targeted redesign, not a rebuild.
Equally important was what I chose to leave alone. Three deliberate decisions:
- Kept Fast Track in the booking funnel. Research validated that customers preferred buying during booking. The problem was execution, not placement strategy.
- Kept the interaction simple. Moving from radio buttons to checkbox was sufficient. No need to introduce complex configurators or multi-step flows.
- Kept the price at €23. Pricing perception was a real signal in V1, but the diagnosis was unclear value communication, not unaffordable cost. We could revisit pricing later, after the UX was fixed.
07 / The experiment
Same conditions as V1. Direct comparison.
V2 launched as an A/B experiment under the same conditions as V1. Same markets, same airports, same traffic allocation. The experiment ran for 5 weeks to match V1's duration, giving us a direct comparison.
Before launch, I defined three metrics that would determine whether V2 had succeeded.
| Metric | V1 baseline | V2 target | Why it matters |
|---|---|---|---|
| Flight conversion | -0.9% | Neutral or positive | Primary goal. Stop losing bookings. |
| Fast Track attach rate | 1.3% | Above 2% | Validate that the value-prop changes worked. |
| Funnel drop-off after Fast Track screen | +1% | Neutral or reduced | Validate the decision-fatigue fix. |
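One way to make pre-defined criteria like these concrete is to encode them as a single pass/fail check before looking at results. A minimal sketch, where the function name and unit choices are illustrative, not from the project:

```python
def v2_succeeded(flight_conv_pp: float,
                 attach_rate: float,
                 dropoff_pp: float) -> bool:
    """Return True only if all three pre-defined criteria hold.

    flight_conv_pp: change in flight conversion, percentage points
    attach_rate:    Fast Track attach rate, as a fraction
    dropoff_pp:     change in funnel drop-off, percentage points
    """
    return (
        flight_conv_pp >= 0.0     # neutral or positive conversion
        and attach_rate > 0.02    # above 2% attach
        and dropoff_pp <= 0.0     # neutral or reduced drop-off
    )

# V1's numbers would have failed every check:
print(v2_succeeded(-0.9, 0.013, 1.0))  # False
```

Writing the thresholds down before launch, as the text describes, is what makes the later results table a clean verdict rather than a post-hoc interpretation.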
08 / Results
It worked.
| Metric | V1 baseline | V2 result | Change |
|---|---|---|---|
| Flight conversion | -0.9% | +2.05% | +2.95 pp |
| Fast Track attach rate | 1.3% | 2.5% | +1.2 pp |
| Funnel drop-off after Fast Track screen | +1% | -0.3% | -1.3 pp |
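The Change column is simply the V2 result minus the V1 baseline, expressed in percentage points (pp). A quick check:

```python
# Each row: metric -> (V1 baseline, V2 result), in percent.
rows = {
    "Flight conversion":      (-0.9, 2.05),
    "Fast Track attach rate": (1.3, 2.5),
    "Funnel drop-off":        (1.0, -0.3),
}
for metric, (v1, v2) in rows.items():
    print(f"{metric}: {v2 - v1:+.2f} pp")
# Flight conversion: +2.95 pp
# Fast Track attach rate: +1.20 pp
# Funnel drop-off: -1.30 pp
```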
Revenue impact
Recovered conversion. +2.05% meant roughly 50 additional bookings per day. Approximately €150,000 per month in recovered and incremental flight revenue.
Fast Track revenue. The 2.5% attach rate meant roughly 43 purchases per day. Approximately €30,000 per month in ancillary revenue.
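The ancillary figure checks out with simple arithmetic; the 30-day month below is an assumption, not a number from the study:

```python
PURCHASES_PER_DAY = 43   # at the 2.5% attach rate
PRICE_EUR = 23           # one fee covers all travelers in the booking
DAYS_PER_MONTH = 30      # assumed

monthly_revenue = PURCHASES_PER_DAY * PRICE_EUR * DAYS_PER_MONTH
print(f"EUR {monthly_revenue:,} per month")  # 29,670, i.e. roughly 30,000
```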
09 / What I'd carry forward
Three principles V2 confirmed.
V1 was a calculated risk that did not pay off. V2 proved the diagnosis was right. The problem was always how we presented the choice, not the choice itself.
- Diagnosis before redesign. After V1 failed, the instinct was to redesign everything. Running usability testing first meant V2 changed only what needed changing. That discipline saved time and kept the scope manageable for a 2-month sprint.
- Test one thing at a time when you can. V2 bundled multiple changes into one experiment because the business pressure demanded speed. With more time, I would have tested the checkbox pattern separately from the page restructure to isolate each variable's contribution. The combined approach worked, but it means I cannot say exactly how much each change contributed.
- Failure data is the best design brief. V1 gave me something no amount of upfront research could: real customer behaviour data showing exactly where the experience broke. V2 would not have been as focused or as successful without V1's failure.
V2 was not a rebuild. It was three corrections, each traceable to a specific signal V1 sent.