01 / The Lake District trip A road trip I couldn't plan with the tools that were supposed to plan it.
Last week of December, 2025. I drove from London up to the Lake District with friends and my dog Nala. The drive itself was the point. We wanted scenic, not motorway-fastest. We wanted to know where to stop, when, where to refuel, where to grab a coffee, which areas were dog-friendly, and where Nala could pop out for a break. We wanted to know what was worth seeing between A and B.
I tried to plan it with the tools you'd reach for in 2025: Claude, Gemini, ChatGPT. Each one gave a partial, plausible answer. None gave me a complete plan. I went back and forth across three chat windows, sending screenshots into the group chat, jumping out to Google Maps to check what the places actually looked like and how close they were on the map, then back to the chats to refine. The output was a fragmented list, not a route.
The friction wasn't AI being bad at travel. It was that no surface stitched the AI's knowledge to real coordinates, real photos, real opening hours, real distances, real ferry crossings, real fuel stops, and back to the AI when I wanted to swap something. Every product I touched did one of those things well and none of them did the loop.
02 / The build Solo, with Claude Code as the engineering partner, on doc-driven discipline.
The build is the case study. Most of the staff-level signal here isn't in any single screen. It's in how this got shipped at all, by one person, at this scope. Three things hold it together: the documentation, the design system, and the working pattern with Claude Code.
Documentation as the source of truth
Every feature, decision, bug, and ID in the project routes through a single living document: SCENRA_PRD.md. 366 KB, treated as authoritative. Backlog rows are mirrored to memory caches for Claude session continuity, and the interactive Kanban board (backlog_board.html) is regenerated from the PRD, not the other way around. Conflicts always resolve in the PRD's favour.
Why this matters for a one-person project: documentation isn't ceremony, it's how Claude sessions stay consistent across days, branches, and contexts. A new session, in a new chat, with zero memory of yesterday's work, can read the PRD and the playbook and ship correctly. The PRD is the org chart.
| File | Role |
|---|---|
| `SCENRA_PRD.md` | Single source of truth. Every implementation, decision, backlog row, changelog entry. |
| `PROJECT_SYNC.md` | Playbook for any Claude session. Ship-implementation, decision-capture, and backlog-add recipes. |
| `BACKLOG_SYNC.md` | The four-way sync between PRD ↔ memory caches ↔ HTML board ↔ browser localStorage. |
| `backlog_board.html` | Interactive Kanban, regenerated from the PRD. Drag-drop state lives in localStorage. |
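The regeneration direction (PRD → board, never the reverse) can be sketched roughly as below. This is an illustrative stand-in, not the project's actual code: the backlog row format, the `Card` shape, and the `parseBacklog` name are all my assumptions here.

```typescript
// Hypothetical sketch: rebuild board state from backlog rows in the PRD.
// Assumes rows are markdown table rows whose first cell is a prefixed ID,
// e.g. | PL10 | Single-overnight trips | In Progress |
type Card = { id: string; title: string; status: string };

function parseBacklog(prdMarkdown: string): Map<string, Card[]> {
  const board = new Map<string, Card[]>();
  for (const line of prdMarkdown.split("\n")) {
    // Only rows whose first cell is a prefixed ID (Q*, PL*, TD*, ...) count;
    // header and separator rows fall through the regex.
    const m = line.match(/^\|\s*([A-Z]{1,3}\d+)\s*\|\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|/);
    if (!m) continue;
    const card: Card = { id: m[1], title: m[2], status: m[3] };
    const column = board.get(card.status) ?? [];
    column.push(card);
    board.set(card.status, column);
  }
  return board;
}
```

Because the board is always derived, a conflict between the PRD and the HTML board resolves itself on the next regeneration: the PRD wins by construction.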
Working pattern with Claude Code
I used Claude Code as the engineering partner, not as a code completer. The shape of the workflow:
- Me · 01 Ideation Jobs-to-be-done. What problem, for whom, why now.
- Me · 02 PRD-first Spec, edge cases, open questions. Written before any code.
- Claude · 03 Read & challenge Push back on gaps in the spec before writing any code.
- Me · 04 Design / Architecture Data model, API boundary, design-system primitives.
- Claude · 05 Implementation Code against the architecture in an isolated worktree.
- Claude · 06 Changelog entry Conventional commit + a written record back into the PRD.
- Me · 07 Sim + review iOS Simulator pass, code review, override AI where needed.
- Me · 08 Real-world trip Cheddar Gorge · Edinburgh · Lake District. Where the real bugs surface.
By the numbers
- across Views, ViewModels, Services, DesignSystem
- all RW-prefixed, all reusable
- Vercel serverless, Node.js
- conventional commits, ID-tagged
Screenshot: backlog_board.html, the interactive Kanban regenerated from the PRD, showing the columns (Not Started / In Progress / Done / Parked) and the prefix-coloured cards (Q* quality, PL* pre-launch, TD* tech debt, etc.).
03 / How the app works Two input screens, a review step, then the AI plans the trip and reveals the timeline.
Two screens of input cover trip details and scenery preferences. A review step lets the user tweak anything before tapping Generate. Behind the scenes: AI route plan, Google Places verification on every stop, polyline routing, per-leg drive times, all surfaced as a progressively revealed timeline.
Trip details
One-way or return. Departure, destination, date, and time.
Preferences
Scenery, number of stops, and service stops on the route.
Review and edit
Confirm or tweak any input before tapping Generate.
Timeline
The AI plans the route, verifies every stop, and reveals the trip progressively.
04 / Design decisions worth showing Four problems where the design choice mattered more than the feature.
Thirteen features shipped; four design choices did the load-bearing work. Each is a problem, a choice, what I rejected, and the detail nobody sees.
Decision 1 · Full flexibility over the AI's first answer
The AI generates a reasonable trip; the person taking it has context the AI doesn't. The product fails if the first answer is also the final answer. Three gestures, one principle: do whatever you want, instantly, no rigid timeline. No edit-mode, no save button. Every modification triggers a real-time polyline + drive-time recalc.
Add a stop. Got a place in mind that wasn't in the AI's first answer? Type it in: "Lake District," "York," "near Bath." The AI slots it into the route at the right point so the trip keeps moving toward your destination, no backtracking, no zigzagging. The same input targets fuel, EV charging, or coffee via the service-mode toggle.
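The "no backtracking, no zigzagging" placement can be approximated geometrically: slot the new stop between the pair of consecutive route points where it adds the least extra distance. In the app the AI and routing APIs do this with real road data; the haversine sketch below is an illustrative stand-in, and `bestInsertionIndex` is a name I've made up.

```typescript
type Point = { lat: number; lon: number };

// Great-circle distance in km (haversine).
function dist(a: Point, b: Point): number {
  const R = 6371, rad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * rad, dLon = (b.lon - a.lon) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Find the splice index that minimises the detour, so the trip keeps
// moving toward the destination. Assumes route has at least two points.
function bestInsertionIndex(route: Point[], stop: Point): number {
  let best = 1, bestDetour = Infinity;
  for (let i = 0; i < route.length - 1; i++) {
    const detour =
      dist(route[i], stop) + dist(stop, route[i + 1]) - dist(route[i], route[i + 1]);
    if (detour < bestDetour) { bestDetour = detour; best = i + 1; }
  }
  return best;
}
```

For example, a stop near Stoke-on-Trent inserted into London → Birmingham → Kendal lands between Birmingham and Kendal, because that is the segment where it costs almost nothing.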
Replace a stop. Swipe a stop card to reveal the action. The replace sheet opens with up to 15 alternatives already filtered by the user's scenery preferences. No re-asking what kind of trip they want.
Delete a stop. Same swipe-to-reveal, then a quick confirm. The confirm is intentional friction: removing a stop is destructive enough to deserve the half-second pause. Removed stops are retained in case the user changes their mind.
Decision 2 · Drag service stops between segments
Service stops (fuel, EV, coffee) are placed by an algorithm that maximises drive-time efficiency. But the user might want fuel before the long stretch, not after. Long-press-and-drag, segment-aware drop, route recalculates. SwiftUI-native: .draggable + .dropDestination, with everything else around it doing the work.
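The gesture itself is SwiftUI, but the model-layer half of the move can be sketched platform-agnostically. The `Segment`/`ServiceStop` shapes and the `moveServiceStop` name are assumptions for illustration; the real work, as noted above, is everything around the drop: segment-aware validation and the route recalculation it triggers.

```typescript
type ServiceStop = { id: string; kind: "fuel" | "ev" | "coffee" };
type Segment = { serviceStops: ServiceStop[] };

// Pure, non-mutating move: remove the stop from whichever segment holds
// it and append it to the target segment. The caller (the drop handler
// in the UI layer) would then kick off the polyline + drive-time recalc.
function moveServiceStop(segments: Segment[], stopId: string, toSegment: number): Segment[] {
  let moved: ServiceStop | undefined;
  const cleared = segments.map(s => ({
    serviceStops: s.serviceStops.filter(st => {
      if (st.id === stopId) { moved = st; return false; }
      return true;
    }),
  }));
  if (!moved) return segments; // unknown id: no-op
  cleared[toSegment] = { serviceStops: [...cleared[toSegment].serviceStops, moved] };
  return cleared;
}
```

Keeping the move pure makes the recalc trivial to reason about: the UI hands the new segment array to the router and renders whatever comes back.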
Decision 3 · Smart fuel stop placement
"Add a fuel stop" sounds simple. The optimal point in the drive doesn't always have a station. The route includes a ferry. The AI returns a "petrol station" that's a closed forecourt. Generic distance-based placement fails on every one of these. The pipeline: distance-proportional corridor · 5-point fan-out search · fuel verification gate · segment-bound exclusion.
Decision 4 · Progressive generation
Generation takes 15–25 seconds: AI call, route fetch, verification, polyline routing, per-leg drive times. The same wait can feel like two different products depending on the loading UX. So: map polyline first, scenic stops fade in one by one with photos and descriptions, drive times last. Plain-English phase text in the bottom sheet.
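The reveal order can be sketched as an async generator that the UI consumes phase by phase; the `Phase` shape and the messages below are illustrative, not the app's actual strings.

```typescript
type Phase =
  | { kind: "polyline"; message: string }
  | { kind: "stop"; message: string; index: number }
  | { kind: "driveTimes"; message: string };

async function* generateTrip(stopCount: number): AsyncGenerator<Phase> {
  // 1. Map polyline first: the user sees the shape of the trip immediately.
  yield { kind: "polyline", message: "Sketching your route…" };
  // 2. Scenic stops fade in one by one, each verified before it appears.
  for (let i = 0; i < stopCount; i++) {
    yield { kind: "stop", message: `Verifying stop ${i + 1} of ${stopCount}…`, index: i };
  }
  // 3. Per-leg drive times land last.
  yield { kind: "driveTimes", message: "Calculating drive times…" };
}
```

Each yielded phase doubles as the plain-English text for the bottom sheet, so the progress copy can never drift out of sync with what is actually happening.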
The product arc
Each of the four decisions above lives in Plan, the only phase shipped today. Two more phases are in design.
| Phase | What it does | Status |
|---|---|---|
| Plan | AI-generated scenic routes, verified stops, full editing (Add / Replace / Delete / Drag), return trips, sharing. | Shipped on TestFlight |
| Companion | During-trip mode. Mark stops as visited, journal memories with photos, audio briefings as you approach each stop. | In design |
| Memory | After-trip. Completed-trip cards, photo journals, share back to friends. | Long view |
05 / What I'm not building The decline list. What a one-person project says no to is half the design work.
A one-person project ships when scope discipline is brutal. Below are the things Scenra doesn't do today, why each is a deliberate "no," and what would change my mind.
Native turn-by-turn navigation
Apple, Google, and Waze each spend thousands of engineering-years on real-time traffic, lane guidance, and offline tiles. I don't compete with that. Scenra plans the trip; the user's preferred maps app drives it. The multi-app export is the answer.
Group trips & multi-user accounts
Sharing is solved for v1 by the web link: anyone in the group can see the trip without installing. Real multi-user (vote on stops, real-time sync) needs auth, sync infrastructure, and conflict resolution. None of that earns its keep before product-market fit.
Multi-night trips
Single-overnight (PL10) is queued. Multi-night requires a day-by-day timeline, hotel integration, and substantially more route logic. Building it before PL10 lands would compound risk for no payoff. ~95% of beta trips don't need it.
Live trip tracking
In-app navigation during the drive (proximity alerts, "you're 15 min from your next stop," visited/upcoming states) is the natural extension of Companion mode. Worth building once Companion ships and we know what users do mid-trip.
Paid monetisation
Pre-launch and beta are free. Monetisation strategy (subscription, freemium feature gates, affiliate from fuel/EV stops, partnerships) is a post-PMF question. Charging beta users distorts the signal we need.
06 / Field Notes Real trips with the real app. Beta validation = me using it across the UK.
Field Notes is the proof. After the Lake District trip seeded the build, I started using each beta build of Scenra on real trips, capturing what worked, what didn't, what shipped after each return. Below: two test trips, in chronological order. The first one is the trip that changed how the AI works.
Cheddar Gorge: the trip that built the hallucination pipeline
The shortest meaningful test: London to Cheddar Gorge in a day. The scenic stops were real and beautiful. We were happy. Then we hit the service stop.
Claude had confidently named a fuel station with coordinates and everything. It didn't exist. The AI had hallucinated a petrol station and dropped us in the middle of nowhere. We hadn't eaten, hadn't drunk, hadn't let Nala out. All of that was saved for the service stop. The stop was a lie.
That's not a UX bug. That's a trust loss. If a user's first real trip with Scenra ends with a hallucinated fuel stop dropping them in a lay-by, they don't come back. A real product can't ship that.
The Cheddar Gorge trip is what built the AI hallucination pipeline. Every stop (scenic or service) is now verified against Google Places before it lands in the user's timeline. Unverified fuzzy matches are rejected outright. The 4-step fuel-stop pipeline in Decision 3 is the version of this discipline applied specifically to service stops. Every stop shown to the user is real.
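A minimal sketch of such a gate, assuming a Places-style lookup that returns a name and coordinates for the AI's proposed stop. The normalisation and containment check here are illustrative; the point is the failure mode: a missing or fuzzy match rejects the stop outright rather than letting it reach the timeline.

```typescript
type PlaceResult = { name: string; lat: number; lon: number } | null;

function normalize(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

// A stop survives only if the lookup found something AND the returned
// name contains (or is contained by) what the AI proposed. A lookup
// that fails, or resolves to an unrelated place, rejects the stop.
function verifyStop(aiName: string, found: PlaceResult): boolean {
  if (!found) return false;
  const a = normalize(aiName), b = normalize(found.name);
  return a.includes(b) || b.includes(a);
}
```

A hallucinated "fuel station" fails at the first branch; a lookup that silently resolves "BP Garage" to the nearest supermarket fails at the second.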
Edinburgh: multi-day return, all real stops
The first long beta after the verification pipeline shipped: London to Edinburgh and back, with separate scenic routes for each leg. Tested return-trip generation, the Heads Up sheet on actual ferry-flagged routes, and offline mode through rural patches. Every stop on both legs was real. The pipeline held.
Each trip ends with a list of fixes that lands in the next build. The drag-to-reposition feature, the Heads Up sheet redesign, the fuel-verification quality gate, the hallucination pipeline itself: all came from a real trip surfacing a real gap.
The build is the artifact. The product is the proof.