Issue #1

18,000 Lines of Code to Replace a Screenshot

How we rebuilt our entire app inside our website so our marketing imagery never goes out of date, and why AI made it actually worth doing.

The Problem

Before I get into why I wrote 18,000 lines of code to avoid taking a screenshot, let me tell you about the most tedious part of product marketing, the part that nobody ever talks about.

Keeping product imagery up to date.

Product imagery isn't just screenshots. It's screenshots for the website. PNGs for the docs. GIFs for the emails. Different GIFs for social posts. Videos for the landing page hero. An OG image so link previews don't look like garbage, and every single one of these needs to be updated every time you change the UI.

You know the dance. Take a screenshot, spend 20 minutes getting it cropped just right, realize you need one with the dark theme too, do the whole thing again, then export a GIF version for the email, then a square crop for Twitter, then a 16:9 for LinkedIn, and then a week later you ship a UI update and every single one of those assets is wrong.

That thing has a name. It's called asset debt, and I just made that term up, but honestly it should be a real term because the surface area is way bigger than people think. It's not one screenshot. It's dozens of assets across the website, docs, emails, social, and OG images... all going stale the moment you ship a UI update.

And that's just the maintenance side. The creation side is just as painful. Every new GIF means opening the app, setting it up in exactly the right state, populating it with realistic-looking content, hitting record, getting the timing right, watching the recording back, realizing the timing was wrong, re-recording, cropping, exporting for the right platform, then doing it all again at a different aspect ratio. It's a 30-minute process for a single asset. Need four formats? That's your afternoon.

There's two of us at Blueberry. I focus on growth, Renato on the product. Which means every hour spent on manual busywork is an hour we don't have. So when I caught myself doing this dance for the third time in our first month... Renato redesigns something in the app, we feel great about it, ship it, then realize every visual asset across the website, docs, emails, and social is now out of date... I stopped and asked a different question.

Not "how do I update the assets faster?" but "how do I never have to do this again?"

When there's two of you, you can't afford to think in tasks. You have to think in systems.

Every manual process you repeat is a process you should be questioning: can this be automated? Can it be eliminated? Because you don't have a team of 20 to absorb the overhead. The things that don't scale will eat you alive.


So I Did Something Slightly Unhinged

What if, and stay with me here, I just didn't use screenshots at all? What if every single image of the app on the website was a real, rendered React component?

Not a screenshot. Not a Figma export. Not a carefully cropped PNG that will be wrong by next Tuesday. An actual component that uses the same design tokens, the same colors, the same layout logic as the real app. When the app changes, the website changes. Automatically. Everywhere.

My immediate reaction was somewhere between "that's genius" and "I need to go outside more." Both were valid.

To pull it off, I'd need to rebuild every panel of the app as a standalone component. Terminal, editor, file tree, preview browser, source control, the works. A design token system to keep it all in sync. Scripted animations for the demos. Dialogs, settings panels, a music player, even a phone frame for mobile mockups, and a way to capture all of it as GIFs, PNGs, and MP4s for the places where you can't embed React. Emails. Social media. OG images.

So that's exactly what I built.

51
Files
18,000+
Lines of code
8
Theme presets
0
Screenshots

OK, But What Did You Actually Build?

Live Component - Interactive AppMockup

The system has six layers. Each one seems reasonable on its own, and then you see the next one and go "wait, you did what?"

Layer 1: The Design Tokens

Everything starts with design-tokens.ts, 205 lines that define the entire visual language. Colors in OKLCH (perceptually uniform, so gradients look natural), typography scales using Geist Sans and Geist Mono, spacing constants, animation curves with spring physics configs, 8 wallpaper theme presets, and mockup defaults for desktop (960×600) and mobile (375×812).

When we change a color in the real app, I update one file and every mockup on the website, in the docs, in the emails, everywhere, picks it up. That's the whole point.

Deep Dive

Why OKLCH instead of hex or HSL? OKLCH is a perceptually uniform color space. When you interpolate between two OKLCH colors, the gradient looks natural. No muddy midpoints, no unexpected hue shifts. HSL gradients between complementary colors pass through gray; OKLCH doesn't. This matters because the mockup system generates gradients for wallpaper themes, blends colors for hover states, and creates consistent opacity variations. The token file defines surface colors, syntax highlighting colors, terminal Dracula palette, git status colors, typography, spacing extracted from the Electron app's layout, animation spring configs matching Framer Motion, border radii, and 8 wallpaper theme presets.
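To make the gradient point concrete, here's a minimal sketch of what perceptually uniform interpolation looks like in code. This is my illustration, not Blueberry's actual token code; the type and function names are hypothetical.

```typescript
// A color in OKLCH: lightness (0-1), chroma, hue in degrees.
type Oklch = { l: number; c: number; h: number };

// Interpolate hue along the shorter arc, so 350° -> 10° passes
// through 0°, not all the way around through 180°.
function lerpHue(a: number, b: number, t: number): number {
  const delta = ((b - a + 540) % 360) - 180;
  return (a + delta * t + 360) % 360;
}

// Lerp each OKLCH channel. Because the space is perceptually uniform,
// the midpoint doesn't collapse into a muddy gray the way naive
// RGB/HSL midpoints between complementary colors often do.
function mixOklch(a: Oklch, b: Oklch, t: number): Oklch {
  return {
    l: a.l + (b.l - a.l) * t,
    c: a.c + (b.c - a.c) * t,
    h: lerpHue(a.h, b.h, t),
  };
}

// Emit a CSS oklch() string usable directly in a wallpaper gradient.
function toCss({ l, c, h }: Oklch): string {
  return `oklch(${l.toFixed(3)} ${c.toFixed(3)} ${h.toFixed(1)})`;
}
```

The shorter-arc hue handling is the part that matters for wallpaper themes: a pink-to-orange gradient crosses hue 0 instead of detouring through green.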

Layer 2: The Panel Components

Each panel of the real Blueberry IDE has a corresponding mock component, and when I say "mock," I don't mean a grey box with some text in it. These are accurate enough that even I've confused them for screenshots of the actual app. That's the bar. If it looks "mocked up," the illusion breaks.

Here's the terminal component. I want to be very clear: this is not a screenshot. This is a live React component rendering in your browser right now. Click the tabs. Hover around. It's real.

Live Component - TerminalCard

Here's the full roster:

  • MockTerminal (1,483 lines)
    Full terminal emulation with Dracula colors, ANSI 256-color palette, 4 CLI presets (Claude, Codex, Gemini, and dev server), blinking cursor at exactly the right cadence, interactive tabs you can add/close/reorder, and a running-process indicator that pulses green.
  • MockEditor (1,127 lines)
    Custom syntax highlighting with tokenization for TSX, CSS, and JSON. No libraries, hand-rolled tokenizer with Monaco-inspired colors. File tabs, line numbers, breadcrumb navigation, expandable sidebar with file tree and source control view.
  • MockFileTree (585 lines)
    Expandable folder tree with git status indicators. Modified files get a yellow badge. Untracked files get green. Deleted files get red. Because obviously.
  • MockPreview (642 lines)
    Browser preview panel with a URL bar, dev tools inspector toggle, and responsive content rendering.
  • MockSourceControl (900 lines)
    Git staging UI with staged/unstaged sections, diffs, commit message input, branch display, and ahead/behind indicators.
  • MockSearchPanel (414 lines)
    Full-text search with results, match highlighting, line numbers, and a case sensitivity toggle that actually toggles.
  • MockCommandCenter (934 lines)
    Command palette overlay with fuzzy search and keyboard navigation. Because if you're going to over-engineer something, commit to it.
  • MockElementPicker (300 lines)
    Browser element inspector with hover highlights and clickable selection. For when you need to show the dev tools experience.
  • MockCodeArea (541 lines)
    Lightweight code viewer for when you need syntax highlighting without the full editor bloat.
  • MockEmptyState (164 lines)
    The empty workspace home screen with a search bar and keyboard shortcut hints.
  • MockPinnedApp (123 lines)
    Pinned app content area with header chrome (favicon, title, navigation buttons).
  • MockPinnedAppContent (1,140 lines)
    Eight mock pinned apps: Vercel (deployments), GitHub (PR view), Slack (chat), Supabase (table editor), Linear (ticket), Figma (design file with canvas), Notion (page), and PostHog (analytics dashboard). Each one styled to match the real app's dark theme. Click any pinned app icon in the sidebar above to see them.

Every single one of these pulls from the same design-tokens.ts file. Change a token once and every panel inherits it, on the website and everywhere else the components render.

Deep Dive

The MockTerminal implements the full ANSI 256-color palette. The standard 16 colors map to the Dracula palette. The extended range (colors 16-255) follows a specific mathematical pattern: a 6×6×6 color cube for 16-231, then a 24-step grayscale ramp for 232-255. This matters because tools like delta (a git diff viewer) use these extended colors, and rendering them correctly is the difference between a terminal that feels genuinely real and one that's "mostly right with weird artifacts in git diffs."
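The extended-range math is standard xterm behavior, and it's compact enough to sketch. This is the general mapping, not MockTerminal's actual source; indices 0-15 come from the theme palette (Dracula, in the mockup) and are left to the caller here.

```typescript
type Rgb = [number, number, number];

// xterm 256-color: indices 16-231 form a 6x6x6 color cube,
// 232-255 a 24-step grayscale ramp.
function ansi256ToRgb(index: number): Rgb {
  if (index < 16 || index > 255) {
    throw new RangeError(`expected 16-255, got ${index}`);
  }
  if (index >= 232) {
    // Gray ramp: 8, 18, 28, ... 238.
    const v = 8 + (index - 232) * 10;
    return [v, v, v];
  }
  const n = index - 16;
  // Cube channel levels: 0, 95, 135, 175, 215, 255.
  const level = (k: number) => (k === 0 ? 0 : 55 + k * 40);
  return [level(Math.floor(n / 36)), level(Math.floor(n / 6) % 6), level(n % 6)];
}
```

Get this ramp wrong and delta's diffs render with exactly the "weird artifacts" described above.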

Deep Dive

The MockEditor's tokenizer is hand-rolled instead of using Prism or Shiki. Why? Bundle size. A full syntax highlighting library adds 50-200KB. My tokenizer is ~200 lines and handles the three languages I actually need. It's not a general-purpose parser, it's optimized for "make code look correct in a mockup."
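For a sense of scale, here's the shape of a hand-rolled tokenizer: an ordered list of regex rules scanned from the current position, first match wins. This is a toy reconstruction of the approach, not MockEditor's actual code, and it covers far fewer token types.

```typescript
type Token = {
  type: "keyword" | "string" | "comment" | "number" | "ident" | "text";
  value: string;
};

// Ordered rules: the first regex that matches at the current position wins.
// A real mockup tokenizer would add JSX tags, punctuation, operators, etc.
const rules: [Token["type"], RegExp][] = [
  ["comment", /^\/\/[^\n]*/],
  ["string", /^(["'`])(?:\\.|(?!\1).)*\1/],
  ["keyword", /^(?:const|let|function|return|if|else|import|from|export)\b/],
  ["ident", /^[A-Za-z_$][\w$]*/],
  ["number", /^\d+(?:\.\d+)?/],
  ["text", /^\s+|^./], // whitespace run, or any single char: guarantees progress
];

function tokenize(src: string): Token[] {
  const out: Token[] = [];
  let rest = src;
  while (rest.length > 0) {
    for (const [type, re] of rules) {
      const m = re.exec(rest);
      if (m) {
        out.push({ type, value: m[0] });
        rest = rest.slice(m[0].length);
        break;
      }
    }
  }
  return out;
}
```

It's not a parser, and it doesn't need to be: "make code look correct in a mockup" is a much lower bar than "be correct."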

Layer 3: The Shell System

So you've got your individual panels. Cool. But the website doesn't show individual panels, it shows the full workspace. So I needed a shell system.

Live Component - AppMockup

And this is where things got... extensive.

  • AppWindow (44 lines)
    The outer frame with a glossy border and traffic light buttons. The simple one.
  • Header (589 lines)
    Project name, worktree pills with hover states and click handlers, layout picker (Fibonacci, grid, columns, rows), theme picker with wallpaper previews, search, and a music indicator. Five hundred and eighty-nine lines for a header. I know.
  • Sidebar (613 lines)
    Icon toolbar with panel toggles, pinned apps, position badges that appear when you hold Ctrl, and drag-to-reorder. Because if the real app has drag-to-reorder, the mockup needs drag-to-reorder.
  • PanelLayout (983 lines)
    A full panel tiling engine that arranges panels in Fibonacci, grid, column, or row configurations. Active panel highlighting with colored borders. Theme overlay dimming. Responsive resizing.
Deep Dive

The Fibonacci layout is the flagship. It recursively splits available space using the golden ratio (61.8%/38.2%), alternating between horizontal and vertical divisions. The first panel gets the largest area, the second gets the next largest, and so on. The result looks like a magazine layout, naturally balanced, with clear visual hierarchy. It's absurd overkill for a layout engine, but it produces genuinely beautiful arrangements that you'd never get from a simple grid.
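The recursive split described above fits in a few lines. This is my reconstruction from the description, not the actual PanelLayout code, which also handles highlighting, dimming, and resizing.

```typescript
type Rect = { x: number; y: number; w: number; h: number };

const PHI_MAJOR = 0.618; // golden-ratio share for the larger region

// Panel i takes the 61.8% side of the remaining space; the rest of the
// panels share the 38.2% side, with the split axis alternating each level.
function fibonacciLayout(count: number, area: Rect, horizontal = true): Rect[] {
  if (count <= 0) return [];
  if (count === 1) return [area];
  let major: Rect;
  let minor: Rect;
  if (horizontal) {
    const w = area.w * PHI_MAJOR;
    major = { ...area, w };
    minor = { ...area, x: area.x + w, w: area.w - w };
  } else {
    const h = area.h * PHI_MAJOR;
    major = { ...area, h };
    minor = { ...area, y: area.y + h, h: area.h - h };
  }
  return [major, ...fibonacciLayout(count - 1, minor, !horizontal)];
}
```

For four panels in a 960×600 frame, the areas come out strictly decreasing and tile the frame exactly, which is where the magazine-layout hierarchy comes from.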

Layer 4: The Card Wrappers

Here's where I built a whole abstraction layer nobody asked for. Each panel component can render in two modes: embedded (inside the full AppMockup shell) or standalone (floating on a wallpaper background for embedding in blog posts, the website, emails). The standalone mode needed consistent chrome. Wallpaper, floating panel border, scroll-to-activate animation, hover effects, theme selection.

Live Component - SourceControlCard

So I built 7 standardized card components. ~1,206 lines for what is essentially a fancy wrapper. But it means I can drop a <TerminalCard /> into any page and it just works. Wallpaper, animation, theme, everything.

  • TerminalCard (166 lines)
    Standalone terminal panel with wallpaper background, scroll-to-activate animation, and theme selection.
  • EditorCard (212 lines)
    Standalone editor panel with the same wallpaper chrome and hover effects.
  • CodeAreaCard (175 lines)
    Lightweight code block wrapper for when you need syntax highlighting without the full editor.
  • FileTreeCard (162 lines)
    Standalone file tree with expandable folders and git status indicators.
  • SearchCard (171 lines)
    Search panel card with file matching and result highlighting.
  • PreviewCard (146 lines)
    Browser preview panel with URL bar and responsive content rendering.
  • SourceControlCard (174 lines)
    Git staging UI with diffs, commit message input, and branch display.

Layer 5: Dialogs & Floating Panels

The real Blueberry app has dialogs. If the mockup doesn't have them, you can't demo features that require dialogs. So I built the dialogs.

Live Component - ThemeDemo
  • MockSettingsDialog (613 lines)
    Multi-section tabbed interface: About, General, MCP, Music, Network, Notifications, Shortcuts. Custom toggle switches, range sliders with gradient fill, keyboard shortcut display with styled <Kbd> keys, and a copy-to-clipboard button with transient feedback.
  • MockCreateWorktreeDialog (408 lines)
    Branch dropdown, animated text input where the cursor types the branch name character by character, Enter-to-submit.
  • MockRemoveWorktreeDialog (234 lines)
    Confirmation dialog with dual action buttons: "Remove" vs. "Remove & Delete Branch."
  • MockTerminalSettingsDialog (293 lines)
    Inline/overlay dual-mode dialog for terminal name and startup command.
  • MockMusicPanel (323 lines)
    A Spotify-like floating music player with album art, progress bar, play/pause, shuffle/repeat, and here's the unhinged part: it actually plays music. There's a hidden YouTube iframe that the panel controls. You click play, real audio comes out. I built a working music player inside a mockup of my IDE. I am not a normal person.
  • MockNotesPanel (394 lines)
    Multi-page notes with editable titles, formatted checkboxes, bullet points, and page navigation.

Layer 6: The Interactive AppMockup

This is the part I'm most unreasonably proud of.

AppMockup (741 lines)
Composes everything above into a fully interactive IDE simulation. Not a static mockup. An actual interactive thing where you can toggle panels on/off, switch between layout modes, open the command palette with Cmd+P, hold Ctrl to see position badges on sidebar icons, switch themes with live wallpaper previews, open the settings dialog, play music through the YouTube-integrated music panel, and interact with every panel as if it were the real app.

Live Component - Interactive AppMockup

I built this for the website's hero section and product demos. It's a working IDE in your browser, except none of it is real. It's all mockup components pretending to be an app. The illusion is thorough enough that I've caught myself trying to type in the terminal.

Bringing It to Life

Static mockups only get you so far. The website needs motion, demos that show features in action. So the system includes scripted animations across multiple components: terminals that type and execute commands, editors that highlight code, and full workspace demos like the WorktreeDemo, where a fake cursor fades in, moves across the screen, hovers over UI elements, clicks them, and the entire workspace responds.

The 80ms typing speed went through actual A/B testing, at 11 PM, by myself. 60ms felt robotic. 120ms felt drunk. 80ms was the sweet spot. I also added ±15ms random jitter per keystroke to break the mechanical rhythm. I am not a normal person.
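The cadence itself is trivial to express: a base delay plus uniform jitter. The constants are from the post; the injectable random source is my addition, for testability, and isn't how the real animation code is necessarily structured.

```typescript
// 80ms base with ±15ms uniform jitter, per the late-night tuning session.
const BASE_MS = 80;
const JITTER_MS = 15;

// rand is injectable so the schedule is deterministic under test.
function keystrokeDelay(rand: () => number = Math.random): number {
  return BASE_MS + (rand() * 2 - 1) * JITTER_MS;
}

// One delay per character typed.
function typingSchedule(text: string, rand: () => number = Math.random): number[] {
  return Array.from(text, () => keystrokeDelay(rand));
}
```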

Live Component - WorktreeDemo (create)
Deep Dive

The cursor positioning compensates for CSS transform scaling. The mockup renders at native resolution (960×600) inside a container that might be 480×300. The cursor needs to move to where the element appears to be on screen, not where it actually is in component coordinate space. The failed approaches before the current system are worth noting: CSS keyframes were precise but impossible to make responsive (cursor would drift on resize). GSAP timelines had a better API but added 30KB+ for what amounted to setTimeout chains. The current approach, raw getBoundingClientRect + setTimeout, isn't elegant, but it's responsive, lightweight, and works perfectly with CSS transform scaling.
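The coordinate fix reduces to one ratio: the element's on-screen rect over the component's native rect. Here's the core math as I'd sketch it, assuming a uniformly scaled container; in the browser, `screenRect` would come from `getBoundingClientRect()` on the mockup's root, and recomputing it per step is what keeps the cursor accurate through resizes.

```typescript
type Point = { x: number; y: number };

// The mockup renders at native size (e.g. 960x600) but may be displayed
// inside a CSS-scaled container. To move a fake cursor onto an element,
// convert the element's native-space position into on-screen coordinates.
function nativeToScreen(
  nativePoint: Point,
  nativeSize: { w: number; h: number },
  screenRect: { left: number; top: number; width: number; height: number }
): Point {
  const scaleX = screenRect.width / nativeSize.w;
  const scaleY = screenRect.height / nativeSize.h;
  return {
    x: screenRect.left + nativePoint.x * scaleX,
    y: screenRect.top + nativePoint.y * scaleY,
  };
}
```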

And the asset debt problem doesn't stop at desktop.

  • PhoneFrame (195 lines)
    Renders the mobile version of Blueberry for screenshots and mobile mockups.
  • SocialClaudePost (248 lines)
    Generates social media card mockups. Even the social posts are components.


The Capture Pipeline (AKA the Asset Factory)

Live components are great for the website and docs. They render inline and stay up-to-date automatically. But the website is only one of the places we need these visuals.

Emails need GIFs. Social posts need GIFs and videos. OG images need static PNGs. You can't embed a React component in any of these. LinkedIn doesn't support interactive content, because of course it doesn't.

So I built a full media generation pipeline. It's a dedicated page in the site that renders any mockup component and captures it frame-by-frame, then encodes the frames into GIFs, MP4s, WebMs, or PNGs using a Web Worker so the UI doesn't freeze during encoding.

One system, every format, every destination:

  • Website & docs. Live React components render inline, always up-to-date
  • Emails. Animated GIFs generated from the same components
  • Social media. GIFs and MP4s with platform-specific aspect ratios (1:1 square for Twitter, 4:5 portrait for Instagram, 16:9 for LinkedIn, 9:16 story)
  • OG images. Static PNG snapshots for link previews
  • Videos. MP4 and WebM exports for landing page heroes and demos

And the pipeline itself handles:

  • Custom FPS. 4 to 60 frames per second
  • Duration control. Capture for exactly as long as the animation loop needs
  • Trim & loop modes. Infinite loop, play-once, or pingpong (which is surprisingly useful for subtle UI animations)
  • Batch generation. Queue up "Website Hero GIF," "Email Header GIF," "Social 1:1 Terminal," "OG Image PNG," and "Landing Page MP4" and export them all in one go
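Pingpong mode, for what it's worth, is just a frame-index mapping applied at encode time. A sketch of the idea, not the pipeline's actual code:

```typescript
// Map captured frames [0..n-1] to a pingpong sequence that plays forward
// then backward without repeating the endpoints: 0,1,2,3,2,1 for n=4.
// Looping this sequence gives a seamless back-and-forth animation.
function pingpongOrder(n: number): number[] {
  if (n < 2) return n === 1 ? [0] : [];
  const forward = Array.from({ length: n }, (_, i) => i);
  const backward = forward.slice(1, -1).reverse();
  return [...forward, ...backward];
}
```

Dropping the endpoint repeats is what makes it work for subtle UI motion: the loop never visibly "sticks" at either end.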

The memory management alone is its own little engineering project. A 500MB frame buffer budget, ImageData reuse to avoid garbage collection pressure, and a dedicated thumbnail canvas that persists across calls. The initial implementation used 1.2GB for a 10-second 60fps capture at 1920×1080. I got it down to 500MB through a ring buffer that reuses ImageData objects, a downsampled preview canvas (¼ resolution), and lazy GIF palette quantization in a Web Worker.

Deep Dive

The trickiest optimization was avoiding garbage collection pauses during capture. Allocating all frame buffers upfront eliminated the 50-100ms GC hiccups that were causing dropped frames. A 10-second capture at 60fps and 1920×1080 means 600 frames × 1920 × 1080 × 4 bytes = ~5GB of raw frame data. The ring buffer approach only needs ~30 frames of buffer since the encoder processes frames faster than they're captured. Combined with upfront buffer allocation (zero GC during capture), the pipeline can record a 30-second 30fps animation without Chrome complaining.
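The ring-buffer idea looks roughly like this: allocate a fixed pool of frame buffers upfront and recycle them, so capture allocates nothing per frame. This is a simplified sketch with `Uint8ClampedArray` standing in for `ImageData`, since the real pipeline runs against a canvas; the class name and API are mine.

```typescript
// Fixed pool of pre-allocated RGBA frame buffers. Capture writes into the
// next free slot; the encoder drains slots in order. Zero allocation per
// frame means zero GC pauses during capture.
class FrameRing {
  private slots: Uint8ClampedArray[];
  private head = 0; // next slot to write
  private tail = 0; // next slot to read
  private size = 0;

  constructor(capacity: number, width: number, height: number) {
    this.slots = Array.from(
      { length: capacity },
      () => new Uint8ClampedArray(width * height * 4) // 4 bytes per pixel
    );
  }

  // Buffer to capture the next frame into, or null if the encoder is behind.
  acquire(): Uint8ClampedArray | null {
    if (this.size === this.slots.length) return null; // would clobber an unread frame
    const buf = this.slots[this.head];
    this.head = (this.head + 1) % this.slots.length;
    this.size++;
    return buf;
  }

  // Oldest unencoded frame, or null if the ring is empty.
  release(): Uint8ClampedArray | null {
    if (this.size === 0) return null;
    const buf = this.slots[this.tail];
    this.tail = (this.tail + 1) % this.slots.length;
    this.size--;
    return buf;
  }
}
```

Because the encoder keeps pace with capture, a small capacity (the ~30 frames mentioned above) is enough, regardless of how long the recording runs.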

But here's the thing that surprised me most: the system is even more valuable for creating assets than for maintaining them. Need a GIF of the terminal running Claude Code for a tweet? <TerminalCard presets={["claude"]} />, hit capture, done. Need a 9:16 story showing the worktree feature? Point the pipeline at the WorktreeDemo, set the aspect ratio, export. A new asset that used to take 30 minutes is now a one-liner and a button click.

Change something in the app, re-run the batch, and every asset across the website, docs, emails, social, and OG images updates in minutes. Not hours. Not days. Minutes.


The Part Where I Explain Why This Isn't Actually Insane

OK so writing 18,000+ lines of code for marketing screenshots sounds ridiculous. I get it. Two years ago, this would have been a hard no even with a full team. The conversation would have gone something like this:

"I want to rebuild every panel of my app as a standalone component, create a design token system, build scripted cursor animations, add a settings dialog, a music player, phone frames for mobile mockups, and a GIF capture pipeline."

"How long?"

"Three months, maybe four."

"For... screenshots?"

"...Yes."

"Hell No!"

And that would have been the right call! Three months of engineering time, especially when there's two of you, to solve a problem you can hack around with a screenshot and a Figma file? The ROI doesn't work. Not even close.

But the math has changed.

With AI-assisted development, this entire system, 51 files, 18,000+ lines, the whole pipeline, took me about two days of focused work. Not two days of planning and three months of building. Two days, start to finish.

The design tokens were extracted from the real codebase in an afternoon. The mock components were built iteratively with Claude reviewing the actual source components and generating pixel-accurate replicas. Not screenshots, the real source code, so it understood structure, not just appearance. The cursor animations, the ones with the millisecond timing and the typing speed debates, were choreographed through a conversational back-and-forth that would have taken a week of manual keyframing. The dialog components, the music panel, the card wrappers, all of it emerged from the same iterative loop: hand it the source, get a mockup, refine until indistinguishable.

Deep Dive

The AI workflow was conversational and iterative, not "write a prompt, get 18,000 lines." Design token extraction: "Here's the Electron app's CSS. Extract every color, spacing value, and font definition." Component building: "Here's the source for the real terminal panel component. Build a standalone React version that matches it exactly." I never provided screenshots of the app, I gave it the actual source code, which meant the AI understood not just how things looked but how they worked. Each component went through 3-5 rounds of refinement. The AI provided the bulk of the code; I provided the taste. The result is code that looks like I wrote every line, because I effectively directed every line.

Two days of work for a system that saves us every single time we update the product. The app, the website, the docs, the emails, the social posts, all of it. No more screenshot sessions. No more "did we update the email GIFs?" No more finding stale imagery in the docs three months later. No more manually exporting social assets at four different aspect ratios. The website, the docs, the emails, the social imagery, the OG images... all of it stays in sync because it's all generated from the same components.

And yes, you can argue this was a stupid use of time pre-product-market-fit. Fair. But it was also genuinely fun, and sometimes you need that too. Not everything has to be a cold ROI calculation. Sometimes you build something because the craft itself is energising, and that energy carries into everything else you ship.

That's not over-engineering. That's just engineering, done at a speed that makes the ROI work for the first time.


The Line Has Moved

And that's really the thesis of this whole blog. AI doesn't just speed things up. It moves the line of what's worth building.

When there's two of you, you learn to think in systems very quickly. Every process that repeats is a process that should be automated. Every manual step is a future bottleneck. You don't have the luxury of throwing people at problems, so you throw systems at them instead.

The problem is, building those systems used to be its own massive investment. The internal tool that would save 20 minutes a day but take 6 weeks to build. The test harness that would catch edge cases but requires mocking half the system. The design system that would make everything consistent but nobody has time to extract the tokens. You could see the system you should build, but you couldn't justify the time.

Those ideas are all back on the table.

What used to be a 3-month project is a 2-day project now, and the calculus is completely different. A 2-day investment that saves you 4 hours every month for the lifetime of your product? That's a no-brainer. The exact same investment at 3 months? You'd never approve it. For a two-person team, this is transformative. You can finally build the infrastructure that lets you operate like a team of 20.

The tools haven't changed what's possible. They've changed what's practical, and for tiny teams especially, that distinction is everything.


What's Next

The heavy lifting is done. We're not going deeper down this rabbit hole. The system does what it needs to do. But we'll keep making small improvements as the app evolves. New features in Blueberry get new mockup components. The design tokens stay in sync. Maybe I'll add keystroke sounds to the terminal animations, because trying now costs about 20 minutes and can run in parallel with other work, so why not?

That's kind of the whole point. When the cost of building drops by 100x, the definition of "worth building" expands dramatically, and when the cost of maintaining drops too, because the system is designed to evolve with the product, you stop thinking about it as a project and start thinking about it as infrastructure.

Welcome to Over-Engineered. This is the first of many stories about things that would have been a bad idea two years ago and are a great idea now. If you're the kind of person who builds a working music player inside a mockup of your own IDE, you're in the right place.

One last thing. None of this would be possible without the incredible work Renato has done on the product itself. I'm just building mockups of what he's built for real. If you like what you see in these demos, you should try the actual thing. It's next level.

Issue #2: the insanity of building the Over-Engineered blog itself, just to publish this one post. Custom reactions, paragraph-level engagement tracking, a speed reader, a custom AI voice clone for listen mode, highlight sharing, and a bottom bar that probably has more features than most standalone apps, because it had to live up to its name. Stay tuned.

Built with Blueberry

The modern IDE for product builders
