SketchScript
AI-powered sketchnotes from any transcript
SketchScript turned meeting transcripts into hand-drawn sketchnote visualizations using AI. It was a fun build and people enjoyed using it, but it wasn't viable as a SaaS product.
What it did
Paste any transcript — a team meeting, a YouTube video, a podcast, a lecture — and get a hand-drawn visual summary in under two minutes. The idea was grounded in dual coding theory: pairing words with images can lift recall from roughly 10% to around 65%. SketchScript made that accessible without artistic skill or a graphic recorder.
Claude analysed transcripts and designed the visual layout. Gemini generated the sketchnote images. Each model handled what it was best at — Claude's reasoning identified what mattered, Gemini's image generation rendered it visually.
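The two-stage split can be sketched as a small pipeline: an analysis pass that produces a structured layout, then an image pass that renders it. This is a minimal illustration, not SketchScript's actual code — the model calls are abstracted as callables, and the prompt wording and `LayoutSpec` fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LayoutSpec:
    """Structured output of the analysis pass: what the sketchnote should show."""
    title: str
    key_points: list[str]
    style: str = field(default="hand-drawn sketchnote, black ink, marker highlights")

def build_analysis_prompt(transcript: str) -> str:
    """Prompt for the reasoning model: extract the ideas worth drawing."""
    return (
        "Read the transcript below and return a title plus 3-5 key points "
        "that a sketchnote should visualise.\n\n"
        f"Transcript:\n{transcript}"
    )

def build_image_prompt(spec: LayoutSpec) -> str:
    """Prompt for the image model: render the layout the analysis produced."""
    points = "; ".join(spec.key_points)
    return f"{spec.style}. Title: '{spec.title}'. Panels: {points}."

def sketchnote_pipeline(
    transcript: str,
    analyze: Callable[[str], LayoutSpec],  # e.g. a Claude call, parsed into a LayoutSpec
    render: Callable[[str], bytes],        # e.g. a Gemini image-generation call
) -> bytes:
    """Reasoning model designs the layout; image model draws it."""
    spec = analyze(build_analysis_prompt(transcript))
    return render(build_image_prompt(spec))
```

Keeping the two stages behind plain callables also makes the cost problem visible: every run is one LLM call plus one image call, which is exactly the per-generation economics described below.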
Why it closed
The per-generation costs didn't scale. Each sketchnote required an LLM analysis pass plus an image generation call, and the audience stayed small. The product worked — the business model didn't.
That's a useful thing to learn. Not every good idea is a good product. SketchScript validated that AI-generated sketchnotes are genuinely useful. It also showed that wrapping a prompt pipeline in a SaaS layer adds cost without adding enough value over just… giving people the pipeline.
Build it yourself
The image generation behind SketchScript was powered by a Claude Code skill called the Art Skill. It handles prompt construction, style management, and image generation — sketchnotes, diagrams, illustrations, whatever you need.
It's open source on GitHub. If you use Claude Code, you can plug it straight in.
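For context on what "plugging it in" means: Claude Code skills are defined by a SKILL.md file with YAML frontmatter that tells the agent when to use them. The sketch below is purely illustrative — the names and steps are hypothetical, not the Art Skill's actual contents.

```markdown
---
name: sketchnote-art
description: Generate a hand-drawn sketchnote image from a transcript or
  summary. Use when the user asks for a visual summary or sketchnote.
---

# Sketchnote Art (illustrative example)

1. Read the transcript and extract a title plus 3-5 key points.
2. Build an image prompt in a hand-drawn sketchnote style.
3. Call the image model and save the result alongside the notes.
```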