Modern Creator Network
Rob Shocks · YouTube · 14:07

Karpathy's 2026 Playbook: What to Actually Build with AI Now

Rob Shocks unpacks Andrej Karpathy's AI Ascent talk into four frameworks every builder needs in their head, then runs his own side-project folder through the test and kills three apps live.

Posted
1 week ago
Duration
14:07
Format
Essay · educational
Channel
Rob Shocks
§ 01 · The Hook

The bait, then the rug-pull.

Open on a bokeh-blurred shot of Rob at his ring-lit desk and a single sentence designed to stop the scroll: Karpathy — the man who coined vibe coding — just told you that everything you built last year is wrong. The promise is delivered in the next twenty-four seconds: four frameworks, the actual playbook, and a brutal little test that will probably kill at least one app sitting in your side-project folder.

§ · Stated Promise

What the video promised.

Stated at 00:24: "In the next few minutes, I'm gonna walk you through what he actually said about 2026, what it means for what you should be building and how you should be building, and the four frameworks that I think every AI builder needs to have in their head right now." Delivered at 13:32.
§ · Chapters

Where the time goes.

00:00–00:24

01 · Cold open — Karpathy's bad news

Rob frames Karpathy's credentials (OpenAI, Tesla Autopilot, coined 'vibe coding') and lands the pattern interrupt: if you're still building like last year, you're in trouble.

00:24–00:57

02 · Promise + sponsor handshake

Sponsor tag for monday.com, then the explicit promise: what Karpathy said about 2026, what to build, and the four frameworks every AI builder needs. Teases the 'Strategy Agent' that will return as a tool recommendation later.

00:57–01:37

03 · The December inflection

Karpathy's first big claim: around December, models crossed a quality line — output 'just worked', he stopped correcting, and went on a vibe-coding bender. Implication: if you haven't built end-to-end with Claude Code / Codex / Cursor in the last 60 days, you are flying blind. Homework: build something this weekend.

01:37–02:17

04 · Software 3.0 explained

Karpathy's evolution: Software 1.0 = handwritten rules. Software 2.0 = trained neural nets on big data. Software 3.0 = the LLM itself is the programmable computer, the prompt is the code, the context window is your lever.

02:17–03:17

05 · From SaaS to agent calls

Rob uses his own product Levercast as a worked example. Old world: log in, click create, generate post. New world: tell an agent 'use the Levercast MCP, here's an idea' and walk away. The interface for software is no longer humans clicking — it's agents calling.

03:17–04:25

06 · The MenuGen test

Karpathy's own MenuGen app — built last year to turn restaurant menus into food photos — now does nothing ChatGPT/Nano Banana can't do natively in one prompt. He demonstrates by dropping the menu directly into a multimodal model. The lesson: a huge percentage of apps shipping right now are Software 1.0 plumbing wrapped around what should be a single Software 3.0 prompt.

04:25–04:57

07 · The single-prompt test (stop or pivot)

The diagnostic Karpathy gives every builder: take what you're building and ask, could I do this with a single multimodal prompt + the right tool calls or an MCP? If yes — you're plumbing that's about to get eaten by the next model release. Stop or pivot. Rob caps it with the Gretzky 'skate where the puck is going' quote.

04:57–06:00

08 · Sponsor: monday.com vibe

Sponsor read for monday.com's new natural-language app builder, framed (not unreasonably) as a real-world Software 3.0 surface: build bespoke apps on top of your existing monday workflows, OKRs, and data. CTA: free start, link in description.

06:00–07:10

09 · Verifiability is the moat

Karpathy's next pillar: models are great at code because code is deterministic and verifiable — that's clean feedback the model can train on. Most of the world isn't. So the opportunity is finding niches with SOME verifiability that the frontier labs aren't chasing: financial trading, supply-chain/routing optimization, CI and migration agents, data cleaning and labeling. Your domain expertise is the moat.

07:10–08:52

10 · Vibe coding is retired — meet agentic engineering

Karpathy explicitly retires 'vibe coding' (great as a floor — anyone can build now — but a ceiling without quality). The new term: agentic engineering. Specs, plans, context-window management, code review, unit tests through end-to-end smoke tests, CI blockers. People who get good at this are 10x faster. Rob pushes back on the X-bro claim of '10–20 agents at once' — he can keep 3–4 in his head, max, because he's working on production systems.

08:52–09:40

11 · Self-promo + Build 1: tools for understanding, not speed

Short ad for switchdimension.com course/community, then framework #1 of four: build tools that increase your understanding, not just your speed. Worked example: Rob's 'Strategy Agent' — a folder of markdown strategy docs that an agent reads to keep him focused, redirect him when he's chasing the wrong thing, and ground every new doc in real company context.
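The Strategy Agent described here is, mechanically, just a folder of markdown plus a loader that puts those docs into the agent's context. A minimal sketch, assuming a hypothetical folder layout and function name (nothing here is from the video):

```python
from pathlib import Path

def load_strategy_context(folder: Path) -> str:
    """Concatenate every markdown strategy doc in `folder` into one
    context block to prepend to each conversation with the agent."""
    if not folder.exists():
        return ""
    parts = [f"## {doc.stem}\n{doc.read_text()}"
             for doc in sorted(folder.glob("*.md"))]
    return "\n\n".join(parts)

# Hypothetical usage, with docs like strategy/positioning.md,
# strategy/audience.md, strategy/kill-criteria.md:
#   context = load_strategy_context(Path("strategy"))
#   system_prompt = ("You are my strategy agent. Ground every answer "
#                    "in these docs:\n\n" + context)
```

The docs stay plain markdown, so any agent tool (Claude Code, Cursor, or similar) can read the folder directly; the loader is only needed if you wire it up yourself.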

09:40–10:57

12 · Build 2: agent-first infrastructure

Everything we've built is for humans — docs, dashboards, install flows, DNS. The next-gen win is stripping the human UI layer: would an agent know how to use this directly with no human translation? Concrete signal already happening: llm.txt files on e-commerce sites so agents can quickly figure out 'is this API trustworthy, how does it work' instead of wading through marketing copy.
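For illustration, here is roughly what such a file can look like. This is a hedged sketch, not from the video: the sections and URLs are hypothetical, and the convention is usually published as `llms.txt` at the site root, in plain markdown an agent can parse quickly.

```markdown
# ExampleShop (hypothetical)

> E-commerce storefront. Agents: use the JSON API described below
> rather than the human web UI.

## API
- [Auth](https://example.com/docs/auth.md): token-based, no interactive login
- [Orders](https://example.com/docs/orders.md): POST /v1/orders

## Policies
- [Rate limits](https://example.com/docs/limits.md): 60 requests/minute
- [Returns](https://example.com/docs/returns.md): 30-day window
```

The structure answers the agent's questions up front: what this site is, how to call it, and whether it is safe to transact with, with no marketing copy to wade through.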

10:57–12:00

13 · Build 3: verifiable domain capability

Big labs cover the big surfaces; they will not reinforcement-learn every sub-niche. There are millions of them. The play: pick one, build a verifiable RL environment around it, fine-tune, own that capability. Rob's encouragement: don't dismiss this as inaccessible — you can build anything now, take the handbrake off.
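What 'verifiable' buys you can be shown with a toy example (mine, not Karpathy's): a deterministic grader for a narrow data-cleaning task. Because the right answer is checkable, every model attempt yields a clean reward signal an RL loop could train against.

```python
import re

def normalise(raw: str) -> str:
    """Ground-truth cleaner for the niche: keep the last ten digits
    of a phone number and format as XXX-XXX-XXXX."""
    digits = re.sub(r"\D", "", raw)[-10:]
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

def reward(model_output: str, raw_input: str) -> float:
    """Verifiable reward: 1.0 if the model's cleaned value matches
    the deterministic verifier, else 0.0."""
    return 1.0 if model_output == normalise(raw_input) else 0.0

print(reward("555-867-5309", "(555) 867 5309"))  # 1.0
```

Most sub-niches won't be this clean, but the shape is the same: a checker the model cannot argue with is what makes a domain trainable.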

12:00–13:32

14 · Build 4: apps that only exist because of Software 3.0

The big one. Not a faster spreadsheet, not a prettier UI on top of an existing workflow — genuinely new things that couldn't exist before reasoning + multimodal models, like the LLM-as-knowledge-base pattern Karpathy demoed. Rob shares that this week he went into his own side-project folder and killed three projects that failed the MenuGen test and the Software 3.0 test. Better to kill them now than watch them die in three months. Teases one or two survivors that hit all four criteria.

13:32–14:07

15 · Engagement question + CTA stack

Genuine ask: what's the app YOU built that probably shouldn't exist anymore? Then the CTA cluster — subscribe, teases a follow-up on agent-first infrastructure, points to switchdimension.com course/community, links Karpathy's full original talk, and points to more videos on the channel.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · bokeh cold open
00:22 · hook · Karpathy's 2026 Playbook title card
00:37 · promise · 3 things in 10 minutes
00:57 · value · December broke something open
02:09 · value · Software 3.0 / write the prompt slide
03:17 · value · Levercast SaaS-with-AI demo
04:06 · value · MenuGen output — Kaya French Toast
05:22 · sponsor · monday.com sponsor card
06:20 · sponsor · monday.com vibe app dashboard
06:46 · value · Find Verifiable Workflows lower-third
07:58 · value · Four practices that help slide
08:05 · cta · 'Covered in Upcoming Course' tag
§ · Frameworks

Named ideas worth stealing.

01:50 · model

Software 1.0 / 2.0 / 3.0 (Karpathy)

  1. Software 1.0 — handwritten rules / explicit code
  2. Software 2.0 — neural networks trained on large datasets
  3. Software 3.0 — the LLM IS the programmable computer; prompts are the code; context window is the lever

Karpathy's three-era model for how software is written. The shift to 3.0 is the unlock — most current SaaS is 1.0 plumbing around what 3.0 already does natively.

Steal for: any framing video, sales page, or pitch where you want to make a 'this is a paradigm shift' argument feel earned
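The three eras in this card can be sketched in a few lines. This is my own toy illustration, not from the video: the sentiment task and the `run_llm` callback are hypothetical stand-ins. In 1.0 the behaviour lives in handwritten rules; in 3.0 the behaviour lives in a prompt handed to whatever model client you use.

```python
# Software 1.0: the behaviour is handwritten rules in explicit code.
def classify_sentiment_v1(text: str) -> str:
    negative_words = {"terrible", "awful", "broken"}
    words = set(text.lower().split())
    return "negative" if words & negative_words else "positive"

# Software 3.0: the behaviour is a prompt; the LLM is the programmable computer.
# `run_llm` is a placeholder for any model client (hypothetical).
def classify_sentiment_v3(text: str, run_llm) -> str:
    prompt = (
        "Classify the sentiment of the following text as exactly "
        "'positive' or 'negative'.\n\nText: " + text
    )
    return run_llm(prompt)

print(classify_sentiment_v1("this release is terrible"))  # negative
```

The point of the contrast: in v1 you edit code to change behaviour; in v3 you edit the prompt, and the context window is the lever.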
04:00 · concept

The MenuGen Test / Single-Prompt Test

Take what you're building and ask: could I do this with a single multimodal prompt + the right tool calls or an MCP? If yes, you're building plumbing that's about to be eaten by the next model release. Stop or pivot.

Steal for: killing your own side projects honestly; pitching consulting clients on whether to keep or kill a feature; framing a thought-leadership post
09:35 · list

Four Frameworks Every AI Builder Needs (Rob's distillation of Karpathy)

  1. Build tools that enhance your understanding, not just your speed (e.g. a Strategy Agent)
  2. Build agent-first infrastructure (strip human UI; expose llm.txt / MCPs / APIs)
  3. Build verifiable domain capability (pick a niche, RL it, own it)
  4. Build apps that ONLY exist because of Software 3.0 (no spreadsheet-with-AI)

Rob's four-pillar takeaway from Karpathy's talk — what to actually go build now.

Steal for: a list-style short, a newsletter spine, a course module map, a personal kill-or-keep checklist
06:32 · list

Verifiable Workflows for Big Business Wins

  1. Financial trading
  2. Supply-chain and routing optimization
  3. Continuous integration & migration agents
  4. Data cleaning and labeling

Karpathy's example list of underserved domains with enough verifiability to train against — niches the frontier labs are not focusing on.

Steal for: vertical AI startup ideation; client briefs in agencies; deciding which side project to take seriously
07:50 · list

Four Practices of Agentic Engineering

  1. CI as a hard gate
  2. Automated security review
  3. Code review as a first-class step
  4. Human comprehension artefact (specs, plans, docs)

On-screen card listing the four practices Rob says professional builders are now doing (his version of Karpathy's agentic-engineering pillar). Replaces 'vibe coding' as the professional discipline.

Steal for: engineering team standards; AI-builder course module; LinkedIn carousel on 'what good looks like in 2026'
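The first practice, 'CI as a hard gate', could look something like the following hypothetical GitHub Actions fragment. Job names and commands are illustrative, not from the video; pair a workflow like this with branch protection so a red run actually blocks the merge.

```yaml
# Hypothetical CI gate (illustrative only).
name: gate
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Code review stays human; these two steps are the automated gates:
      - run: npm ci && npm test               # unit + end-to-end smoke tests
      - run: npm audit --audit-level=high     # automated security review
```

The point is the discipline, not the vendor: agent-written code goes through the same gates as human-written code before it ships.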
§ · Quotables

Lines you could clip.

07:50
Vibe coding raised the floor. Pretty much anyone can build now. But what professionals are doing now is agentic engineering.
tight, quotable, retires a term and coins a replacement in one breath — perfect for a 'mark this moment' clip · TikTok hook
04:50
If the answer is yes, you're building plumbing that's about to get eaten by the next model release. Stop or pivot.
fear-based, action-forcing, lands as a verdict — works as a standalone shorts hook · IG reel cold open
04:33
A huge percentage of the apps people are building right now shouldn't exist. They're orchestrating things the model can already do natively.
controversial enough to drive comments, true enough to drive shares · TikTok hook
12:46
I went to my own side project folder this week and I killed at least three projects after watching Karpathy's talk.
vulnerable, specific, and validates the test he just taught — strongest credibility-builder in the video · newsletter pull-quote
01:57
Software 3.0 is where the LLM itself becomes the programmable computer, the interpreter, and your code basically is the prompt.
the central thesis distilled — works as a slide, a tweet, or a course title · newsletter pull-quote
12:00
We can now build anything. Take your handbrake off and go and do it.
permission-giving line — high CTR on solo-builder audiences · IG reel cold open
§ · Pacing

How they spent the runtime.

Hook length: 24s
Info density: high
Filler: 8%
Sponsors
  • 04:57–06:00 · monday.com (vibe natural-language app builder)
§ · Resources Mentioned

Things they pointed at.

01:40 · tool · Claude Code
01:40 · tool · Codex
01:40 · tool · Cursor (agent mode)
02:20 · product · Levercast (Rob's own AI social-media post tool)
04:00 · product · MenuGen (Karpathy's example app)
04:17 · tool · ChatGPT
04:17 · tool · Nano Banana
05:22 · product · monday.com vibe
09:00 · tool · Claude Cowork
10:15 · tool · llm.txt convention
00:24 · link · Andrej Karpathy's full AI Ascent talk
§ · CTA Breakdown

How they asked for the click.

13:32 · subscribe
Hit me up in the comments, and I'm asking this seriously — what is the app you've built that probably shouldn't exist anymore? If this was useful, subscribe. Check out the Switch Dimension course and community.

Stacked CTA — engagement question first (clever: drives comments by asking for confessions), then subscribe, then teases the next video (agent-first infrastructure deep-dive), then product link, then external link to Karpathy's original talk. The engagement-question opener is the strongest move because it lowers the ask and produces algorithm-friendly comment threads.

§ · The Script

Word for word.

Tags: HOOK (opening / re-engagement) · CTA (the pitch) · metaphor · analogy · story
00:00 [HOOK] [CTA] So Andrej Karpathy, he's the guy who literally helped build modern AI, who was at OpenAI, who got Tesla Autopilot working, and who coined the term vibe coding. He just told us: if you're still building apps the way you were last year, he's got bad news for us. So he just gave a brilliant talk. I've spent hours breaking it down to help you understand how it may affect what you build as a developer, founder, or software company.
00:24 [CTA] Thanks to monday.com for sponsoring this video. More on them later. In the next few minutes, I'm gonna walk you through what he actually said about 2026, what it means for what you should be building and how you should be building, and the four frameworks that I think every AI builder needs to have in their head right now. I'm Rob from Switch Dimension, and if you're trying to build with AI in 2026 and beyond, this might be the most important video you watch this month, so stick with me. This kind of a brain is the single most powerful thing I've built in the last couple of months. Highly recommend you do something similar yourself.
00:57 So one of the first things that stood out to me was Karpathy's point about the December inflection. So what Karpathy is saying is, round about December, the model's output started to actually just work. He stopped correcting it. He started just trusting the model, and he basically went on this vibe-coding bender. He said he has a ton of side projects that he's built up, and we all have that problem: multiple different projects we're working on at the same time, because now we can build anything.
01:23 So essentially, if you haven't sat down in the last sixty days and seriously tried to build something end to end with Claude Code, Codex, or Cursor in agent mode, you are really flying blind, according to Karpathy. So go do that this weekend. Seriously, build something. So I think Karpathy's biggest takeaway in this talk was the idea of software three point o, and we've heard that so many different times before. But this is a slightly different angle I don't think many have seen. So Karpathy's
01:51 breakdown of the software evolution is: software one point zero is handwritten rules, so basically writing out code. Software two point o was training neural networks via large data sets, and software three point zero is where the large language model itself becomes the programmable computer, the interpreter, and your code basically is the prompt and the context window is your lever. So that's Karpathy's high-level take on it. How does this affect us practically as builders?
02:16 [HOOK] So in traditional SaaS, this is Levercast, an app I built. It uses AI to help you post to all your social media platforms. So I would just click create, drop in my thoughts, generate a post with an existing style guide, and then it takes the pain out of the first draft for my social media posts across the various different channels that I work on. So that's a traditional SaaS app with a little bit of AI sprinkled in on top. So here's the big shift and here's what's changing. We're going to be increasingly using our agents to do more work first. We're gonna be living in Claude Code, Codex, Cursor, whatever is your agent of choice.
02:50 In this new world, what people will increasingly be doing is basically just telling their AI agents to carry out a task. So instead of me going and logging in to Levercast and going through all the workflow, I might just say, hey, I've got an idea for a post. I could drop it in and say, hey, use the Levercast MCP to carry this out. Or not even: I could actually use a skill that's prebuilt in that will actually do all of this for me. I go off to do something else, or I have this working on a schedule, and then bang,
03:15 all my posts are published. So besides that being a cheap plug for my own software, really what we're getting at here is a change in how we need to think about building things. As a builder, you really need to think about the current capabilities of multimodal models and even think ahead a few months to what's coming next and how that might affect the software
03:34 you're building or you've already built. Do you need to have an MCP, an API, or some kind of agentic workflow in front of it so that an agent can discover it quickly? So Karpathy's example of this was: back last year, he built this quick app that basically allowed you to turn your menus into magic. It's the idea that you could take any menu, and looking at this menu here, you might not know exactly what the food is or what it might look like. By dumping this menu into his app, it basically was able to create AI representations
04:04 of what the food was and what it looked like. So last year, this was an entire application that he built that somebody could use. This year, he could solve the same problem with just ChatGPT or Nano Banana. So you can see the entire MenuGen workflow replaced by me just dropping in that original menu, and here you can see an overlay of the dishes on the starters.
04:25 [CTA] So his big point here is the MenuGen app that he built a year ago doesn't really need to exist anymore, and here's why that's really uncomfortable: a huge percentage of the apps people are building right now shouldn't exist either. They're basically orchestrating things the model can already do natively, capability that's only just appeared in the last couple of months. They're software one point zero plumbing wrapped around what should just be a single software three point zero prompt. Anyway, you get the point. So here's the test.
04:58 [CTA] Take what you're building and ask: could I do this with a single multimodal prompt and the right tool calls, or an MCP or two? If the answer is yes, you're building plumbing that's about to get eaten by the next model release. Stop or pivot. For you Americans or Canadians who love hockey (we don't even have ice in Ireland), this is where I insert a quote about skating to where the puck is going to be, not where it is right now. So if you watch this channel at all, you're going to be no stranger to vibe-coding apps. And I wanted to talk today about monday.com's
05:29 [CTA] vibe, because basically I think it's different. It's a natural-language app builder baked directly into the monday.com platform. So, yeah, of course there's tons of different app builders out there, but I think what's really different about this is you're actually building your application on top of your monday.com infrastructure
05:47 [CTA] and context. And as we talk about in this video, the context and the understanding is really important. You've got your workflows, your intakes, your OKRs; everything that's important to your company is managed in there. But often there's this last-mile problem where you want to solve something like, you know, I wish I had the perfect form for pulling that in, or we need a clean page that represents our OKRs.
06:10 So with monday.com's new vibe, you can use natural language to create bespoke interfaces with inputs and outputs that write back to your monday.com data, like a sales forecasting app, a campaign health tracker, a time tracker app. So it's pretty much living in monday.com, where you and your staff work. This really is a super tool. Viewers can get started for free; check out the link in the description down below. So already, Karpathy has given us some real gold. Let's move on to his next point. And this is all about verifiability
06:40 and building a moat as we go forward. So Karpathy went on to explain why a lot of frontier models fall down and how there's an advantage in that. The reason models are so good at code is because it's deterministic and it's verifiable, and that's great feedback for the model. Unfortunately, a lot of the world isn't as verifiable as code. So Karpathy suggests: what are the domains that the large frontier labs are not focusing on right now that have some level of verifiability?
07:08 Financial trading, supply chain and routing optimization, continuous integration and migration agents, data cleaning and labeling. You all have domain expertise in some niche part of a workflow or company. What are the things that you can build in that space that take advantage of the things Karpathy is talking about in this conversation?
07:30 So thankfully, in this talk, he basically retires the term vibe coding and replaces it. So vibe coding is great. It raised the floor. Pretty much anyone can build now. We all needed that. It pretty much democratizes building. But he says what professionals are doing now is agentic engineering, and that's the kind of stuff I teach a lot on my channel and in my course.
07:50 [CTA] This means things like using specs, plans, managing your context window, doing proper review, making sure that we have unit-level tests through end-to-end smoke tests, and blockers in place in continuous integration so that we're not pushing bad code. Karpathy says he's seeing people who are getting good at this go 10x faster. If you're on X at all, you'll see that, oh, people are running 10 to 20 agents at the same time. Personally, I kinda think that's nonsense. I'm really good at this. I've been doing it for two years, and the most I can keep in my head at any time is maybe three to four agents running at the same time. I give them a little bit of work. I check what they're working on. I have to review the code. These are production databases,
08:30 [CTA] production systems I'm working on. I'm not just gonna push up whatever the agent was working on. So if you're feeling behind and you're working with agents, but you don't feel like you're working with 10 or building entire empires in a single prompt, then don't worry. But the thing is, we really need to think about how we get to that point, because what Karpathy is talking about is we need to build again for where that puck is going.
08:52 [CTA] So we get to that point where, not unlike in December when things just worked, I think we'll get to a point where our agentic harnesses just work, and we can run 10 to 20 agents at the same time if we choose. We're not there yet, but we're on that path. By the way, if you're interested in building agentic harnesses and production systems like that so you can build apps really quickly,
09:15 [CTA] you can join my course and community at switchdimension.com. It's currently closed to new members right now, but I have a new course dropping in the coming weeks. If you want to be part of that cohort, just drop your name and email into the wait list. Okay. So let's wrap this all up into the four things you should build. So the first thing he suggests you build is tools that enhance your understanding,
09:38 not just your speed. So here's what I do myself, and here's a really easy way for you to get started with this. Here, I've got Claude Cowork, but it doesn't matter what tool you use. I could do this in Claude Code. You could do this in Cursor. You can do it wherever you want. Essentially, what you need is a folder and an initial prompt. That prompt to your agent is where you tell it about your company, your app, your domain space, your life, whatever else it is, and ask it to create a set of strategy documents
10:05 around that application, business, whatever the domain space is. So in my case, I have a company called Switch Dimension that helps people learn how to build with AI, and I got it to produce a whole lot of strategy documents around that through various conversations. And they're just simply stored as markdown in a folder. So my big problem is I want to build everything and take every opportunity.
10:25 I just have a quick conversation with my strategy agent, which now has an understanding of where I'm going, and it keeps me on track. It gives me focus. It says: you're going in the wrong direction if you work on that; here's what you should be focusing on; here are the opportunities. And every time I ask it to generate a doc or create new content, it does that with real context and understanding
10:45 of my Switch Dimension world. This kind of a brain is the single most powerful thing I've built in the last couple of months. Highly recommend you do something similar yourself. So the next thing you should build is agent-first infrastructure. Everything we've built is pretty much built for humans: documents, dashboards, install flows, DNS settings. The entire Internet is built really for humans. So the real win in this next generation is stripping away all of the human UI. Would an agent know how to use this directly without any kind of human translation in between? We're seeing this on websites
11:18 in ecommerce, where llm.txt files are there so that when agents arrive on a website, they can quickly figure out how to use it rather than reading through all the emotional marketing that's directed at humans. An agent just wants to know how the API works, how it can work with your product, and whether it's trustworthy, as fast as possible. Build for that. Number three: we talked about verifiable domain capabilities.
11:39 So a lot of the big labs are covering the large domain areas, but they're not going to have the time, or be specifically focused on, reinforcement learning for all these little sub-niches. There are millions of them in every part of our life and business. Can you build a reinforcement learning environment around this, fine-tune it, and own that capability?
12:00 [CTA] So don't dismiss this as an opportunity for you. We can now build anything. That's exciting. Take your handbrake off and go and do it. And number four, the big one: let's build apps that only exist now because of software three point zero. So that is not a faster spreadsheet, not a faster UI on top of a workflow. We need completely new things, like that large language model knowledge base Karpathy was talking about, things that literally couldn't exist before because there was no code that could actually do it. There are so many new things we can build now, and we're still thinking about apps in the old way of doing things. There's a whole new approach now with these reasoning models that we can apply. Let's be honest, AI will change the jobs market. It will change how we work. But think positively about what we can build now: the reframing of data, the compilation across all these modalities. We've got large language models now that actually push and progress other areas of science and medicine. We have a really exciting time ahead of us. On a personal note, I went to my own side-project folder this week, and I killed at least three projects after watching Karpathy's talk. All of them failed the MenuGen test and the software three point o test. They were basically software one point zero plumbing for things the next model release is probably gonna do natively. This, of course, is painful, but it's better for me to just kill them now than to ship them and watch them die in three months' time. But saying that, there are one or two apps that I am doubling down on that hit all four of the criteria up above. I'll be sharing more on that soon.
13:32 [CTA] So hit me up in the comments, and I'm asking this seriously: what is the app you've built that probably shouldn't exist anymore? I want to see how many of us are in the same boat. No judgment. I'm in it too. If this was useful, subscribe. I'm doing a follow-up on building agent-first infrastructure that goes way deep into the practical side. And if you want the actual playbooks I use building with agents, go and check out the Switch Dimension newsletter and community. They are linked below. I've also linked to Karpathy's full talk down below. I'm supposed to link one of my own videos so you go there and watch one of those next, but I highly recommend checking it out. And I've also got a ton of other videos on this topic on the channel.
§ · For Joe

Steal this playbook.

The 2026 builder filter

Frame every product decision through Karpathy's lens: am I shipping plumbing the next model release will eat — or am I shipping something that only exists because of Software 3.0?

  • Build the Strategy Agent first. Before you ship anything else in MCN+, JoeFlow, or Mod Producer, stand up a 'brain' — a markdown folder + agent that knows your offer architecture, your audience, and your kill criteria. Use it to reject your own ideas.
  • Run the MenuGen test on every project in the repo. JoeFlow, Reels Editor, ModBoard, ClipLab — for each, ask 'can ChatGPT/Claude do this with one prompt + an MCP?' If yes, the moat has to come from somewhere else (data, network, distribution, brand) or it dies.
  • Lean into verifiability. Your moat as a creator-tools builder is domain expertise (direct response, video production, transcription pipelines). Pick the niches with measurable outputs (transcript quality, conversion on a sales page, retention on a clip) and build RL-style feedback loops there.
  • Ship an agent-first version of every Modern Creator product. Add llm.txt + an MCP to every Mod app — JoeFlow MCP, ModBoard MCP, Mod Producer MCP. Agents are a distribution channel.
  • Use the kill-three-projects confession as a content pattern. A creator publicly killing his own work is a high-trust, high-comment move — it lowers the ask before any pitch. Joe should do this once a quarter on Killing Excuses.
  • Steal the four-frameworks listicle structure. 'X just told us Y — here are the four things' is a clean carousel/short/email spine you can run quarterly on every AI announcement that matters.
  • Use the 'softer hook' format: credentials in line one, bad-news pattern interrupt in line two. It's the cleanest 24-second open in the category — Joe should rebuild Killing Excuses cold opens this way.
§ · For You

If you're building anything with AI right now.

What to actually do this weekend

Pick one project this weekend. Run it through one question: could a single multimodal prompt plus an MCP do what your app does? If yes — stop or pivot. If no — you may have something worth doubling down on.

  • Spend a real weekend coding end-to-end with Claude Code, Codex, or Cursor in agent mode. Don't watch tutorials — build something rough. Karpathy's claim is that if you haven't done this in the last 60 days, your mental model is already wrong.
  • Open your own side-project folder. List every app you've started. For each one, ask honestly: does this exist only because models couldn't do it natively six months ago? If yes — kill it now, before you spend three more months on it.
  • Pick a domain you know better than 95% of people. Could be supply-chain ops in your industry, a niche of medical billing, a corner of finance, a job-specific workflow. That domain expertise is the moat AI labs can't replicate.
  • If you're building anything user-facing, add an llm.txt and consider exposing an MCP. The next wave of distribution is not humans finding your homepage — it's agents finding your API.
  • Stop calling what you do 'vibe coding'. Start treating it like engineering — specs, plans, tests, code review. The people moving 10x faster are the ones who added discipline back, not who removed it.
  • If a project survives the MenuGen test AND requires reasoning + multimodal + verifiability — go all in. Those are the apps worth your year.
§ · Frame Gallery

Visual moments.