Modern Creator Network
Jack Roberts · YouTube · 16:52

This Memory System just 10x'd Claude Code

A three-tier memory system for AI coding tools — short, mid, long — that works across Claude, ChatGPT, Cursor, and Anti-Gravity.

Posted: 1 week ago
Duration: 16:52
Format: Tutorial · educational
Channel: Jack Roberts (JR)
§ 01 · The Hook

The bait, then the rug-pull.

The cold open does double duty: dangle a fast outcome ('10x more productive') AND name the cost of inaction ('amnesia' — the moment your model starts speaking Spanish halfway through a thread). Inside sixty seconds you know the promise, the problem, and the format of the answer.

§ · Stated Promise

What the video promised.

Stated at 00:05: "I'm gonna show you exactly how to set up your own second AI brain that works across all apps with an incredibly simple setup. So you can have a memory operating system that makes you 10 times more productive and stops wasting your time." Delivered at 16:50.
§ · Chapters

Where the time goes.

00:00–00:24

01 · Cold open + credibility

Promise (10x productivity, second AI brain across apps), pattern interrupt ('amnesia'), then quick credibility flex — sold last startup, builds AI businesses, drops a graph-view memory map as visual proof.

00:24–01:36

02 · What does great look like?

Defines the four properties of a great memory system before building anything: remembers everything, lets you edit on the fly, plugs into every platform (kills info silos), fuels every answer with context.

01:36–02:45

03 · Memory as input, not vault

Reframes the mental model. Every prompt silently pulls who you are, what you're shipping, what you started last month. Surfaces the failure mode of long threads ('Claude has amnesia' / 'speaking Spanish').

02:45–02:51

04 · Three levels framework

Names the architecture: Short (who am I), Mid (what am I doing), Long (what happened before + expert knowledge). Same answer across Claude/OpenClaw/ChatGPT.

02:51–04:04

05 · Layer 1 - Operating Manual (who am I)

First tier: identity, role, goals, tone, non-negotiables. Stuff that does not change weekly. ~200 words max. Lives natively in every platform's global settings; example walkthrough on Claude desktop + Anti-Gravity Customization.

04:04–05:06

06 · Explicit vs implicit memory + the rule

Two flavors: hard-coded instructions you write, plus the model's own learned memory growing as you converse. Lands the principle: 'the outcome of a conversation should never depend on chat history' — if it matters, write it down.

05:06–06:37

07 · Layer 2 - The Workshop (what am I doing)

Mid-term, project-scoped. Ask Claude to organize life/business into 6-8 categories (community, agency, startup, personal/health). One project folder per category.

06:37–08:04

08 · Project CLAUDE.md + memory folder

Each project gets a CLAUDE.md at root (mission, stack, decisions, memory map, references — keep under 200 lines) plus a memory/ subfolder for evolving artifacts: decisions, current-strategy, next-actions, session-summaries.

08:04–09:35

09 · Mutable layer + workflow

How you actually use it: open the project folder for whatever you're working on right now. Designed to be rewritten as priorities shift. Demos the same setup in Claude desktop projects, Claude Code, and Anti-Gravity — same idea, different UI.

09:35–11:25

10 · Layer 3 - The Arcade (long-term memory)

Third tier answers 'what happened before?' Two storage options introduced: Pinecone (vector DB for semantic search at scale) and Obsidian (markdown + graph view for hand-editable memory). Most people over-complicate this layer.

11:25–13:20

11 · Obsidian vs Pinecone tradeoff

Obsidian when you want to read and edit memory by hand (graphs, backlinks, strategy notes, decision frameworks). Pinecone when you want indexed semantic search across thousands of records, scale, anywhere access. Jack personally uses Pinecone.

13:20–14:35

12 · Conversation archive + wrap-up skill

First sub-layer of long-term: every meaningful conversation ends with a /wrap-up skill that summarizes decisions, next actions, metadata, and embeds the result into Pinecone. Indexed and searchable later by date and topic.
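The wrap-up step above can be sketched in a few lines. A hypothetical Python sketch of what a /wrap-up record could capture; field names are assumptions, not the exact schema from the video, and the embed/upsert into Pinecone is left as a comment:

```python
# Hypothetical sketch of a /wrap-up record: summarize a session into a dated,
# topic-tagged dict that could then be embedded into Pinecone (or appended to
# an Obsidian note). Field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class WrapUp:
    topic: str
    decisions: list
    next_actions: list
    day: str = field(default_factory=lambda: date.today().isoformat())

def wrap_up(topic, decisions, next_actions):
    """Build the searchable record; embedding and upserting is the caller's job."""
    record = asdict(WrapUp(topic, decisions, next_actions))
    # With Pinecone you would embed the summary text and upsert it with
    # {"date": record["day"], "topic": record["topic"]} as filterable metadata,
    # which is what makes "searchable later by date and topic" work.
    return record

record = wrap_up("glider-strategy",
                 decisions=["ship the waitlist page"],
                 next_actions=["draft launch email"])
```

The date-in-metadata detail is what enables the later "what did we discuss last January?" lookup.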

14:35–15:05

13 · Expert knowledge bases

Second sub-layer of long-term: domain-specific corpora (YouTube expertise, Hormozi business strategy). Layer 2 CLAUDE.md tells Claude which Pinecone indexes to consult for which questions — this is where the three layers interconnect.

15:05–15:40

14 · Building knowledge with NotebookLM

Workflow: ask Claude/ChatGPT to research a topic, auto-generate a 50-resource NotebookLM notebook, then download and vectorize into Pinecone (or keep in Obsidian).

15:40–16:40

15 · Firecrawl as MCP connector

Walkthrough adding Firecrawl as a Claude custom MCP connector (Connectors -> Add custom connector -> paste API key). Claims ~80% cost savings and better accuracy for agentic deep research vs default browsing.

16:40–16:52

16 · Recap + open loop to super-skills

Restates the three layers (who / what / before) and frames memory as only as strong as the skills supporting it. Hard cut into next-video CTA on 'super-skills that make your memory system more powerful'.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · Graph-view tease
00:08 · promise · Title card - Memory System
00:34 · framework · Three-tier overview slide
02:45 · framework · Short / Mid / Long
03:00 · value · Layer 01 - Operating Manual
04:05 · value · Anti-Gravity customization
05:33 · value · Layer 02 - The Workshop
06:47 · value · Project Operating Manual
08:32 · value · Anti-Gravity rules file
§ · Frameworks

Named ideas worth stealing.

02:45 · model

Three-Level Memory System

  1. L1 Short / Operating Manual - Who am I?
  2. L2 Mid / The Workshop - What am I doing?
  3. L3 Long / The Arcade - What happened before + expert knowledge

Stratified memory across timescales and scopes. Each tier answers a different question and lives in a different location (global settings / project folder / vector store).

Steal for: Joe's own AI-memory taxonomy across ~/.claude/CLAUDE.md, joe-profile.md, per-project CLAUDE.md, and MEMORY.md — name the system, not just the files.
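The tiered lookup behind this framework can be sketched in a few lines of Python. The file locations here are illustrative assumptions, not Jack's exact paths:

```python
# Illustrative sketch of "memory as an input": before any prompt reaches the
# model, the three tiers are read from their homes and prepended. Paths and
# fallback strings are assumptions for demonstration only.
from pathlib import Path

def read_or(path, fallback):
    p = Path(path)
    return p.read_text() if p.exists() else fallback

def assemble_context(user_prompt, project_dir):
    # L1 Short / Operating Manual — global settings: who am I
    l1 = read_or(Path.home() / ".claude" / "CLAUDE.md", "(L1: no operating manual yet)")
    # L2 Mid / The Workshop — project folder: what am I doing
    l2 = read_or(Path(project_dir) / "CLAUDE.md", "(L2: no project manual yet)")
    # L3 Long / The Arcade — vector store or Obsidian vault: what happened before
    l3 = "(L3: retrieved wrap-ups / expert knowledge would be injected here)"
    return "\n\n".join([l1, l2, l3, user_prompt])

prompt = assemble_context("Draft next week's plan", "projects/agency")
```

Each tier resolves from a different location, which is the whole point of the framework: swap the platform and the same three reads still work.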
00:24 · list

Four properties of a great memory system

  1. Remembers everything you said
  2. Lets you change the important stuff on the fly
  3. Plugs into every platform (kills info silos)
  4. Fuels every answer with context

Spec sheet for evaluating any memory architecture. Used as the pre-build checklist before showing the implementation.

Steal for: Template for evaluating any infrastructure pitch in 'Own your stack' content — define great first, then build to spec.
06:37 · concept

Project Operating Manual template

  1. What is the folder / what is the goal / why does it exist
  2. The stack (what you're building it with)
  3. Decisions already made (so we don't relitigate)
  4. Memory map - where each memory lives
  5. References

Per-project CLAUDE.md skeleton. Under 200 lines because it gets prepended to every conversation in that scope.

Steal for: Joe's MCN / ModBoard / Clip Lab / JoeFlow folders could each get this exact skeleton.
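Because the file is prepended to every conversation in scope, the under-200-lines rule is worth enforcing mechanically. A small lint sketch in Python; the section names come from the skeleton above, the checker itself is my own assumption:

```python
# Sketch of a lint for a project CLAUDE.md: keep it under the 200-line cap and
# confirm the skeleton's sections exist. Section names follow the template
# above; the checker is illustrative, not part of the video.
REQUIRED_SECTIONS = ("mission", "stack", "decisions", "memory map", "references")

def lint_claude_md(text, max_lines=200):
    problems = []
    lines = text.splitlines()
    if len(lines) > max_lines:
        problems.append(f"too long: {len(lines)} lines (cap {max_lines})")
    lowered = text.lower()
    for section in REQUIRED_SECTIONS:
        if section not in lowered:
            problems.append(f"missing section: {section}")
    return problems

ok = lint_claude_md("# Mission\n# Stack\n# Decisions\n# Memory map\n# References\n")
```

An empty result means the manual is lean and complete; anything else tells you exactly what to trim or add.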
02:12 · concept

Memory is an input, not a vault

Reframes memory as plumbing into every prompt rather than a passive store you look things up in. Every prompt silently pulls who you are, what you're shipping, what you started last month.

Steal for: Direct rhetorical match for Joe's 'plumbing you own vs utilities you rent' frame — AI memory as plumbing.
04:42 · concept

Outcomes-should-never-depend-on-history rule

Stress test for your memory system: open a new chat with zero context — does it still give the best advice? If not, the implicit/chat-history layer is doing work that should be explicit.

Steal for: Sharp testable assertion for any AI-tooling content Joe ships.
§ · Quotables

Lines you could clip.

00:00
Claude memory systems are a cheat code, but only if you use them properly.
Cold-open hook framing — promise + condition in one line. · TikTok hook
02:14
Memory is not a vault, it's an input. Every prompt silently pulls from your stack.
The thesis line. Quotable, screenshottable, owns the reframe. · IG reel cold open
02:25
How many times have you spoken with Claude or ChatGPT to get halfway through the conversation and it's talking complete Spanish?
Universally relatable pain point, vivid 'Spanish' metaphor. · TikTok hook
04:42
The outcome of the conversation should never ever depend on a chat history.
Strong testable rule. Reads like a principle, not a tip. · newsletter pull-quote
05:04
Models forget things, they get truncated, they hallucinate. If it matters, we need to make sure we're writing it down.
Mic-drop close on Layer 1. Reads as advice your friend gave you. · newsletter pull-quote
09:35
Most people over-complicate it or don't set it up properly, meaning they get none of the benefits but all the complexity.
Diagnoses the exact failure mode for the long-term-memory layer. · IG reel cold open
§ · Pacing

How they spent the runtime.

Hook length: 24s
Info density: high
Filler: 8%
§ · Resources Mentioned

Things they pointed at.

00:14 · tool · Claude (Anthropic)
02:10 · tool · ChatGPT
04:05 · tool · Anti-Gravity (Google IDE)
04:06 · tool · VS Code
08:30 · tool · Claude Desktop projects
08:45 · tool · Cowork
09:35 · tool · Pinecone
10:05 · tool · Obsidian
10:09 · concept · Karpathy Obsidian setup
10:15 · concept · Retrieval Augmented Generation (RAG)
14:05 · product · Glider (Jack's speech-to-text startup)
14:13 · channel · Alex Hormozi
§ · CTA Breakdown

How they asked for the click.

16:40 · next-video
Your memory system is only as strong as the skills that support it. So what we need to do next is learn what I call super skills that can make your memory system even more powerful, and we're gonna learn that by watching this video right here.

Open-loop close — no subscribe ask, no link, no product. Pure retention CTA pointing to the next video. Clean for tutorial format because it preserves the 'I just gave you something useful' afterglow.

§ · The Script

Word for word.

Tags: HOOK · opening / re-engagement · CTA · the pitch · metaphor · story
00:00 · HOOK · Claude memory systems are a cheat code, but only if you use them properly. And in this video, I'm gonna show you exactly how to set up your own second AI brain that works across all apps with an incredibly simple setup. So you can have a memory operating system that makes you 10 times more productive and stops wasting your time. And if you don't know who I am, my name is Jack Roberts. I built and sold my last tech startup with a gazillion customers. Now I build my own AI businesses, and I just share the stuff that actually works. So if you haven't already, grab that beautiful coffee and let's dive straight in. So this is the Claude memory system, and I've spent over a year looking at memory systems. I've done a lot of research. This is the simplest one that I've actually found. So when we talk about the Claude Code memory system, what do I actually mean? What does great look like? Let's define what that is before we think about what we're gonna build. Well, the first thing that a great memory system does is it remembers
00:55 · everything that you say. Right? Not just the last few messages, not just the thread, every meaningful exchange that you've ever had with it, it remembers it and we can capture it and refer to it whenever we want to, just like this beautiful filing cabinet here. So the first goal is it needs to remember what we've said. The second thing is that we can change the important stuff on the fly. Right? Maybe we don't wanna focus on strategy A anymore. We wanna focus on strategy B, or maybe your income's gone up and you wanna change and reflect that in the strategy. So we need the ability to change the important stuff with your role, your stack, whatever it is, we wanna be able to disregard
01:30 · HOOK · old information. The third thing that it needs to do, it needs to plug into every platform. This is what I call information silos. It's very common in AI: ChatGPT for this, Claude for that. Maybe your grandmother's basement for something completely different. We don't wanna be hopping between them. We need a central memory system, which is the universal point of truth. We call it the memory core in this system. Fourthly and finally, it needs to fuel every answer with context. What is the point of this memory system if we can't ask it questions about stuff that's happened and we can't get context-rich information when we ask it things about strategy? We have a prompt, we have our three-level memory system, then we get the output. That is the basic idea. Memory is not a vault, it's an input. Every prompt silently pulls from your stack. Who you are, what you're shipping, what you just started last month, so the response lands sharper than any fresh chat could ever hope to do. Now how many times have you spoken with Claude or ChatGPT to get halfway through the conversation and it's talking complete
02:27 · HOOK · Spanish? And it's because it has amnesia. Now the best way to solve for this is to think about your memory system as solving this across three levels. Okay? We have short-term memory, which is very simply, who am I? We have midterm memory, which is what am I doing? And then we have long-term memory, which is what's happened before and an expert knowledge base. I'll explain exactly what I mean. Alright. Now not to get metaphysical,
02:51 · but the first question you have to ask yourself is who am I? And I'm not talking about a glass of red wine at 4 AM. I mean, does your model actually know who you are? We call this the operating manual and it's the very first level, the very first tier of our three-level memory system. So think about it like this. Who am I? It's your name, your role, your current goals, the way you like your answers framed, the tools you use, tone, voice, non-negotiables, stuff that does not change
03:16 · every week, but defines every reply. So here's me for example. My name is Jack Roberts, AI YouTuber, direct, no fluff, no em dashes or emojis, vibe is Pacheco, if I'm talking about this particular presentation. Now the idea of this is that it lives natively in every single platform and it grows as you talk. So the idea with Claude, for example, is it has its own internal memory system which gets better over time. And when you express preferences or ask it things, it records that for you. So as you're actually talking to them and you say, hey, remember this, or I prefer X, Claude will actually remember that stuff for you. But one little hack I wanna show you real quickly. So first actionable step, go on Claude. If you're in Claude, come to the bottom left under Jack Roberts, click on it. Obviously, yours may not say Jack Roberts, unless we got the same name. I don't know. Go up to general. And what you're looking for here is instructions for Claude. Put some stuff in here. Try to make it no more than 200 words maximum.
04:04 · Just top level of how you want Claude to behave and any key information about you. Okay? Then if you're in Anti-Gravity, for example, all you do is click on the additional options and you click on customization. This will be the same in VS Code and various other forks. And you'll see you have global and you have workspace. You click on global and these are gonna be your GEMINI.md, which are the global instructions for how these models basically behave with you. Now what's important to bear in mind here is that there's two bits of memory. One is explicit hard-coded stuff, and then we have the generalized knowledge it has of you, which it will natively pick up based on your conversations. So this is an important thing to understand with these memory systems is that the outcome of the conversation
04:43 · should never ever depend on a chat history. I should be able with this system, and you can with this system, that's why it's so cool, to open a new chat window and regardless of any previous message, it should always give me the best advice possible. But think of this level one as your general top-of-funnel memory that identifies who you are. It's important to bear in mind that models forget things, they get truncated, they hallucinate.
05:06 · If it matters, we need to make sure we're writing it down. Okay. So real quick, what are you doing? Well, not right now. I mean, you and I are hanging out together. But I mean, what are you working on right now? Because this is the second level of our three-tier system. It's the projects you're doing. They may change from time to time. Maybe you're building a startup. Maybe you've got a really cool client or several cool clients. These are the things that you'll be doing right now and that forms the second level of our system. And that's why in this second level, it's actually quite structural. So what we're going to do essentially is create
05:35 · six or seven separate folders for all the things that we're doing. So for example, I might have one for my community. I might have one for my agency. I might have one for my startup. And, you know, one might be personal life and health and fitness, getting like in shape and all that kind of stuff. So what you need to do first of all is head over to Claude, or this could be your model of choice, and ask it this question. Hey. Based on everything that you know about me, all our conversations, all our history and projects,
06:00 · I want you to organize all the areas of my life and business into six to eight different categories that surmise everything that I'm doing. I'm using this to build up folders to best organize my life and my business. And once you've got that, you can basically create it. And it is surprisingly accurate, but you don't want any more than eight. Otherwise, it's way too much to manage. Now what we're gonna do based on these eight things, we'll come down to the workshop, and what we're effectively doing is creating one unique CLAUDE.md file for each of those eight projects. You could do it based on clients, but again, try to keep it under eight. Alright? So we've got the CLAUDE.md at the root that tells Claude what the project is.
06:37 · The memory folder beside it stores everything that evolves, the decisions, strategies, the summaries, the next actions. You can open it up in any platform and it just works. And what I've done for you as well, I've pulled together this project operating manual that you can literally copy and paste. And what I want you to do based on those eight things is come down and essentially create this per project. So this opens as a folder. So what happens here is that Claude will open this up before it touches any of your code. So it sits as a living
07:04 · medium-term memory that can change based on what you're working on at the moment. It's the midterm layer of our three-level memory system. Okay. So what's included in this folder? Well, we need to explain what the folder is, what the goal is, why does it exist. This exists to get me to 10% body fat. This exists to make me X thousand dollars. This exists to help my client do XYZ. The stack, what are you building it with? Decisions
07:26 · you've already made. Okay? So calls already made so we don't need to relitigate them. It needs to have a memory map about where each individual memory lives and any references that are relevant. Okay? Now the template is simply this. I'll let you basically copy and paste this, but essentially fill this out for everything. Okay? And here's an example of what that might look like based on everything you're doing. Now we wanna keep it under 200 lines because this is your CLAUDE.md. Effectively, what this will do is be prefilled into every single conversation, so we don't want it to be too big. It needs to be updated, needs to be accurate, and it's one folder per repo. So ask all the questions, copy this into these six folders, and then fill it out. Now if you are in the Claude desktop app, what you can do in Cowork, if you're using Cowork, is literally use what I would call the projects tab. So you click on projects, this is their version of that. Okay? These can be your six or seven different versions. Awesome. Now, if you're in Claude Code, again, I'd recommend Claude Code honestly because it's just everything that Cowork does, but like better and unlimited, and it's essentially the same thing. Then in Claude Code, all you would do is effectively create these six or seven folders on your desktop. And when you open it up, you can see all the folders on the left-hand side. And it's the exact same thing in Anti-Gravity. You can have all of the folders basically, and all you ever do is you open it up and you open up the folder that you want to chat in based on the conversations that you're having. And so the idea here is that whenever you do work, all you do is you think about which of the seven things is it about. It's about my business. Cool. When I click on business, it's about growing my LinkedIn. Then I click on LinkedIn. And effectively, do all your work inside that project for that specific thing. It has this living midterm strategy that may change. It is what we call mutable.
It is subject to change and we do all the work in those seven project folders based on the things that you want to do. And the third level is probably one of the most important, which is the long-term memory. And most people I've seen either drastically over-complicate it or don't set it up properly, meaning they get none of the benefits but all the complexity. So level three is the arcade. It basically answers the question, what happened before? What the hell happened before this? So let's get into a little bit of detail. Now to do this, we're gonna use one of two systems.
09:34 · Option one is gonna be something called Pinecone, which effectively is a big database. This looks really technical. You're probably thinking, Jack, this is way out of control. In reality, they're just five mystical bookshelves, which are very, very cool. In reality, what it looks like is you can ask questions to things like this on aiojack.com. All of my YouTube videos, any content I ever create is indexed into this beautiful thing in Pinecone. It is unbelievably
09:58 · easy to set up. And the other alternative we have is, you might have guessed it, something called Obsidian. And Obsidian itself has gained a lot of virality, especially a new system, what we call Karpathy Obsidian, which is considered an alternative to what we use in Pinecone, which is retrieval augmented generation. Now the idea of this is that it tracks all of the text for any topic you want to. You can amend it. Claude has access to all of it in its markdown files, which is really cool. And it understands basically everything. And the more that you basically use this, the better the system gets because it finds cross-linkages between everything. So this one here is a graph that you can see. This is an example one I have on YouTube, and you can see the relationships
10:40 · between each of the individual things. I know you're thinking, Jack, graphs are great, but will graphs solve all my problems? The answer to that question is yes, they will. No. They won't. Of course, they won't. But, yeah, this is the idea of Obsidian, basically. And, obviously, you can do little funny things with it, which is, like, half the fun is moving these sliders up and down, basically. But this is what they call Karpathy Obsidian, which is his alternative to a complex RAG system. Now this isn't a right or wrong. You can pick either one of these two systems that you like. Let me help you make an informed decision if I may. So if you look at Obsidian, which is the node one that I showed you, you use this when you want to read and edit the memory by hand. If you wanna physically look at your memories, double-click in. For example, I wanna click on this. I wanna see what this says. I wanna come down and read it. Then you may wanna go for something like Obsidian. You can create strategy notes, decision frameworks, visual graphs, and backlinks that are important to you, and if you wanna zero in on it, you just want the files, Obsidian might be one for you. I'll put a full link on screen right now if you wanna double-click and set up Obsidian. I'll go through the entire thing. Alternatively, you have Pinecone. If you want semantic search across thousands of records, this is way more scalable, can work anywhere. You can publish it in apps. It can be accessed from anywhere on the planet. You know, you wanna store wrap-ups on every session. If it's for scale, so you wanna store longer information like books and transcripts, it is incredible for that. In other words, if you want a readable file, use Obsidian. If you want an indexed, searchable summary, use Pinecone. If you're asking what I personally use, I personally use Pinecone just because I don't wanna read through everything in Obsidian, but Obsidian does really have its genuine use cases.
Now there are two levels to long term memory.
12:11 · One is an archive of any conversation that you've ever had. Now the really cool thing is whether this is in Claude, whether this is in OpenClear, Hermes, or any system you're using, we can, if we're using Pinecone, for example, send all this information to Pinecone and we can index it. So for example, if I have a big conversation with this, all I do is I create a skill. I can come down, do forward slash wrap-up, and I have a wrap-up skill that takes the entire conversation
12:37 · and embeds it in Pinecone. Now if you wanna know how to set this wrap-up skill up, I'll put a link on screen for you so you can go through my full Pinecone video that breaks down the entire system for you, my Karpathy video. And then the second level of long-term memory here, guys, is what I would call knowledge. Deep knowledge. So for example, I might wanna have a YouTube expertise
12:55 · database in my Pinecone, for example. And again, you could do this with Obsidian if you like to. But the idea here is this one might be YouTube. And this could be all the information that I want for growing on YouTube, or it could be my business or whatever it is. And then what I'm doing is when I'm having conversations, your AI agent or your AI conversation can call
13:13 · and reach over and conversate with this knowledge base when it's relevant. And so in the project folder section that we covered in section two, this is where it all interconnects together. Okay. So let's say for example that one of my folders is Glider. Right? Glider is the speech-to-text startup that I founded, lets you yap into a computer. In the Glider folder for example, in here, I might say, look, these are the indexes in Pinecone I want you to consult
13:40 · when I ask you strategic questions. This is my long-term memory. In addition to that, if I ask you for any history, consult this index. Then in Pinecone, I might have an index over here, which could be Glider strategy or it could be business strategy or Hormozi or whatever it is, and it will call that long-term memory, that expert knowledge and history to help and improve the current conversation. Now to actually build out that long-term knowledge, there's two systems we can do. One is to connect this to NotebookLM.
14:07 · I'll put a link down below. But for example, I could say something like, hey, go and do some research on business strategy according to Alex Hormozi. Go create for me a notebook in my NotebookLM and just get me like 50 different resources, as much as you possibly can, and create that notebook for me. Send that straight off. Now what's really cool about this is it will actually curate, and you have the power of Claude or ChatGPT 5.5
14:29 · going ahead and building those notebooks for you, which is incredible. Then we can go over, grab that information, save it on our desktop, and even vectorize it into Pinecone. Or just like we did with Obsidian, have this beautiful thing over here, which I opened the graph for so you can see, because it's very shiny and we like lots of graphs over here. Just use this. Now this is my, you know, Hormozi business strategy thing that I can ask questions to. Again, I'll put a link on screen for the NotebookLM deep dive if you wanna double-click and learn more about that strategy. The other one that I absolutely love doing is Firecrawl. So for example, I could say something like this. Hey. I want you to use my Firecrawl integration
15:06 · and go ahead and do some deep research for me on Hormozi's best five practices for scaling a business past $10,000 a month. Okay. Do that. Send that one off. Of course, we can connect Firecrawl through the connectors. Click on connectors over here. You can see I've got Firecrawl installed. And all you literally do is come over, you grab the API key, you come back over to the connectors, then what you would do is click on the plus, click on add custom connector, and then right here, you just type in Firecrawl, and then remote MCP server, just enter in this right here. Where I've got API key, you just enter your Firecrawl key. And then basically, what that will then be able to do is use the Firecrawl agent, and this can save you like 80% of the cost, and it's just way more efficient in getting accurate data for agents. So what this basically means is we have an archive of every conversation. So every meaningful conversation ends with a wrap-up that summarizes decisions, next actions, makes the metadata,
15:55 · the summary lands in the archive. So if I ever say, hey, what was that thing we discussed about this really important event last January? We've got it. And when we do index it, we have the date and we can set all those different filters in Pinecone or locally on a computer if we're using Obsidian, which is really cool. That's the exact process we went through. Then we covered the expert knowledge that we've embedded. We've used Firecrawl to go get some deep research. We can use NotebookLM to build beautiful, detailed,
16:21 · CTA · impressive notebooks on any topic we want to using the world's strongest AI research model, which is freaking incredible. And we can pull that down anytime we want to in our memory system. So we have the three layers. We have short-term, midterm, and long-term. Level one is who we are. Level two is what we're doing, and level three is everything that's happened before and that expert knowledge. Now your memory system is only as strong as the skills that support it. So what we need to do next is learn what I call super skills that can make your memory system even more powerful, and we're gonna learn that by watching this video right here.
§ · For Joe

Steal the three-question taxonomy.

Memory-as-infrastructure playbook

Memory is not storage — it's the plumbing that makes every prompt sharper than the last.

  • Frame any AI-tooling content with the three-question collapse: Who am I? / What am I doing? / What happened before? It's screenshot-friendly and survives the platform port.
  • Build the L2 Workshop as an actual file system: 6-8 project folders, CLAUDE.md at root, sibling memory/ directory with decisions / current-strategy / next-actions / session-summaries.
  • Lift Jack's 'memory is an input, not a vault' line — rhetorical sibling to Joe's 'plumbing you own vs utilities you rent'.
  • Use the testable rule as a sharp closer: 'the outcome should never depend on chat history' — if it matters, write it down.
  • Steal the slide aesthetic: treasure-map / parchment over generic-AI gradient cards. Visual differentiation matters in this niche.
  • Open-loop close to a 'super-skills' follow-up rather than a subscribe-ask — preserves the gift afterglow on tutorial content.
§ · For You

Build your own AI memory system this week.

If you're using Claude Code (or any AI tool) daily

Stop re-explaining yourself to your AI every morning — install a three-layer memory once and every conversation gets sharper.

  • Layer 1 (today, 10 minutes): write a 200-word identity file. Name, role, tone preferences, non-negotiables, current stack. Paste it into Claude's Instructions, Cursor/VS Code global rules, and ChatGPT's Custom Instructions.
  • Layer 2 (this week): ask Claude to bucket your work into 6-8 categories. Make a folder per category with a CLAUDE.md at the root and a memory/ subfolder beside it. One per real life-area (not per app).
  • Layer 3 (when you're ready): pick ONE — Obsidian if you like reading your notes; Pinecone if you want semantic search and don't care about reading. Don't set up both. Don't make it complicated.
  • End every meaningful AI conversation with a wrap-up: ask the model to summarize the decisions, next actions, and key insights from this session. Save the result into your memory folder.
  • Test the system: open a fresh chat window with no history. If you can't get great advice immediately, the memory layer is incomplete — something important is only living in chat history.
  • Don't over-build. Most people fail Layer 3 by piling on tools (Pinecone + Obsidian + ChromaDB + a custom RAG). Pick one. Cap project folders at eight. If it takes more than an evening to set up, you're doing it wrong.
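If it helps, the Layer 2 step above can be stamped out with a short script. This is a sketch only: the category names, artifact names, and template headings are placeholders to adapt, not a prescribed layout:

```python
# Hypothetical scaffold for the Layer 2 "Workshop": one folder per
# life/business category, a CLAUDE.md at each root, and a memory/ subfolder
# of evolving artifacts. All names here are placeholders.
from pathlib import Path

CATEGORIES = ["community", "agency", "startup", "personal-health"]  # aim for 6-8
ARTIFACTS = ["decisions", "current-strategy", "next-actions", "session-summaries"]

TEMPLATE = """# {name} - Project Operating Manual

## Mission
(what this folder is for, the goal, why it exists)

## Stack

## Decisions already made

## Memory map
{memory_map}

## References
"""

def scaffold(root):
    for name in CATEGORIES:
        project = Path(root) / name
        memory = project / "memory"
        memory.mkdir(parents=True, exist_ok=True)  # creates project dir too
        memory_map = "\n".join(f"- {a}: memory/{a}.md" for a in ARTIFACTS)
        (project / "CLAUDE.md").write_text(
            TEMPLATE.format(name=name, memory_map=memory_map))
        for a in ARTIFACTS:
            (memory / f"{a}.md").touch()

scaffold("workshop")
```

Run it once, then fill each CLAUDE.md in by hand; the script only builds the skeleton, the content is the part that matters.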
§ · Frame Gallery

Visual moments.