Stop Juggling AI Tools — How to Build a Second Brain That Actually Works for You
GenAI 30 Project Challenge - 8 RAG
Do you ever feel like you're drowning in a sea of AI tools?
Notion AI, ChatGPT, Claude, NotebookLM, Perplexity … They all promise to be your "second brain", but instead of feeling smarter, you're just more scattered.
I’ve tried them all. Some days, one tool feels more useful than the others. But there’s never a clear winner, never that one tool that truly earns the title of “my second brain”. They all seem so similar on the surface, yet each one insists it’s fundamentally different.
That strange tension, between sameness and difference, sat in the back of my mind for months. I'd switch between tools based on mood, project, or whatever shiny new feature caught my attention. The constant switching was exhausting, but I never gave it much thought until recently, when I was building a personal website and started experimenting with RAG (Retrieval-Augmented Generation).
As part of my GenAI 30 project challenge, I set out to create something that felt personal and practical: an AI that actually understands my writing and can talk back with context.
What I discovered in the process changed everything, not just about AI tools, but about how to think strategically about the ones we already use.
1. What Is RAG, Really?
If you’re like me, not so deep into technical details, the first time you hear “RAG”, it might sound like another AI buzzword: mysterious, essential, maybe even overhyped.
But once I built one, I realized: at its core, RAG is just smarter search.
Unlike keyword search, which only finds exact word matches, RAG understands meaning. For example:
Keyword search for "dog"
Finds: documents with the word "dog"
Misses: "puppies," "canines," "golden retrievers"
RAG search for "dog"
Finds: all of the above, plus related topics like pet care, training, and veterinarians
RAG perfectly captures the linguist J.R. Firth's famous maxim:
You shall know a word by the company it keeps.
When you ask a question, RAG turns your question into a meaning-code (called an embedding), finds content with similar meanings, and feeds that context into a language model to generate a grounded response.
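In code, that loop is shorter than it sounds. Here's a minimal sketch in which tiny hand-made vectors stand in for real embeddings so it runs offline; in practice an embedding model would produce the vectors, and the document titles are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in a real system these come from an embedding model.
documents = {
    "puppy training basics": [0.9, 0.1, 0.0],
    "golden retriever care": [0.8, 0.2, 0.1],
    "quarterly tax filing":  [0.0, 0.1, 0.9],
}

def retrieve(query_vector, top_k=2):
    """Rank documents by semantic closeness to the query vector."""
    scored = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [title for title, _ in scored[:top_k]]

# A query vector near the "dog" region of the space finds dog content,
# even though the word "dog" appears in none of the titles.
print(retrieve([0.85, 0.15, 0.05]))
# → ['puppy training basics', 'golden retriever care']
```

The retrieved chunks then get pasted into the language model's prompt as context, which is the "augmented generation" half of the name.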
Perfect. That’s exactly what I need for my website.
2. Building the System: RAG Integration
I set out to build a RAG system that could serve as a personal AI assistant, one that actually understands my writing, remembers my topics, and can answer questions with my context.
The Goals:
1. Fetch and process my Substack articles
2. Create semantic chunks optimized for retrieval
3. Generate embeddings for similarity search
4. Build a chat interface that feels natural and responsive
5. Provide source citations so users know where information comes from
The Architecture
I chose a three-phase approach that balanced functionality with cost-effectiveness:
Phase 1: Content Processing
Fetched 30+ articles automatically from my feed
Broke content into 500-word chunks with 50-word overlaps
Extracted titles, URLs, publication dates, and topic tags
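The chunking step above can be sketched as a simple word-window function. The 500/50 numbers match what I used; the helper name and plain word-based splitting are simplifications (the real pipeline also attaches each chunk's title, URL, and tags):

```python
def chunk_words(text, chunk_size=500, overlap=50):
    """Split text into word-count chunks; each chunk repeats the last
    `overlap` words of the previous one so ideas aren't cut mid-thought."""
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the last window already covered the tail
    return chunks

# A 1,200-word article yields three overlapping chunks.
article = " ".join(f"w{i}" for i in range(1200))
print(len(chunk_words(article)))  # 3
```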
Phase 2: Embeddings & Search
Used OpenAI's text-embedding-3-small for vector representations
Implemented cosine similarity for finding relevant chunks
Stored everything locally in JSON to keep costs minimal
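Phase 2 fits in a surprisingly small sketch. The embedding call appears only as a comment, two-dimensional toy vectors keep the example runnable offline, and the JSON field names are illustrative rather than my production schema:

```python
import json
import math
import os
import tempfile

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Tiny stand-in for the real index: in production each "vector" is an
# embedding returned by OpenAI, roughly
#   client.embeddings.create(model="text-embedding-3-small", input=chunk)
index = [
    {"title": "On AI workflows", "vector": [0.9, 0.2]},
    {"title": "Travel notes", "vector": [0.1, 0.95]},
]

def save_index(path, entries):
    """Persist embedded chunks as plain JSON: no vector database needed."""
    with open(path, "w") as f:
        json.dump(entries, f)

def load_index(path):
    with open(path) as f:
        return json.load(f)

def top_matches(entries, query_vector, top_k=1):
    """Return (title, similarity) pairs, best match first."""
    scored = [(e["title"], cosine(query_vector, e["vector"])) for e in entries]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

path = os.path.join(tempfile.gettempdir(), "rag_index_demo.json")
save_index(path, index)
restored = load_index(path)
print(top_matches(restored, [0.85, 0.3]))  # "On AI workflows" ranks first
```

At this scale, a flat JSON file plus a linear cosine scan is all the "vector database" you need.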
Phase 3: Chat Interface
Built a responsive floating chat widget
Added proper markdown rendering for human-friendly responses
Included source citations with similarity scores
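Each citation line is just the retrieved chunk's metadata formatted as markdown; the field names and layout here are an illustrative sketch, not my exact widget markup:

```python
def format_citation(title, url, score):
    """Render one markdown source line with its similarity score."""
    return f"Source: [{title}]({url}) (similarity {score:.2f})"

print(format_citation("My RAG Post", "https://example.com/rag", 0.7342))
# → Source: [My RAG Post](https://example.com/rag) (similarity 0.73)
```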
The Results & Numbers
30+ articles processed and indexed
250 semantic chunks, each embedded with OpenAI
1,536-dimensional vectors per chunk (text-embedding-3-small's default)
Response times of a few seconds for most queries
Similarity scores ranging from 0.3–0.8 (higher scores mean better matches)
Total embedding cost: ~$0.03
Zero hallucinations: all answers grounded in my content
What I Learned
Chunking Strategy Matters
500-word chunks with overlaps preserved meaning while keeping search sharp. Go too small and you lose context; go too big and results get fuzzy.
Semantic Search Feels Like Magic
Even vague queries like "How do I build an AI workflow?" surfaced the right ideas. The system understood the intent behind the words.
User Experience Is Everything
Having a chat interface right on the site made it feel human. And markdown formatting? Underrated.
Building RAG from scratch was one of my most rewarding GenAI projects.
It combined embeddings, similarity search, and language generation into something genuinely useful.
The most exciting part? Watching the system understand the nuances of my writing and provide contextually relevant responses as I keep writing. It's like having a research assistant who's read everything I've written and can instantly recall the most relevant information.
Want to try the RAG system yourself? Check out the floating chat widget on my website - it's powered by everything described in this section!
3. The Revelation: RAG Is Everywhere
After building this project, I found myself thinking:
Wait, that’s it? That’s RAG?
It almost feels deceptively simple.
But that shift—from matching keywords to understanding intent—is what makes RAG so powerful.
And once I saw it working in my own system, I couldn’t stop thinking:
What if many of the tools we already use as our "second brains" (e.g. Obsidian, Notion, Cursor) are quietly using RAG-like concepts under the hood?
This realization sent me down a rabbit hole that changed how I see every AI tool I use.
🧠 Obsidian: The Structured Mind
Obsidian might be the closest thing we have to a "second brain" in its raw form.
It’s a local-first, Markdown-based note app with bi-directional linking and a knowledge graph. Out of the box, it has no AI, no embeddings, just structure.
But once I added plugins and connected it to an LLM API, something magical happened. I could analyze writing patterns, summarize content, and expand on ideas with contextual awareness.
To push this further, I downloaded a few local LLMs via Ollama for experimentation. Some plugins were hit or miss, but the one that really stood out was Copilot, which helps with summarization, rewriting, and expansion.

The writing assistant wasn’t just spitting back keywords, it was surfacing concepts, suggesting phrasing, and giving feedback based on meaning.
It wasn’t marketed as RAG. But it sure behaved like one.
⚙️ Automation + Notion + AI
If I had to pick one concept that feels revolutionary every time I use it, it's automation. It shows up everywhere, in tutorials and workflows, and yet it never loses its impact, especially when combined with Notion's structured database architecture.
I built a system using n8n to pull AI-related emails from Gmail, send them to an API, and store both original and AI-enhanced content in a Notion database. Once the data's there, querying it feels like semantic search.
I can ask:
“What AI projects involve voice transcription?”
Or:
“What’s related to gaming?”
Even if the exact words aren’t there, the results come back relevant.
Again: not officially RAG, but definitely RAG logic.
💻 Cursor: My Silent Second Brain
Here's where it gets interesting. If I were to rebuild my RAG website today but have it operating fully locally, I’d just open Cursor, point it to my folder of writing, and ask questions directly.
That’s how I already use it for writing, coding, brainstorming, and editing.
And here’s the kicker: the foundation is exactly the same.
Cursor works by embedding your files, running similarity search, and feeding that context into a language model, just like the RAG architecture I built for the challenge project.
Sometimes I tell people about how I use Cursor for edits or project guidance and they look at me like something’s wrong with me. But to me, this is exactly what a second brain should feel like.
It surfaces relevant insights
It understands the task
It gives suggestions in real time, grounded in your own work
In other words, it’s RAG, but disguised as a coding editor.
🖼 Beyond Text: Visual “RAG”?
After all this reflection on text, I started wondering: what about images?
This took me back to my very first GenAI challenge: searching through a giant folder of unlabeled, randomly named images. There were no tags or meaningful filenames, yet I could type in queries and instantly find what I needed.
Behind the scenes, it used:
OpenAI’s CLIP model to embed both images and text into the same vector space
Cosine similarity to match text prompts to visual content
Local processing, no external database or LLM needed
Now, to be clear, this isn’t full RAG. There's no context augmentation or generation step. But it's definitely RAG-adjacent.
Even if it’s not “RAG” by definition, it’s built from the same mental blueprint.
⚡ Real-Time RAGs You’re Already Using
After seeing how RAG concepts work with both text and images, I started noticing them in everyday tools, even the ones that don’t advertise it.
Take Unsplash. You search “cozy winter cabin”, and it returns exactly that, even without matching filenames. It’s not keyword search. It’s using image-text embeddings, just like CLIP.
Or NotebookLM, Google’s AI notebook. You upload documents, and it answers questions by retrieving relevant content and generating responses. That’s RAG in action.
Even Spotify’s search is starting to feel RAG-like. When you type “lo-fi beats for focus”, it’s not just matching genres, it’s understanding vibe, context, and intent.
These tools may not say RAG, but behind the scenes, they’re doing the same thing:
retrieve → understand → respond.
So yes, RAG is technical, but it’s already part of your daily life.
4. The Strategic Shift: Choosing the Right Second Brain (For You)
Once I realized all these tools share the same underlying logic, I finally understood why I kept jumping between them: I was treating them as different species when they're really different breeds of the same animal.
Instead of comparing feature lists, I started asking the right questions:
What kind of content does this tool retrieve best? Cursor excels with code and project files. NotebookLM shines with documents and research. Obsidian works magic with interconnected notes.
How does it understand context? Some tools prioritize recent interactions, others weight similarity, still others factor in your behavioral patterns.
What style of response fits my thinking? Do I want conversational back-and-forth, quick contextual suggestions, or structured analysis?
The tools aren't really competing, they're optimized for different types of thinking. And once you understand their RAG DNA, you can match them to your actual workflow instead of chasing the latest features.
The Anti-Juggling Framework
Here's the three-step process that finally ended my tool-jumping:
Step 1: Map Your Thinking Patterns
Before choosing any tool, spend one week tracking when and why you reach for AI help. Notice:
Do you need quick answers while writing? (That's retrieval-heavy)
Are you brainstorming and connecting ideas? (That's exploration-heavy)
Do you work with structured data and documents? (That's organization-heavy)
Most people are strongest in one area. Identify yours first.
Don't just guess, actually track this. Keep a simple note for a week: "Tuesday 2pm: Needed help explaining a complex concept to a client (retrieval-heavy)" or "Thursday 10am: Stuck on project direction, needed to explore possibilities (exploration-heavy)."
Step 2: Choose Your Foundation Tool
Based on your dominant pattern, pick ONE primary tool:
Retrieval-heavy: Start with Cursor or NotebookLM—they excel at finding and surfacing relevant context from your existing work.
Exploration-heavy: Obsidian with AI plugins gives you the best thinking playground for connecting ideas and discovering new insights.
Organization-heavy: Notion AI handles structured content and workflows seamlessly, especially when combined with automation.
Here's the crucial part: resist the urge to optimize. Your foundation tool doesn't need to be perfect at everything. It just needs to be excellent at your primary thinking pattern.
Step 3: Build Your RAG Mindset
Now that you understand how these tools really work (they're all doing retrieve → understand → respond), you can use them more intentionally:
Feed them your actual work, not random prompts.
Upload your documents, connect your projects, import your notes. The magic happens when the tool understands your specific context.
Ask questions that build on previous context.
Instead of starting fresh each time, reference earlier conversations, build on previous insights, create threads of thought.
Let them suggest connections you might have missed.
Use their pattern recognition to surface relationships between your ideas, projects, or research.
Treat their outputs as thinking partners, not final answers.
The best interactions happen when you're collaborating with the tool, not just consuming its outputs.
The Depth Principle
The magic isn't in the tool, it's in creating a consistent context that gets smarter over time. Every conversation, every document you add, every question you ask builds a richer understanding of your work and thinking patterns.
This is why tool-jumping is so counterproductive. Each time you switch, you lose that accumulated context. You're essentially starting over with each new tool, never allowing any single system to truly understand your work.
Your Turn To Start
Pick your foundation tool tonight. Based on your dominant thinking pattern, choose one tool and commit to it for at least 30 days.
Tomorrow, spend 10 minutes feeding it something you're actually working on. A current project, a folder of notes, a document you're struggling with. Ask it one real question about your work — not a test question, but something you genuinely want to explore.
That's it. No complex setup, no perfect system. Just one tool, one context, one question.
The most important thing isn't which tool you use. It's asking:
What do I need help with?
And what's the most comfortable, frictionless way to get there?
The juggling stops when you commit to depth over breadth. The tools aren't competing for your attention, they're here to support how you think, how you create, and how you want to work.
Fantastic article, Jenny!
I'm currently working on a RAG project at my 9-to-5, testing different chunking and indexing strategies. Having small, overlapping chunks improved the specificity, like you said.
I started exploring MCP to gain more control over database-sourced content and built a simple MCP server prototype over the weekend. It was fun to see how the qwen3:32b model, running locally on Ollama, was able to figure out SQL queries from the database schema and my vague prompt. It even fixed its own broken queries until it retrieved the data I requested. It took me a few hours to get this multi-step looping process, where the LLM self-corrects on error, working correctly. Now my mind is racing as I think of ways to apply this idea to other problems. Self-improving learning loops would be like rocket fuel, accelerating problem-solving.
The ability for an LLM to call different tools using MCP (Model Context Protocol) opens up a new path, leveraging AI capabilities and other content types. I used the FastMCP library by Jeremiah Lowin - worth checking out.
I started using Cursor a few days ago and connected it to my Obsidian vault, and got a very similar experience to what you described.
Thanks for pointing out how RAG is embedded everywhere. I can see that, too, after building one myself. Looks like we are going through similar paths.
What a breakdown of RAG, Jenny! You definitely sparked some ideas I will be experimenting with. Thank you! As always, awesome work. 🙌🙌