
Building a Brain for My Homelab


I have three machines, a NAS, and a problem that every developer running AI tools eventually hits: nothing remembers anything.

Claude Code forgets your architecture decisions between sessions. Your Slack bot has no idea what your CLI tool just did. Your overnight automation repeats the same mistake because no one told it what happened yesterday. Every tool is an amnesiac operating in isolation.

So I built a brain. Not metaphorically — I literally mapped my homelab infrastructure to neuroscience.

The Architecture

It started with one observation: the human brain doesn't have a single "memory system." It has specialized structures that handle different kinds of information, connected by pathways that route signals between them.

My homelab now works the same way:

Cortex is the hippocampus — long-term memory formation and retrieval. It's a FastAPI service backed by PostgreSQL and ChromaDB that stores everything my AI agents learn. Every memory gets auto-embedded, auto-linked to related memories, and auto-clustered by topic. Schema-on-read, not schema-on-write — in effect, a self-organizing data lake for agent memory.
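The auto-linking idea can be sketched in a few lines. This is a toy in-memory stand-in, not Cortex's actual code: it uses a bag-of-words vector where the real service uses ChromaDB embeddings, and every name here is illustrative.

```python
from collections import Counter
from math import sqrt


class MemoryStore:
    """Toy stand-in for Cortex: store text, auto-link similar memories."""

    def __init__(self, link_threshold: float = 0.2):
        self.memories: dict[int, str] = {}
        self.edges: set[tuple[int, int]] = set()
        self.link_threshold = link_threshold

    def _embed(self, text: str) -> Counter:
        # Bag-of-words "embedding"; the real service uses dense vectors.
        return Counter(text.lower().split())

    def _similarity(self, a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def add(self, text: str) -> int:
        new_id = len(self.memories)
        new_vec = self._embed(text)
        # Auto-link: connect to every existing memory above the threshold.
        for mid, existing in self.memories.items():
            if self._similarity(new_vec, self._embed(existing)) >= self.link_threshold:
                self.edges.add((mid, new_id))
        self.memories[new_id] = text
        return new_id
```

The point of schema-on-read is visible here: `add` accepts raw text with no upfront structure, and the relationships emerge from similarity at write time rather than from a predefined schema.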

Thalamus is the sensory gate — it sits on the MQTT bus, watches every event flowing through the system, and decides what's worth escalating. Not everything needs attention: most events get filtered out as noise, and only high-priority anomalies are persisted and surfaced.
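The gating decision itself is simple to express. A minimal sketch, assuming hypothetical event fields (`topic`, `priority`) — the real MQTT payload schema isn't shown in the post:

```python
def gate(event: dict, priority_floor: int = 7) -> bool:
    """Return True if an event should be escalated to long-term storage.

    Field names and the priority scale are illustrative, not the real schema.
    """
    # Chatty topics are dropped unconditionally, whatever their priority.
    noisy_topics = {"heartbeat", "telemetry/raw"}
    if event.get("topic") in noisy_topics:
        return False
    # Everything else passes only above the priority floor.
    return event.get("priority", 0) >= priority_floor
```

Hooked up to an MQTT client's on-message callback, a filter like this is the whole thalamus pattern: decide first, persist second.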

Amygdala is the threat detector — a nightly security review worker that wakes at 2 AM, collects data from every system (git commits, Docker container states, MCP server integrity, dead-letter queues, Cortex audit logs), and runs a three-perspective LLM review: Lead Analyst, Adversarial Reviewer, and Moderator. The final report arrives in Slack before I wake up.
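The three-perspective review is a sequential chain, which can be sketched independently of any particular model. Here `run_llm` is a placeholder for whatever completion call you wire in (a local model, an API client); the role prompts are paraphrased from the post and the signature is an assumption:

```python
from typing import Callable


def three_perspective_review(findings: str, run_llm: Callable[[str, str], str]) -> str:
    """Chain three LLM passes: analyze, attack the analysis, then reconcile.

    run_llm(role_prompt, content) is any completion function you supply.
    """
    analysis = run_llm("You are the Lead Analyst. Flag real security risks.", findings)
    critique = run_llm("You are the Adversarial Reviewer. Challenge the analysis.", analysis)
    # The Moderator sees both prior passes and produces the final report.
    return run_llm(
        "You are the Moderator. Reconcile both views into a final report.",
        analysis + "\n---\n" + critique,
    )
```

The value of the chain is that the second pass is explicitly adversarial: a single-pass review tends to rubber-stamp its own findings, while a reviewer prompted to attack them surfaces weak reasoning before the report ships to Slack.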

Cerebellum is the reinforcement learning layer — it observes task outcomes nightly and warms or cools the edges in Cortex's knowledge graph. Memories that led to good outcomes get easier to find. Memories that led to failures get deprioritized. The graph literally learns from experience.
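Warming and cooling edges is a small update rule. The post doesn't show Cerebellum's actual rule, so this is one plausible version: successes pull a weight toward 1, failures toward 0, with weights clamped to [0, 1] by construction.

```python
def reinforce(
    weights: dict[tuple[str, str], float],
    outcomes: list[tuple[tuple[str, str], bool]],
    lr: float = 0.1,
) -> dict[tuple[str, str], float]:
    """Warm edges on success, cool them on failure.

    Illustrative update rule: move each weight a fraction lr of the
    distance toward 1 (success) or toward 0 (failure). Unseen edges
    start at a neutral 0.5.
    """
    updated = dict(weights)
    for edge, success in outcomes:
        w = updated.get(edge, 0.5)
        delta = lr * (1 - w) if success else -lr * w
        updated[edge] = w + delta
    return updated
```

Because the step size shrinks as a weight approaches either extreme, well-established edges are stable and a single bad night can't erase months of reinforcement — a property you probably want in any rule you substitute here.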

Forge is the prefrontal cortex — the executive function. It's a Redis-based queue system that orchestrates 15 specialized workers. It handles task scheduling, GPU coordination, retry logic, and a multi-step overnight chain that runs autonomously every night.
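The retry logic at Forge's core is worth making concrete. This is an in-memory sketch of the pattern, not Forge's Redis implementation: failed tasks re-enter the queue until an attempt cap, then land in a dead-letter list (the same dead-letter data the amygdala reviews nightly).

```python
import queue


def run_with_retries(tasks, handler, max_attempts=3):
    """Process tasks with bounded retries and a dead-letter list.

    In-memory illustration only; the real Forge does this against Redis,
    and the handler/task shapes here are assumptions.
    """
    q = queue.Queue()
    for t in tasks:
        q.put((t, 1))  # (task, attempt number)
    done, dead_letter = [], []
    while not q.empty():
        task, attempt = q.get()
        try:
            handler(task)
            done.append(task)
        except Exception:
            if attempt < max_attempts:
                q.put((task, attempt + 1))  # re-queue for another try
            else:
                dead_letter.append(task)  # give up; surface for review
    return done, dead_letter
```

Swapping the `queue.Queue` for a Redis list (push on one end, blocking-pop on the other) gives you the same behavior across processes and machines, which is the executive-function role Forge plays.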

Home Assistant is the motor cortex — it controls the physical world. Lights, cameras, sensors, NFC triggers. Connected to everything else via MQTT.

Nerve is the corpus callosum — the integrity layer that ensures all MCP server configurations stay correct. It maintains a canonical registry and auto-restores missing or broken configs.
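The canonical-registry idea reduces to a reconciliation function: compare live configs against the registry and restore anything missing or drifted. A minimal sketch, with hypothetical config shapes:

```python
def restore_configs(canonical: dict[str, dict], live: dict[str, dict]) -> dict[str, dict]:
    """Return the live config set with drifted or missing entries restored.

    canonical is the registry of known-good MCP server configs; live is
    what's currently on disk. Entries that exist only in live are left
    untouched. Dict shapes here are illustrative.
    """
    repaired = dict(live)
    for name, spec in canonical.items():
        if repaired.get(name) != spec:
            repaired[name] = spec  # restore from the canonical copy
    return repaired
```

Run on a schedule, a reconciler like this makes the registry the source of truth: manual edits and broken tool updates get quietly reverted instead of accumulating.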

Why the Metaphor Matters

This isn't branding. The neuroscience mapping is an architectural decision.

When I need to add a new capability, I ask: "What brain structure handles this?" That question constrains the design in useful ways. A new monitoring system? That's a thalamus concern — it should filter, not store. A new security check? Amygdala — it should be threat-focused and run on a schedule. A new automation? Motor cortex — Home Assistant territory.

The metaphor prevents the thing that kills most homelab projects: everything becoming a monolith.

What It Looks Like in Practice

Every night, this happens automatically:

  1. GPU warmup — load the model into VRAM
  2. Amygdala — security review across all systems
  3. Morning briefing — summarize what happened overnight
  4. Article scanner — scan 18 sources for relevant tech news
  5. arXiv scout — check for new papers in my areas of interest
  6. Code review — scan git commits across all repos
  7. PR digest — summarize open pull requests
  8. Cerebellum — reinforce knowledge graph edges from task outcomes
  9. Consolidation — prune and compress Cortex memories
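The chain above can be sketched as a sequential runner. The key property is fault isolation: one failing step is reported and skipped, so a bad arXiv scrape can't block the security review. Step names and the notify callback are illustrative.

```python
def run_chain(steps, notify):
    """Run nightly steps in order; report failures without halting the chain.

    steps is a list of (name, zero-arg callable); notify takes a message
    string (in practice, a Slack webhook). Both signatures are assumptions.
    """
    results = {}
    for name, step in steps:
        try:
            results[name] = step()
        except Exception as exc:
            notify(f"{name} failed: {exc}")  # surface the failure...
            results[name] = None             # ...but keep the chain moving
    return results
```

This is the simplest shape that gives "by morning, everything that could run has run" — each step's failure becomes an item in the briefing rather than a dead night.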

By morning, I have a security report, a tech news digest, code reviews, and a briefing. All produced by local models running on my Mac Mini. No cloud APIs. No usage fees. No data leaving my network.

The Numbers

As of April 2026, Cortex holds 1,400+ active memories across 30 projects, connected by 14,100+ auto-generated relationship edges. Full-text search returns results in under 3ms. The overnight chain has been running autonomously for months.

The whole system runs on three machines and a NAS in my house. A Mac Mini M4 with 24GB (forge) handles AI and development. A Mac Mini 2018 with an Intel i7 and 64GB (sentinel) runs Docker services, Home Assistant, and media. A MacBook Air M1 with 16GB (scout) handles portable work and remote access via Tailscale. A UGREEN 2-bay NAS provides 8TB of storage.

What I Learned

Three things I didn't expect to learn:

  • Memory is infrastructure, not a feature. Every AI tool I've used treats memory as an afterthought. But memory is what turns a tool into a collaborator. Without it, every session starts from zero.
  • Local-first is an advantage, not a limitation. Sub-3ms search requires the database to be co-located with the agent. Cloud APIs add 50-200ms minimum. For interactive use, that's the difference between seamless and sluggish.
  • Autonomous overnight workflows change your relationship with your tools. I don't check for security issues — the amygdala does. I don't scan for papers — the arXiv scout does. The brain runs while I sleep, and the results are waiting when I wake up.

If you're running AI agents and finding that nothing remembers anything, the problem isn't the agents — it's the infrastructure underneath them. That's what I'm building. I'd love to hear how you're solving it: reed@grainlabs.io.