Over the last few months, I’ve been deep down the Autonomous Agents rabbit hole. More than just following the hype, I spent a good amount of time building and breaking things in production — ranging from simple LLM workflows to full-blown sales agents.

It is fascinating to see how, with open-source tools and a bit of engineering, we can build agents that not only converse but execute tasks with frightening efficiency. But as complexity increased, a failure pattern started to bother me: memory.

Most agents today suffer from “short-term amnesia.” They operate with sliding context windows (a buffer of the last n messages).

This works well for trivial interactions, but try sustaining a long sales conversation where you need to strategically reuse information the lead revealed about themselves earlier. It is precisely when you need to be persuasive that the context disappears, and the illusion of intelligence shatters.
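To make the failure mode concrete, here is a minimal sketch of the sliding-window buffer described above. The class name and window size are illustrative, not taken from any particular framework:

```python
from collections import deque

class SlidingWindowMemory:
    """A buffer of the last n messages; everything older is gone."""

    def __init__(self, n: int = 4):
        # deque(maxlen=n) silently evicts the oldest message on overflow
        self.buffer = deque(maxlen=n)

    def add(self, role: str, content: str) -> None:
        self.buffer.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        # What actually gets sent to the LLM on each turn
        return list(self.buffer)

memory = SlidingWindowMemory(n=4)
memory.add("user", "My budget is $5k, and I can't sign long contracts.")
for i in range(4):
    memory.add("user", f"unrelated small talk #{i}")

# The budget constraint has already fallen out of the window:
assert not any("budget" in m["content"] for m in memory.context())
```

The crucial fact for closing the sale was revealed early, so it is exactly the one the window evicts first.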

The Problem with Isolated Vectors

The industry rushed to solve this with vector databases (RAG). And while vectors are great for finding semantic similarity (“pasta” relates to “food”), they are terrible at maintaining structured relationships.

Human memory doesn’t work just by similarity; it works by connection. If I tell you “Alice moved to Berlin,” I don’t just create a vector. I update the entity Alice with the attribute Location: Berlin. It’s a graph, not a list.
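A toy contrast makes the difference visible. There is no real vector database here; the "vectors" are just an append-only log of text chunks, which is how facts accumulate in a plain RAG store:

```python
vector_log: list[str] = []        # RAG-style: every fact is a new chunk
entities: dict[str, dict] = {}    # graph-style: one node per entity

def assimilate(subject: str, attribute: str, value: str) -> None:
    vector_log.append(f"{subject} {attribute}: {value}")  # old facts linger
    entities.setdefault(subject, {})[attribute] = value   # state updated in place

assimilate("Alice", "location", "Paris")
assimilate("Alice", "location", "Berlin")   # Alice moved

print(entities["Alice"])   # {'location': 'Berlin'}: one current fact
print(len(vector_log))     # 2: two chunks a retriever must disambiguate
```

After the move, the entity store holds a single current truth, while the similarity index holds two contradictory chunks and no notion of which one supersedes the other.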

I realized that to take the next step, my agents needed a structured semantic memory. I needed to stop treating memory as a chat history and start treating it as a Knowledge Graph.

Building Nous

I wanted a tool to bridge this gap — combining the flexibility of vector search with the precision of graphs — but it had to be simple. I didn’t want to write complex Cypher queries every time my agent needed to save a fact.

Since I couldn’t find anything that met this simplicity requirement, I started sketching out what would become Nous.

Nous (Greek for “intellect” or “mind”) is an attempt to create this hybrid memory layer. The architecture combines two technologies:

  • Apache AGE to structure entities and their relationships (the graph).
  • Qdrant for semantic and fuzzy search (the vectors).
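For a sense of what a lookup touches under the hood, here is a sketch with an assumed schema (an AGE graph called "memory", a Qdrant collection called "facts"; both names are illustrative). Apache AGE exposes Cypher through a SQL function, so a graph lookup is ordinary SQL you can send with any Postgres driver:

```python
entity = "Alice"

# Cypher wrapped in AGE's cypher() SQL function:
age_sql = f"""
SELECT * FROM cypher('memory', $$
    MATCH (p:Person {{name: '{entity}'}})-[r]->(n)
    RETURN type(r), n
$$) AS (rel agtype, node agtype);
"""

# Qdrant answers the other question, "what text is semantically close?".
# Shown as a comment since it needs a running server and an embedder:
# hits = client.search(collection_name="facts",
#                      query_vector=embed(question), limit=5)

print(age_sql)
```

The point of Nous is that the agent never writes either of these by hand; it only sees the abstraction on top.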

The idea is to abstract away the database complexity. The agent should simply “assimilate” information and “lookup” what it knows.

  1. On Assimilation: You throw in raw text (e.g., a user message). Nous uses an LLM to extract atomic facts and update the graph.
  2. On Lookup: You ask for an entity’s profile and receive a structured summary, ready to be injected into the system prompt.
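The two steps above can be sketched in a few lines. Here `extract_facts` stands in for the LLM extraction call, and the names and signatures are illustrative, not Nous's actual API:

```python
def extract_facts(text: str) -> list[tuple[str, str, str]]:
    # The real version prompts an LLM for atomic (subject, attribute, value)
    # triples; here the result is hardcoded for the demo.
    return [("Alice", "location", "Berlin"), ("Alice", "budget", "5k USD")]

class Memory:
    def __init__(self) -> None:
        self.graph: dict[str, dict[str, str]] = {}

    def assimilate(self, text: str) -> None:
        # Raw text in, graph updates out
        for subject, attribute, value in extract_facts(text):
            self.graph.setdefault(subject, {})[attribute] = value

    def lookup(self, entity: str) -> str:
        # A structured summary, ready to inject into the system prompt
        attrs = self.graph.get(entity, {})
        return f"{entity}: " + ", ".join(f"{k}={v}" for k, v in attrs.items())

m = Memory()
m.assimilate("Alice mentioned she moved to Berlin; her budget is about 5k.")
print(m.lookup("Alice"))   # Alice: location=Berlin, budget=5k USD
```

Everything the agent learned about Alice survives however long the conversation runs, because it lives in the graph rather than in the context window.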

Current State

The project is open-source and currently in Alpha stage — it is an active construction site. I am using it to solve my own context bottlenecks, but I believe other developers might find the paradigm useful.

If you are also exploring how to make agents less “forgetful” and more contextual, the code is available for study and collaboration.