Memory Systems
Added March 10, 2026. Source: Pawel Huryn
Design and implement agent memory systems for persistent knowledge retention across sessions. This skill helps you compare production frameworks and architect solutions for features like tracking entities over time or building knowledge graphs.
Installation
This skill has dependencies (scripts or reference files). Install using the method below to make sure everything is in place.
npx skills add muratcankoylan/Agent-Skills-for-Context-Engineering --skill context-engineering-collection

Requires Node.js 18+. The skills CLI auto-detects your editor and installs to the right directory.
Or install manually from the source repository.
SKILL.md (reference - install via npx or source for all dependencies)
---
name: memory-systems
description: >
Guides implementation of agent memory systems, compares production frameworks
(Mem0, Zep/Graphiti, Letta, LangMem, Cognee), and designs persistence architectures
for cross-session knowledge retention. Use when the user asks to "implement
agent memory", "persist state across sessions", "build knowledge graph for agents",
"track entities over time", "add long-term memory", "choose a memory framework",
or mentions temporal knowledge graphs, vector stores, entity memory, adaptive memory, dynamic memory or memory benchmarks (LoCoMo, LongMemEval).
---
# Memory System Design
Memory provides the persistence layer that allows agents to maintain continuity across sessions and reason over accumulated knowledge. Simple agents rely entirely on context for memory, losing all state when sessions end. Sophisticated agents implement layered memory architectures that balance immediate context needs with long-term knowledge retention. The evolution from vector stores to knowledge graphs to temporal knowledge graphs represents increasing investment in structured memory for improved retrieval and reasoning.
## When to Activate
Activate this skill when:
- Building agents that must persist knowledge across sessions
- Choosing between memory frameworks (Mem0, Zep/Graphiti, Letta, LangMem, Cognee)
- Needing to maintain entity consistency across conversations
- Implementing reasoning over accumulated knowledge
- Designing memory architectures that scale in production
- Evaluating memory systems against benchmarks (LoCoMo, LongMemEval, DMR)
- Building dynamic memory with automatic entity/relationship extraction and self-improvement (e.g., Cognee)
## Core Concepts
Memory spans a spectrum from volatile context window to persistent storage. Key insight from benchmarks: **tool complexity matters less than reliable retrieval** — Letta's filesystem agents scored 74% on LoCoMo using basic file operations, beating Mem0's specialized tools at 68.5%. Start simple, add structure (graphs, temporal validity) only when retrieval quality demands it.
## Detailed Topics
### Production Framework Landscape
| Framework | Architecture | Best For | Trade-off |
|-----------|-------------|----------|-----------|
| **Mem0** | Vector store + graph memory, pluggable backends | Multi-tenant systems, broad integrations | Less specialized for multi-agent |
| **Zep/Graphiti** | Temporal knowledge graph, bi-temporal model | Enterprise requiring relationship modeling + temporal reasoning | Advanced features cloud-locked |
| **Letta** | Self-editing memory with tiered storage (in-context/core/archival) | Full agent introspection, stateful services | Complexity for simple use cases |
| **Cognee** | Multi-layer semantic graph via an ECL pipeline with customizable Tasks | Evolving agent memory that adapts and learns; multi-hop reasoning | Heavier ingest-time processing |
| **LangMem** | Memory tools for LangGraph workflows | Teams already on LangGraph | Tightly coupled to LangGraph |
| **File-system** | Plain files with naming conventions | Simple agents, prototyping | No semantic search, no relationships |
Zep's Graphiti engine builds a three-tier knowledge graph (episode, semantic entity, community subgraphs) with a bi-temporal model tracking both when events occurred and when they were ingested. Mem0 offers the fastest path to production with managed infrastructure. Letta provides the deepest agent control through its Agent Development Environment. Cognee produces multi-layer semantic graphs — it layers text chunks and entity types as nodes with detailed relationship edges, building an interconnected knowledge graph in which every core piece (ingestion, entity extraction, post-processing, retrieval) is customizable.
**Benchmark Performance Comparison**
| System | DMR Accuracy | LoCoMo | HotPotQA (multi-hop) | Latency |
|--------|-------------|--------|---------------------|---------|
| Cognee | — | — | Highest on EM, F1, Correctness | Variable |
| Zep (Temporal KG) | 94.8% | — | Mid-range across metrics | 2.58s |
| Letta (filesystem) | — | 74.0% | — | — |
| Mem0 | — | 68.5% | Lowest across metrics | — |
| MemGPT | 93.4% | — | — | Variable |
| GraphRAG | ~75-85% | — | — | Variable |
| Vector RAG baseline | ~60-70% | — | — | Fast |
Zep achieves up to 18.5% accuracy improvement on LongMemEval while reducing latency by 90%. Cognee outperformed Mem0, Graphiti, and LightRAG on HotPotQA multi-hop reasoning benchmarks across Exact Match, F1, and human-like correctness metrics. Letta's filesystem-based agents achieved 74% on LoCoMo using basic file operations, outperforming specialized memory tools — tool complexity matters less than reliable retrieval. No single benchmark is definitive; treat these as signals for specific retrieval dimensions rather than rankings.
### Memory Layers (Decision Points)
| Layer | Persistence | Implementation | When to Use |
|-------|------------|----------------|-------------|
| **Working** | Context window only | Scratchpad in system prompt | Always — optimize with attention-favored positions |
| **Short-term** | Session-scoped | File-system, in-memory cache | Intermediate tool results, conversation state |
| **Long-term** | Cross-session | Key-value store → graph DB | User preferences, domain knowledge, entity registries |
| **Entity** | Cross-session | Entity registry + properties | Maintaining identity ("John Doe" = same person across conversations) |
| **Temporal KG** | Cross-session + history | Graph with validity intervals | Facts that change over time, time-travel queries, preventing context clash |
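As a rough sketch of how the first three layers compose (the class and method names here are illustrative, not from any framework):

```python
from datetime import datetime, timezone

class LayeredMemory:
    """Illustrative composition of working, short-term, and long-term layers."""
    def __init__(self):
        self.working = []        # scratchpad lines injected into the system prompt
        self.short_term = {}     # session-scoped key -> value
        self.long_term = {}      # cross-session key -> fact dict with timestamps

    def remember(self, key, value, persist=False):
        """Route a fact to the appropriate layer."""
        if persist:
            self.long_term[key] = {
                "value": value,
                "stored_at": datetime.now(timezone.utc).isoformat(),
            }
        else:
            self.short_term[key] = value

    def recall(self, key):
        """Check the most volatile layer first, then fall back to long-term."""
        if key in self.short_term:
            return self.short_term[key]
        fact = self.long_term.get(key)
        return fact["value"] if fact else None

mem = LayeredMemory()
mem.remember("theme", "dark", persist=True)   # survives the session
mem.remember("current_file", "auth.py")       # session-scoped only
preference = mem.recall("theme")              # falls through to long-term
```

In a real system each layer would be backed by the storage in the table above; the point is the routing and the volatile-first read path.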
### Retrieval Strategies
| Strategy | Use When | Limitation |
|----------|----------|------------|
| **Semantic** (embedding similarity) | Direct factual queries | Degrades on multi-hop reasoning |
| **Entity-based** (graph traversal) | "Tell me everything about X" | Requires graph structure |
| **Temporal** (validity filter) | Facts change over time | Requires validity metadata |
| **Hybrid** (semantic + keyword + graph) | Best overall accuracy | Most infrastructure |
Zep's hybrid approach achieves 90% latency reduction (2.58s vs 28.9s) by retrieving only relevant subgraphs. Cognee implements hybrid retrieval through its 14 search modes — each mode combines different strategies from its three-store architecture (graph, vector, relational), letting agents select the retrieval strategy that fits the query type rather than using a one-size-fits-all approach.
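A minimal sketch of hybrid scoring, assuming semantic scores come from an embedding model and graph proximity is measured in hops; the blend weights are illustrative and should be tuned against a benchmark:

```python
def keyword_overlap(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def hybrid_score(semantic: float, keyword: float, graph_hops: int,
                 weights=(0.6, 0.3, 0.1)) -> float:
    """Blend semantic similarity, keyword overlap, and graph proximity."""
    graph_signal = 1.0 / (1 + graph_hops)  # closer in the graph -> stronger signal
    w_sem, w_kw, w_graph = weights
    return w_sem * semantic + w_kw * keyword + w_graph * graph_signal

# Rank candidate memories by the blended score
candidates = [
    {"text": "Alice moved to Berlin in January", "semantic": 0.82, "hops": 1},
    {"text": "Alice prefers dark mode", "semantic": 0.40, "hops": 3},
]
query = "where does Alice live"
ranked = sorted(
    candidates,
    key=lambda c: hybrid_score(c["semantic"], keyword_overlap(query, c["text"]), c["hops"]),
    reverse=True,
)
```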
### Memory Consolidation
Consolidate periodically to prevent unbounded growth. **Invalidate but don't discard** — preserving history matters for temporal queries. Trigger on memory count thresholds, degraded retrieval quality, or scheduled intervals. See [implementation reference](./references/implementation.md) for working consolidation code.
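The invalidate-but-don't-discard rule can be sketched as follows; the fact schema (`key`, `valid_from`, `valid_until`) is illustrative:

```python
from datetime import datetime, timezone

def supersede(memories: list, key: str, new_value: str) -> dict:
    """Record a new fact for `key`, closing out (not deleting) any open prior fact."""
    now = datetime.now(timezone.utc).isoformat()
    for fact in memories:
        if fact["key"] == key and fact["valid_until"] is None:
            fact["valid_until"] = now  # invalidated, but kept for temporal queries
    new_fact = {"key": key, "value": new_value, "valid_from": now, "valid_until": None}
    memories.append(new_fact)
    return new_fact

memories = []
supersede(memories, "theme", "dark")
supersede(memories, "theme", "light")  # closes out "dark" instead of deleting it
current = [f for f in memories if f["valid_until"] is None]
history = [f for f in memories if f["valid_until"] is not None]
```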
## Practical Guidance
### Choosing a Memory Architecture
**Start simple, add complexity only when retrieval fails.** Most agents don't need a temporal knowledge graph on day one.
1. **Prototype**: File-system memory. Store facts as structured JSON with timestamps. Good enough to validate agent behavior.
2. **Scale**: Move to Mem0 or vector store with metadata when you need semantic search and multi-tenant isolation.
3. **Complex reasoning**: Add Zep/Graphiti when you need relationship traversal, temporal validity, or cross-session synthesis. Graphiti uses structured edges with generic relation types, keeping graphs simple and easy to reason about; Cognee builds denser multi-layer semantic graphs with detailed relationship edges. Choose based on whether you need bi-temporal modeling (Graphiti) or richer interconnected knowledge structures (Cognee).
4. **Full control**: Use Letta or Cognee when you need agent self-management of memory with deep introspection.
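Step 1's file-system prototype can be sketched in a few lines; the `FileMemory` class and its JSON-lines layout are illustrative:

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

class FileMemory:
    """Prototype file-system memory: one JSON-lines file per user."""
    def __init__(self, root="./memory"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def add(self, user_id: str, fact: str):
        """Append a timestamped fact to the user's file."""
        record = {"fact": fact, "stored_at": datetime.now(timezone.utc).isoformat()}
        with open(self.root / f"{user_id}.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    def recall(self, user_id: str, keyword: str = "") -> list:
        """Naive keyword filter; good enough to validate agent behavior."""
        path = self.root / f"{user_id}.jsonl"
        if not path.exists():
            return []
        records = [json.loads(line) for line in path.read_text().splitlines()]
        return [r for r in records if keyword.lower() in r["fact"].lower()]

mem = FileMemory(root=tempfile.mkdtemp())
mem.add("alice", "Prefers dark mode")
mem.add("alice", "Uses Python 3.12")
hits = mem.recall("alice", keyword="python")
```

No semantic search, no relationships, but it persists across sessions and is trivially debuggable, which is exactly what prototyping needs.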
### Integration with Context
Memories must integrate with context systems to be useful. Use just-in-time memory loading to retrieve relevant memories when needed. Use strategic injection to place memories in attention-favored positions (beginning/end of context).
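A minimal sketch of strategic injection, assuming memories arrive pre-ranked by priority; the section headers and tier sizes are illustrative:

```python
def inject_memories(system_prompt: str, conversation: str, memories: list) -> str:
    """Place the highest-priority memories early and the next tier late,
    the positions where models attend most reliably."""
    header = "\n".join(f"- {m}" for m in memories[:3])   # top memories up front
    footer = "\n".join(f"- {m}" for m in memories[3:6])  # next tier near the end
    parts = [system_prompt, "## Known Facts", header, conversation]
    if footer:
        parts += ["## Reminders", footer]
    return "\n\n".join(parts)

prompt = inject_memories(
    "You are a coding assistant.",
    "User: continue the migration work.",
    ["Prefers Python 3.12", "Working on microservices migration",
     "Uses type hints", "Blocked on schema review"],
)
```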
### Error Recovery
- **Empty retrieval**: Fall back to broader search (remove entity filter, widen time range). If still empty, prompt user for clarification.
- **Stale results**: Check `valid_until` timestamps. If most results are expired, trigger consolidation before retrying.
- **Conflicting facts**: Prefer the fact with the most recent `valid_from`. Surface the conflict to the user if confidence is low.
- **Storage failure**: Queue writes for retry. Never block the agent's response on a memory write.
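The empty-retrieval fallback above can be sketched as progressive widening over a pluggable search function; the `toy_search` backend is a stand-in for a real store:

```python
def retrieve_with_fallback(search, query: str, entity=None, since=None):
    """Progressively widen the search before giving up:
    entity-scoped -> time-scoped -> unscoped."""
    attempts = [
        {"entity": entity, "since": since},  # narrowest: both filters
        {"entity": None, "since": since},    # drop the entity filter
        {"entity": None, "since": None},     # widest: no filters
    ]
    for filters in attempts:
        results = search(query, **filters)
        if results:
            return results
    return None  # caller should prompt the user for clarification

# Toy backend: only the unscoped search finds anything for a wrong entity filter
def toy_search(query, entity=None, since=None):
    store = [{"text": "Alice lives in Berlin", "entity": "alice"}]
    return [m for m in store if entity is None or m["entity"] == entity]

hits = retrieve_with_fallback(toy_search, "where does Alice live", entity="carol")
```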
### Anti-Patterns
- **Stuffing everything into context**: Long inputs are expensive and degrade performance. Use just-in-time retrieval.
- **Ignoring temporal validity**: Facts go stale. Without validity tracking, outdated information poisons context.
- **Over-engineering early**: A filesystem agent can outperform complex memory tooling. Add sophistication when simple approaches fail.
- **No consolidation strategy**: Unbounded memory growth degrades retrieval quality over time.
## Examples
**Example 1: Mem0 Integration**
```python
from mem0 import Memory
m = Memory()
m.add("User prefers dark mode and Python 3.12", user_id="alice")
m.add("User switched to light mode", user_id="alice")
# Retrieves current preference (light mode), not outdated one
results = m.search("What theme does the user prefer?", user_id="alice")
```
**Example 2: Temporal Query**
```python
# Track entity with validity periods
graph.create_temporal_relationship(
source_id=user_node,
rel_type="LIVES_AT",
target_id=address_node,
valid_from=datetime(2024, 1, 15),
valid_until=datetime(2024, 9, 1), # moved out
)
# Query: Where did user live on March 1, 2024?
results = graph.query_at_time(
{"type": "LIVES_AT", "source_label": "User"},
query_time=datetime(2024, 3, 1)
)
```
**Example 3: Cognee Memory Ingestion and Search**
```python
import cognee
from cognee.modules.search.types import SearchType
# Ingest and build knowledge graph
await cognee.add("./docs/")
await cognee.add("any data")
await cognee.cognify()
# Enrich memory
await cognee.memify()
# Agent retrieves relationship-aware context
results = await cognee.search(
query_text="Any query for your memory",
query_type=SearchType.GRAPH_COMPLETION,
)
```
## Guidelines
1. Start with file-system memory; add complexity only when retrieval quality demands it
2. Track temporal validity for any fact that can change over time
3. Use hybrid retrieval (semantic + keyword + graph) for best accuracy
4. Consolidate memories periodically — invalidate but don't discard
5. Design for retrieval failure: always have a fallback when memory lookup returns nothing
6. Consider privacy implications of persistent memory (retention policies, deletion rights)
7. Benchmark your memory system against LoCoMo or LongMemEval before and after changes
8. Monitor memory growth and retrieval latency in production
## Integration
This skill builds on context-fundamentals. It connects to:
- multi-agent-patterns - Shared memory across agents
- context-optimization - Memory-based context loading
- evaluation - Evaluating memory quality
## References
Internal references:
- [Implementation Reference](./references/implementation.md) - Detailed implementation patterns, production framework references (Mem0, Graphiti, Cognee)
Related skills in this collection:
- context-fundamentals - Context basics
- multi-agent-patterns - Cross-agent memory
External resources:
- Zep temporal knowledge graph paper (arXiv:2501.13956)
- Mem0 production architecture paper (arXiv:2504.19413)
- Cognee optimized knowledge graph + LLM reasoning paper (arXiv:2505.24478)
- LoCoMo benchmark (Snap Research)
- MemBench evaluation framework (ACL 2025)
- Graphiti open-source temporal KG engine (github.com/getzep/graphiti)
- Cognee open-source knowledge graph memory (github.com/topoteretes/cognee)
- [Cognee comparison: Form vs Function](https://www.cognee.ai/blog/deep-dives/competition-comparison-form-vs-function) — graph structure comparison and HotPotQA benchmarks across Mem0, Graphiti, LightRAG, Cognee
---
## Skill Metadata
**Created**: 2025-12-20
**Last Updated**: 2026-02-26
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 3.0.0
## Companion Files
The following companion files are referenced above and included here for standalone use.
### references/implementation.md
```markdown
# Memory Systems: Technical Reference
This document provides implementation details for memory system components.
## Vector Store Implementation
### Basic Vector Store
```python
import numpy as np
from typing import List, Dict, Any
import json
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
"""Compute cosine similarity between two vectors."""
norm_a = np.linalg.norm(a)
norm_b = np.linalg.norm(b)
if norm_a == 0 or norm_b == 0:
return 0.0
return float(np.dot(a, b) / (norm_a * norm_b))
class VectorStore:
def __init__(self, dimension=768):
self.dimension = dimension
self.vectors = []
self.metadata = []
self.texts = []
def add(self, text: str, metadata: Dict[str, Any] = None):
"""Add document to store."""
embedding = self._embed(text)
self.vectors.append(embedding)
self.metadata.append(metadata or {})
self.texts.append(text)
return len(self.vectors) - 1
def search(self, query: str, limit: int = 5,
filters: Dict[str, Any] = None) -> List[Dict]:
"""Search for similar documents."""
query_embedding = self._embed(query)
scores = []
for i, vec in enumerate(self.vectors):
score = cosine_similarity(query_embedding, vec)
# Apply filters
if filters and not self._matches_filters(self.metadata[i], filters):
score = -1 # Exclude
scores.append((i, score))
# Sort by score
scores.sort(key=lambda x: x[1], reverse=True)
# Return top k
results = []
for idx, score in scores[:limit]:
if score > 0: # Only include positive matches
results.append({
"index": idx,
"score": score,
"text": self._get_text(idx),
"metadata": self.metadata[idx]
})
return results
    def _embed(self, text: str) -> np.ndarray:
        """Generate a deterministic pseudo-embedding for demonstration.
        In production, replace with an actual embedding model."""
        # hash() is salted per-process in Python 3, so use a stable digest
        import hashlib
        seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
        rng = np.random.default_rng(seed)
        vec = rng.standard_normal(self.dimension)
        return vec / (np.linalg.norm(vec) + 1e-8)
def _matches_filters(self, metadata: Dict, filters: Dict) -> bool:
"""Check if metadata matches filters."""
for key, value in filters.items():
if key not in metadata:
return False
if isinstance(value, list):
if metadata[key] not in value:
return False
elif metadata[key] != value:
return False
return True
def _get_text(self, index: int) -> str:
"""Retrieve original text for index."""
return self.texts[index] if index < len(self.texts) else ""
```
### Metadata-Enhanced Vector Store
```python
class MetadataVectorStore(VectorStore):
def __init__(self, dimension=768):
super().__init__(dimension)
self.entity_index = {} # entity -> [indices]
self.time_index = {} # time_range -> [indices]
def add(self, text: str, metadata: Dict[str, Any] = None):
"""Add with enhanced indexing."""
metadata = metadata or {}
index = super().add(text, metadata)
# Index by entity
if "entity" in metadata:
entity = metadata["entity"]
if entity not in self.entity_index:
self.entity_index[entity] = []
self.entity_index[entity].append(index)
# Index by time
if "valid_from" in metadata:
time_key = self._time_range_key(
metadata.get("valid_from"),
metadata.get("valid_until")
)
if time_key not in self.time_index:
self.time_index[time_key] = []
self.time_index[time_key].append(index)
        return index
    def _time_range_key(self, start, end) -> str:
        """Key a (start, end) validity range for indexing; end may be None."""
        return f"{start}::{end if end else 'infinity'}"
def search_by_entity(self, query: str, entity: str, limit: int = 5) -> List[Dict]:
"""Search within specific entity."""
indices = self.entity_index.get(entity, [])
filtered = [self.metadata[i] for i in indices]
# Score and rank
query_embedding = self._embed(query)
scored = []
for i, meta in zip(indices, filtered):
vec = self.vectors[i]
score = cosine_similarity(query_embedding, vec)
scored.append((i, score, meta))
scored.sort(key=lambda x: x[1], reverse=True)
return [{
"index": idx,
"score": score,
"metadata": meta
} for idx, score, meta in scored[:limit]]
```
## Knowledge Graph Implementation
### Property Graph Storage
```python
from typing import Dict, List, Optional
import uuid
class PropertyGraph:
def __init__(self):
self.nodes = {} # id -> properties
self.edges = [] # list of edge dicts
self.entity_registry = {} # name -> node_id (maintains identity)
self.indexes = {
"node_label": {}, # label -> [node_ids]
"edge_type": {} # type -> [edge_ids]
}
def get_or_create_node(self, name: str, label: str, properties: Dict = None) -> str:
"""Get existing node by name, or create a new one.
Uses entity_registry to ensure identity across interactions."""
if name in self.entity_registry:
return self.entity_registry[name]
node_id = self.create_node(label, {**(properties or {}), "name": name})
self.entity_registry[name] = node_id
return node_id
def create_node(self, label: str, properties: Dict = None) -> str:
"""Create node with label and properties."""
node_id = str(uuid.uuid4())
self.nodes[node_id] = {
"label": label,
"properties": properties or {}
}
# Index by label
if label not in self.indexes["node_label"]:
self.indexes["node_label"][label] = []
self.indexes["node_label"][label].append(node_id)
return node_id
def create_relationship(self, source_id: str, rel_type: str,
target_id: str, properties: Dict = None) -> str:
"""Create directed relationship between nodes."""
edge_id = str(uuid.uuid4())
self.edges.append({
"id": edge_id,
"source": source_id,
"target": target_id,
"type": rel_type,
"properties": properties or {}
})
# Index by type
if rel_type not in self.indexes["edge_type"]:
self.indexes["edge_type"][rel_type] = []
self.indexes["edge_type"][rel_type].append(edge_id)
return edge_id
def query(self, cypher_like: str, params: Dict = None) -> List[Dict]:
"""
Simple query matching.
Supports patterns like:
MATCH (e)-[r]->(o) WHERE e.id = $id RETURN r
"""
# In production, use actual graph database
# This is a simplified pattern matcher
results = []
if cypher_like.startswith("MATCH"):
# Parse basic pattern
pattern = self._parse_pattern(cypher_like)
results = self._match_pattern(pattern, params or {})
return results
    def _parse_pattern(self, query: str) -> Dict:
        """Parse a simplified MATCH pattern for demonstration, e.g.
        MATCH (e:User)-[r:LIVES_AT]->(o:Address)"""
        import re
        m = re.search(r"\(\w*:?(\w*)\)-\[\w*:?(\w*)\]->\(\w*:?(\w*)\)", query)
        return {
            "source_label": (m.group(1) or None) if m else None,
            "rel_type": (m.group(2) or None) if m else None,
            "target_label": (m.group(3) or None) if m else None,
            "where": None  # WHERE parsing omitted in this simplified sketch
        }
def _match_pattern(self, pattern: Dict, params: Dict) -> List[Dict]:
"""Match pattern against graph."""
results = []
for edge in self.edges:
# Match relationship type
if pattern["rel_type"] and edge["type"] != pattern["rel_type"]:
continue
source = self.nodes.get(edge["source"], {})
target = self.nodes.get(edge["target"], {})
# Match labels
if pattern["source_label"] and source.get("label") != pattern["source_label"]:
continue
if pattern["target_label"] and target.get("label") != pattern["target_label"]:
continue
# Match where clause
if pattern["where"] and not self._match_where(edge, source, target, params):
continue
results.append({
"source": source,
"relationship": edge,
"target": target
})
return results
```
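The entity-registry idea in `get_or_create_node` can be isolated into a standalone sketch: the point is that the same name always resolves to the same node id.

```python
import uuid

registry = {}  # name -> node_id, the identity map behind get_or_create_node

def get_or_create(name: str) -> str:
    """Resolve a name to a stable node id, creating one on first sight."""
    if name not in registry:
        registry[name] = str(uuid.uuid4())
    return registry[name]

first = get_or_create("John Doe")
second = get_or_create("John Doe")   # same person, same node
other = get_or_create("Jane Roe")    # different person, different node
```

Real systems additionally need alias resolution ("John" vs "John Doe"), which this sketch deliberately skips.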
## Temporal Knowledge Graph
```python
from datetime import datetime
from typing import Dict, List, Optional
class TemporalKnowledgeGraph(PropertyGraph):
def __init__(self):
super().__init__()
self.temporal_index = {} # time_range -> [edge_ids]
def create_temporal_relationship(
self,
source_id: str,
rel_type: str,
target_id: str,
valid_from: datetime,
valid_until: Optional[datetime] = None,
properties: Dict = None
) -> str:
"""Create relationship with temporal validity."""
edge_id = super().create_relationship(
source_id, rel_type, target_id, properties
)
# Index temporally
time_key = self._time_range_key(valid_from, valid_until)
if time_key not in self.temporal_index:
self.temporal_index[time_key] = []
self.temporal_index[time_key].append(edge_id)
        # Store validity on the edge dict created by the parent class
        edge = next(e for e in self.edges if e["id"] == edge_id)
        edge["valid_from"] = valid_from.isoformat()
        edge["valid_until"] = valid_until.isoformat() if valid_until else None
return edge_id
    def query_at_time(self, query, query_time: datetime) -> List[Dict]:
        """Query graph state at a specific time.
        Accepts a Cypher-like string or a pre-built pattern dict
        ({"type"/"rel_type": ..., "source_label": ...})."""
        # Find edges valid at query time
        valid_edges = []
        for edge in self.edges:
            valid_from = datetime.fromisoformat(edge.get("valid_from", "1970-01-01"))
            valid_until = edge.get("valid_until")
            if valid_from <= query_time:
                if valid_until is None or datetime.fromisoformat(valid_until) > query_time:
                    valid_edges.append(edge)
        # Match against pattern
        pattern = dict(query) if isinstance(query, dict) else self._parse_pattern(query)
        rel_type = pattern.get("rel_type") or pattern.get("type")
        results = []
        for edge in valid_edges:
            if rel_type and edge["type"] != rel_type:
                continue
            source = self.nodes.get(edge["source"], {})
            target = self.nodes.get(edge["target"], {})
            if pattern.get("source_label") and source.get("label") != pattern["source_label"]:
                continue
            results.append({
                "source": source,
                "relationship": edge,
                "target": target
            })
        return results
def _time_range_key(self, start: datetime, end: Optional[datetime]) -> str:
"""Create time range key for indexing."""
start_str = start.isoformat()
end_str = end.isoformat() if end else "infinity"
return f"{start_str}::{end_str}"
```
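The interval test at the heart of `query_at_time` can be isolated as a small standalone check (half-open validity, with `None` meaning still valid):

```python
from datetime import datetime

def valid_at(edge: dict, t: datetime) -> bool:
    """True if the fact held at time t; valid_until=None means still open."""
    start = datetime.fromisoformat(edge["valid_from"])
    end = edge.get("valid_until")
    return start <= t and (end is None or datetime.fromisoformat(end) > t)

edge = {"valid_from": "2024-01-15T00:00:00", "valid_until": "2024-09-01T00:00:00"}
inside = valid_at(edge, datetime(2024, 3, 1))   # within the interval
after = valid_at(edge, datetime(2024, 10, 1))   # after the fact was superseded
```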
## Memory Consolidation
```python
class MemoryConsolidator:
def __init__(self, graph: PropertyGraph, vector_store: VectorStore):
self.graph = graph
self.vector_store = vector_store
self.consolidation_threshold = 1000 # memories before consolidation
def should_consolidate(self) -> bool:
"""Check if consolidation should trigger."""
total_memories = len(self.graph.nodes) + len(self.graph.edges)
return total_memories > self.consolidation_threshold
def consolidate(self):
"""Run consolidation process."""
# Step 1: Identify duplicate or merged facts
duplicates = self.find_duplicates()
# Step 2: Merge related facts
for group in duplicates:
self.merge_fact_group(group)
# Step 3: Update validity periods
self.update_validity_periods()
# Step 4: Rebuild indexes
self.rebuild_indexes()
def find_duplicates(self) -> List[List]:
"""Find groups of potentially duplicate facts."""
# Group by subject and predicate
groups = {}
for edge in self.graph.edges:
key = (edge["source"], edge["type"])
if key not in groups:
groups[key] = []
groups[key].append(edge)
# Return groups with multiple edges
return [edges for edges in groups.values() if len(edges) > 1]
def merge_fact_group(self, edges: List[Dict]):
"""Merge group of duplicate edges."""
if len(edges) == 1:
return
# Keep most recent/relevant
keeper = max(edges, key=lambda e: e.get("properties", {}).get("confidence", 0))
# Merge metadata
for edge in edges:
if edge["id"] != keeper["id"]:
self.merge_properties(keeper, edge)
self.graph.edges.remove(edge)
def merge_properties(self, target: Dict, source: Dict):
"""Merge properties from source into target."""
for key, value in source.get("properties", {}).items():
if key not in target["properties"]:
target["properties"][key] = value
elif isinstance(value, list):
target["properties"][key].extend(value)
    def update_validity_periods(self):
        """Hook for closing out superseded facts; no-op in this sketch."""
        pass
    def rebuild_indexes(self):
        """Rebuild the edge-type index after merges removed edges."""
        self.graph.indexes["edge_type"] = {}
        for edge in self.graph.edges:
            self.graph.indexes["edge_type"].setdefault(edge["type"], []).append(edge["id"])
```
## Memory-Context Integration
```python
from typing import Dict, List
class MemoryContextIntegrator:
def __init__(self, memory_system, context_limit=100000):
self.memory_system = memory_system
self.context_limit = context_limit
def build_context(self, task: str, current_context: str = "") -> str:
"""Build context including relevant memories."""
# Extract entities from task
entities = self._extract_entities(task)
# Retrieve memories for each entity
memories = []
for entity in entities:
entity_memories = self.memory_system.retrieve_entity(entity)
memories.extend(entity_memories)
# Format memories for context
memory_section = self._format_memories(memories)
# Combine with current context
combined = current_context + "\n\n" + memory_section
# Check limit and truncate if needed
if self._token_count(combined) > self.context_limit:
combined = self._truncate_context(combined, self.context_limit)
return combined
    def _extract_entities(self, task: str) -> List[str]:
        """Extract entity mentions from task."""
        # In production, use NER or entity extraction
        import re
        pattern = r"\[\[([^\]]+)\]\]"  # [[entity_name]] convention
        return re.findall(pattern, task)
def _format_memories(self, memories: List[Dict]) -> str:
"""Format memories for context injection."""
sections = ["## Relevant Memories"]
for memory in memories:
formatted = f"- {memory.get('content', '')}"
if "source" in memory:
formatted += f" (Source: {memory['source']})"
if "timestamp" in memory:
formatted += f" [Time: {memory['timestamp']}]"
sections.append(formatted)
return "\n".join(sections)
def _token_count(self, text: str) -> int:
"""Estimate token count."""
return len(text) // 4 # Rough approximation
    def _truncate_context(self, context: str, limit: int) -> str:
        """Truncate to roughly `limit` tokens (whitespace-word approximation)."""
        words = context.split()
        return " ".join(words[:limit])
```
## Framework Integration Examples
### Mem0 Quick Start
```python
from mem0 import Memory
# Initialize with default config (uses local storage)
m = Memory()
# Store memories with user scoping
m.add("Prefers Python 3.12 with type hints", user_id="dev-alice")
m.add("Working on microservices migration", user_id="dev-alice")
# Search with natural language
results = m.search("What language does the user prefer?", user_id="dev-alice")
# Batch operations (messages as role/content dicts)
m.add([
    {"role": "user", "content": "Sprint goal: complete auth service"},
    {"role": "user", "content": "Blocked on database schema review"}
], user_id="dev-alice")
```
### Graphiti (Zep's Open-Source Temporal KG Engine)
```python
from datetime import datetime, timezone
from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType
# Initialize with Neo4j backend
graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
# Add episodes (conversations, events); reference_time anchors the bi-temporal model
await graphiti.add_episode(
    name="user_conversation_42",
    episode_body="Alice mentioned she moved to Berlin in January.",
    source=EpisodeType.message,
    source_description="Chat with Alice",
    reference_time=datetime.now(timezone.utc)
)
# Search combines semantic, keyword, and graph traversal
results = await graphiti.search("Where does Alice live?")
```
### Cognee (Open-Source Knowledge Engine for AI Memory)
```python
import cognee
from cognee.modules.search.types import SearchType
# ECL pipeline: add → cognify → memify → search
await cognee.add("./docs/")
await cognee.add("any-data")
await cognee.cognify()
await cognee.memify()
# Graph-aware retrieval (default: GRAPH_COMPLETION)
results = await cognee.search(
query_text="any query to search in memory",
query_type=SearchType.GRAPH_COMPLETION,
)
# Raw chunks when agent reasons over text itself
chunks = await cognee.search(
query_text="any query to search in memory",
query_type=SearchType.CHUNKS,
)
```
```
Originally by Pawel Huryn, adapted here as an Agent Skills compatible SKILL.md.