elite-longterm-memory by nextfrontierbuilds/elite-longterm-memory
npx skills add https://github.com/nextfrontierbuilds/elite-longterm-memory --skill elite-longterm-memory
The ultimate memory system for AI agents. Combines 6 proven approaches into one bulletproof architecture.
Never lose context. Never forget decisions. Never repeat mistakes.
┌─────────────────────────────────────────────────────────────────┐
│ ELITE LONGTERM MEMORY │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ HOT RAM │ │ WARM STORE │ │ COLD STORE │ │
│ │ │ │ │ │ │ │
│ │ SESSION- │ │ LanceDB │ │ Git-Notes │ │
│ │ STATE.md │ │ Vectors │ │ Knowledge │ │
│ │ │ │ │ │ Graph │ │
│ │ (survives │ │ (semantic │ │ (permanent │ │
│ │ compaction)│ │ search) │ │ decisions) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ └────────────────┼────────────────┘ │
│ ▼ │
│ ┌─────────────┐ │
│ │ MEMORY.md │ ← Curated long-term │
│ │ + daily/ │ (human-readable) │
│ └─────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ SuperMemory │ ← Cloud backup (optional) │
│ │ API │ │
│ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
From: bulletproof-memory
Active working memory that survives compaction. Write-Ahead Log protocol.
# SESSION-STATE.md — Active Working Memory
## Current Task
[What we're working on RIGHT NOW]
## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...
## Pending Actions
- [ ] ...
Rule: Write BEFORE responding. Triggered by user input, not agent memory.
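The write-before-respond rule can be sketched in a few lines. This is an illustrative helper, not part of the skill's API: it appends a fact under a section heading in SESSION-STATE.md, using write-then-rename so that a crash mid-write never leaves a corrupt state file.

```python
import os
import tempfile

def wal_write(path: str, section: str, entry: str) -> None:
    """Append an entry under a '## <section>' heading, atomically."""
    text = ""
    if os.path.exists(path):
        with open(path) as f:
            text = f.read()
    marker = f"## {section}"
    if marker in text:
        # Insert the new entry directly under the existing heading.
        head, _, tail = text.partition(marker)
        text = head + marker + f"\n- {entry}" + tail
    else:
        # Heading missing: create it at the end of the file.
        text += f"\n{marker}\n- {entry}\n"
    # Write to a temp file in the same directory, then rename over the
    # original -- os.replace is atomic on POSIX filesystems.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(text)
    os.replace(tmp, path)

# Usage: persist the fact FIRST, respond second.
# wal_write("SESSION-STATE.md", "Key Context", "User prefers Tailwind")
```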
From: lancedb-memory
Semantic search across all memories. Auto-recall injects relevant context.
# Auto-recall (happens automatically)
memory_recall query="project status" limit=5
# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9
From: git-notes-memory
Structured decisions, learnings, and context. Branch-aware.
# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h
# Retrieve context
python3 memory.py -p $DIR get "frontend"
From: Clawdbot native
Human-readable long-term memory. Daily logs + distilled wisdom.
workspace/
├── MEMORY.md # Curated long-term (the good stuff)
└── memory/
├── 2026-01-30.md # Daily log
├── 2026-01-29.md
└── topics/ # Topic-specific files
From: supermemory
Cross-device sync. Chat with your knowledge base.
export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."
NEW: Automatic fact extraction
Mem0 automatically extracts facts from conversations. 80% token reduction.
npm install mem0ai
export MEM0_API_KEY="your-key"
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md — Active Working Memory
This file is the agent's "RAM" — survives compaction, restarts, distractions.
## Current Task
[None]
## Key Context
[None yet]
## Pending Actions
- [ ] None
## Recent Decisions
[None yet]
---
*Last updated: [timestamp]*
EOF
In ~/.clawdbot/clawdbot.json:
{
"memorySearch": {
"enabled": true,
"provider": "openai",
"sources": ["memory"],
"minScore": 0.3,
"maxResults": 10
},
"plugins": {
"entries": {
"memory-lancedb": {
"enabled": true,
"config": {
"autoCapture": false,
"autoRecall": true,
"captureCategories": ["preference", "decision", "fact"],
"minImportance": 0.7
}
}
}
}
}
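As a sanity check, the settings above can be validated programmatically. The sketch below assumes the same key layout as the snippet; it is not an official clawdbot schema validator.

```python
import json

def check_memory_config(raw: str) -> list[str]:
    """Return a list of human-readable problems found in the config."""
    cfg = json.loads(raw)
    problems = []
    search = cfg.get("memorySearch", {})
    if not search.get("enabled"):
        problems.append("memorySearch.enabled is false -- agent will forget everything")
    if not 0.0 <= search.get("minScore", 0.3) <= 1.0:
        problems.append("memorySearch.minScore should be between 0 and 1")
    plug = cfg.get("plugins", {}).get("entries", {}).get("memory-lancedb", {})
    plug_cfg = plug.get("config", {})
    if plug_cfg.get("autoCapture") and plug_cfg.get("minImportance", 0) < 0.5:
        problems.append("autoCapture with low minImportance tends to inject noise")
    return problems
```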
cd ~/clawd
git init # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start
# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory
export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence
- `memory_search` for relevant prior context
- `memory_store` with importance=0.9
- `memory_recall query="*" limit=50`
- `memory_forget id=<id>`

Write-Ahead Log: Write state BEFORE responding, not after.
| Trigger | Action |
|---|---|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |
Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
User: "Let's use Tailwind for this project, not vanilla CSS"
Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it — Tailwind it is..."
# Audit vector memory
memory_recall query="*" limit=50
# Clear all vectors (nuclear option)
rm -rf ~/.clawdbot/memory/lancedb/
clawdbot gateway restart
# Export Git-Notes
python3 memory.py -p . export --format json > memories.json
# Check memory health
du -sh ~/.clawdbot/memory/
wc -l MEMORY.md
ls -la memory/
Understanding the root causes helps you fix them:
| Failure Mode | Cause | Fix |
|---|---|---|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
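The sub-agent fix from the table can be sketched as a prompt builder that prepends the parent's key context to the task. `build_subagent_prompt` is an illustrative helper, not part of any skill API:

```python
def build_subagent_prompt(task: str, key_context: list[str]) -> str:
    """Prepend inherited facts to a sub-agent's task prompt, since
    sub-agents do not see the parent's SESSION-STATE.md or memory."""
    lines = ["## Key context (inherited from parent agent)"]
    lines += [f"- {fact}" for fact in key_context]
    lines += ["", "## Task", task]
    return "\n".join(lines)
```

Calling it with the facts currently listed in SESSION-STATE.md keeps every spawned task self-contained.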
If you have an OpenAI key, enable semantic search:
clawdbot configure --section web
This enables vector search over MEMORY.md + memory/*.md files.
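Conceptually, minScore and maxResults work like this: score each stored memory against the query, drop anything below the threshold, and keep only the top hits. The toy vectors below stand in for real OpenAI embeddings; this is a sketch of the idea, not the actual LanceDB implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall(query_vec, memories, min_score=0.3, max_results=10):
    """memories: list of (text, vector). Returns (score, text) pairs
    above min_score, best first, capped at max_results."""
    scored = [(cosine(query_vec, v), t) for t, v in memories]
    scored = [(s, t) for s, t in scored if s >= min_score]
    scored.sort(reverse=True)
    return scored[:max_results]
```

A higher min_score means fewer, more relevant injections; this is the same lever as minScore in clawdbot.json above.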
Auto-extract facts from conversations. 80% token reduction.
npm install mem0ai
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extract and store
await client.add([
{ role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });
// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });
memory/
├── projects/
│ ├── strykr.md
│ └── taska.md
├── people/
│ └── contacts.md
├── decisions/
│ └── 2026-01.md
├── lessons/
│ └── mistakes.md
└── preferences.md
Keep MEMORY.md as a summary (<5KB), link to detailed files.
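A quick hygiene check for this layout might look as follows. The 5KB budget matches the guideline above; the helper name and warning strings are illustrative.

```python
import os

SUMMARY_BUDGET = 5 * 1024  # bytes, per the <5KB guideline

def memory_health(workspace: str) -> list[str]:
    """Return warnings about the workspace memory layout."""
    warnings = []
    summary = os.path.join(workspace, "MEMORY.md")
    if not os.path.exists(summary):
        warnings.append("MEMORY.md missing -- no curated long-term memory")
    elif os.path.getsize(summary) > SUMMARY_BUDGET:
        warnings.append("MEMORY.md over 5KB -- move details into memory/ files")
    if not os.path.isdir(os.path.join(workspace, "memory")):
        warnings.append("memory/ folder missing -- no daily logs")
    return warnings
```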
| Problem | Fix |
|---|---|
| Forgets preferences | Add ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check OPENAI_API_KEY is set |
Agent keeps forgetting mid-conversation: → SESSION-STATE.md not being updated. Check WAL protocol.
Irrelevant memories injected: → Disable autoCapture, increase minImportance threshold.
Memory too large, slow recall: → Run hygiene: clear old vectors, archive daily logs.
Git-Notes not persisting: → Run git notes push to sync with remote.
memory_search returns nothing: → Check the OpenAI API key (echo $OPENAI_API_KEY) and verify memorySearch is enabled in clawdbot.json.
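One way to "archive daily logs" from the hygiene note above is to move dated memory/*.md files older than a cutoff into an archive folder. The YYYY-MM-DD filename pattern matches the daily-log layout shown earlier; the 30-day cutoff is an arbitrary choice.

```python
import os
import re
import shutil
from datetime import date, timedelta

# Matches daily-log filenames like 2026-01-30.md
DATED = re.compile(r"^(\d{4})-(\d{2})-(\d{2})\.md$")

def archive_old_logs(memory_dir, keep_days=30, today=None):
    """Move daily logs older than keep_days into memory_dir/archive.
    Returns the list of filenames moved."""
    today = today or date.today()
    cutoff = today - timedelta(days=keep_days)
    archive = os.path.join(memory_dir, "archive")
    moved = []
    for name in sorted(os.listdir(memory_dir)):
        m = DATED.match(name)
        if m and date(*map(int, m.groups())) < cutoff:
            os.makedirs(archive, exist_ok=True)
            shutil.move(os.path.join(memory_dir, name),
                        os.path.join(archive, name))
            moved.append(name)
    return moved
```

Non-dated files (topics/, lessons.md, etc.) are left untouched, so only stale daily logs leave the recall path.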
Built by @NextXFrontier — Part of the Next Frontier AI toolkit
Weekly Installs: 1.2K
Repository
GitHub Stars: 8
First Seen: Feb 10, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on:
gemini-cli: 1.2K
opencode: 1.2K
cursor: 1.2K
codex: 1.2K
github-copilot: 1.2K
kimi-cli: 1.2K