mirofish-offline-simulation by aradotso/trending-skills
npx skills add https://github.com/aradotso/trending-skills --skill mirofish-offline-simulation
Skill by ara.so — Daily 2026 Skills collection.
MiroFish-Offline is a fully local multi-agent swarm intelligence engine. Feed it any document (press release, policy draft, financial report) and it generates hundreds of AI agents with unique personalities that simulate public reaction on social media — posts, arguments, opinion shifts — hour by hour. No cloud APIs required: Neo4j CE 5.15 handles graph memory, Ollama serves the LLMs.
Document Input
│
▼
Graph Build (NER + relationship extraction via Ollama LLM)
│
▼
Neo4j Knowledge Graph (entities, relations, embeddings via nomic-embed-text)
│
▼
Env Setup (generate hundreds of agent personas with personalities + memory)
│
▼
Simulation (agents post, reply, argue, shift opinions on simulated platforms)
│
▼
Report (ReportAgent interviews focus group, queries graph, generates analysis)
│
▼
Interaction (chat with any individual agent, full memory persists)
Backend    : Flask + Python 3.11
Frontend   : Vue 3 + Node 18
Graph DB   : Neo4j CE 5.15 (bolt protocol)
LLM        : Ollama (OpenAI-compatible /v1 endpoint)
Embeddings : nomic-embed-text (768-dimensional, via Ollama)
Search     : Hybrid — 0.7 × vector similarity + 0.3 × BM25
git clone https://github.com/nikmcfly/MiroFish-Offline.git
cd MiroFish-Offline
cp .env.example .env
# Start Neo4j + Ollama + MiroFish backend + frontend
docker compose up -d
# Pull required models into the Ollama container
docker exec mirofish-ollama ollama pull qwen2.5:32b
docker exec mirofish-ollama ollama pull nomic-embed-text
# Check all services are healthy
docker compose ps
Open http://localhost:3000.
1. Neo4j
docker run -d --name neo4j \
-p 7474:7474 -p 7687:7687 \
-e NEO4J_AUTH=neo4j/mirofish \
neo4j:5.15-community
2. Ollama
ollama serve &
ollama pull qwen2.5:32b          # Main LLM (~20GB, requires 24GB VRAM)
ollama pull qwen2.5:14b          # Lighter option (~10GB VRAM)
ollama pull nomic-embed-text     # Embeddings (small, fast)
3. Backend
cp .env.example .env
# Edit .env (see Configuration section)
cd backend
pip install -r requirements.txt
python run.py
# Backend starts on http://localhost:5000
4. Frontend
cd frontend
npm install
npm run dev
# Frontend starts on http://localhost:3000
Configuration (.env):
# ── LLM (Ollama OpenAI-compatible endpoint) ──────────────────────────
LLM_API_KEY=ollama
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL_NAME=qwen2.5:32b
# ── Neo4j ─────────────────────────────────────────────────────────────
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=mirofish
# ── Embeddings (Ollama) ───────────────────────────────────────────────
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_BASE_URL=http://localhost:11434
# ── Optional: swap Ollama for any OpenAI-compatible provider ─────────
# LLM_API_KEY=$OPENAI_API_KEY
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_MODEL_NAME=gpt-4o
The abstraction layer between MiroFish and the graph database:
import os

from backend.storage.base import GraphStorage
from backend.storage.neo4j_storage import Neo4jStorage

# Initialize storage (typically done via Flask app.extensions)
storage = Neo4jStorage(
uri=os.environ["NEO4J_URI"],
user=os.environ["NEO4J_USER"],
password=os.environ["NEO4J_PASSWORD"],
embedding_model=os.environ["EMBEDDING_MODEL"],
embedding_base_url=os.environ["EMBEDDING_BASE_URL"],
llm_base_url=os.environ["LLM_BASE_URL"],
llm_api_key=os.environ["LLM_API_KEY"],
llm_model=os.environ["LLM_MODEL_NAME"],
)
from backend.services.graph_builder import GraphBuilder
builder = GraphBuilder(storage=storage)
# Feed a document string
with open("press_release.txt", "r") as f:
    document_text = f.read()

# Extract entities + relationships, store in Neo4j
graph_id = builder.build(
    content=document_text,
    title="Q4 Earnings Report",
    source_type="financial_report",
)
print(f"Graph built: {graph_id}")
# Returns a graph_id used for subsequent simulation runs
from backend.services.simulation import SimulationService
sim = SimulationService(storage=storage)
# Create a simulation environment from an existing graph
sim_id = sim.create_environment(
    graph_id=graph_id,
    agent_count=200,        # Number of agents to generate
    simulation_hours=24,    # Simulated time span
    platform="twitter",     # "twitter" | "reddit" | "weibo"
)

# Run the simulation (blocking — use an async wrapper in production)
result = sim.run(sim_id=sim_id)
print(f"Simulation complete. Posts generated: {result['post_count']}")
print(f"Sentiment trajectory: {result['sentiment_over_time']}")
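Since `sim.run()` blocks, a production backend can push it onto a worker thread and poll the future. A minimal sketch (the `run_async` helper and the `fake_run` stand-in are illustrative, not MiroFish APIs):

```python
from concurrent.futures import ThreadPoolExecutor

# One shared executor for the backend process (assumption: a couple of
# concurrent simulations is enough for a local deployment).
executor = ThreadPoolExecutor(max_workers=2)

def run_async(blocking_fn, *args, **kwargs):
    """Submit any blocking call (e.g. sim.run) and return a Future immediately."""
    return executor.submit(blocking_fn, *args, **kwargs)

# Stand-in for sim.run so the sketch is self-contained:
def fake_run(sim_id):
    return {"sim_id": sim_id, "status": "complete"}

future = run_async(fake_run, sim_id="s_demo")
# ... serve other requests while the simulation runs ...
result = future.result()  # blocks only when the result is actually needed
```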
from backend.services.report import ReportAgent
report_agent = ReportAgent(storage=storage)
# Generate a structured analysis report
report = report_agent.generate(
    sim_id=sim_id,
    focus_group_size=10,    # Number of agents to interview
    include_graph_search=True,
)
print(report["summary"])
print(report["key_narratives"])
print(report["sentiment_shift"])
print(report["influential_agents"])
from backend.services.agent_chat import AgentChatService
chat = AgentChatService(storage=storage)
# List agents from a completed simulation
agents = chat.list_agents(sim_id=sim_id, limit=10)
agent_id = agents[0]["id"]
print(f"Chatting with: {agents[0]['persona']['name']}")
print(f"Personality: {agents[0]['persona']['traits']}")

# Send a message — agent responds in-character with full memory
response = chat.send(
    agent_id=agent_id,
    message="Why did you post that criticism about the earnings report?",
)
print(response["reply"])
# → Agent responds using its personality, opinion bias, and post history
from backend.services.search import SearchService
search = SearchService(storage=storage)
# Hybrid search: 0.7 * vector similarity + 0.3 * BM25
results = search.query(
    text="executive compensation controversy",
    graph_id=graph_id,
    top_k=5,
    vector_weight=0.7,
    bm25_weight=0.3,
)
for r in results:
    print(r["entity"], r["relationship"], r["score"])
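For intuition, that 0.7/0.3 mix can be sketched as a weighted sum, min-max normalizing the unbounded BM25 scores into [0, 1] first (MiroFish's exact normalization is not documented here, so treat this as an illustration):

```python
def hybrid_rank(candidates, vector_weight=0.7, bm25_weight=0.3):
    """Rank candidates by a weighted mix of cosine similarity and BM25.

    Each candidate dict carries a 'vector' score (already in [0, 1]) and a
    raw 'bm25' score; BM25 is unbounded, so min-max normalize it first.
    """
    raw = [c["bm25"] for c in candidates]
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1.0
    for c in candidates:
        c["score"] = vector_weight * c["vector"] + bm25_weight * (c["bm25"] - lo) / span
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

ranked = hybrid_rank([
    {"entity": "CFO resignation", "vector": 0.91, "bm25": 12.4},
    {"entity": "Q4 revenue",      "vector": 0.55, "bm25": 18.0},
])
```

Note the keyword-heavy hit can outrank the semantically closer one once its BM25 score dominates the normalized range.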
from backend.storage.base import GraphStorage
from typing import List, Dict, Any
class MyCustomStorage(GraphStorage):
    """
    Swap Neo4j for any graph DB by implementing this interface.
    Register via Flask app.extensions['neo4j_storage'] = MyCustomStorage(...)
    """

    def store_entity(self, entity: Dict[str, Any]) -> str:
        # Store entity, return entity_id
        raise NotImplementedError

    def store_relationship(
        self,
        source_id: str,
        target_id: str,
        relation_type: str,
        properties: Dict[str, Any],
    ) -> str:
        raise NotImplementedError

    def vector_search(
        self, embedding: List[float], top_k: int = 5
    ) -> List[Dict[str, Any]]:
        raise NotImplementedError

    def keyword_search(
        self, query: str, top_k: int = 5
    ) -> List[Dict[str, Any]]:
        raise NotImplementedError

    def get_agent_memory(self, agent_id: str) -> Dict[str, Any]:
        raise NotImplementedError

    def update_agent_memory(
        self, agent_id: str, memory_update: Dict[str, Any]
    ) -> None:
        raise NotImplementedError
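A throwaway dict-backed implementation of the same method set is handy for unit tests without a running Neo4j. A sketch (shown standalone rather than subclassing `GraphStorage`, and with `vector_search` omitted since it would need an embedding backend):

```python
import itertools
from typing import Any, Dict, List

class InMemoryStorage:
    """Dict-backed stand-in for the GraphStorage interface.

    No persistence, no embeddings: just enough behaviour to exercise
    services that only store entities/relationships and agent memory.
    """

    def __init__(self) -> None:
        self._ids = itertools.count(1)
        self.entities: Dict[str, Dict[str, Any]] = {}
        self.relationships: Dict[str, Dict[str, Any]] = {}
        self.memories: Dict[str, Dict[str, Any]] = {}

    def store_entity(self, entity: Dict[str, Any]) -> str:
        entity_id = f"e_{next(self._ids)}"
        self.entities[entity_id] = entity
        return entity_id

    def store_relationship(
        self, source_id: str, target_id: str,
        relation_type: str, properties: Dict[str, Any],
    ) -> str:
        rel_id = f"r_{next(self._ids)}"
        self.relationships[rel_id] = {
            "source": source_id, "target": target_id,
            "type": relation_type, **properties,
        }
        return rel_id

    def keyword_search(self, query: str, top_k: int = 5) -> List[Dict[str, Any]]:
        # Naive substring match on entity names, standing in for BM25.
        hits = [
            {"id": eid, **e} for eid, e in self.entities.items()
            if query.lower() in e.get("name", "").lower()
        ]
        return hits[:top_k]

    def get_agent_memory(self, agent_id: str) -> Dict[str, Any]:
        return self.memories.get(agent_id, {})

    def update_agent_memory(self, agent_id: str, memory_update: Dict[str, Any]) -> None:
        self.memories.setdefault(agent_id, {}).update(memory_update)
```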
# backend/app.py — how storage is wired via dependency injection
from flask import Flask
from backend.storage.neo4j_storage import Neo4jStorage
import os

def create_app():
    app = Flask(__name__)

    # Single storage instance, injected everywhere via app.extensions
    storage = Neo4jStorage(
        uri=os.environ["NEO4J_URI"],
        user=os.environ["NEO4J_USER"],
        password=os.environ["NEO4J_PASSWORD"],
        embedding_model=os.environ["EMBEDDING_MODEL"],
        embedding_base_url=os.environ["EMBEDDING_BASE_URL"],
        llm_base_url=os.environ["LLM_BASE_URL"],
        llm_api_key=os.environ["LLM_API_KEY"],
        llm_model=os.environ["LLM_MODEL_NAME"],
    )
    app.extensions["neo4j_storage"] = storage

    from backend.routes import graph_bp, simulation_bp, report_bp
    app.register_blueprint(graph_bp)
    app.register_blueprint(simulation_bp)
    app.register_blueprint(report_bp)
    return app
from flask import Blueprint, current_app, request, jsonify

from backend.services.simulation import SimulationService

simulation_bp = Blueprint("simulation", __name__)

@simulation_bp.route("/api/simulation/run", methods=["POST"])
def run_simulation():
    storage = current_app.extensions["neo4j_storage"]
    data = request.json
    sim = SimulationService(storage=storage)
    sim_id = sim.create_environment(
        graph_id=data["graph_id"],
        agent_count=data.get("agent_count", 200),
        simulation_hours=data.get("simulation_hours", 24),
    )
    result = sim.run(sim_id=sim_id)
    return jsonify(result)
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/graph/build | Upload document, build knowledge graph |
| GET | /api/graph/:id | Get graph entities and relationships |
| POST | /api/simulation/create | Create simulation environment |
| POST | /api/simulation/run | Execute simulation |
| GET | /api/simulation/:id/results | Get posts, sentiment, metrics |
| GET | /api/simulation/:id/agents | List generated agents |
| POST | /api/report/generate | Generate ReportAgent analysis |
| POST | /api/agent/:id/chat | Chat with a specific agent |
| GET | /api/search | Hybrid search the knowledge graph |
Example: Build graph from document
curl -X POST http://localhost:5000/api/graph/build \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Acme Corp announces record Q4 earnings, CFO resigns...",
    "title": "Q4 Press Release",
    "source_type": "press_release"
  }'
# → {"graph_id": "g_abc123", "entities": 47, "relationships": 89}
Example: Run a simulation
curl -X POST http://localhost:5000/api/simulation/run \
-H "Content-Type: application/json" \
-d '{
"graph_id": "g_abc123",
"agent_count": 150,
"simulation_hours": 12,
"platform": "twitter"
}'
# → {"sim_id": "s_xyz789", "status": "running"}
| Use Case | Model | VRAM | RAM |
|---|---|---|---|
| Quick test / dev | qwen2.5:7b | 6 GB | 16 GB |
| Balanced quality | qwen2.5:14b | 10 GB | 16 GB |
| Production quality | qwen2.5:32b | 24 GB | 32 GB |
| CPU-only (slow) | qwen2.5:7b | None | 16 GB |
Switch model by editing .env:
LLM_MODEL_NAME=qwen2.5:14b
Then restart the backend — no other changes needed.
import os
from backend.storage.neo4j_storage import Neo4jStorage
from backend.services.graph_builder import GraphBuilder
from backend.services.simulation import SimulationService
from backend.services.report import ReportAgent
storage = Neo4jStorage(
uri=os.environ["NEO4J_URI"],
user=os.environ["NEO4J_USER"],
password=os.environ["NEO4J_PASSWORD"],
embedding_model=os.environ["EMBEDDING_MODEL"],
embedding_base_url=os.environ["EMBEDDING_BASE_URL"],
llm_base_url=os.environ["LLM_BASE_URL"],
llm_api_key=os.environ["LLM_API_KEY"],
llm_model=os.environ["LLM_MODEL_NAME"],
)
def test_press_release(text: str) -> dict:
    # 1. Build knowledge graph
    builder = GraphBuilder(storage=storage)
    graph_id = builder.build(content=text, title="Draft PR", source_type="press_release")

    # 2. Simulate public reaction
    sim = SimulationService(storage=storage)
    sim_id = sim.create_environment(graph_id=graph_id, agent_count=300, simulation_hours=48)
    sim.run(sim_id=sim_id)

    # 3. Generate report
    report = ReportAgent(storage=storage).generate(sim_id=sim_id, focus_group_size=15)
    return {
        "sentiment_peak": report["sentiment_over_time"][0],
        "key_narratives": report["key_narratives"],
        "risk_score": report["risk_score"],
        "recommended_edits": report["recommendations"],
    }

# Usage
with open("draft_announcement.txt") as f:
    result = test_press_release(f.read())

print(f"Risk score: {result['risk_score']}/10")
print(f"Top narrative: {result['key_narratives'][0]}")
# Claude via Anthropic (or any proxy)
LLM_API_KEY=$ANTHROPIC_API_KEY
LLM_BASE_URL=https://api.anthropic.com/v1
LLM_MODEL_NAME=claude-3-5-sonnet-20241022
# OpenAI
LLM_API_KEY=$OPENAI_API_KEY
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL_NAME=gpt-4o
# Local LM Studio
LLM_API_KEY=lm-studio
LLM_BASE_URL=http://localhost:1234/v1
LLM_MODEL_NAME=your-loaded-model
# Check Neo4j is running
docker ps | grep neo4j
# Check the bolt port
nc -zv localhost 7687
# View Neo4j logs
docker logs neo4j --tail 50

# List available models
ollama list
# Pull missing models
ollama pull qwen2.5:32b
ollama pull nomic-embed-text
# Check Ollama is serving
curl http://localhost:11434/api/tags

# Switch to a smaller model in .env
LLM_MODEL_NAME=qwen2.5:14b   # or qwen2.5:7b
# Restart backend
cd backend && python run.py

# nomic-embed-text produces 768-dim vectors
# If you switch embedding models, drop and recreate the Neo4j vector index:
# In the Neo4j browser (http://localhost:7474):
#   DROP INDEX entity_embedding IF EXISTS;
# Then restart MiroFish — it recreates the index with the correct dimensions.
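To double-check what dimensionality a model actually produces, you can probe Ollama's `/api/embeddings` endpoint directly. A standard-library sketch (`build_probe` and `embedding_dim` are helper names invented here; the payload shape follows Ollama's embeddings API):

```python
import json
import urllib.request

def build_probe(model: str, base_url: str = "http://localhost:11434"):
    """Build the URL and JSON body for an Ollama /api/embeddings request."""
    url = f"{base_url.rstrip('/')}/api/embeddings"
    body = json.dumps({"model": model, "prompt": "dimension probe"})
    return url, body

def embedding_dim(model: str, base_url: str = "http://localhost:11434") -> int:
    """Request one embedding and return its length (expect 768 for nomic-embed-text)."""
    url, body = build_probe(model, base_url)
    req = urllib.request.Request(
        url, data=body.encode(), headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return len(json.load(resp)["embedding"])
```

If the number reported here disagrees with the dimensions the `entity_embedding` index was created with, vector search will fail until the index is rebuilt.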
# docker-compose.yml — add a GPU reservation:
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
To speed up a slow simulation:
- Use qwen2.5:7b for faster (lower quality) inference
- Reduce agent_count to 50–100 for testing
- Reduce simulation_hours to 6–12
# Check VITE_API_BASE_URL in frontend/.env
VITE_API_BASE_URL=http://localhost:5000
# Verify the backend is up
curl http://localhost:5000/api/health
MiroFish-Offline/
├── backend/
│   ├── run.py                 # Entry point
│   ├── app.py                 # Flask factory, DI wiring
│   ├── storage/
│   │   ├── base.py            # GraphStorage abstract interface
│   │   └── neo4j_storage.py   # Neo4j implementation
│   ├── services/
│   │   ├── graph_builder.py   # NER + relationship extraction
│   │   ├── simulation.py      # Agent simulation engine
│   │   ├── report.py          # ReportAgent + focus group
│   │   ├── agent_chat.py      # Per-agent chat interface
│   │   └── search.py          # Hybrid vector + BM25 search
│   └── routes/
│       ├── graph.py
│       ├── simulation.py
│       └── report.py
├── frontend/                  # Vue 3 (fully English UI)
├── docker-compose.yml
├── .env.example
└── README.md
Weekly Installs: 247
GitHub Stars: 10
First Seen: 6 days ago
Security Audits: Gen Agent Trust Hub Fail · Socket Pass · Snyk Fail
Installed on: opencode (246), github-copilot (246), codex (246), amp (246), cline (246), kimi-cli (246)