grepai-ollama-setup by yoanbernabeu/grepai-skills
npx skills add https://github.com/yoanbernabeu/grepai-skills --skill grepai-ollama-setup
This skill covers installing and configuring Ollama as the local embedding provider for GrepAI. Ollama enables 100% private code search where your code never leaves your machine.
| Benefit | Description |
|---|---|
| 🔒 Privacy | Code never leaves your machine |
| 💰 Free | No API costs |
| ⚡ Fast | Local processing, no network latency |
| 🔌 Offline | Works without internet |
# Install Ollama
brew install ollama
# Start the Ollama service
ollama serve
Alternatively on macOS, download the .dmg and drag Ollama to Applications.
# One-line installer (Linux)
curl -fsSL https://ollama.com/install.sh | sh
# Start the service
ollama serve
GrepAI requires an embedding model to convert code into vectors.
# Download the recommended model (768 dimensions)
ollama pull nomic-embed-text
Specifications for each model are listed in the comparison table below.
Alternative models:
# Multilingual support (better for non-English code/comments)
ollama pull nomic-embed-text-v2-moe
# Larger, more accurate
ollama pull bge-m3
# Maximum quality
ollama pull mxbai-embed-large
| Model | Dimensions | Size | Best For |
|---|---|---|---|
| nomic-embed-text | 768 | 274 MB | General code search |
| nomic-embed-text-v2-moe | 768 | 500 MB | Multilingual codebases |
| bge-m3 | 1024 | 1.2 GB | Large codebases |
| mxbai-embed-large | 1024 | 670 MB | Maximum accuracy |
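To see why the dimension column matters, here is a minimal sketch (plain Python, no Ollama required) of how embedding-based code search ranks snippets by cosine similarity between a query vector and snippet vectors. The 3-dimensional vectors are hypothetical stand-ins for real 768- or 1024-dimensional model output; this is an illustration of the technique, not GrepAI's actual implementation.

```python
import math

def cosine_similarity(a, b):
    # Both vectors must have the same dimension, which is why the
    # embedding model (and its 768 or 1024 dims) must stay fixed per index.
    if len(a) != len(b):
        raise ValueError("dimension mismatch")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dim vectors standing in for real embeddings.
query = [1.0, 0.0, 1.0]
snippets = {
    "auth.py": [0.9, 0.1, 0.8],   # similar direction to the query
    "utils.py": [0.0, 1.0, 0.0],  # orthogonal to the query
}
ranked = sorted(snippets, key=lambda k: cosine_similarity(query, snippets[k]),
                reverse=True)
print(ranked[0])  # auth.py
```

Mixing models in one index breaks this comparison, so switching models (e.g. nomic-embed-text to bge-m3) requires re-indexing.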
# Check if Ollama server is responding
curl http://localhost:11434/api/tags
# Expected output: JSON with available models
ollama list
# Output:
# NAME ID SIZE MODIFIED
# nomic-embed-text:latest abc123... 274 MB 2 hours ago
# Quick test (should return embedding vector)
curl http://localhost:11434/api/embeddings -d '{
"model": "nomic-embed-text",
"prompt": "function hello() { return world; }"
}'
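The same request the curl test sends can be made from Python with only the standard library. This is a sketch under the assumption that `ollama serve` is running on the default port; `build_embed_request` and `get_embedding` are hypothetical helper names, not part of any GrepAI or Ollama SDK.

```python
import json
import urllib.request

OLLAMA_ENDPOINT = "http://localhost:11434"

def build_embed_request(model, prompt):
    # Same JSON body as the curl example above.
    return {"model": model, "prompt": prompt}

def get_embedding(model, prompt, endpoint=OLLAMA_ENDPOINT):
    # Requires a running `ollama serve`; raises URLError otherwise.
    body = json.dumps(build_embed_request(model, prompt)).encode()
    req = urllib.request.Request(
        endpoint + "/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        # The response JSON carries the vector under the "embedding" key.
        return json.load(resp)["embedding"]
```

For nomic-embed-text the returned list should have 768 floats, matching the dimensions in the model table.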
After installing Ollama, configure GrepAI to use it:
# .grepai/config.yaml
embedder:
provider: ollama
model: nomic-embed-text
endpoint: http://localhost:11434
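Because the model choice fixes the vector dimension of the index, a small sanity check on the config can catch mismatches early. This is a hypothetical helper (not part of GrepAI); the dimension values come from the model table above.

```python
# Dimensions from the model comparison table above.
MODEL_DIMS = {
    "nomic-embed-text": 768,
    "nomic-embed-text-v2-moe": 768,
    "bge-m3": 1024,
    "mxbai-embed-large": 1024,
}

def check_embedder_config(cfg):
    # cfg mirrors the `embedder:` section of .grepai/config.yaml.
    model = cfg.get("model")
    if model not in MODEL_DIMS:
        raise ValueError(f"unknown embedding model: {model!r}")
    if not cfg.get("endpoint", "").startswith("http"):
        raise ValueError("endpoint must be an http(s) URL")
    return MODEL_DIMS[model]

dims = check_embedder_config({
    "provider": "ollama",
    "model": "nomic-embed-text",
    "endpoint": "http://localhost:11434",
})
print(dims)  # 768
```

Changing the model after indexing changes this dimension, which means the existing index must be rebuilt.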
This is the default configuration when you run grepai init, so no changes are needed if using nomic-embed-text.
# Run in current terminal (see logs)
ollama serve
# Using nohup
nohup ollama serve &
# Or as a systemd service (Linux)
sudo systemctl enable ollama
sudo systemctl start ollama
# Check if running
pgrep -f ollama
# Or test the API
curl -s http://localhost:11434/api/tags | head -1
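The curl check above can be wrapped in a small Python probe, useful in scripts that should fail fast when the server is down. `ollama_is_up` is a hypothetical helper name; it only assumes Ollama's real GET /api/tags endpoint.

```python
import urllib.request
import urllib.error

def ollama_is_up(endpoint="http://localhost:11434", timeout=2.0):
    # Equivalent to `curl http://localhost:11434/api/tags`:
    # a 200 response means the server is listening.
    try:
        with urllib.request.urlopen(endpoint + "/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: treat as "not running".
        return False
```

A False result means nothing is answering on that port, which matches the "connection refused" case in the troubleshooting section.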
Embedding models load into RAM:
- nomic-embed-text: ~500 MB RAM
- bge-m3: ~1.5 GB RAM
- mxbai-embed-large: ~1 GB RAM

Ollama uses the CPU by default. For faster embeddings, run on a machine with a supported GPU (Apple Silicon via Metal, or NVIDIA/AMD on Linux); Ollama detects and uses it automatically.
❌ Problem: connection refused to localhost:11434
✅ Solution: Start Ollama:
ollama serve
❌ Problem: Model not found
✅ Solution: Pull the model first:
ollama pull nomic-embed-text
❌ Problem: Slow embedding generation
✅ Solution: Use a smaller model, and verify the model is loaded (check with ollama ps)
❌ Problem: Out of memory
✅ Solution: Use a smaller model or increase system RAM
Quick reference:

- Make sure ollama serve is running before indexing
- nomic-embed-text offers the best balance of size, speed, and accuracy
- Run ollama pull nomic-embed-text occasionally to pick up model updates

After successful setup:
✅ Ollama Setup Complete
Ollama Version: 0.1.x
Endpoint: http://localhost:11434
Model: nomic-embed-text (768 dimensions)
Status: Running
GrepAI is ready to use with local embeddings.
Your code will never leave your machine.
Weekly Installs: 297
Repository: https://github.com/yoanbernabeu/grepai-skills
GitHub Stars: 15
First Seen: Jan 28, 2026
Security Audits: Gen Agent Trust Hub: Fail · Socket: Pass · Snyk: Warn
Installed on: opencode (250), codex (246), gemini-cli (230), github-copilot (229), kimi-cli (216), cursor (215)