pgvector-semantic-search by timescale/pg-aiguide
npx skills add https://github.com/timescale/pg-aiguide --skill pgvector-semantic-search
Semantic search finds content by meaning rather than exact keywords. An embedding model converts text into high-dimensional vectors, where similar meanings map to nearby points. pgvector stores these vectors in PostgreSQL and uses approximate nearest neighbor (ANN) indexes to find the closest matches quickly—scaling to millions of rows without leaving the database. Store your text alongside its embedding, then query by converting your search text to a vector and returning the rows with the smallest distance.
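The end-to-end idea can be sketched in a few lines of Python: a brute-force version of "return the rows with the smallest distance", which is exactly what the ANN index approximates. The toy rows and 2-dimensional vectors are illustrative only; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine_distance(a, b):
    # pgvector's <=> operator: 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def nearest(query, rows, k=1):
    # Exact (non-approximate) equivalent of: ORDER BY embedding <=> query LIMIT k
    return sorted(rows, key=lambda r: cosine_distance(r["embedding"], query))[:k]

rows = [
    {"contents": "cats", "embedding": [1.0, 0.1]},
    {"contents": "finance", "embedding": [0.0, 1.0]},
]
print(nearest([0.9, 0.2], rows)[0]["contents"])  # "cats": closest in meaning
```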
This guide covers pgvector setup and tuning. It does not cover embedding model selection or text chunking, though both significantly affect search quality. All features described here (halfvec, binary_quantize, iterative scan) require pgvector 0.8.0+.
Use this configuration unless you have a specific reason not to.
- Column type: halfvec(N), where N is your embedding dimension (must match everywhere). Examples use 1536; replace with your dimension N.
- Distance: cosine (<=>)
- Index: HNSW with defaults (m = 16, ef_construction = 64). Use halfvec_cosine_ops and query with <=>.
- SET hnsw.ef_search = 100 (a good starting point from published benchmarks; increase for higher recall at higher latency)
- Query shape: ORDER BY embedding <=> $1::halfvec(N) LIMIT k

This setup provides a strong speed-recall tradeoff for most text-embedding workloads.
- CREATE EXTENSION IF NOT EXISTS vector;
- Use halfvec by default: store and index as halfvec for 50% smaller storage and indexes with minimal recall loss.
- Use CREATE INDEX CONCURRENTLY ... on live tables.
- Distance (<=>): for non-normalized embeddings, use cosine. For unit-normalized embeddings, cosine and inner product yield identical rankings; default to cosine.
- An index created with halfvec_cosine_ops requires <=> in queries; halfvec_l2_ops requires <->; mismatched operators won't use the index.
- Cast query vectors explicitly ($1::halfvec(N)) to avoid implicit-cast failures in prepared statements.
- Store embeddings as halfvec(N); store quantized embeddings as bit(N) in a generated column.
- Don't mix vector / halfvec / bit types without explicit casts.
- Don't call binary_quantize() on table columns inside ORDER BY; store the quantized value instead.
- A halfvec(1536) column requires query vectors cast as ::halfvec(1536).

-- Store and index as halfvec
CREATE TABLE items (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
contents TEXT NOT NULL,
embedding halfvec(1536) NOT NULL -- NOT NULL requires embeddings generated before insert, not async
);
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);
-- Query: returns 10 closest items. $1 is the embedding of your search text.
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
For other distance operators (L2, inner product, etc.), see the pgvector README.
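To make the operator choice concrete, here is a small Python sketch (toy 2-dimensional vectors, illustrative only) of the three pgvector distances: L2 (<->), negative inner product (<#>), and cosine (<=>). It also demonstrates the earlier claim that unit-normalized vectors rank identically under cosine and inner product.

```python
import math

def l2(a, b):           # <-> Euclidean distance
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def neg_inner(a, b):    # <#> pgvector returns the NEGATIVE inner product so smaller = closer
    return -sum(x * y for x, y in zip(a, b))

def cosine(a, b):       # <=> cosine distance
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

vecs = [normalize(v) for v in ([3, 1], [1, 4], [2, 2], [0.5, 3])]
q = normalize([2, 1])
by_cos = sorted(range(len(vecs)), key=lambda i: cosine(q, vecs[i]))
by_ip = sorted(range(len(vecs)), key=lambda i: neg_inner(q, vecs[i]))
print(by_cos == by_ip)  # True: identical rankings for unit-normalized vectors
```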
The recommended index type. Creates a multilayer navigable graph with superior speed-recall tradeoff. Can be created on empty tables (no training step required).
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);
-- With tuning parameters
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops) WITH (m = 16, ef_construction = 64);
| Parameter | Default | Description |
|---|---|---|
| m | 16 | Max connections per layer. Higher = better recall, more memory |
| ef_construction | 64 | Build-time candidate list size. Higher = better graph quality, slower build |
| hnsw.ef_search | 40 | Query-time candidate list size. Higher = better recall, slower queries. Should be ≥ LIMIT. |
ef_search tuning (rough guidelines—actual results vary by dataset):
| ef_search | Approx Recall | Relative Speed |
|---|---|---|
| 40 | lower (~95% on some benchmarks) | 1x (baseline) |
| 100 | higher | ~2x slower |
| 200 | very high | ~4x slower |
| 400 | near-exact | ~8x slower |
-- Set search parameter for session
SET hnsw.ef_search = 100;
-- Set for single query
BEGIN;
SET LOCAL hnsw.ef_search = 100;
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;
Default to HNSW. Use IVFFlat only when HNSW’s operational costs matter more than peak recall.
Choose IVFFlat when build time and memory footprint matter more than peak recall. Avoid it when you need top recall, or when you must index an empty or rapidly changing table: unlike HNSW, IVFFlat needs representative data at build time to pick its list centroids.

Tune lists and ivfflat.probes; higher probes = better recall, slower queries.

Starter config:
CREATE INDEX ON items
USING ivfflat (embedding halfvec_cosine_ops)
WITH (lists = 1000);
SET ivfflat.probes = 10;
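To build intuition for lists and probes, here is a toy Python model of the IVFFlat idea: vectors are bucketed under their nearest centroid, and a query scans only the probes closest buckets. This is a sketch under simplifying assumptions (fixed centroids, exact distances), not pgvector's implementation.

```python
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ToyIVF:
    """Vectors are assigned to their nearest centroid's list; a query
    scans only the `probes` nearest lists, trading recall for speed."""
    def __init__(self, centroids):
        self.centroids = centroids
        self.lists = {i: [] for i in range(len(centroids))}

    def add(self, vec):
        i = min(range(len(self.centroids)), key=lambda c: l2(vec, self.centroids[c]))
        self.lists[i].append(vec)

    def search(self, q, k, probes):
        order = sorted(range(len(self.centroids)), key=lambda c: l2(q, self.centroids[c]))
        cands = [v for c in order[:probes] for v in self.lists[c]]
        return sorted(cands, key=lambda v: l2(q, v))[:k]

idx = ToyIVF(centroids=[[0.0, 0.0], [10.0, 10.0]])
for v in ([0.1, 0.2], [9.5, 9.5], [5.5, 5.5]):
    idx.add(v)
# [5.5, 5.5] lands in the [10, 10] list; a boundary query with probes=1 misses it
print(idx.search([4.9, 4.9], k=1, probes=1))  # [[0.1, 0.2]]  (wrong answer)
print(idx.search([4.9, 4.9], k=1, probes=2))  # [[5.5, 5.5]]  (true nearest)
```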
- Use halfvec by default for storage and indexing.
- Budget roughly 4-6 KB per 1536-dim halfvec vector at m=16 (order of magnitude); 3072-dim is ~2x; m=32 roughly doubles HNSW link/graph overhead.
- If even halfvec doesn't fit in memory, use binary quantization + re-ranking.

Approximate halfvec capacity at m=16, 1536-dim (assumes RAM is mostly available for index caching):
| RAM | Approx max halfvec vectors |
|---|---|
| 16 GB | ~2–3M vectors |
| 32 GB | ~4–6M vectors |
| 64 GB | ~8–12M vectors |
| 128 GB | ~16–25M vectors |
For 3072-dim embeddings, divide these numbers by ~2.
For m=32, also divide capacity by ~2.
If the index cannot fit in memory at this scale, use binary quantization.
These are ranges, not guarantees. Validate by monitoring cache residency and p95/p99 latency under load.
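The table above can be turned into a back-of-the-envelope estimator. The 6 KB-per-vector constant below is an assumption (the conservative end of a rough 4-6 KB order-of-magnitude figure for 1536-dim halfvec at m=16), so treat the output as a planning range, not a guarantee.

```python
def max_vectors(ram_gb, dim=1536, m=16, bytes_per_vec=6144):
    # bytes_per_vec is an ASSUMED ~6 KB footprint per 1536-dim halfvec
    # vector at m=16; doubling dim or m roughly doubles the footprint.
    per_vec = bytes_per_vec * (dim / 1536) * (m / 16)
    return int(ram_gb * 2**30 / per_vec)

print(f"{max_vectors(16):,}")            # roughly the 16 GB row of the table
print(f"{max_vectors(64, dim=3072):,}")  # 3072-dim: capacity roughly halved
```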
Binary quantization cuts vector memory 32× relative to float32 vectors (16× relative to halfvec). Use it with re-ranking for acceptable recall.
-- Table with generated column for binary quantization
CREATE TABLE items (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
contents TEXT NOT NULL,
embedding halfvec(1536) NOT NULL,
embedding_bq bit(1536) GENERATED ALWAYS AS (binary_quantize(embedding)::bit(1536)) STORED
);
CREATE INDEX ON items USING hnsw (embedding_bq bit_hamming_ops);
-- Query with re-ranking for better recall
-- ef_search must be >= inner LIMIT to retrieve enough candidates
SET hnsw.ef_search = 800;
WITH q AS (
SELECT binary_quantize($1::halfvec(1536))::bit(1536) AS qb
)
SELECT *
FROM (
SELECT i.id, i.contents, i.embedding
FROM items i, q
ORDER BY i.embedding_bq <~> q.qb -- computes binary distance using index
LIMIT 800
) candidates
ORDER BY candidates.embedding <=> $1::halfvec(1536) -- computes halfvec distance (no index), more accurate than binary
LIMIT 10;
The 80× oversampling ratio (800 candidates for 10 results) is a reasonable starting point. Binary quantization loses precision, so more candidates are needed to find true nearest neighbors during re-ranking. Increase if recall is insufficient; decrease if re-ranking latency is too high.
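The two-stage shape of the SQL above (cheap Hamming scan, then exact re-rank of an oversampled candidate set) can be modeled in plain Python. binary_quantize here follows pgvector's semantics (1 for positive components, else 0); the random data and recall check are illustrative only.

```python
import math
import random

def binary_quantize(v):
    # pgvector semantics: each component maps to 1 if positive, else 0
    return tuple(1 if x > 0 else 0 for x in v)

def hamming(a, b):  # <~> on bit columns
    return sum(x != y for x, y in zip(a, b))

def cosine(a, b):   # <=> full-precision distance
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

def search_with_rerank(query, vectors, k=10, oversample=80):
    qb = binary_quantize(query)
    # Stage 1: cheap Hamming scan over quantized bits (the indexed inner query)
    cands = sorted(vectors, key=lambda v: hamming(binary_quantize(v), qb))[:k * oversample]
    # Stage 2: exact re-rank of the small candidate set (the outer ORDER BY)
    return sorted(cands, key=lambda v: cosine(v, query))[:k]

random.seed(0)
vectors = [[random.gauss(0, 1) for _ in range(64)] for _ in range(1000)]
q = [random.gauss(0, 1) for _ in range(64)]
exact_top = sorted(vectors, key=lambda v: cosine(v, q))[:10]
approx_top = search_with_rerank(q, vectors, k=10, oversample=80)
recall = sum(1 for v in approx_top if v in exact_top) / 10
print(recall)  # recall@10 after re-ranking
```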
| Scale | Vectors | Config | Notes |
|---|---|---|---|
| Small | <100K | Defaults | Index optional but improves tail latency |
| Medium | 100K–5M | Defaults | Monitor p95 latency; most common production range |
| Large | 5M+ | ef_construction=100+ | Memory residency critical |
| Very Large | 10M+ | Binary quantization + re-ranking | Add RAM or partition first if possible |
Tune ef_search first for recall; only increase m if recall plateaus and memory allows. Under concurrency, tail latency spikes when the index doesn't fit in memory. Binary quantization is an escape hatch—prefer adding RAM or partitioning first.
Filtered vector search requires care. Depending on filter selectivity and query shape, filters can cause early termination (too few rows, missing results) or increase work (latency).
By default, HNSW may stop early when a WHERE clause is present, which can lead to fewer results than expected. Iterative scan allows HNSW to continue searching until enough filtered rows are found.
Enable iterative scan when filters materially reduce the result set.
-- Enable iterative scans for filtered queries
SET hnsw.iterative_scan = relaxed_order;
SELECT id, contents
FROM items
WHERE category_id = 123
ORDER BY embedding <=> $1::halfvec(1536)
LIMIT 10;
If results are still sparse, increase the scan budget:
SET hnsw.max_scan_tuples = 50000;
Trade-off: increasing hnsw.max_scan_tuples improves recall but can significantly increase latency.
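A toy simulation makes the failure mode concrete: collecting a fixed candidate set and then filtering can return fewer than k rows, while an iterative scan keeps walking outward (up to a budget) until it has k matches. This is a simplified model of the behavior, not pgvector's actual algorithm.

```python
def topk_then_filter(rows, dist, k, match, ef):
    # No iterative scan: the index hands back ef candidates, the WHERE
    # clause discards non-matching ones, and the query may return < k rows.
    cands = sorted(rows, key=dist)[:ef]
    return [r for r in cands if match(r)][:k]

def iterative_scan(rows, dist, k, match, max_scan_tuples):
    # Iterative scan: keep walking outward in distance order until k
    # matching rows are found or the scan budget is exhausted.
    out = []
    for r in sorted(rows, key=dist)[:max_scan_tuples]:
        if match(r):
            out.append(r)
            if len(out) == k:
                break
    return out

# 1000 rows at increasing distance; only every 50th row passes the filter
rows = [{"dist": i, "category_id": 123 if i % 50 == 0 else 7} for i in range(1000)]
dist = lambda r: r["dist"]
match = lambda r: r["category_id"] == 123
print(len(topk_then_filter(rows, dist, 10, match, ef=40)))               # 1 row: early stop
print(len(iterative_scan(rows, dist, 10, match, max_scan_tuples=1000)))  # 10 rows
```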
When iterative scan is not needed:
Highly selective filters (under ~10k matching rows): use a B-tree index on the filter column so Postgres can prefilter before ANN.
CREATE INDEX ON items (category_id);
Low-cardinality filters (few distinct values): use a partial HNSW index per filter value.
CREATE INDEX ON items
USING hnsw (embedding halfvec_cosine_ops)
WHERE category_id = 11;
Many filter values or large datasets: partition by the filter key to keep each ANN index small.
CREATE TABLE items (
embedding halfvec(1536),
category_id int
) PARTITION BY LIST (category_id);
For large datasets with label-based filters, pgvectorscale's StreamingDiskANN index supports filtered indexes on smallint[] columns. Labels are indexed alongside vectors, enabling efficient filtered search without the accuracy tradeoffs of HNSW post-filtering. See the pgvectorscale documentation for setup details.
-- COPY is the fastest way to bulk-load; binary format beats text but requires proper encoding
-- Text format: '[0.1, 0.2, ...]'
COPY items (contents, embedding) FROM STDIN;
-- Binary format (if your client supports it):
COPY items (contents, embedding) FROM STDIN WITH (FORMAT BINARY);
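For clients without binary COPY support, the text format is easy to generate. A minimal sketch, assuming pgvector's documented '[v1,v2,...]' input syntax and tab-separated COPY text rows (note: real contents values would need COPY escaping for tabs, newlines, and backslashes).

```python
import io

def vector_literal(vec):
    # pgvector text input format: '[v1,v2,...]'
    return "[" + ",".join(repr(float(x)) for x in vec) + "]"

def copy_text_rows(rows):
    # Tab-separated COPY text rows for (contents, embedding); feed the
    # result to COPY ... FROM STDIN through your driver's copy API.
    # Caveat: contents with tabs/newlines/backslashes needs COPY escaping.
    buf = io.StringIO()
    for contents, emb in rows:
        buf.write(f"{contents}\t{vector_literal(emb)}\n")
    return buf.getvalue()

print(copy_text_rows([("hello world", [0.1, 0.2])]))
```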
-- Add indexes AFTER loading
SET maintenance_work_mem = '4GB';
SET max_parallel_maintenance_workers = 7;
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);
-- Check index size
SELECT pg_size_pretty(pg_relation_size('items_embedding_idx'));
-- Debug query performance
EXPLAIN (ANALYZE, BUFFERS) SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
-- Monitor index build progress
SELECT phase, round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS "%"
FROM pg_stat_progress_create_index;
-- Compare approximate vs exact recall
BEGIN;
SET LOCAL enable_indexscan = off; -- Force exact search
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;
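Comparing the id lists from the approximate and exact queries gives a concrete recall number. A small helper (the example id lists here are made up):

```python
def recall_at_k(approx_ids, exact_ids, k):
    # Fraction of the true top-k (from the index-off query) that the
    # approximate ANN query also returned.
    return len(set(approx_ids[:k]) & set(exact_ids[:k])) / k

approx_ids = [3, 1, 4, 15, 9, 2, 6, 5, 35, 8]   # hypothetical ANN result ids
exact_ids = [3, 1, 4, 15, 9, 2, 6, 5, 35, 89]   # hypothetical exact result ids
print(recall_at_k(approx_ids, exact_ids, 10))   # 0.9
```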
-- Force index use for debugging
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;
| Symptom | Likely Cause | Fix |
|---|---|---|
| Query does not use ANN index | Missing ORDER BY + LIMIT, operator mismatch, or implicit casts | Use ORDER BY with a distance operator that matches the index ops class; explicitly cast query vectors |
| Fewer results than expected (filtered query) | HNSW stops early due to filter | Enable iterative scan; increase hnsw.max_scan_tuples; or prefilter (B-tree), use partial indexes, or partition |
| Fewer results than expected (unfiltered query) | ANN recall too low | Increase hnsw.ef_search |
| High latency with low CPU usage | HNSW index not resident in memory | Use halfvec, reduce m/ef_construction, add RAM, partition, or use binary quantization |
| Slow index builds | Insufficient build memory or parallelism | Increase maintenance_work_mem and max_parallel_maintenance_workers; build after bulk load |
| Out-of-memory errors | Index too large for available RAM | Use halfvec, reduce index parameters, or switch to binary quantization with re-ranking |
| Zero or missing results | NULL or zero vectors | Avoid NULL embeddings; do not use zero vectors with cosine distance |