cloudflare-kv by jezweb/claude-skills
npx skills add https://github.com/jezweb/claude-skills --skill cloudflare-kv
Status : Production Ready ✅
Last Updated : 2026-01-20
Dependencies : cloudflare-worker-base (for Worker setup)
Latest Versions : wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0
Recent Updates (2025) :
# Create namespace
npx wrangler kv namespace create MY_NAMESPACE
# Output: [[kv_namespaces]] binding = "MY_NAMESPACE" id = "<UUID>"
wrangler.jsonc:
{
"kv_namespaces": [{
"binding": "MY_NAMESPACE", // Access as env.MY_NAMESPACE
"id": "<production-uuid>",
"preview_id": "<preview-uuid>" // Optional: local dev
}]
}
Basic Usage:
type Bindings = { MY_NAMESPACE: KVNamespace };
app.post('/set/:key', async (c) => {
await c.env.MY_NAMESPACE.put(c.req.param('key'), await c.req.text());
return c.json({ success: true });
});
app.get('/get/:key', async (c) => {
const value = await c.env.MY_NAMESPACE.get(c.req.param('key'));
return value ? c.json({ value }) : c.json({ error: 'Not found' }, 404);
});
// Get single key
const value = await env.MY_KV.get('key'); // string | null
const data = await env.MY_KV.get('key', { type: 'json' }); // object | null
const buffer = await env.MY_KV.get('key', { type: 'arrayBuffer' });
const stream = await env.MY_KV.get('key', { type: 'stream' });
// Get with cache (minimum 60s)
const value = await env.MY_KV.get('key', { cacheTtl: 300 }); // 5 min edge cache
// Bulk read (counts as 1 operation)
const values = await env.MY_KV.get(['key1', 'key2']); // Map<string, string | null>
// With metadata
const { value, metadata } = await env.MY_KV.getWithMetadata('key');
const result = await env.MY_KV.getWithMetadata(['key1', 'key2']); // Bulk with metadata
// Basic write (max 1/second per key)
await env.MY_KV.put('key', 'value');
await env.MY_KV.put('user:123', JSON.stringify({ name: 'John' }));
// With expiration
await env.MY_KV.put('session', data, { expirationTtl: 3600 }); // 1 hour
await env.MY_KV.put('token', value, { expiration: Math.floor(Date.now()/1000) + 86400 });
// With metadata (max 1024 bytes)
await env.MY_KV.put('config', 'dark', {
metadata: { updatedAt: Date.now(), version: 2 }
});
Critical Limits:
// List with pagination
const result = await env.MY_KV.list({ prefix: 'user:', limit: 1000, cursor });
// result: { keys: [], list_complete: boolean, cursor?: string }
// CRITICAL: Always check list_complete, not keys.length === 0
let cursor: string | undefined;
do {
const result = await env.MY_KV.list({ prefix: 'user:', cursor });
processKeys(result.keys);
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
// Delete single key
await env.MY_KV.delete('key'); // Always succeeds
// Bulk delete (CLI only, up to 10,000 keys)
// npx wrangler kv bulk delete --binding=MY_KV keys.json
async function getCachedData(kv: KVNamespace, key: string, fetchFn: () => Promise<any>, ttl = 300) {
const cached = await kv.get(key, { type: 'json', cacheTtl: ttl });
  if (cached !== null) return cached; // only a true miss counts; 0, '', and false are valid cached values
const data = await fetchFn();
await kv.put(key, JSON.stringify(data), { expirationTtl: ttl * 2 });
return data;
}
Guidelines : Minimum 60s, use for read-heavy workloads (100:1 read/write ratio)
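A minimal usage sketch of the cache-aside helper above. `KVNamespace` only exists inside the Workers runtime, so `MemoryKV` is a hypothetical in-memory stand-in for the two methods the helper uses, and `fetchUserFromOrigin` is an assumed origin fetch:

```typescript
// Hypothetical stand-in for the subset of KVNamespace used by the helper.
class MemoryKV {
  private store = new Map<string, string>();
  async get(key: string, _opts?: { type?: 'json'; cacheTtl?: number }): Promise<any> {
    const raw = this.store.get(key);
    return raw === undefined ? null : JSON.parse(raw);
  }
  async put(key: string, value: string, _opts?: { expirationTtl?: number }): Promise<void> {
    this.store.set(key, value);
  }
}

async function getCachedData(kv: MemoryKV, key: string, fetchFn: () => Promise<any>, ttl = 300) {
  const cached = await kv.get(key, { type: 'json', cacheTtl: ttl });
  if (cached !== null) return cached;
  const data = await fetchFn();
  await kv.put(key, JSON.stringify(data), { expirationTtl: ttl * 2 });
  return data;
}

// Hypothetical origin fetch; in a real Worker this would hit an API or database.
let originCalls = 0;
const fetchUserFromOrigin = async () => {
  originCalls++;
  return { id: 123, plan: 'pro' };
};

const kv = new MemoryKV();
const first = await getCachedData(kv, 'user:123', fetchUserFromOrigin);  // miss: fetches origin
const second = await getCachedData(kv, 'user:123', fetchUserFromOrigin); // hit: served from KV
```

The point of the pattern: repeated reads within the TTL never touch the origin, which is what makes the 100:1 read/write ratio cheap.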
// Store small values (<1024 bytes) in metadata to avoid separate get() calls
await env.MY_KV.put('user:123', '', {
metadata: { status: 'active', plan: 'pro', lastSeen: Date.now() }
});
// list() returns metadata automatically (no additional get() calls)
const users = await env.MY_KV.list({ prefix: 'user:' });
users.keys.forEach(({ name, metadata }) => console.log(name, metadata.status));
KV performance varies based on key temperature:
| Type | Response Time | When It Happens |
|---|---|---|
| Hot keys | 6-8ms | Read 2+ times/minute per datacenter |
| Cold keys | 100-300ms | Infrequently accessed, fetched from central storage |
Post-August 2025 Improvements :
Optimization : Use key coalescing to make cold keys benefit from hot key caching:
// ❌ Bad: Many cold keys (300ms each)
await kv.put('user:123:name', 'John');
await kv.put('user:123:email', 'john@example.com');
await kv.put('user:123:plan', 'pro');
// Each read of a cold key: ~100-300ms
const name = await kv.get('user:123:name'); // Cold
const email = await kv.get('user:123:email'); // Cold
const plan = await kv.get('user:123:plan'); // Cold
// ✅ Good: Single hot key (6-8ms)
await kv.put('user:123', JSON.stringify({
name: 'John',
email: 'john@example.com',
plan: 'pro'
}));
// Single read, cached as hot key: ~6-8ms
const user = JSON.parse(await kv.get('user:123'));
CacheTtl helps cold keys : For infrequently-read data, cacheTtl reduces cold read latency.
Trade-off : Coalescing requires read-modify-write for updates
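The trade-off above can be sketched as a read-modify-write helper. `KVLike` is a minimal stand-in for `KVNamespace` so the sketch runs outside the Workers runtime, and `updateField` is a hypothetical helper, not a KV API. Note that KV has no transactions, so concurrent updates to the same key are last-write-wins:

```typescript
type KVLike = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
};

async function updateField(kv: KVLike, key: string, field: string, value: unknown) {
  const raw = await kv.get(key);                              // 1. read
  const record: Record<string, any> = raw ? JSON.parse(raw) : {};
  record[field] = value;                                      // 2. modify
  await kv.put(key, JSON.stringify(record));                  // 3. write back
  return record;
}

// In-memory demonstration:
const store = new Map<string, string>();
const kv: KVLike = {
  get: async (k) => store.get(k) ?? null,
  put: async (k, v) => { store.set(k, v); },
};
await kv.put('user:123', JSON.stringify({ name: 'John', plan: 'free' }));
const updated = await updateField(kv, 'user:123', 'plan', 'pro');
```

Because each field update rewrites the whole record, coalescing also counts against the 1 write/second/key limit for every field change.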
async function* paginateKV(kv: KVNamespace, options: { prefix?: string } = {}) {
let cursor: string | undefined;
do {
const result = await kv.list({ ...options, cursor });
yield result.keys;
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
}
// Usage
for await (const keys of paginateKV(env.MY_KV, { prefix: 'user:' })) {
processKeys(keys);
}
async function putWithRetry(kv: KVNamespace, key: string, value: string, opts?: KVNamespacePutOptions) {
let attempts = 0, delay = 1000;
while (attempts < 5) {
try {
await kv.put(key, value, opts);
return;
} catch (error) {
if ((error as Error).message.includes('429')) {
attempts++;
if (attempts >= 5) throw new Error('Max retry attempts');
await new Promise(r => setTimeout(r, delay));
delay *= 2; // Exponential backoff
} else throw error;
}
}
}
KV is eventually consistent across Cloudflare's global network (Aug 2025 redesign: hybrid storage, <5ms p99 latency):
How It Works:
Example:
// Tokyo: Write
await env.MY_KV.put('counter', '1');
const value = await env.MY_KV.get('counter'); // "1" ✅ (same POP, RYOW)
// London (within 60s): May be stale ⚠️
const value2 = await env.MY_KV.get('counter'); // Might be old value
// After 60+ seconds: Consistent ✅
Read-Your-Own-Write (RYOW) Guarantee : Since August 2025 redesign, requests routed through the same Cloudflare point of presence see their own writes immediately. Global consistency across different POPs still takes up to 60 seconds.
Timestamp Mitigation Pattern (for critical consistency needs):
// Use timestamp in key structure to avoid consistency issues
const timestamp = Date.now();
await kv.put(`user:123:${timestamp}`, userData);
// Find latest using list with prefix
const result = await kv.list({ prefix: 'user:123:' });
const latestKey = result.keys.sort((a, b) =>
parseInt(b.name.split(':')[2]) - parseInt(a.name.split(':')[2])
).at(0);
Use KV for : Read-heavy workloads (100:1 ratio), config, feature flags, caching, user preferences
Don't use KV for : Financial transactions, strong consistency, >1 write/second per key, critical data
Need strong consistency? Use Durable Objects
Source : Redesigning Workers KV
# Create namespace
npx wrangler kv namespace create MY_NAMESPACE [--preview]
# Manage keys (add --remote flag to access production data)
npx wrangler kv key put --binding=MY_KV "key" "value" [--ttl=3600] [--metadata='{}']
npx wrangler kv key get --binding=MY_KV "key" [--remote]
npx wrangler kv key list --binding=MY_KV [--prefix="user:"] [--remote]
npx wrangler kv key delete --binding=MY_KV "key"
# Bulk operations (up to 10,000 keys)
npx wrangler kv bulk put --binding=MY_KV data.json
npx wrangler kv bulk delete --binding=MY_KV keys.json
IMPORTANT : CLI commands default to local storage. Add --remote flag to access production/remote data.
Connect local Workers to production KV namespaces during development:
wrangler.jsonc:
{
"kv_namespaces": [{
"binding": "MY_KV",
"id": "production-uuid",
"remote": true // Connect to live KV
}]
}
How It Works:
Benefits:
⚠️ Warning : Writes affect production data. Consider using a staging namespace with remote: true instead of production.
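If you follow that advice, point the binding at a dedicated staging namespace instead (a sketch; the staging UUID is a placeholder):

```jsonc
{
  "kv_namespaces": [{
    "binding": "MY_KV",
    "id": "<staging-uuid>",  // staging namespace, not the production one
    "remote": true           // still exercises real remote KV behavior
  }]
}
```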
Version Support:
Source : Remote bindings architecture
| Feature | Free Plan | Paid Plan |
|---|---|---|
| Reads per day | 100,000 | Unlimited |
| Writes per day (different keys) | 1,000 | Unlimited |
| Writes per key per second | 1 | 1 |
| Operations per Worker invocation | 1,000 | 1,000 |
| Namespaces per account | 1,000 | 1,000 |
| Storage per account | 1 GB | Unlimited |
| Key size | 512 bytes | 512 bytes |
| Metadata size | 1024 bytes | 1024 bytes |
| Value size | 25 MiB | 25 MiB |
| Minimum cacheTtl | 60 seconds | 60 seconds |
Critical : 1 write/second per key (429 if exceeded), bulk operations count as 1 operation, namespace limit increased from 200 → 1,000 (Jan 2025)
Cause : Writing to same key >1/second Solution : Use retry with exponential backoff (see Advanced Patterns)
// ❌ Bad
await env.MY_KV.put('counter', '1');
await env.MY_KV.put('counter', '2'); // 429 error!
// ✅ Good
await putWithRetry(env.MY_KV, 'counter', '2');
Cause : Value exceeds 25 MiB Solution : Validate size before writing
if (value.length > 25 * 1024 * 1024) throw new Error('Value exceeds 25 MiB');
Cause : Metadata exceeds 1024 bytes when serialized Solution : Validate serialized size
const serialized = JSON.stringify(metadata);
if (serialized.length > 1024) throw new Error('Metadata exceeds 1024 bytes');
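One caveat on the checks above: `.length` counts UTF-16 code units, not bytes, so multi-byte characters make it undercount. Since the documented limits are in bytes, a byte-accurate version (a sketch, not part of the KV API) measures with `TextEncoder`:

```typescript
const MAX_VALUE_BYTES = 25 * 1024 * 1024; // 25 MiB value limit
const MAX_METADATA_BYTES = 1024;          // metadata limit after serialization

function assertKVSizes(value: string, metadata?: unknown): void {
  const enc = new TextEncoder();
  // Measure UTF-8 bytes, not string length, so multi-byte characters count correctly.
  if (enc.encode(value).byteLength > MAX_VALUE_BYTES) {
    throw new Error('Value exceeds 25 MiB');
  }
  if (metadata !== undefined &&
      enc.encode(JSON.stringify(metadata)).byteLength > MAX_METADATA_BYTES) {
    throw new Error('Metadata exceeds 1024 bytes');
  }
}
```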
Cause : cacheTtl <60 seconds Solution : Use minimum 60
// ❌ Error
await env.MY_KV.get('key', { cacheTtl: 30 });
// ✅ Correct
await env.MY_KV.get('key', { cacheTtl: 60 });
- Avoid calling list() frequently
- Check list_complete when paginating, not keys.length === 0
- Handle missing keys (get() returns null if key doesn't exist)

Cause : Writing to same key >1/second
Solution : Consolidate writes or use retry with exponential backoff
// ❌ Bad: Rate limit
for (let i = 0; i < 10; i++) await kv.put('counter', String(i));
// ✅ Good: Single write
await kv.put('counter', '9');
// ✅ Good: Retry with backoff (inside the loop, so `i` is in scope)
for (let i = 0; i < 10; i++) await putWithRetry(kv, 'counter', String(i));
Cause : Eventual consistency (~60 seconds propagation) Solution : Accept stale reads, use Durable Objects for strong consistency, or implement app-level cache invalidation
Cause : >1000 KV operations in single Worker invocation Solution : Use bulk operations
// ❌ Bad: 5000 operations
for (const key of keys) await kv.get(key); // 5,000 sequential gets
// ✅ Good: 1 operation
const values = await kv.get(keys); // Bulk read
Cause : Deleted/expired keys create "tombstones" that must be iterated through Solution : Always check list_complete, not keys.length
// ✅ Correct pagination
let cursor: string | undefined;
do {
const result = await kv.list({ cursor });
processKeys(result.keys); // Even if empty
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
CRITICAL : When using prefix, you must include it in all paginated calls :
// ❌ WRONG - Loses prefix on subsequent pages
let result = await kv.list({ prefix: 'user:' });
result = await kv.list({ cursor: result.cursor }); // Missing prefix!
// ✅ CORRECT - Include prefix on every call
let cursor: string | undefined;
do {
const result = await kv.list({ prefix: 'user:', cursor });
processKeys(result.keys);
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
Source : List keys documentation
wrangler types Does Not Generate Types for Environment-Nested KV Bindings
Cause : KV namespaces defined within environment configurations (e.g., [env.feature.kv_namespaces]) are not included in generated TypeScript types
Impact : Loss of TypeScript autocomplete and type checking for KV bindings
Source : GitHub Issue #9709
Example Configuration:
# wrangler.toml
[env.feature]
name = "my-worker-feature"
[[env.feature.kv_namespaces]]
binding = "MY_STORAGE_FEATURE"
id = "xxxxxxxxxxxx"
Running npx wrangler types creates type definitions for environment variables but not for the KV namespace bindings.
Workaround:
# Generate types for specific environment
npx wrangler types -e feature
Or define KV namespaces at top level instead of nested in environments:
# Top-level (types generated correctly)
[[kv_namespaces]]
binding = "MY_STORAGE"
id = "xxxxxxxxxxxx"
Note : Runtime bindings still work correctly; this only affects type generation.
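Another workaround (an assumption based on common practice, not from the issue thread) is to declare the missing binding by hand in an ambient declaration file that supplements the generated types; TypeScript's declaration merging adds it to the `Env` interface:

```typescript
// env-overrides.d.ts (hypothetical file name) — hand-declared supplement for
// bindings that `wrangler types` misses. KVNamespace comes from
// @cloudflare/workers-types.
interface Env {
  MY_STORAGE_FEATURE: KVNamespace;
}
```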
wrangler kv key list Returns Empty Array for Remote Data
Cause : CLI commands default to local storage, not remote/production KV
Impact : Users expect to see production data but get an empty array from local storage
Source : GitHub Issue #10395
Solution : Use --remote flag to access production/remote data
# ❌ Shows local storage (likely empty)
npx wrangler kv key list --binding=MY_KV
# ✅ Shows remote/production data
npx wrangler kv key list --binding=MY_KV --remote
Why This Happens : By design, wrangler dev uses local KV storage to avoid interfering with production data. CLI commands follow the same default for consistency.
Applies to : All wrangler kv key commands (get, list, delete, put)
- Correct namespace IDs configured (id vs preview_id)
- cacheTtl values set for reads (min 60s)
- Pagination uses the list_complete check (not keys.length)

Last Updated : 2026-01-20
Package Versions : wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0
Changes : Added 6 research findings - hot/cold key performance patterns, remote bindings (Wrangler 4.37+), wrangler types environment issue, CLI --remote flag requirement, RYOW consistency details, prefix persistence in pagination