cloudflare-python-workers by jezweb/claude-skills
npx skills add https://github.com/jezweb/claude-skills --skill cloudflare-python-workers
Status: Beta (requires python_workers compatibility flag)
Runtime: Pyodide (Python 3.12+ compiled to WebAssembly)
Package Versions: workers-py@1.7.0, workers-runtime-sdk@0.3.1, wrangler@4.58.0
Last Verified: 2026-01-21
Ensure you have installed:
# Create project directory
mkdir my-python-worker && cd my-python-worker
# Initialize Python project
uv init
# Install pywrangler
uv tool install workers-py
# Initialize Worker configuration
uv run pywrangler init
Create src/entry.py:
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello from Python Worker!")
{
  "name": "my-python-worker",
  "main": "src/entry.py",
  "compatibility_date": "2025-12-01",
  "compatibility_flags": ["python_workers"]
}
uv run pywrangler dev
# Visit http://localhost:8787
uv run pywrangler deploy
If you created a Python Worker before December 2025, you were limited to built-in packages. With pywrangler (Dec 2025), you can now deploy with external packages.
Old Approach (no longer needed):
# Limited to built-in packages only
# Could only use httpx, aiohttp, beautifulsoup4, etc.
# Error: "You cannot yet deploy Python Workers that depend on
# packages defined in requirements.txt [code: 10021]"
New Approach (pywrangler):
# pyproject.toml
[project]
dependencies = ["fastapi", "any-pyodide-compatible-package"]
uv tool install workers-py
uv run pywrangler deploy # Now works!
Historical Timeline :
See : Package deployment issue history
As of August 2025, Python Workers use a class-based pattern (not global handlers):
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Access bindings via self.env
        value = await self.env.MY_KV.get("key")
        # Parse request
        url = request.url
        method = request.method
        return Response(f"Method: {method}, URL: {url}")
All Cloudflare bindings are accessed via self.env:
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # D1 Database
        result = await self.env.DB.prepare("SELECT * FROM users").all()
        # KV Storage
        value = await self.env.MY_KV.get("key")
        await self.env.MY_KV.put("key", "value")
        # R2 Object Storage
        obj = await self.env.MY_BUCKET.get("file.txt")
        # Workers AI
        response = await self.env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
            "prompt": "Hello!"
        })
        return Response("OK")
Supported Bindings :
See Cloudflare Bindings Documentation for details.
from workers import WorkerEntrypoint, Response
from js import URL  # JS URL API via Pyodide FFI
import json

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Parse JSON body
        if request.method == "POST":
            body = await request.json()
            return Response(
                json.dumps({"received": body}),
                headers={"Content-Type": "application/json"}
            )
        # Query parameters (JS classes are constructed with .new() in Pyodide;
        # searchParams.get() returns None for a missing key)
        url = URL.new(request.url)
        name = url.searchParams.get("name") or "World"
        return Response(f"Hello, {name}!")
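If you would rather stay in pure Python than go through the JS URL API, the standard library can parse the same query string. A minimal sketch (the helper name is ours):

```python
from urllib.parse import urlparse, parse_qs

def get_query_param(url: str, name: str, default: str) -> str:
    # parse_qs maps each key to a list of values
    values = parse_qs(urlparse(url).query).get(name)
    return values[0] if values else default

print(get_query_param("http://localhost:8787/?name=Ada", "name", "World"))  # Ada
print(get_query_param("http://localhost:8787/", "name", "World"))           # World
```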
from workers import handler

@handler
async def on_scheduled(event, env, ctx):
    # Run on cron schedule
    print(f"Cron triggered at {event.scheduledTime}")
    # Do work...
    await env.MY_KV.put("last_run", str(event.scheduledTime))
Configure in wrangler.jsonc:
{
  "triggers": {
    "crons": ["*/5 * * * *"] // Every 5 minutes
  }
}
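To make the `*/5` schedule concrete, here is a toy interpreter for just the minute field of a cron expression (illustrative only; the real evaluation is done by Cloudflare's scheduler):

```python
def minute_matches(cron_expr: str, minute: int) -> bool:
    # Look only at the first (minute) field of the cron expression
    field = cron_expr.split()[0]
    if field == "*":
        return True
    if field.startswith("*/"):          # step values like */5
        return minute % int(field[2:]) == 0
    return minute == int(field)

print(minute_matches("*/5 * * * *", 10))  # True  (fires at :10)
print(minute_matches("*/5 * * * *", 7))   # False
```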
Python Workflows enable durable, multi-step automation with automatic retries and state persistence.
Python Workflows use the @step.do() decorator pattern because Python does not easily support anonymous callbacks (unlike JavaScript/TypeScript which allows inline arrow functions). This is a fundamental language difference, not a limitation of Cloudflare's implementation.
JavaScript Pattern (doesn't translate):
await step.do("my step", async () => {
  // Inline callback
  return result;
});
Python Pattern (required):
@step.do("my step")
async def my_step():
    # Named function with decorator
    return result

result = await my_step()
Source : Python Workflows Blog
Pyodide captures JavaScript promises (thenables) and proxies them as Python awaitables. This enables Promise.all-equivalent behavior using standard Python async patterns:
import asyncio

@step.do("step_a")
async def step_a():
    return "A"

@step.do("step_b")
async def step_b():
    return "B"

# Concurrent execution (like Promise.all)
results = await asyncio.gather(step_a(), step_b())
# results = ["A", "B"]
Why This Works : JavaScript promises from workflow steps are proxied as Python awaitables, allowing standard asyncio concurrency primitives.
Source : Python Workflows Blog
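The ordering claim above (results come back as ["A", "B"] regardless of which step finishes first) can be checked with plain asyncio outside the Workers runtime:

```python
import asyncio

async def step_a():
    await asyncio.sleep(0.02)  # deliberately slower
    return "A"

async def step_b():
    await asyncio.sleep(0.01)  # finishes first
    return "B"

async def main():
    # gather returns results in call order, not completion order
    return await asyncio.gather(step_a(), step_b())

results = asyncio.run(main())
print(results)  # ['A', 'B']
```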
from workers import WorkflowEntrypoint, WorkerEntrypoint, Response
from js import fetch  # JS fetch via Pyodide FFI

class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        # Step 1
        @step.do("fetch data")
        async def fetch_data():
            response = await fetch("https://api.example.com/data")
            return await response.json()
        data = await fetch_data()

        # Step 2: Sleep
        await step.sleep("wait", "10 seconds")

        # Step 3: Process
        @step.do("process data")
        async def process_data():
            return {"processed": True, "count": len(data)}
        result = await process_data()
        return result

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Create workflow instance
        instance = await self.env.MY_WORKFLOW.create()
        return Response(f"Workflow started: {instance.id}")
Define step dependencies for parallel execution:
class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("step_a")
        async def step_a():
            return "A done"

        @step.do("step_b")
        async def step_b():
            return "B done"

        # step_c waits for both step_a and step_b
        @step.do("step_c", depends=[step_a, step_b], concurrent=True)
        async def step_c(result_a, result_b):
            return f"C received: {result_a}, {result_b}"

        return await step_c()
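Outside the Workflows runtime, the same dataflow — step_a and step_b concurrently, then step_c over both results — can be sketched with plain asyncio (an illustration of the dependency graph, not the step API):

```python
import asyncio

async def step_a():
    return "A done"

async def step_b():
    return "B done"

async def step_c(result_a, result_b):
    return f"C received: {result_a}, {result_b}"

async def main():
    # a and b run concurrently; c starts only once both have completed
    result_a, result_b = await asyncio.gather(step_a(), step_b())
    return await step_c(result_a, result_b)

print(asyncio.run(main()))  # C received: A done, B done
```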
{
  "compatibility_flags": ["python_workers", "python_workflows"],
  "compatibility_date": "2025-12-01",
  "workflows": [
    {
      "name": "my-workflow",
      "binding": "MY_WORKFLOW",
      "class_name": "MyWorkflow"
    }
  ]
}
[project]
name = "my-python-worker"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "beautifulsoup4",
    "httpx"
]

[dependency-groups]
dev = [
    "workers-py",
    "workers-runtime-sdk"
]
Python Workers support pure-Python packages and the packages bundled with Pyodide; see the Pyodide packages list.
Only async HTTP libraries work:
# ✅ WORKS - httpx (async)
import httpx

async with httpx.AsyncClient() as client:
    response = await client.get("https://api.example.com")

# ✅ WORKS - aiohttp
import aiohttp

async with aiohttp.ClientSession() as session:
    async with session.get("https://api.example.com") as response:
        data = await response.json()

# ❌ DOES NOT WORK - requests (sync)
import requests  # Will fail!
Request support for new packages at: https://github.com/cloudflare/workerd/discussions/categories/python-packages
Access JavaScript APIs from Python via Pyodide's FFI:
from workers import WorkerEntrypoint
from js import fetch, console, Response as JSResponse

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Use JavaScript fetch
        response = await fetch("https://api.example.com")
        data = await response.json()
        # Console logging
        console.log("Fetched data:", data)
        # Return JavaScript Response (JS classes are constructed with .new())
        return JSResponse.new("Hello!")
Important : to_py() is a METHOD on JavaScript objects, not a standalone function. Only to_js() is a function.
from js import Object
from pyodide.ffi import to_js

# ❌ WRONG - ImportError!
from pyodide.ffi import to_py
python_data = to_py(js_data)

# ✅ CORRECT - to_py() is a method
async def fetch(self, request):
    data = await request.json()   # Returns JS object
    python_data = data.to_py()    # Convert to Python dict

# Convert Python dict to JavaScript object
python_dict = {"name": "test", "count": 42}
js_object = to_js(python_dict, dict_converter=Object.fromEntries)

# Use in Response
return Response(to_js({"status": "ok"}))
Source : GitHub Issue #3322 (Pyodide maintainer clarification)
This skill prevents 11 documented issues :
Error : TypeError: on_fetch is not defined
Why : Handler pattern changed in August 2025.
# ❌ OLD (deprecated)
@handler
async def on_fetch(request):
    return Response("Hello")

# ✅ NEW (current)
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello")
Error : RuntimeError: cannot use blocking call in async context
Why : Python Workers run async-only. Sync libraries block the event loop.
# ❌ FAILS
import requests
response = requests.get("https://api.example.com")

# ✅ WORKS
import httpx
async with httpx.AsyncClient() as client:
    response = await client.get("https://api.example.com")
Error : ModuleNotFoundError: No module named 'numpy' (or similar)
Why : Only pure Python packages work. Native C extensions are not supported.
Solution : Use Pyodide-compatible alternatives or check Pyodide packages.
Error : Error: Python Workers require the python_workers compatibility flag
Fix : Add to wrangler.jsonc:
{
  "compatibility_flags": ["python_workers"]
}
For Workflows, also add "python_workflows".
Error : Workflow state not persisted correctly
Why : All I/O must happen inside @step.do for durability.
# ❌ BAD - fetch outside step
response = await fetch("https://api.example.com")

@step.do("use data")
async def use_data():
    return await response.json()  # response may be stale on retry

# ✅ GOOD - fetch inside step
@step.do("fetch and use")
async def fetch_and_use():
    response = await fetch("https://api.example.com")
    return await response.json()
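Why the placement matters can be sketched with a toy step runner (ours, not the Workflows API): a completed step's result is persisted, so on retry the cached value is replayed instead of re-running the body. Anything done outside a step gets no such protection.

```python
results_store = {}  # stands in for the workflow's persisted step results

def step_do(name):
    def wrap(fn):
        def run():
            # Replay the persisted result instead of re-executing on retry
            if name not in results_store:
                results_store[name] = fn()
            return results_store[name]
        return run
    return wrap

calls = {"count": 0}

@step_do("fetch and use")
def fetch_and_use():
    calls["count"] += 1       # real I/O would happen here
    return {"value": 42}

first = fetch_and_use()
second = fetch_and_use()      # simulated retry: cached, body not re-run
print(first == second, calls["count"])  # True 1
```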
Error : TypeError: Object of type X is not JSON serializable
Why : Workflow step return values must be JSON-serializable.
Fix : Convert complex objects before returning:
from datetime import datetime

@step.do("process")
async def process():
    # Convert datetime to string
    return {"timestamp": datetime.now().isoformat()}
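A reusable way to handle this is a `default=` hook for `json.dumps`. A sketch (the `to_jsonable` name is ours) covering datetime and set, two common offenders:

```python
import json
from datetime import datetime, timezone

def to_jsonable(obj):
    # Fallback for types the json module cannot serialize natively
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, set):
        return sorted(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

payload = {"timestamp": datetime(2025, 12, 1, tzinfo=timezone.utc), "tags": {"b", "a"}}
print(json.dumps(payload, default=to_jsonable))
# {"timestamp": "2025-12-01T00:00:00+00:00", "tags": ["a", "b"]}
```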
Note : Python Workers have higher cold starts than JavaScript. With Wasm memory snapshots (Dec 2025), heavy packages like FastAPI and Pydantic now load in ~1 second (down from ~10 seconds previously), but this is still ~2x slower than JavaScript Workers (~50ms).
Performance Numbers (as of Dec 2025):
Mitigation :
Source : Python Workers Redux Blog | InfoQ Coverage
Error : Failed to install package X
Causes :
Fix : Check package compatibility, use alternatives, or request support.
Error : Network connection lost when calling Python Worker from JavaScript Worker Source : GitHub Issue #11438
Why It Happens : Dev registry doesn't properly route RPC calls between separately-run Workers in different terminals.
Prevention :
# ❌ Doesn't work - separate terminals
# Terminal 1: npx wrangler dev (JS worker)
# Terminal 2: npx wrangler dev (Python worker)
# Result: Network connection lost error
# ✅ Works - single wrangler instance
npx wrangler dev -c ts/wrangler.jsonc -c py/wrangler.jsonc
Run both workers in a single wrangler instance to enable proper RPC communication.
Error : TypeError: Parser error: The memory limit has been exceeded Source : GitHub Issue #10814
Why It Happens : Large inline data: URLs (>10MB) in HTML trigger parser memory limits. This is NOT about response size—10MB plain text works fine, but 10MB HTML with embedded data URLs fails. Common with Python Jupyter Notebooks that use inline images for plots.
Prevention :
# ❌ FAILS - HTMLRewriter triggered on notebook HTML with data: URLs
response = await fetch("https://origin.example.com/notebook.html")
return response  # Crashes if HTML contains large data: URLs

# ✅ WORKS - Stream directly or use text/plain
response = await fetch("https://origin.example.com/notebook.html")
headers = {"Content-Type": "text/plain"}  # Bypass parser
return Response(await response.text(), headers=headers)
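Before handing fetched HTML to a parser, you could gauge how much inline base64 payload it carries. A rough sketch (the helper and regex are illustrative, not an exhaustive data: URL parser):

```python
import base64
import re

def inline_data_url_bytes(html: str) -> int:
    # Sum the decoded size of base64-encoded data: URLs embedded in the HTML
    total = 0
    for match in re.finditer(r"data:[\w/+.-]+;base64,([A-Za-z0-9+/=]+)", html):
        total += len(base64.b64decode(match.group(1)))
    return total

blob = base64.b64encode(b"\x89PNG" * 100).decode()
html = f'<img src="data:image/png;base64,{blob}">'
print(inline_data_url_bytes(html))  # 400
```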
Workarounds :
- Use the text/plain content-type to bypass the parser
Error : Deployment fails with user error Source : Python Workers Redux Blog
Why It Happens : Wasm snapshots don't support PRNG initialization before request handlers. If you call pseudorandom number generator APIs (like random.seed()) during module initialization, deployment FAILS.
Prevention :
import random

# ❌ FAILS deployment - module-level PRNG call
random.seed(42)

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(str(random.randint(1, 100)))

# ✅ WORKS - PRNG calls inside handlers
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        random.seed(42)  # Initialize inside handler
        return Response(str(random.randint(1, 100)))
Only call PRNG functions inside request handlers, not at module level.
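The handler-level pattern remains deterministic per call, which you can check in plain Python, independent of any Workers concern:

```python
import random

def handle() -> int:
    # Seeding inside the function keeps module import free of PRNG calls
    random.seed(42)
    return random.randint(1, 100)

first, second = handle(), handle()
print(first == second)  # True: same seed, same value on every call
```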
Key points:
- Use the WorkerEntrypoint class pattern
- Set the python_workers compatibility flag
- Access all bindings via self.env
- The @handler decorator for fetch is deprecated

FastAPI can work with Python Workers, but with limitations:
from fastapi import FastAPI
from workers import WorkerEntrypoint
import asgi  # Cloudflare's ASGI adapter for Python Workers

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello from FastAPI"}

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Route through FastAPI via the ASGI adapter (a bare app(request)
        # call does not work; ASGI apps take scope/receive/send)
        return await asgi.fetch(app, request, self.env)
Limitations :
See Cloudflare FastAPI example for details.
{
  "workers-py": "1.7.0",
  "workers-runtime-sdk": "0.3.1",
  "wrangler": "4.58.0"
}
Note : Always pin versions for reproducible builds. Check PyPI workers-py for latest releases.
Compatibility Date Guidance :
- 2025-12-01 for new projects (latest features, including pywrangler improvements)
- 2025-08-01 only if you need to match older production Workers

Weekly Installs: 316
Repository: https://github.com/jezweb/claude-skills
GitHub Stars: 652
First Seen: Jan 20, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Warn
Installed on: claude-code (264), gemini-cli (211), opencode (206), cursor (200), antigravity (194), codex (183)