The Agent Skills Directory
npx skills add https://code.deepline.com
Use whenever replicating a use-ai (claygent + web) column. Full pattern, pass structure, _extract_primary_source_url() implementation, confidence calibration data, failure modes, and search-angle variants: binary-search-optimizer.md
Summary: Pass A: 3× parallel exa_search (highlights-only, quoted domain queries) → Pass B: deeplineagent synthesis with a jsonSchema containing confidence + missing_angles → Gate: stop if confidence == "high" → Pass C: follow-up searches on missing_angles → Pass D: re-synthesis → Pass E: primary-source deep read via _extract_primary_source_url(company_domain=domain).
Always add research_confidence and research_passes tracking columns. low confidence ≠ bad output (26-row test: 0% high, 35% medium, 65% low — but 50% of low rows contained concretely useful content).
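A minimal control-flow sketch of this gated flow, with search and synthesize standing in for the exa_search and deeplineagent calls (all names and angle strings here are illustrative):

```python
def gated_research(domain, search, synthesize,
                   angles=("financial results", "product launches", "new segments")):
    """Pass A/B, confidence gate, then one follow-up round (Pass C/D)."""
    highlights = [search(f'"{domain}" {angle}') for angle in angles]     # Pass A
    result = synthesize(highlights)                                      # Pass B
    if result["confidence"] != "high" and result.get("missing_angles"):  # gate
        highlights += [search(q) for q in result["missing_angles"]]      # Pass C
        result = synthesize(highlights)                                  # Pass D
    return result
```

The gate is what keeps cost down: high-confidence rows stop after one synthesis; only the rest pay for follow-up searches.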
run_javascript columns must execute before any --in-place pass that references their values. See execution-ordering.md. Exception: run_javascript columns — always run them on all rows. fields=run_javascript:{flatten clay_record} is required before any deepline enrich --in-place pass references {{fields.xxx}}. Not needed with the Python SDK approach — just call json.loads(row['clay_record']) in Python and access any key directly.
Clay session cookies in --with args are sent to Deepline telemetry (POST /api/v2/telemetry/activity). run_javascript executes locally but the command string is transmitted.
Two rules, always apply both:
Never embed the cookie in the payload. Read it from env in JS:
'cookie': process.env.CLAY_COOKIE // RIGHT
'cookie': 'claysession=abc123' // WRONG — appears in telemetry
Store in .env.deepline, add to .gitignore:
# .env.deepline (migration-specific — does not affect any other project env files)
# Use SINGLE quotes — GA cookies contain $o7, $g1 etc. that bash expands with double quotes
CLAY_COOKIE='claysession=<value>; ajs_user_id=<id>'
social-posts-* → two distinct tools: (1) profile post scraping → apify_run_actor_sync (apimaestro/linkedin-profile-scraper) for posts from a specific person's profile. (2) content search / signal monitoring → crustdata_linkedin_posts (requires a keyword field + optional filters for MEMBER, COMPANY, AUTHOR_COMPANY, etc.) — this is a keyword search, not a profile-URL scraper. Use (1) for Clay's social-posts-person action; use (2) for signal monitoring (e.g., "who at ZoomInfo is posting about GTM?"). ✅ crustdata_linkedin_posts keyword search tested; apify_run_actor_sync profile scraper not yet validated end-to-end.
MAX_LONG cap warning: if you add a row cap for expensive downstream passes (e.g. row_range_long = row_range[:20]), document it explicitly — it is easy to miss that some batches silently skip rows above the cap during a full run.
Structured JSON with deeplineagent: make one deeplineagent call per column and extract every needed field from one structured response. Never make multiple model calls where a single jsonSchema output suffices. Example: qualify_person should return one JSON object with score, tier, and reasoning — not three separate calls.
deeplineagent with a jsonSchema stores the object directly in the cell. Reference flattened fields downstream as {{col.field}}; nesting deeper than one object level ({{col.field.nested}}) must be flattened first with run_javascript. Columns referenced by {{xxx}} must come from a prior enrich call. Build payloads with python3 -c "import json; print('col=tool:' + json.dumps({...}))" — never by hand in bash.
Load with: set -a; source .env.deepline; set +a
How to get the cookie: HAR exports strip HttpOnly cookies — claysession will be missing. Instead, copy a curl command from Chrome DevTools Network tab (right-click any api.clay.com request → Copy → Copy as cURL). Extract everything between -b '...' and store as CLAY_COOKIE.
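The extraction step can be scripted; a sketch that assumes Chrome's single-quoted -b '...' form (cookie_from_curl is an illustrative name):

```python
import re

def cookie_from_curl(curl_cmd: str) -> str:
    """Pull the cookie string out of a DevTools 'Copy as cURL' command."""
    match = re.search(r"-b\s+'([^']*)'", curl_cmd)
    if not match:
        raise ValueError("no -b '...' cookie flag found in command")
    return match.group(1)
```

Store the returned string in .env.deepline as CLAY_COOKIE (single-quoted, per the note above).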
| Input type | What it contains | How to parse |
|---|---|---|
| HAR file (app.clay.com_*.har) | Full network traffic including bulk-fetch-records responses with rendered formula cell values — the richest input | Base64-decode + gunzip response.content.text; extract results[].cells |
| ClayMate Lite export (clay-claude-t_xxx-date.json) | tableSchema (raw GET response) + portableSchema ({{@Name}} refs) + bulkFetchRecords (N sample rows, or null if table was empty at export time) — second richest input after HAR | Access .tableSchema for field IDs, .bulkFetchRecords.results[].cells for cell values (if non-null), .portableSchema.columns[].typeSettings.inputsBinding for full prompts and JSON schemas — recoverable even when bulkFetchRecords is null |
| GET /v3/tables/{ID} JSON | Schema only: field names, IDs, action types, column order. No cell values, no prompts | Use for column inventory and field ID→name mapping |
| POST bulk-fetch-records response | Schema + actual cell values for sampled rows — contains rendered formula prompts, action outputs, NO_CELL markers | results[].cells[field_id].value for each field |
| Clay workbook URL | Nothing directly — extract TABLE_ID from URL; fetch schema + records via API with user's CLAY_COOKIE | GET /v3/tables/{TABLE_ID} then POST bulk-fetch-records |
| User description | Column names + action types only — no field IDs, no actual prompts | Weakest input; must approximate everything |
Priority order when multiple inputs available: HAR > ClayMate Lite export > bulk-fetch-records > schema JSON > user description. Always use the richest available input. A ClayMate Lite export already bundles schema + records — use .tableSchema and .bulkFetchRecords from it directly.
When bulkFetchRecords is null: Fall back to portableSchema for prompt and schema recovery:
- .portableSchema.columns[].typeSettings.inputsBinding → find the {name: "prompt"} entry → .formulaText
- {name: "answerSchemaType"} → .formulaMap.jsonSchema (double-escaped string — JSON.parse twice)
- .typeSettings.conditionalRunFormulaText — convert to a row filter in Deepline
- Mark recovered prompts # RECOVERED FROM PORTABLE SCHEMA — field f_xxx (verbatim, not approximated)
How to extract bulk-fetch-records from a HAR:
# Find bulk-fetch-records entries (response body is base64+gzip)
python3 - <<'EOF'
import json, base64, gzip
with open('your-export.har') as f: # replace with your HAR filename
har = json.load(f)
for entry in har['log']['entries']:
url = entry['request']['url']
if 'bulk-fetch-records' in url:
body = entry['response']['content'].get('text', '')
enc = entry['response']['content'].get('encoding', '')
data = base64.b64decode(body) if enc == 'base64' else body.encode()
try:
data = gzip.decompress(data)
except Exception:
pass
print(json.dumps(json.loads(data), indent=2)[:5000])
EOF
Every migration produces this structure:
project/
├── .env.deepline # Clay credentials (never commit — add to .gitignore)
├── .env.deepline.example # Template showing required vars — safe to commit
├── .gitignore # Excludes .env.deepline, *.csv, work_*.csv
├── prompts/
│ └── <name>.txt # One file per AI column. Header documents source:
│ # "# RECOVERED FROM HAR — field f_xxx" ← verbatim from Clay
│ # "# ⚠️ APPROXIMATED — could not recover from HAR" ← guessed
├── scripts/
│ ├── fetch_<table>.sh # Fetches Clay records → seed_<table>.csv
│ └── enrich_<table>.sh # Runs deepline enrich passes → output_<table>.csv
Script output CSV columns: All columns fetched from Clay (using exact field IDs) + one column per enrichment pass. Column names use snake_case aliases matching the pass plan.
Prompt file format: Plain text system prompt. Variables use {{column_name}} syntax (Deepline's interpolation). First line is always a # comment documenting the source (HAR field ID or approximation warning).
Produce before writing any scripts. Get user confirmation before Phase 2.
| # | Column alias | Source | Depends on | Type |
|---|---|---|---|---|
| 1 | record_id | built-in | — | string |
| … | | | | |
graph TD
A[record_id] --> B[clay_record]
B --> C[fields]
C --> D[exa_research]
D --> E[strategic_initiatives]
C --> F[qualify_person]
E --> F
Use classDef colors: blue = local (run_javascript), orange = remote API, green = AI (deeplineagent).
Column alias rule: Derive aliases from the actual Clay column name, snake_cased (e.g. "Work Email" → work_email, "Strategic Initiatives" → strategic_initiatives). The two structural aliases clay_record and fields are fixed — all others follow the Clay schema. Do NOT invent names from a memorized list.
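The alias rule can be captured in a small helper (a sketch; clay_alias is an illustrative name, real scripts may inline this):

```python
import re

def clay_alias(column_name: str) -> str:
    """snake_case a Clay column name per the alias rule above."""
    cleaned = re.sub(r'[^0-9A-Za-z]+', '_', column_name.strip())
    return cleaned.strip('_').lower()
```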
| Pass | Column alias | Deepline tool | Depends on | Notes |
|---|---|---|---|---|
| 1 | clay_record | run_javascript (fetch) | record_id | Cookie from env; alias is always clay_record |
| 2 | fields | run_javascript (flatten) | clay_record | alias is always fields |
| N | <clay_col_snake> | <see clay-action-mappings.md> | <prior passes> | Alias = snake_case(Clay column name) |
Example (illustrative — actual aliases must match your table's column names):
| Pass | Column alias | Deepline tool | Depends on | Notes |
|---|---|---|---|---|
| 1 | clay_record | run_javascript (fetch) | record_id | |
| 2 | fields | run_javascript (flatten) | clay_record | |
| 3 | work_email | cost_aware_first_name_and_domain_to_email_waterfall | fields.first_name, fields.last_name, fields.company_domain | Primary |
| 4 | work_email_li | person_linkedin_to_email_waterfall | fields.linkedin_url | Fallback for rows where 3 returned empty |
| 5 | email_valid | leadmagic_email_validation | work_email | Optional final gate |
| 6 | job_function | deeplineagent | fields.job_title | Classification |
| 7 | company_research | deeplineagent or exa_search -> deeplineagent | fields.company_domain | Pass 1 of 2 |
| 8 | strategic_initiatives | deeplineagent + jsonSchema | company_research | Pass 2 of 2 |
| 9 | qualify_person | deeplineagent + jsonSchema | fields.*, strategic_initiatives | ICP score |
State every unverifiable assumption. Get confirmation before Phase 2.
Do this before writing any prompt approximations. Actual Clay prompt templates often live in formula field cell values in the bulk-fetch-records response.
Discovery procedure:
- formula type fields with cell values that start with "You are..." or contain numbered requirements — these are Clay's rendered prompt templates.
- action type fields: check the actual cell values; if they contain "Status Code: 200" or "NO_CELL", they are webhook calls or unfired actions — not AI outputs.
Prompt recovery priority (richest to weakest):
- portableSchema — columns[].typeSettings.inputsBinding[name=prompt].formulaText has the full prompt even when bulkFetchRecords is null. Mark as # RECOVERED FROM PORTABLE SCHEMA — field f_xxx.
- Otherwise mark as # ⚠️ APPROXIMATED — could not recover.
JSON schema recovery from portableSchema:
import json

with open('clay-export.json') as f:  # ClayMate Lite export (placeholder filename)
    d = json.load(f)

for col in d['portableSchema']['columns']:
    if col['type'] == 'action':
        for inp in col['typeSettings'].get('inputsBinding', []):
            if inp['name'] == 'answerSchemaType':
                schema_raw = inp.get('formulaMap', {}).get('jsonSchema', '').strip('"')
                # Double-escaped: unescape \\" → " and \\n → \n, then parse
                schema_raw = schema_raw.replace('\\"', '"').replace('\\n', '\n').replace('\\\\', '\\')
                schema = json.loads(schema_raw)
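A companion sketch for prompt recovery from the same export (assumes the inputsBinding shape described above; recover_prompts is an illustrative name):

```python
def recover_prompts(export: dict) -> dict:
    """Map field id → verbatim prompt formulaText from portableSchema."""
    prompts = {}
    for col in export.get('portableSchema', {}).get('columns', []):
        for inp in col.get('typeSettings', {}).get('inputsBinding', []):
            if inp.get('name') == 'prompt' and inp.get('formulaText'):
                prompts[col.get('id', '?')] = inp['formulaText']
    return prompts
```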
Fix any Clay formula bugs in recovered prompts: Wrong field references (e.g., {{@Name}} should become {{name}}), {single_brace} syntax (not interpolated by Deepline), Clay.formatForAIPrompt(...) wrapper calls (strip, use field ref directly).
Before assuming how many AI passes a pipeline has, check actual cell values across 3+ records:
| Cell value | Meaning | How to replicate |
|---|---|---|
| NO_CELL | Action never fired | Build from scratch |
| "Status Code: 200" / {"status":200} | HTTP/webhook action (n8n, Zapier) — NOT AI output | run_javascript fetch or stub |
| "" (empty string) | Column ran but produced nothing, or was disabled | Treat as NO_CELL |
| Varied generation-shaped text | Actual AI output | deeplineagent |
A column in the schema may never have run. Always verify cell values before counting AI passes. An empty or "Status Code: 200" column is not a pipeline step.
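The triage table can be mechanized for sampling; a sketch (the string checks are heuristics, not a guarantee):

```python
def triage_cell(value) -> str:
    """Classify one sampled Clay cell per the table above."""
    if value is None or value in ("", "NO_CELL"):
        return "never_ran"       # build from scratch / treat as NO_CELL
    text = value if isinstance(value, str) else str(value)
    if "Status Code: 200" in text or '"status":200' in text.replace(" ", ""):
        return "webhook_output"  # replicate with run_javascript fetch or stub
    return "ai_output"           # generation-shaped text → deeplineagent
```

Run it over 3+ sampled records per column; only columns that ever return "ai_output" count as AI passes.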
Based on what Phase 1 revealed, answer these questions before writing scripts. Only answer the questions that apply to your table — not every table has email columns or AI passes.
Table type (check all that apply):
- Verify enrichment tools: deepline tools search "person enrichment linkedin"
- Verify campaign/delivery tools: deepline tools search "<platform> add leads"
- Company-sourcing table (no record_id fetch step) → see Company Intelligence pipeline below
Email strategy (only if table has email columns):
- generate-email-permutations OR validate-email? → Use cost_aware_first_name_and_domain_to_email_waterfall as primary (permutation + validation built-in), not person_linkedin_to_email_waterfall alone.
- Add person_linkedin_to_email_waterfall as a fallback pass for rows that the primary approach missed.
AI strategy (only if table has use-ai / claygent / octave columns):
- Fix {single_brace} bugs and wrong field refs in recovered prompts.
- Column shows only NO_CELL / "Status Code: 200" (Phase 1 §1.6)? → Exclude it from the pass plan entirely.
Scoring / qualification (only if table has scoring columns):
Dependency ordering (all tables):
- run_javascript? → Must execute before any --in-place pass that references {{col_name}}.
Company intelligence pipeline (only if source field is Mixrank/company-search):
- Find companies / source field → Replace with apollo_company_search + optional prospeo_enrich_company. Ask user for original Mixrank filter criteria (location, size, industry, tech stack).
- route-row action columns → NOT replicable. Produce filtered output CSV instead; document destination table IDs for reference.
- Conditional runs (conditionalRunFormulaText in portableSchema) → implement as a row filter before each pass (skip rows that don't match the condition).
Security (all tables):
- CLAY_COOKIE stored in .env.deepline (not hardcoded in the script)? → Verify .env.deepline is in .gitignore.
- output/ in .gitignore? → Output CSVs contain Clay record PII (names, emails, LinkedIn URLs).
- run_javascript fetch calls use process.env.CLAY_COOKIE, not a hardcoded string?
- Does .env.deepline use single quotes for CLAY_COOKIE? GA cookie values contain $ characters that bash expands inside double quotes.
Two scripts per migration:
clay_fetch_records.sh — Fetches Clay records via run_javascript + fetch().
- schema mode: GET /v3/tables/{id} metadata
- pilot mode: --rows 0:3 (rows 0-2)
- full mode: all rows
claygent_replicate.sh — Replicates AI + enrichment columns via deepline enrich.
Architecture choice: deepline enrich CLI vs Python SDK
For Claygent-heavy tables (multiple use-ai (claygent+web) columns), the validated pattern is a pure Python script that calls deepline tools execute exa_search for external research and deepline tools execute deeplineagent for AI synthesis. This approach:
- skips deepline enrich entirely for AI passes when the logic is easier to orchestrate in Python
- parallelizes with ThreadPoolExecutor across both rows and passes simultaneously
- sidesteps {{field}} interpolation by reading clay_record data directly in Python (json.loads(row['clay_record'])) rather than via deepline enrich --in-place
The deepline enrich CLI pattern (shown below) still applies for non-AI passes (run_javascript, email waterfall, provider lookups) and for simple single-column deeplineagent enrichments.
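The pure-Python orchestration can be as small as the following sketch (enrich_row would shell out to deepline tools execute via subprocess; names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def run_pass(rows, enrich_row, max_workers=8):
    """Apply one enrichment pass to every row in parallel, preserving order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(enrich_row, rows))
```

ThreadPoolExecutor.map preserves input order, so pass outputs line up with the seed CSV rows without extra bookkeeping.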
Cookie pattern (mandatory):
set -a; source .env.deepline; set +a
: "${CLAY_COOKIE:?CLAY_COOKIE must be set in .env.deepline}"
CLAY_VERSION="${CLAY_VERSION:-v20260311_192407Z_5025845142}"
# clay_curl wrapper — required for all Clay API calls (bare curl gets 401)
clay_curl() {
curl -s --fail \
-b "${CLAY_COOKIE}" \
-H "accept: application/json, text/plain, */*" \
-H "origin: https://app.clay.com" \
-H "referer: https://app.clay.com/" \
-H "x-clay-frontend-version: ${CLAY_VERSION}" \
-H "user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/145.0.0.0 Safari/537.36" \
"$@"
}
⚠️ Security: Never hardcode CLAY_COOKIE as a literal value in the script — it will appear in logs, telemetry, and git history. Read it from env only. Check that .env.deepline is in .gitignore. Check that your output/ directory (which contains Clay PII data) is also gitignored.
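These checks are easy to automate; a sketch assuming the project layout shown earlier (security_audit is an illustrative name):

```python
from pathlib import Path

def security_audit(project: Path) -> list:
    """Return a list of checklist failures (empty list = all checks pass)."""
    failures = []
    gitignore = project / '.gitignore'
    ignored = gitignore.read_text().splitlines() if gitignore.exists() else []
    for required in ('.env.deepline', 'output/'):
        if required not in ignored:
            failures.append(f'{required} missing from .gitignore')
    scripts = project / 'scripts'
    for script in (scripts.glob('*.sh') if scripts.exists() else []):
        if 'claysession=' in script.read_text():
            failures.append(f'hardcoded cookie literal in {script.name}')
    return failures
```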
Clay API endpoint facts (verified):
| What you need | Correct endpoint | Notes |
|---|---|---|
| All record IDs | GET /v3/tables/{TABLE_ID}/views/{VIEW_ID}/records/ids | GET /v3/tables/{TABLE_ID}/records/ids returns NotFound — view ID required |
| View ID | GET /v3/tables/{TABLE_ID} → .table.firstViewId | Always fetch dynamically from schema |
| Fetch records | POST /v3/tables/{TABLE_ID}/bulk-fetch-records | Body: {"recordIds": [...], "includeExternalContentFieldIds": []} |
| Response format | {"results": [{id, cells, ...}]} | Key is results; record ID is .id (not .recordId) |
| Record IDs response | {"results": ["r_abc", "r_def", ...]} | Parse with .get("results", []) |
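The endpoint facts reduce to a few builders (a sketch; the api.clay.com host is inferred from the DevTools instructions above and should be confirmed against your HAR):

```python
BASE = "https://api.clay.com"  # assumed host; verify in your own capture

def record_ids_url(table_id: str, view_id: str) -> str:
    # View-scoped form; the table-only /records/ids form returns NotFound.
    return f"{BASE}/v3/tables/{table_id}/views/{view_id}/records/ids"

def bulk_fetch_url(table_id: str) -> str:
    return f"{BASE}/v3/tables/{table_id}/bulk-fetch-records"

def bulk_fetch_body(record_ids: list) -> dict:
    return {"recordIds": record_ids, "includeExternalContentFieldIds": []}
```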
Python subprocess for JSON payloads (mandatory when JS code is in the payload):
WITH_ARG=$(python3 - <<'PYEOF'
import json
code = "const fn=(row.first_name||'').toLowerCase()..."
print('col_name=run_javascript:' + json.dumps({'code': code}))
PYEOF
)
deepline enrich --input seed.csv --output work.csv --with "$WITH_ARG"
This avoids all bash/JSON quoting issues. Never hand-build JSON with embedded JS in bash strings.
Execution ordering — always follow the staged pattern:
See execution-ordering.md for the full pattern with polling loop.
Full CLI patterns: clay-action-mappings.md. Always verify tool IDs before use.
When you encounter a Clay action not listed below, do not guess. Run discovery:
# Step 1 — describe what the action does, search by intent
deepline tools search "<what the action does>"
# Step 2 — inspect the best candidate
deepline tools get <candidate_tool_id>
# Step 3 — if nothing found, fall back to deeplineagent
# Use deeplineagent with jsonSchema for structured outputs; split exa_search and synthesis for research-heavy work
Discovery examples by Clay action type:
| You see in Clay | Search query | Likely result |
|---|---|---|
| enrich-person-with-* | deepline tools search "person enrich linkedin" | leadmagic_profile_search, crustdata_person_enrichment |
| find-email-* | deepline tools search "email finder" | hunter_email_finder, leadmagic_email_finder |
| verify-email-* | deepline tools search "email verify validate" | leadmagic_email_validation, zerobounce_email_validation |
| company-* / enrich-company-* | deepline tools search "company enrich" | apollo_enrich_company, prospeo_enrich_company |
| add-to-campaign-* | deepline tools search "add leads campaign" | instantly_add_to_campaign, smartlead_api_request |
| social-media-* | deepline tools search "linkedin posts scrape" | crustdata_linkedin_posts, apify_run_actor_sync |
| Completely novel action | deepline tools search "<verb> <noun from column name>" | Use top result or deeplineagent fallback |
When to use deeplineagent fallback: If deepline tools search returns no relevant tools, or the action involves model judgment (classification, scoring, generation, summarization), use deeplineagent. Reconstruct the prompt from portableSchema.inputsBinding[name=prompt].formulaText or the cell values in bulkFetchRecords.
| Clay action | Deepline tool | Test status |
|---|---|---|
| generate-email-permutations + entire email waterfall + validate-email | cost_aware_first_name_and_domain_to_email_waterfall (primary) + manual perm_fln + leadmagic_email_validation + person_linkedin_to_email_waterfall (fallback). See Pass Plan 5a–5e. ⚠️ person_linkedin_to_email_waterfall alone = ~13% match rate vs ~99% with the permutation-first approach. CLI syntax: deepline enrich --input seed.csv --output out.csv --with '{"alias":"email_result","tool":"cost_aware_first_name_and_domain_to_email_waterfall","payload":{"first_name":"{{first_name}}","last_name":"{{last_name}}","domain":"{{domain}}"}}' — the play expansion is previewed before running and the waterfall stops early once a valid email is found. | ✅ Tested — found patrick.valle@zoominfo.com (status=valid) via dot pattern on first try; 7 downstream providers skipped |
enrich-person-with-mixrank-v2 | leadmagic_profile_search → crustdata_person_enrichment | Not yet tested against real Clay Mixrank output |
lookup-company-in-other-table | run_javascript (local CSV join) | Not yet tested |
chat-gpt-schema-mapper | deeplineagent; add jsonSchema when you need structured extraction | ✅ Tested (analogous to data_warehouse + job_function passes) |
use-ai (no web) | deeplineagent | ✅ Tested (data_warehouse, job_function, technical_resources_readiness, key_gtm_friction passes) |
use-ai (claygent + web) | Binary search optimizer — Pass 1: 3× parallel exa_search (highlights-only, ~10x cheaper) targeting financial/IR, product launches, new segments → Pass 2: deeplineagent synthesis with jsonSchema containing `confidence: "high \| medium \| low"` plus missing_angles, gating optional follow-up passes (see binary-search-optimizer.md) | ✅ Tested — 26-row calibration run |
octave-qualify-person | deeplineagent + jsonSchema ICP scorer | ✅ Tested — 26 rows |
octave-run-sequence-runner | Pass 1: deeplineagent (signals) → Pass 2: deeplineagent (email) | Pattern tested (find_tension_mapping + verified_pvp_messages); not yet validated against a real sequence-runner Clay column |
add-lead-to-campaign (Smartlead) | smartlead_api_request POST /v1/campaigns/{id}/leads | Not yet tested |
add-lead-to-campaign (Instantly) | instantly_add_to_campaign — use deepline tools execute instantly_add_to_campaign --payload '{"campaign_id":"<id>","contacts":[{"email":"...","first_name":"...","last_name":"...","company":"..."}]}'. List campaigns first with instantly_list_campaigns. | ✅ Tested — {"pushed": 1, "failed": 0, "errors": []} |
exa_search | exa_search (direct) | ✅ Tested extensively — highlights-only and full-text modes, include_domains |
route-row | Not replicable in Deepline. Clay's routing action pushes rows to downstream tables conditionally. Replace with: produce a filtered output CSV per destination. Note destination tableId values from inputsBinding so user knows where data was going. | N/A — produces output CSV instead |
find-lists-of-companies-with-mixrank-source (source type) | Pass 1: apollo_company_search (location, size, industry, tech stack filters) → Pass 2 (optional): prospeo_enrich_company (description, industry, employee count). Ask user for original Mixrank filter criteria. See clay-action-mappings.md Company Source section. | ✅ apollo_company_search tested |
social-posts-* | Two separate tools: (1) Profile post scraper → apify_run_actor_sync (apimaestro/linkedin-profile-scraper) for fetching posts from a specific person's profile. (2) Content search/signal monitoring → crustdata_linkedin_posts (requires keyword field + optional filters for MEMBER, COMPANY, AUTHOR_COMPANY, etc.) — this is a keyword search, NOT a profile URL scraper. Use (1) for Clay's social-posts-person action; use (2) for signal monitoring (e.g., "who at ZoomInfo is posting about GTM?"). | ✅ keyword search tested; profile scraper not yet validated end-to-end |
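To avoid hand-writing JSON with embedded template braces inside bash strings, the --with argument for the email-waterfall row above can be generated programmatically. This is a minimal sketch; the flag names mirror the CLI syntax shown in the table, and json.dumps plus shlex.quote guarantee the payload survives shell quoting:

```python
import json
import shlex

payload = {
    "alias": "email_result",
    "tool": "cost_aware_first_name_and_domain_to_email_waterfall",
    "payload": {
        "first_name": "{{first_name}}",
        "last_name": "{{last_name}}",
        "domain": "{{domain}}",
    },
}
# json.dumps produces valid JSON; shlex.quote makes it safe to paste into bash
arg = shlex.quote(json.dumps(payload))
print(f"deepline enrich --input seed.csv --output out.csv --with {arg}")
```

Paste the printed command directly; never hand-edit the quoted JSON afterward.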
Use this pattern whenever replicating a use-ai (claygent + web) column. The full pattern, pass structure, _extract_primary_source_url() implementation, confidence calibration data, failure modes, and search angle variants are documented in binary-search-optimizer.md.
Summary: Pass A: 3× parallel exa_search (highlights-only, domain-quoted queries) → Pass B: deeplineagent synthesis with jsonSchema containing confidence + missing_angles → gate: if confidence == "high", stop → Pass C: follow-up searches on missing_angles → Pass D: re-synthesize → Pass E: primary-source deep-read via _extract_primary_source_url(company_domain=domain).
Always add research_confidence and research_passes tracking columns. Low confidence ≠ bad output: in the 26-row test, 0% of rows came back high, 35% medium, and 65% low, yet 13/26 rows (50%) were low-confidence and still contained specific, useful content.
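The gated pass structure above can be sketched as a small loop. The stub functions below stand in for the real exa_search and deeplineagent calls and are invented for illustration; only the gate logic follows the summary:

```python
def binary_search_research(search_fn, synthesize_fn, angles):
    """Pass A: fan out searches; Pass B: synthesize with a confidence gate;
    Passes C/D: one follow-up round on missing_angles if not yet high."""
    snippets = [search_fn(q) for q in angles]
    result = synthesize_fn(snippets)
    passes = 2  # A + B
    if result["confidence"] != "high" and result.get("missing_angles"):
        snippets += [search_fn(q) for q in result["missing_angles"]]
        result = synthesize_fn(snippets)
        passes += 2  # C + D
    result["research_passes"] = passes  # tracking column
    return result

# Deterministic stubs (real calls would hit exa_search / deeplineagent)
def fake_search(q):
    return f"snippet for {q}"

def fake_synthesize(snippets):
    conf = "high" if len(snippets) > 3 else "low"
    return {"confidence": conf,
            "missing_angles": [] if conf == "high" else ["funding news"]}

out = binary_search_research(fake_search, fake_synthesize,
                             ["ir page", "product launch", "new segments"])
print(out["confidence"], out["research_passes"])  # high 4
```

Pass E (primary-source deep-read) would hang off the same gate; it is omitted here for brevity.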
- run_javascript columns must be executed before adding --in-place columns that reference their values. See execution-ordering.md. run_javascript columns are exempt from pilot gating — always run them on all rows.
- fields=run_javascript:{flatten clay_record} is required before any {{fields.xxx}} reference when using deepline enrich --in-place. Not required with the Python SDK approach — just call json.loads(row['clay_record']) directly in Python and access any key.
- {{col.field}} works; {{col.field.nested}} fails. Flatten first (CLI) or go multi-level in Python.
- MAX_LONG cap warning: if you add a row-count cap for expensive downstream passes (e.g. row_range_long = row_range[:20]), document it explicitly — it's easy to miss that some batches silently skip rows beyond the cap on a full run.
- deeplineagent: make a single deeplineagent invocation per column and extract all needed fields from one structured response. Never make multiple model calls where one jsonSchema output would suffice. Example: qualify_person should return one JSON object with score, tier, and reasoning — not three separate calls.
- deeplineagent with jsonSchema stores the object directly in the cell. Reference flat fields downstream as {{col.field_name}}. If you need deeper nesting than one object level, flatten it first with run_javascript.
- Any column referenced as {{xxx}} must be populated by a prior enrich call.
- Generate payloads with python3 -c "import json; print('col=tool:' + json.dumps({...}))" — never hand-write JSON with embedded JS in bash strings.
- Never hard-code the CLAY_COOKIE value in code — always read process.env.CLAY_COOKIE.
- Accept valid, valid_catch_all, AND catch_all from leadmagic_email_validation — all three are as reliable as Clay's own ZeroBounce output. valid_catch_all is the highest-confidence status (engagement-confirmed, <5% bounce rate). Do NOT accept unknown.
- See patterns.md for prescriptive patterns + antipatterns for every common mistake.
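A minimal illustration of the Python SDK path for nested access (the sample row is fabricated; json.loads on clay_record gives full nested access with no flatten step, unlike {{col.field.nested}} in the CLI):

```python
import json

# Fabricated row shaped like a raw clay_record cell
row = {"clay_record": json.dumps(
    {"person": {"name": "Pat", "company": {"domain": "example.com"}}})}

record = json.loads(row["clay_record"])       # no flatten pass required
print(record["person"]["company"]["domain"])  # multi-level access works directly
```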
After running the full pipeline, compare against Clay ground truth.
Run the bundled comparison script:

```shell
# Auto-detects clay_ prefixed columns → unprefixed mapping
python3 /path/to/skill/scripts/compare.py ground_truth.csv enriched.csv

# Or with explicit column mapping
python3 /path/to/skill/scripts/compare.py ground_truth.csv enriched.csv \
  --map '{"clay_final_email":"work_email","clay_job_function":"job_function"}'
```
| Column type | Pass threshold | How to check |
|---|---|---|
Email (work_email) | DL found rate ≥ 95% of Clay found rate | compare.py auto-flags |
Classify (job_function) | ≥ 95% exact match on pilot rows | compare.py distribution output |
Structured (deeplineagent + jsonSchema) | Object present in 100% of rows, all schema fields populated | Spot-check 5 rows |
Fetch (run_javascript) | 100% non-null for all mapped fields | compare.py fill rate |
Claygent/web research (exa_search + deeplineagent) | is_failed_research() returns False on ≥ 85% of rows | See confidence calibration section |
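is_failed_research() ships with the skill's scripts and its exact implementation is not reproduced in this document. A hypothetical stand-in, keyed off the UNCHANGED/UNRESOLVED failure markers that appear in the calibration data:

```python
FAILURE_MARKERS = ("UNCHANGED", "UNRESOLVED")

def is_failed_research(cell):
    """Assumed heuristic: a research cell counts as failed when it is
    empty/None or carries one of the known failure markers."""
    if not cell or not cell.strip():
        return True
    return any(marker in cell for marker in FAILURE_MARKERS)
```

Treat this as a sketch of the check's intent, not the canonical logic; use the bundled script for real threshold checks.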
Company mix: ZoomInfo, Klaviyo, PetScreening, PulsePoint, Tines, Bloomreach, Aviatrix, BambooHealth, Circle, ZipHQ, Edmentum, PandaDoc, project44, Apollo GraphQL, Digital Turbine, Amagi, Stash, ConstructConnect, Momentive Software, and others.
| Metric | Value |
|---|---|
| Total rows | 26 |
| All 3 passes fired | 26/26 (100%) |
| Confidence: high | 0/26 (0%) |
| Confidence: medium/medium-high | 9/26 (35%) |
| Confidence: low (but useful content) | 13/26 (50%) |
| Failed (UNCHANGED/UNRESOLVED) | 4/26 (15%) |
| Score: Tier B (5–6) | 14/26 (54%) |
| Score: Tier C (0–4) | 12/26 (46%) |
Clay vs Deepline content comparison (ZoomInfo, Patrick Valle):
- Clay (Claygent): confidence=high, 10 search steps, 133.75s, $0.13 cost, queries Google
- Deepline: confidence=low (despite specific accurate content), 3 passes, parallel execution, ~$0.02–0.05 cost, queries Exa

Companies that reliably reach medium confidence: large public companies (ZoomInfo, Klaviyo, Circle), well-funded startups with active newsrooms (Tines, Aviatrix, project44), and companies with recent PR activity (PandaDoc, BambooHealth, Digital Turbine, Amagi, Onit).
Companies that stay low: small niche B2B (LERETA, ConstructConnect, ZipHQ, Edmentum), companies with ambiguous domain names (getflex.com, onit.com, stash.com), and private equity-owned businesses with sparse web presence.
Email accuracy is not 100% even at 99% match rate. There are two categories:
- valid or valid_catch_all → high confidence. valid_catch_all means engagement signal data confirmed the address on a catch-all domain (<5% bounce rate).
- catch_all → the domain accepts all addresses, so the permutation format (fn.ln, fln) is a best guess.

This matches Clay's own accuracy characteristics — Clay uses the same ZeroBounce validation. If a user asks "is this 100% accurate?", the honest answer is: same accuracy as Clay, which is high but not 100% due to catch-all domains and format edge cases.
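A minimal filter over leadmagic_email_validation statuses reflecting the acceptance rule above. The confidence labels are informal shorthand for this sketch, not API values:

```python
ACCEPTED_STATUSES = {"valid", "valid_catch_all", "catch_all"}
CONFIDENCE = {"valid_catch_all": "highest", "valid": "high", "catch_all": "best-guess"}

def keep_email(status):
    """Accept the three reliable validation statuses; drop 'unknown' and anything else."""
    return status in ACCEPTED_STATUSES

rows = [("a@x.com", "valid"), ("b@y.com", "unknown"), ("c@z.com", "catch_all")]
kept = [(email, CONFIDENCE[status]) for email, status in rows if keep_email(status)]
print(kept)  # [('a@x.com', 'high'), ('c@z.com', 'best-guess')]
```

Surface the confidence label alongside the email so downstream users can treat catch_all rows as best-guess rather than verified.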
- clay_fetch_records.sh + claygent_replicate.sh
- --rows 0:1 for any paid tools; show preview
- python3 compare.py ground_truth.csv enriched.csv — confirm all thresholds pass
- run_javascript needs no pilot gate. For deeplineagent, exa_search, leadmagic_*, hunter_*, and other non-trivial or paid tools: run rows 0:1 first.
```shell
./claygent_replicate.sh        # pilot: row 0 only
./claygent_replicate.sh 0:3    # rows 0-2
./claygent_replicate.sh full   # all rows
```