The Agent Skills Directory
npx skills add https://code.deepline.com
Use this skill for prospecting, account research, contact enrichment, verification, lead scoring, personalization, and campaign activation.
provider-playbooks/*.md.
--rows 0:1 one-row pilots.
The customer is generally trying to go from "I have an ICP" to "Here's a list of prospects with email/LinkedIn and highly personalized content or signals." They may be anywhere in this process, but guide them along.
Discovery order: companies first, then people. When the task requires finding contacts at companies matching criteria (portfolio, ICP, hiring signal), discover the company set first, then find people at each company. Do not start with broad people-search queries.
SKILL.md: decision model, guardrails, approval gates, links to sub-docs.
prompts.json.
recipes/*.md: step-by-step playbooks for specific tasks (email lookup, LinkedIn resolution, waterfall patterns, contact finding, actor contracts). Search like code with Grep.
provider-playbooks/*.md: provider-specific quirks, cost/quality notes, and fallback behavior.
No-loss rule: moved guidance remains fully documented at its canonical level and is linked from here.
STOP. Do not call any provider, run any deepline tools execute, or write any search command until you have opened the correct sub-doc for your task.
These skill docs and sub-docs are not generic documentation — they are distilled from hundreds of real runs and encode exactly what works, what fails, and why. They contain validated parameter schemas, correct filter syntax, parallel execution patterns, tested sample payloads, and known pitfalls that took many iterations to discover. Think of them as shortcuts: reading a doc for 5 seconds saves you from 10 failed tool calls, wasted credits, and garbage output. Every time an agent skips reading the docs and tries to "figure it out" from first principles, it re-discovers the same failure modes that are already documented and solved.
SKILL.md is the routing layer — it tells you WHERE to go, not HOW to execute. The sub-docs and task-specific skills contain the HOW. Without them you will guess parameters, pick wrong providers, run searches sequentially instead of in parallel, and produce garbage results. This has happened repeatedly.
This is not optional. Read the matching doc. Do not skip this step. Do not "just try Apollo real quick" or "just run one search to see." These docs exist because the correct approach was non-obvious and had to be learned through trial and error — they are shortcuts that let you skip straight to what works.
!important READING MULTIPLE DOCS IS A GREAT IDEA AND OFTEN SUPER ESSENTIAL. JUST READ MORE.
Routing rules — match your task to a doc and READ IT:
| When the task involves... | You MUST read this doc first | What it gives you (that SKILL.md doesn't) |
|---|---|---|
| Finding companies, finding people, building lead lists, prospecting, portfolio/VC sourcing, contact finding at known companies, coverage completion at scale | finding-companies-and-contacts.md | Provider filter schemas, parallel execution patterns, provider mix tables, role-based search rules, subagent orchestration, at-scale coverage completion, portfolio/VC shortcuts, contact finding patterns. |
| Researching companies or people, understanding what they build, figuring out use cases, personalizing based on mission/product/industry, enriching a CSV, adding data columns, waterfall enrichment, finding emails/phones/LinkedIn, coalescing data, custom signals, run_javascript / deeplineagent steps, Apify actors — any task that adds or transforms row-level data | enriching-and-researching.md | deepline enrich syntax and all flags. Waterfall patterns with fallback chains. run_javascript / deeplineagent routing. Multi-pass pipeline patterns (research pass → generation pass). Coalescing patterns. Email/phone/LinkedIn waterfall orders. Custom signal buckets. Apify actor selection. GTM definitions and defaults. |
| Writing cold emails, personalizing outreach, lead scoring, qualification, sequence design, campaign copy, inspecting CSVs in Playground. If the task also requires researching companies/people to inform the writing, read enriching-and-researching.md too — it has the multi-pass pipeline pattern. | writing-outreach.md | Prompt templates from prompts.json. Scoring rubrics. Email length/tone/structure rules. Personalization patterns. Qualification frameworks. Playground inspection commands. |
The recipes/ directory contains battle-tested playbooks. Before you start executing, scan this list and read any recipe that matches your task.
When a recipe matches: follow it step-by-step as your execution plan. Recipes encode hard-won sequencing and provider choices — trust them over generic guidance or your own intuition. If the user's request doesn't perfectly fit, adapt the recipe using the phase docs above, but keep the recipe's structure and ordering as your baseline.
| Recipe | Use when... |
|---|---|
| build-tam.md | Building a total addressable market list or large company list from ICP criteria |
| enriching-and-researching.md | Finding contacts/people at known companies via the persona lookup play |
| enriching-and-researching.md | Contact-to-email routing and native email recovery plays |
| actor-contracts.md | Apify actor selection, known actor IDs, input schemas |
| competitive-social-listening.md | Find who engages with competitors' LinkedIn posts — reactions, comments, senior buyer filtering, engagement dashboard |
If none match, grep for more specific keywords: Grep pattern="<keyword>" path="<directory containing this SKILL.md>/recipes/" glob="*.md" output_mode="files_with_matches"
- Run deepline csv show --csv <path> --summary first to understand its shape (row count, columns, sample values) before deciding how to process it.
- Use deepline enrich for any row-by-row processing (enrichment, rewriting, research, scoring).
- Use deepline csv show --csv <path> --rows 0:2 for a two-row sample, or spawn an Explore subagent to answer questions about the data.
- For signal-driven discovery (investor, funding, hiring, headcount, industry, geo, tech stack, compliance), start with deepline tools search. Do not guess fields.
Search 2-4 synonyms, execute in parallel:
deepline tools search investor
deepline tools search investor --prefix crustdata
deepline tools search --categories company_search --search_terms "structured filters,icp"
deepline tools search --categories people_search --search_terms "title filters,linkedin"
Always publish your execution plan to the Session UI before running any commands. This is not optional — users monitor progress in real time via the Session UI. Without it, the UI shows nothing and users have no visibility.
# Post your plan (accepts JSON array of step labels)
deepline session plan --steps '["Inspect CSV and understand shape","Search for email finder tools","Run pilot on rows 0:1","Get approval for full run","Execute full enrichment","Post-run validation and delivery"]'
# As you complete each step, update its status (0-indexed)
deepline session plan --update 0 --status completed
deepline session plan --update 1 --status running
deepline session plan --update 1 --status completed
deepline session plan --update 2 --status running
# On error:
deepline session plan --update 2 --status error
Valid step statuses: pending, running, completed, error, skipped.
As you work through a running step, send status updates to show what you're currently doing. This is for emergent work the plan couldn't predict upfront (parsing responses, falling back to alternative providers, extracting data, etc.).
# While a step is running, send status updates (attaches to the currently-running step)
deepline session status --message "Extracting company domains from Apollo response"
deepline session status --message "LeadMagic returned no results — falling back to ZeroBounce"
deepline session status --message "Validating 23 catch-all emails"
# Optionally target a specific step by index
deepline session status --message "Retrying with different params" --step-index 2
Each new status message marks the previous one as done and appears as the active sub-step. These are lightweight — use them freely whenever you're doing something the user would want to see.
Rules:
- Mark a step running when starting, and completed or error when done.
- Send session status messages during step execution to show what you're currently working on.
- Re-post with --steps to replace the old plan.
- When you write output CSVs outside deepline enrich, register them: deepline session output --csv <path> --label "Label".

Use category filters when tool type matters more than provider breadth. Common categories:
- company_search: account/company discovery tools
- people_search: people/contact discovery tools
- company_enrich: company enrichment on known companies
- people_enrich: person/contact enrichment on known people
- email_verify: email verification / deliverability
- email_finder: email lookup / discovery
- phone_finder: phone lookup / discovery
- research: company research, ad intel, job search, technographics, web research
- automation: workflow-style tools, browser/actor runs, batch automation
- outbound_tools: all Lemlist/Smartlead/Instantly/HeyReach style actions
- autocomplete: canonical filter value discovery before search
- admin: credits, monitoring, logs, schemas, local/dev utilities

Use --search_terms for extra ranking hints like structured filters, title filters, api native, autocomplete, or bulk.
Good:
- deepline tools search --categories company_search --search_terms "investors,funding"
- deepline tools search --categories research --search_terms "ads,technographics"

Avoid:
- deepline tools search stuff
- deepline tools search search across filters

GTM time windows, thresholds, and interpretation rules are defined in the Definitions section of enriching-and-researching.md.
adyntel playbook Summary: Use channel-native ad endpoints first, then synthesize cross-channel insights. Keep domains normalized and remember Adyntel bills per request except free polling endpoints. Last reviewed: 2026-02-27
ai_ark playbook Summary: Use company and people search for prospecting, reverse lookup for identity resolution, mobile phone finder only for strong matches, and async export or email-finder flows when you need verified emails. Last reviewed: 2026-03-16
apify playbook Summary: Prefer sync run (apify_run_actor_sync) for actor execution. Use async run plus polling only when you need non-blocking execution. Reach for Apify before call_ai/WebSearch when the source is already known and a source-specific actor exists. Last reviewed: 2026-02-11
apollo playbook Summary: Cheap but mediocre quality people/company search with include_similar_titles=true unless strict mode is explicitly requested. Last reviewed: 2026-02-11
attio playbook Summary: Use assert_* operations for upserts, query_* operations for filtered reads, standard-object wrappers when you know the Attio object family, and webhook subscriptions with typed event names when you need realtime sync. Last reviewed: 2026-03-20
builtwith playbook Summary: Use domain_lookup for live stack inspection, vector_search to discover the right tech label before lists/trends, and bulk_domain_lookup for row-heavy domain batches. Last reviewed: 2026-03-21
cloudflare playbook Summary: Use cloudflare_crawl to crawl websites and extract content as markdown, HTML, or JSON. Returns partial results on timeout — check timedOut field. Browser rendering is enabled by default. Last reviewed: 2026-03-11
crustdata playbook Summary: Start with free autocomplete and default to fuzzy contains operators (.) for higher recall. Use ISO-3 country codes, prefer crunchbase_categories over linkedin_industries for niche verticals, and use employee_count_range for filtering instead of employee_metrics.latest_count. Last reviewed: 2026-02-11
deepline_native playbook Summary: Launcher actions wait for completion and return final payloads with job_id; finder actions remain available for explicit polling. Last reviewed: 2026-02-23
deeplineagent playbook Summary: Use Vercel AI Gateway for plain inference or multi-step research with Deepline-managed tools and billing. Last reviewed: 2026-03-22
dropleads playbook Summary: Use Prime-DB search/count first to scope segments efficiently, then run finder/verifier steps only on shortlisted records. Prefer companyDomains over companyNames, split multi-word keywords into separate tokens, and use broad jobTitles plus seniority instead of exact-title matching. Last reviewed: 2026-02-26
exa playbook Summary: Use search/contents before answer for auditable retrieval, then synthesize with explicit citations. Write natural-language queries, expect discard/noise, and avoid mixing category searches with includeDomains-style source scoping. Last reviewed: 2026-02-11
firecrawl playbook Summary: Web scraping, crawling, search, and AI extraction. Use firecrawl_scrape for single pages, firecrawl_search for web search + scraping, firecrawl_map for URL discovery, firecrawl_crawl for multi-page crawls, firecrawl_extract for structured extraction. Last reviewed: 2026-03-11
forager playbook Summary: Use totals endpoints first (free) to estimate volume, then search/lookup with reveal flags for contacts. Strong for verified mobiles. Last reviewed: 2026-02-28
google_search playbook Summary: Use Google Search for broad web recall, then follow up with extraction/enrichment tools for structured workflows. Last reviewed: 2026-02-12
heyreach playbook Summary: Resolve campaign IDs first, then batch inserts and confirm campaign stats after writes. Last reviewed: 2026-02-11
hubspot playbook Summary: Use list/get/search for flexible CRM reads, batch operations for large syncs, and the schema, pipeline, owner, and association tools to discover HubSpot-specific IDs before writing. Last reviewed: 2026-03-20
hunter playbook Summary: Use discover for free ICP shaping first, then domain/email finder for credit-efficient contact discovery, and verifier as the final send gate. Last reviewed: 2026-02-24
icypeas playbook Summary: Use email-search for individual email discovery, bulk-search for volume. Scrape LinkedIn profiles for enrichment. Find-people for prospecting with 16 filters. Count endpoints are free. Last reviewed: 2026-02-28
instantly playbook Summary: List campaigns first, then add contacts in controlled batches and verify downstream stats. Last reviewed: 2026-02-11
leadmagic playbook Summary: Treat validation as gatekeeper and run email-pattern waterfalls before escalating to deeper enrichment. Last reviewed: 2026-02-11
lemlist playbook Summary: List campaign inventory first and push contacts in small batches with post-write stat checks. Last reviewed: 2026-03-01
parallel playbook Summary: Prefer run-task/search/extract primitives and avoid monitor/stream complexity for agent workflows. Last reviewed: 2026-02-11
peopledatalabs playbook Summary: Use clean/autocomplete helpers to normalize input before costly person/company search and enrich calls. Treat company search as a last-resort structured path, and prefer payload files or heredocs for non-trivial SQL-style queries. Last reviewed: 2026-02-11
prospeo playbook Summary: Use enrich-person for individual contacts, search-person for prospecting with 30+ filters, and search-company for account-level lists. Last reviewed: 2026-02-28
salesforce playbook Summary: Use field inspection before custom writes, object-specific create/update/delete tools for standard CRM records, and list tools for incremental reads with pagination handoff. Last reviewed: 2026-03-20
serper playbook Summary: Use Serper for broad live Google web search and local/maps recall. Strong first step before structured extraction or enrichment. Last reviewed: 2026-03-23
smartlead playbook Summary: List campaigns first, then push leads with Smartlead field names and confirm campaign stats afterward. Last reviewed: 2026-03-05
zerobounce playbook Summary: Use as final email validation gate before outbound sends. Check sub_status for granular failure reasons. Use batch for 5+ emails. Last reviewed: 2026-02-28
Apply defaults when user input is absent.
User-specified values always override defaults.
In approval messages, list active defaults as assumptions.
- Use deepline enrich for list enrichment or discovery at scale (>5 rows). It auto-opens a visual playground sheet so the user can inspect rows, re-run blocks, and iterate.
- Trying to one-shot this with deepline tools execute is short-sighted.
- For run_javascript in deepline enrich, put the JS in payload.code; the current row is auto-injected as row at runtime, so you usually should not pass row yourself.
- Register output CSVs with deepline session output --csv <csv_path> --label "My Results". This is the lightweight alternative to deepline enrich for surfacing output in the Session UI.
- Stop the backend explicitly with deepline backend stop --just-backend unless the user asked to keep it running.
- Preserve metadata columns (_metadata) end-to-end. When rebuilding intermediate CSVs with shell tools, carry forward the _metadata columns.
- Use --output to write to your working directory on the first pass, then --in-place on that output for subsequent passes. --in-place is for iterating on your own prior outputs; never use it on source files.
- Use --with-force <alias> only for targeted recompute.

See enriching-and-researching.md for deepline csv commands, pre-flight/post-run script templates, and inspection details.
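A minimal sketch of the _metadata carry-forward rule when rebuilding an intermediate CSV with plain shell tools (file path and column layout are hypothetical):

```shell
# Build a tiny intermediate CSV (hypothetical layout with a _metadata column).
cat > /tmp/leads_intermediate.csv <<'EOF'
name,domain,_metadata
Acme,acme.com,run-1
Beta,beta.io,run-1
EOF

# Rebuild it keeping only the columns we need, but always carrying _metadata:
awk -F',' 'BEGIN { OFS="," } { print $1, $3 }' /tmp/leads_intermediate.csv
```

The point is that any shell-level column selection must include the _metadata column explicitly, since tools like awk or cut will silently drop it otherwise.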
- FINAL_CSV="${OUTPUT_DIR:-/tmp}/<requested_filename>.csv"
- Report FINAL_CSV and the exact Playground URL.
- Pilot with --rows 0:1 for one row.

When the user asks for N rows, start with ~1.4×N (e.g., 35 for 25). Every pipeline phase has natural falloff — contact search misses ~15-20% of companies, the email waterfall misses ~5-10% of contacts. Fighting to complete the hard rows is almost always a waste: the companies that providers can't find contacts for are the same ones that won't have email coverage either.
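The ~1.4×N starting size can be computed mechanically; a small sketch (the 1.4 factor is the rule of thumb stated above, rounded up):

```shell
# Requested final row count
requested=25
# Over-provision by ~1.4x, rounding up, to absorb pipeline falloff
start=$(awk -v n="$requested" 'BEGIN { printf "%d", n * 1.4 + 0.9999 }')
echo "$start"   # 35 for a requested 25
```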
Do this:
Do NOT do this:
- Retrying failed lookups with extra providers, deeplineagent research passes, or manual patching.

Provider coverage is a property of the company, not something you can overcome with more effort. Tiny startups with 5 people will have zero coverage across all providers — no amount of retrying changes that. Over-provision at the top and let incomplete rows fall off naturally.
Include all of:
- The CSV preview from the deepline enrich --rows 0:1 one-row pilot
- The question "Approve full run?"

Note: deepline enrich already prints the ASCII preview by default, so use that output directly.
Strict format contract (blocking):
- Enter AWAIT_APPROVAL and do not run paid/cost-unknown actions.
- Move to FULL_RUN only after an explicit user confirmation to the approval question.
- run_javascript is the non-AI path. aiinference is for general classification/structured reasoning, and deeplineagent is for context gathering / web research / signal extraction.

Approval template:
Assumptions
- <intent assumption 1>
- <intent assumption 2>
CSV Preview (ASCII)
<paste verbatim output from deepline enrich --rows 0:1>
Credits + Scope + Cap
- Provider: <name>
- Estimated credits: <value or range>
- Full-run scope: <rows/items>
- Spend cap: <cap>
- Pilot summary: <one short paragraph>
Approval Question
Approve full run?
Must run a real pilot on the exact CSV that the full run will use (--rows 0:1, end exclusive).
Must include ASCII preview verbatim in approval.
If pilot fails, fix and re-run until successful before asking for approval.
Before using AskUserQuestion for the approval gate, notify the Session UI so the user knows to check the terminal:
deepline session alert --message "Approval needed: run enrichment on N rows (~X credits)"
deepline billing balance
deepline billing limit
When credits are at zero, link to https://code.deepline.com/dashboard/billing to top up. 10 credits == $1.
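Since 10 credits == $1, the dollar cost of an estimate converts trivially; a quick sketch:

```shell
credits=250
# 10 credits per dollar
dollars=$(awk -v c="$credits" 'BEGIN { printf "$%.2f", c / 10 }')
echo "$dollars"   # $25.00
```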
Reminder: you should have already read the relevant sub-doc from Section 2 before reaching this point. If you haven't, go back and read it now. This section is a quick-reference summary, NOT a substitute for the sub-docs.
- Start with deepline tools search <intent> and execute field-matched provider calls in parallel; when the deepline-list-builder subagent is available, use subagent-based parallel search orchestration as the preferred pattern. Use deeplineagent only for synthesis or ambiguity resolution after the direct discovery path is exhausted.
- See enriching-and-researching.md for deepline enrich syntax, waterfall column patterns, and coalescing logic. Default waterfall order: dropleads → hunter → leadmagic → deepline_native → crustdata → peopledatalabs.
- Use run_javascript for deterministic transforms/template logic and deeplineagent for AI work. Start from prompts.json.
- Run leadmagic_email_validation first, then enrich corroboration.
- See recipes/actor-contracts.md for known actor IDs.

Provider path heuristics:
- Prefer quality-first providers (crustdata_person_enrichment, peopledatalabs_*) before leadmagic_* fallbacks.

Critical: keep writing-outreach.md workflow context active when running any sequence task. It is not optional for ICP-driven messaging.
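A hypothetical shape for a run_javascript step, based on the payload.code note earlier in this doc (the surrounding JSON structure is illustrative, not a confirmed schema):

```shell
# Hypothetical run_javascript payload: the JS goes in payload.code, and the
# current row is auto-injected as `row` at runtime, so do not pass row yourself.
payload=$(cat <<'EOF'
{
  "payload": {
    "code": "return (row.first_name || '').trim() + ' ' + (row.company || '').trim();"
  }
}
EOF
)
echo "$payload"
```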
Use deepline enrich for heavy row-by-row work whenever possible. It has built-in rate-limit handling (adaptive retries/backoff) for standard upstream limits. If you are building a homegrown script, assume it does not include the same automatic protection unless you explicitly implement it.
If enrichment or CLI behavior is unstable, rerun the installer to ensure the latest CLI/client wiring is in place:
curl -s "https://code.deepline.com/api/v2/cli/install" | bash
Sites requiring auth: Don't use Apify. Tell the user to use Claude in Chrome or guide them through Inspect Element to get a curl command with headers (user is non-technical).
- Get the actor id from recipes/actor-contracts.md, or try deepline tools search.
- Trust operatorNotes over public ratings when they conflict.

deepline tools execute apify_list_store_actors --payload '{"search":"linkedin company employees scraper","sortBy":"relevance","limit":20}'
deepline tools execute apify_get_actor_input_schema --payload '{"actorId":"bebity/linkedin-jobs-scraper"}'
Do not wait for the user to ask. If there is a meaningful failure, send feedback proactively using deepline provide-feedback.
Trigger when any of these happen:
Run once per issue cluster (avoid spam), and include:
- workflow goal
- tool/provider/model used
- failure point and exact error details
- reproduction steps attempted
deepline provide-feedback "Goal: <goal>. Tool/provider/model: <details>. Failure: <what broke>. Error: <exact message>. Repro attempted: <steps>."
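A sketch of assembling that message from shell variables before sending it (all values below are placeholders for a hypothetical failure, not real provider output):

```shell
# Placeholder values; substitute the real run's details.
goal="Find VP Sales contacts at portfolio companies"
tool="apollo people_search"
failure="returned 0 results for a valid title filter"
error="HTTP 422 from provider API"   # hypothetical error text
repro="reran with simplified filters, same error"

msg="Goal: ${goal}. Tool/provider/model: ${tool}. Failure: ${failure}. Error: ${error}. Repro attempted: ${repro}."
echo "$msg"
```

The assembled string is then passed as the single argument to deepline provide-feedback.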
At the end of every completed run/session, ask exactly one Yes/No question:
Would you like me to send this session activity to the Deepline team so they can improve the experience? (Yes/No)
If user says:
Yes -> run:
deepline session send --current-session
No -> do not send the session.
Ask once per completed run. Do not nag or re-ask unless the user starts a new run/session.
Weekly Installs: 376
First Seen: Feb 28, 2026
Installed on: codex (376), cursor (375), gemini-cli (375), github-copilot (375), amp (375), cline (375)
More skills from the directory:
- Professional SEO audit tool: comprehensive site diagnostics, technical SEO optimization, and on-page analysis guide (59,900 weekly installs)
- GrepAI Ollama local installation and configuration tutorial: private code-search embedding model setup guide (297 weekly installs)
- YouTube full toolkit: transcription, search, channel analytics API | TranscriptAPI.com integration (297 weekly installs)
- Firebase AI Logic getting started: integrating Gemini generative AI into mobile and web apps (297 weekly installs)
- UX design principles and usability heuristics guide | user research, design thinking, Nielsen's ten heuristics (297 weekly installs)
- Claude AI memory enhancement tool: /si:remember explicitly saves project knowledge to boost development efficiency (298 weekly installs)
- AI writing assistant: content research, outlining, and draft polishing while keeping your original style (298 weekly installs)