autoresearch by github/awesome-copilot
npx skills add https://github.com/github/awesome-copilot --skill autoresearch
An autonomous experimentation loop for any programming task. You define the goal and how to measure it; the agent iterates autonomously -- modifying code, running experiments, measuring results, and keeping or discarding changes -- until interrupted.
This skill is inspired by Karpathy's autoresearch, generalized from ML training to any programming task with a measurable outcome.
Before any experimentation begins, work with the user to establish these parameters. Ask the user directly for each item. Do not assume or skip any.
Ask the user:
What are you trying to improve or optimize?
Examples: execution time, memory usage, binary size, test pass rate, code coverage, API response latency, throughput, error rate, benchmark score, build time, bundle size, lines of code, cyclomatic complexity, etc.
Record the user's answer as the goal.
Ask the user:
How do we measure success? What exact command produces the metric?
I need:
- The command to run (e.g., `dotnet test`, `npm run benchmark`, `time ./build.sh`, `pytest --tb=short`)
- How to extract the metric from the output (e.g., a regex pattern, a specific line, a JSON field)
- Direction: Is lower better or higher better?
Example: "Run `dotnet test --logger trx`, count passing tests. Higher is better."
Example: "Run `hyperfine './my-program'`, extract mean time. Lower is better."
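As a concrete sketch of the extraction step, assuming a hyperfine-style `run.log` (the exact log format here is an assumption, not a documented output contract):

```shell
# Hypothetical extraction sketch: the sample log below stands in for
# real benchmark output.
cat > run.log <<'EOF'
Benchmark 1: ./my-program
  Time (mean ± σ):      1.234 s ±  0.021 s
EOF

# Pull the mean time in seconds out of the log (lower is better).
metric=$(sed -n 's/.*mean[^:]*: *\([0-9.]*\) s.*/\1/p' run.log)
echo "$metric"
```

The same idea applies to any metric: one command produces the log, one extraction expression reduces it to a single number.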
Record:
- METRIC_COMMAND: the command to run
- METRIC_EXTRACTION: how to extract the numeric metric from the output
- METRIC_DIRECTION: lower_is_better or higher_is_better

Ask the user:
Which files or directories am I allowed to modify?
And which files are OFF LIMITS (read-only)?
Record:
- IN_SCOPE_FILES: files/dirs the agent may edit
- OUT_OF_SCOPE_FILES: files/dirs that must not be modified

Ask the user:
Are there any constraints I should respect?
Examples:
- Time budget per experiment (e.g., "each run should take < 2 minutes")
- No new dependencies
- Must keep all existing tests passing
- Must not change the public API
- Must maintain backward compatibility
- VRAM/memory limit
- Code complexity limits (prefer simpler solutions)
Record as CONSTRAINTS.
Ask the user:
How many experiments should I run, or should I just keep going until you stop me?
You can say a number (e.g., "try 20 experiments") or "unlimited" (I'll run until you interrupt).
Record as MAX_EXPERIMENTS (number or unlimited).
Inform the user of the default simplicity policy:
Simplicity policy (default): All else being equal, simpler is better. A small improvement that adds ugly complexity is not worth it. Removing code while maintaining or improving the metric is a great outcome. I'll weigh the complexity cost against the improvement magnitude. Does this policy work for you, or do you want to adjust it?
Record any adjustments as SIMPLICITY_POLICY.
Summarize all parameters back to the user in a clear table:
| Parameter | Value |
|---|---|
| Goal | ... |
| Metric command | ... |
| Metric extraction | ... |
| Direction | lower is better / higher is better |
| In-scope files | ... |
| Out-of-scope files | ... |
| Constraints | ... |
| Max experiments | ... |
| Simplicity policy | ... |
Ask the user to confirm. Do not proceed until confirmed.
Once the user confirms:
Create a branch: Propose a tag based on today's date (e.g., autoresearch/mar17). Create the branch: `git checkout -b autoresearch/<tag>`.
Read in-scope files: Read all files that are in scope to build full context of the current state.
Initialize results.tsv: Create results.tsv in the repo root with the header row:
experiment commit metric status description
Add results.tsv and run.log to .git/info/exclude (append if not already present) so they stay untracked without modifying any tracked files.
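The exclude step might be sketched like this (the `mkdir -p` is only there so the snippet runs outside a real repository; inside one, `.git/info` already exists):

```shell
# Keep results.tsv and run.log untracked without touching any tracked file.
mkdir -p .git/info   # no-op inside an existing git repository
for f in results.tsv run.log; do
  # Append only if the exact line is not already present (idempotent).
  grep -qxF "$f" .git/info/exclude 2>/dev/null || echo "$f" >> .git/info/exclude
done
```

Unlike `.gitignore`, `.git/info/exclude` is local to the clone, so the experiment artifacts never show up in anyone's diff.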
Run the baseline: Execute the metric command on the current unmodified code. Record the result as experiment 0 with status baseline in results.tsv.
Report baseline to the user:
Baseline established: [metric_name] = [value] Starting autonomous experimentation loop.
Run this loop continuously. Do not stop to ask the user. Run until:
- MAX_EXPERIMENTS is reached, OR
- the user interrupts

LOOP:
1. THINK - Analyze previous results and the current code.
Generate an experiment hypothesis.
Consider: what worked, what didn't, what hasn't been tried.
2. EDIT - Modify the in-scope file(s) to implement the idea.
Keep changes focused and minimal per experiment.
3. COMMIT - git add + git commit with a short descriptive message.
Format: "experiment: <short description of what changed>"
4. RUN - Execute the metric command.
Redirect output to run.log so it does not flood the context window.
Use shell-appropriate redirection:
- Bash/Zsh: `<command> > run.log 2>&1`
- PowerShell: `<command> *> run.log`
5. MEASURE - Extract the metric from run.log.
If extraction fails (crash/error), read the last 50 lines
of run.log for the error.
6. DECIDE - Compare metric to the current best:
- IMPROVED: Keep the commit. Update the "best" baseline.
Log status = "keep".
- SAME OR WORSE: Revert. `git reset --hard HEAD~1`.
Log status = "discard".
- CRASH: Attempt a quick fix (typo, import, simple error).
Amend the experiment commit (`git commit --amend`) with the fix
and rerun. The experiment keeps its original number.
If unfixable after 2 attempts, revert the entire experiment
(`git reset --hard HEAD~1`) and log status = "crash".
7. LOG - Append a row to results.tsv:
experiment_number commit_hash metric_value status description
8. CONTINUE - Go to step 1.
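The DECIDE comparison in step 6 can be sketched in shell; the numbers below are placeholders, and `lower_is_better` is assumed:

```shell
# Placeholder values; in the real loop these come from results.tsv
# and the MEASURE step.
best=0.9979
metric=0.9932

# awk handles the floating-point comparison portably.
if awk -v m="$metric" -v b="$best" 'BEGIN { exit !(m < b) }'; then
  status=keep       # improvement: keep the commit, update best
  best=$metric
else
  status=discard    # same or worse: git reset --hard HEAD~1
fi
echo "$status $best"
```

For `higher_is_better`, the comparison flips to `m > b`; everything else stays the same.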
When generating experiment ideas, follow this priority order:
When the loop ends (budget reached or user interrupts):
Summarize the kept experiments with `git log --oneline <start_commit>..HEAD`.

The results.tsv file is tab-separated, 5 columns:
experiment commit metric status description
0 a1b2c3d 0.997900 baseline unmodified code
1 b2c3d4e 0.993200 keep increase learning rate to 0.04
2 c3d4e5f 1.005000 discard switch to GeLU activation
3 d4e5f6g 0.000000 crash double model width (OOM)
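The LOG step's append (step 7) could look like this; the row values are illustrative:

```shell
# Append one tab-separated row to results.tsv (illustrative values).
printf '%s\t%s\t%s\t%s\t%s\n' \
  1 b2c3d4e 0.993200 keep 'increase learning rate to 0.04' >> results.tsv
```

`printf` with explicit `\t` separators keeps the file machine-parseable even when the description contains spaces.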
All work happens on the autoresearch/<tag> branch. Failed or non-improving experiments are reverted with `git reset --hard HEAD~1`. results.tsv and run.log stay untracked (added to .git/info/exclude).

Weekly Installs: 359
GitHub Stars: 27.0K
First Seen: 8 days ago
Security Audits: Gen Agent Trust Hub: Warn; Socket: Pass; Snyk: Fail
Installed on: codex (337), gemini-cli (336), opencode (333), cursor (330), kimi-cli (328), github-copilot (328)