customaize-agent:thought-based-reasoning by neolabhq/context-engineering-kit
npx skills add https://github.com/neolabhq/context-engineering-kit --skill customaize-agent:thought-based-reasoning
Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.
| Technique | When to Use | Complexity | Accuracy Gain |
|---|---|---|---|
| Zero-shot CoT | Quick reasoning, no examples available | Low | +20-60% |
| Few-shot CoT | Have good examples, consistent format needed | Medium | +30-70% |
| Self-Consistency | High-stakes decisions, need confidence | Medium | +10-20% over CoT |
| Tree of Thoughts | Complex problems requiring exploration | High | +50-70% on hard tasks |
| Least-to-Most | Multi-step problems with subproblems | Medium | +30-80% |
| ReAct | Tasks requiring external information | Medium | +15-35% |
| PAL | Mathematical/computational problems | Medium | +10-15% |
| Reflexion | Iterative improvement, learning from errors | High | +10-20% |
Paper : "Chain of Thought Prompting Elicits Reasoning in Large Language Models" (Wei et al., 2022) Citations : 14,255+
Provide few-shot examples that include intermediate reasoning steps, not just question-answer pairs. The model learns to generate similar step-by-step reasoning.
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.
Q: [YOUR QUESTION HERE]
A:
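For programmatic use, exemplars can be assembled into a prompt with a small helper. A minimal sketch, assuming exemplars are stored as (question, reasoning-plus-answer) pairs; the names here are illustrative, not from the paper:

EXEMPLARS = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can "
     "has 3 tennis balls. How many tennis balls does he have now?",
     "Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis "
     "balls. 5 + 6 = 11. The answer is 11."),
]

def few_shot_cot_prompt(question, exemplars=EXEMPLARS):
    # Each exemplar shows the reasoning steps, not just the final answer
    blocks = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)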
Paper : "Large Language Models are Zero-Shot Reasoners" (Kojima et al., 2022) Citations : 5,985+
Simply append "Let's think step by step" (or a similar phrase) to the prompt. This triggers the model to generate reasoning steps without any examples.
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
Let's think step by step.
Alternative trigger phrases : the paper also tests variants such as "Let's think about this logically" and "Let's solve this problem by splitting it into steps", with varying effectiveness.
Stage 1 - Reasoning Extraction :
Q: [QUESTION]
A: Let's think step by step.
Stage 2 - Answer Extraction :
[REASONING FROM STAGE 1]
Therefore, the answer is
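The two stages chain together directly in code. A minimal sketch; llm.generate is a hypothetical client call, not a specific library API:

def zero_shot_cot(question, llm):
    # Stage 1: elicit the reasoning chain
    reasoning = llm.generate(f"Q: {question}\nA: Let's think step by step.")
    # Stage 2: extract a concise final answer from the reasoning
    return llm.generate(
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        f"Therefore, the answer is"
    )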
Paper : "Self-Consistency Improves Chain of Thought Reasoning in Language Models" (Wang et al., 2022) Citations : 5,379+
Sample multiple diverse reasoning paths, then select the most consistent answer via majority voting. The intuition: correct answers can be reached through multiple reasoning paths.
[Use any CoT prompt - zero-shot or few-shot]
[Generate N samples with temperature > 0]
[Extract final answers from each sample]
[Return the most frequent answer (majority vote)]
from collections import Counter

def self_consistency(prompt, n_samples=5, temperature=0.7):
    # llm.generate and extract_answer are assumed helpers supplied by the caller
    answers = []
    for _ in range(n_samples):
        # Sample one reasoning path; temperature > 0 keeps paths diverse
        response = llm.generate(prompt, temperature=temperature)
        answer = extract_answer(response)
        answers.append(answer)
    # Majority vote over the extracted final answers
    return Counter(answers).most_common(1)[0][0]
Paper : "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" (Yao et al., 2023) Citations : 3,026+
Generalizes CoT to a tree structure where each node is a "thought" (a coherent unit of language). Search algorithms (BFS/DFS) combined with self-evaluation explore the tree and select promising reasoning paths.
Thought Generation :
Given the current state:
[STATE]
Generate 3-5 possible next steps to solve this problem.
State Evaluation :
Evaluate if the following partial solution is:
- "sure" (definitely leads to solution)
- "maybe" (could potentially work)
- "impossible" (cannot lead to solution)
Partial solution:
[THOUGHTS SO FAR]
BFS/DFS Search :
from collections import deque

# Map evaluation labels to numeric scores so candidates can be ranked
SCORES = {"sure": 2, "maybe": 1, "impossible": 0}

def tree_of_thoughts(problem, max_depth=3, beam_width=3):
    # BFS over (state, thought_path, depth); generate_thoughts, evaluate,
    # is_solved, and apply_thought are assumed helpers
    queue = deque([(problem, [], 0)])
    while queue:
        state, path, depth = queue.popleft()
        if is_solved(state):
            return path
        if depth >= max_depth:
            continue
        # Generate candidate thoughts from the current state
        thoughts = generate_thoughts(state, k=5)
        # Evaluate each candidate and keep the best beam_width of them
        evaluated = [(t, evaluate(state, t)) for t in thoughts]
        top_k = sorted(evaluated, key=lambda x: SCORES[x[1]], reverse=True)[:beam_width]
        for thought, label in top_k:
            if label != "impossible":
                new_state = apply_thought(state, thought)
                queue.append((new_state, path + [thought], depth + 1))
    return None
Problem: Use 4, 9, 10, 13 to get 24 (use +, -, *, / and each number once)
Thought 1: 13 - 9 = 4 (Now have: 4, 4, 10)
Evaluation: "maybe" - have two 4s and 10, could work
Thought 2: 10 - 4 = 6 (Now have: 6, 9, 13)
Evaluation: "maybe" - 13 - 9 = 4 and 6 * 4 = 24, could work
Thought 3: 4 + 9 = 13 (Now have: 10, 13, 13)
Evaluation: "impossible" - no way to get 24 from these
Paper : "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models" (Zhou et al., 2022) Citations : 1,466+
A two-stage process: first decompose the complex problem into simpler subproblems, then solve the subproblems sequentially, feeding each answer into the next prompt.
Stage 1: Decomposition
Q: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice as old as 30 years old, how old is Kody?
To solve "Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice as old as 30 years old, how old is Kody?", we need to first solve:
- "If Mohamed is currently twice as old as 30 years old, how old is Mohamed?"
Then we need to solve:
- "Four years ago, Kody was only half as old as Mohamed. How old is Kody?"
Stage 2: Sequential Solving
Q: If Mohamed is currently twice as old as 30 years old, how old is Mohamed?
A: Mohamed is twice as old as 30, so Mohamed is 30 * 2 = 60 years old.
Q: Four years ago, Kody was only half as old as Mohamed. Mohamed is 60 years old. How old is Kody?
A: Four years ago, Mohamed was 60 - 4 = 56 years old. Kody was half as old, so Kody was 56 / 2 = 28 years old four years ago. Therefore, Kody is currently 28 + 4 = 32 years old.
The answer is 32.
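The two stages translate into a loop that carries each subanswer forward. A minimal sketch; llm.generate is a hypothetical call, and the decomposition is parsed naively as one subquestion per line:

def least_to_most(question, llm):
    # Stage 1: ask the model to decompose the problem into subquestions
    decomposition = llm.generate(f'To solve "{question}", we need to first solve:')
    subquestions = [line.strip("- ").strip()
                    for line in decomposition.splitlines() if line.strip()]
    # Stage 2: solve subquestions in order, carrying answers forward
    context, answer = "", ""
    for sub in subquestions + [question]:
        answer = llm.generate(f"{context}Q: {sub}\nA:")
        context += f"Q: {sub}\nA: {answer}\n"
    return answer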
Paper : "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022) Citations : 5,012+
Interleave reasoning traces ("Thought") with actions ("Action") and observations ("Observation"). Reasoning helps plan actions; actions provide new information for reasoning.
Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?
Thought 1: I need to search Colorado orogeny, find the area that the eastern sector extends into, then find the elevation range of that area.
Action 1: Search[Colorado orogeny]
Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.
Thought 2: It does not mention the eastern sector. So I need to look up eastern sector.
Action 2: Lookup[eastern sector]
Observation 2: The eastern sector extends into the High Plains.
Thought 3: The eastern sector extends into the High Plains. I need to search High Plains and find its elevation range.
Action 3: Search[High Plains]
Observation 3: High Plains refers to one of two distinct land regions: High Plains (United States)...
Thought 4: I need to search High Plains (United States).
Action 4: Search[High Plains (United States)]
Observation 4: The High Plains are a subregion of the Great Plains... elevation of around 1,800 to 7,000 ft (550 to 2,130 m).
Thought 5: The elevation range is 1,800 to 7,000 ft.
Action 5: Finish[1,800 to 7,000 ft]
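A minimal driver for such a loop is sketched below; llm.generate (with a stop string) is a hypothetical API, and tools maps the action names listed after the sketch to callables:

import re

def react(question, llm, tools, max_steps=8):
    prompt = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        # Ask the model for the next Thought/Action pair
        turn = llm.generate(prompt, stop=f"Observation {step}:")
        prompt += turn
        match = re.search(r"Action \d+: (\w+)\[(.*)\]", turn)
        if not match:
            break
        name, arg = match.groups()
        if name == "Finish":
            return arg  # final answer
        # Execute the tool and append the observation for the next turn
        observation = tools[name](arg)
        prompt += f"\nObservation {step}: {observation}\n"
    return None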
Available actions:
- Search[query] - Search for information
- Lookup[keyword] - Look up keyword in current context
- Finish[answer] - Return final answer
Paper : "PAL: Program-aided Language Models" (Gao et al., 2022) Citations : 608+
Generate code (typically Python) instead of natural language reasoning. Execute the code to get the answer. The LLM handles decomposition; the interpreter handles computation.
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
# solution in Python:
def solution():
"""Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?"""
tennis_balls_initial = 5
bought_cans = 2
tennis_balls_per_can = 3
tennis_balls_bought = bought_cans * tennis_balls_per_can
tennis_balls_total = tennis_balls_initial + tennis_balls_bought
return tennis_balls_total
Q: The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning. They sold 93 loaves in the morning and 39 loaves in the afternoon. A grocery store returned 6 unsold loaves. How many loaves of bread did they have left?
# solution in Python:
def solution():
"""The bakers baked 200 loaves. They sold 93 in morning, 39 in afternoon. A store returned 6. How many left?"""
loaves_baked = 200
loaves_sold_morning = 93
loaves_sold_afternoon = 39
loaves_returned = 6
loaves_left = loaves_baked - loaves_sold_morning - loaves_sold_afternoon + loaves_returned
return loaves_left
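The execution step can be as simple as exec-ing the generated function and calling it. A sketch, assuming generated_code holds the model's output; executing untrusted LLM output requires sandboxing in any real deployment:

def run_pal(generated_code):
    # Execute the generated program in an isolated namespace
    namespace = {}
    exec(generated_code, namespace)  # sandbox this in production
    return namespace["solution"]()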
Paper : "Automatic Chain of Thought Prompting in Large Language Models" (Zhang et al., 2022) Citations : 838+
Automatically construct few-shot CoT demonstrations instead of writing them by hand: cluster a pool of questions, pick a representative from each cluster, and generate its reasoning with Zero-shot CoT.
Step 1: Generate diverse demonstrations
# Cluster questions; cluster_questions, select_representative, and
# zero_shot_cot are assumed helpers (e.g., k-means over question embeddings)
clusters = cluster_questions(all_questions, k=8)

# For each cluster, pick a representative question and generate its CoT
demonstrations = []
for cluster in clusters:
    question = select_representative(cluster)
    reasoning = zero_shot_cot(question)  # "Let's think step by step"
    demonstrations.append((question, reasoning))
Step 2: Use as few-shot exemplars
Q: [Demo question 1]
A: Let's think step by step. [Generated reasoning 1]
Q: [Demo question 2]
A: Let's think step by step. [Generated reasoning 2]
...
Q: [New question]
A: Let's think step by step.
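Assembling the final prompt from the generated demonstrations is mechanical; a small illustrative helper:

def build_auto_cot_prompt(demonstrations, new_question):
    # Each demonstration pairs a question with its generated reasoning
    parts = [f"Q: {q}\nA: Let's think step by step. {r}"
             for q, r in demonstrations]
    parts.append(f"Q: {new_question}\nA: Let's think step by step.")
    return "\n\n".join(parts)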
Paper : "Reflexion: Language Agents with Verbal Reinforcement Learning" (Shinn et al., 2023) Citations : 2,179+
After task failure, the agent generates a verbal "reflection" analyzing what went wrong. This reflection is stored in memory and used in subsequent attempts to avoid repeating mistakes.
Initial Attempt :
Task: [TASK DESCRIPTION]
Thought: [REASONING]
Action: [ACTION]
...
Result: [FAILURE/PARTIAL SUCCESS]
Reflection :
The previous attempt failed because:
1. [SPECIFIC ERROR ANALYSIS]
2. [WHAT SHOULD HAVE BEEN DONE]
3. [KEY INSIGHT FOR NEXT ATTEMPT]
Reflection: In the next attempt, I should...
Subsequent Attempt (with memory) :
Task: [TASK DESCRIPTION]
Previous reflections:
- [REFLECTION 1]
- [REFLECTION 2]
Using these insights, I will now attempt the task again.
Thought: [IMPROVED REASONING]
Action: [BETTER ACTION]
Task: Write a function to find the longest palindromic substring.
Attempt 1: [CODE WITH BUG]
Test Result: Failed on "babad" - expected "bab" or "aba", got "b"
Reflection: My solution only checked single characters. I need to:
1. Consider substrings of all lengths
2. Use expand-around-center technique for efficiency
3. Track both start position and maximum length
Attempt 2: [IMPROVED CODE USING REFLECTION]
Test Result: Passed all tests
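The overall trial loop can be sketched as follows; attempt, evaluate_result, and reflect are assumed helpers built on an LLM client, not APIs from the paper's codebase:

def reflexion(task, max_trials=3):
    memory = []  # verbal reflections carried across trials
    result = None
    for _ in range(max_trials):
        result = attempt(task, reflections=memory)
        ok, feedback = evaluate_result(result)  # e.g., run the unit tests
        if ok:
            return result
        # Store a verbal analysis of the failure for the next trial
        memory.append(reflect(task, result, feedback))
    return result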
1. Need examples?
   - No → Zero-shot CoT
   - Yes → Few-shot CoT (if the task is computational, use PAL)
2. Need higher accuracy?
   - Yes → Self-Consistency
   - No → done with CoT
3. Still not enough?
   - No → done
   - Yes → ask in order:
     - Problem decomposable? → Least-to-Most
     - Need exploration? → Tree of Thoughts
     - Need external information? → ReAct
     - Need iteration on failures? → Reflexion
     - None of the above → stay with CoT
Begin with Zero-shot CoT ("Let's think step by step"), then progress to more complex techniques if needed.
Techniques are often complementary:
- Self-Consistency wraps any CoT prompt, zero-shot or few-shot.
- Auto-CoT uses Zero-shot CoT to generate Few-shot CoT exemplars.
- Reflexion builds its attempts on a ReAct-style Thought/Action loop.
| Mistake | Why It's Wrong | Fix |
|---|---|---|
| Using CoT for simple lookups | Adds unnecessary tokens and latency | Reserve for multi-step reasoning |
| Too few samples in Self-Consistency | Majority voting needs adequate samples | Use 5-10 samples minimum |
| Generic "think step by step" without checking output | Model may produce irrelevant reasoning | Validate reasoning quality, not just presence |
| Mixing techniques without understanding trade-offs | Computational cost without benefit | Understand when each technique adds value |
| Using PAL without code interpreter | Code generation is useless without execution | Ensure execution environment available |
| Not testing exemplar quality in few-shot CoT | Poor exemplars lead to poor reasoning | Validate exemplars solve problems correctly |
| Applying Tree of Thoughts to linear problems | Massive overhead for no benefit | Use ToT only when exploration needed |
Wei, J. et al. (2022). "Chain of Thought Prompting Elicits Reasoning in Large Language Models." arXiv:2201.11903
Kojima, T. et al. (2022). "Large Language Models are Zero-Shot Reasoners." arXiv:2205.11916
Wang, X. et al. (2022). "Self-Consistency Improves Chain of Thought Reasoning in Language Models." arXiv:2203.11171
Yao, S. et al. (2023). "Tree of Thoughts: Deliberate Problem Solving with Large Language Models." arXiv:2305.10601
Zhou, D. et al. (2022). "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models." arXiv:2205.10625
Yao, S. et al. (2022). "ReAct: Synergizing Reasoning and Acting in Language Models." arXiv:2210.03629
Gao, L. et al. (2022). "PAL: Program-aided Language Models." arXiv:2211.10435
Zhang, Z. et al. (2022). "Automatic Chain of Thought Prompting in Large Language Models." arXiv:2210.03493
Shinn, N. et al. (2023). "Reflexion: Language Agents with Verbal Reinforcement Learning." arXiv:2303.11366