prompt-engineering by davila7/claude-code-templates
npx skills add https://github.com/davila7/claude-code-templates --skill prompt-engineering
Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.
Teach the model by showing examples instead of explaining rules. Include 2-5 input-output pairs that demonstrate the desired behavior. Use when you need consistent formatting, specific reasoning patterns, or handling of edge cases. More examples improve accuracy but consume tokens—balance based on task complexity.
Example:
Extract key information from support tickets:
Input: "My login doesn't work and I keep getting error 403"
Output: {"issue": "authentication", "error_code": "403", "priority": "high"}
Input: "Feature request: add dark mode to settings"
Output: {"issue": "feature_request", "error_code": null, "priority": "low"}
Now process: "Can't upload files larger than 10MB, getting timeout"
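The ticket-triage prompt above can be assembled programmatically so the demonstration pairs live in data rather than in a hand-edited string. A minimal sketch, in which the `build_few_shot_prompt` helper and the example list are illustrative, not part of any library:

```python
# Build a few-shot prompt from demonstration (input, output) pairs.
# Helper name and example data are illustrative assumptions.
FEW_SHOT_EXAMPLES = [
    ("My login doesn't work and I keep getting error 403",
     '{"issue": "authentication", "error_code": "403", "priority": "high"}'),
    ("Feature request: add dark mode to settings",
     '{"issue": "feature_request", "error_code": null, "priority": "low"}'),
]

def build_few_shot_prompt(task, examples, query):
    """Interleave demonstration pairs, then append the real query."""
    parts = [task]
    for inp, out in examples:
        parts.append(f'Input: "{inp}"')
        parts.append(f"Output: {out}")
    parts.append(f'Now process: "{query}"')
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Extract key information from support tickets:",
    FEW_SHOT_EXAMPLES,
    "Can't upload files larger than 10MB, getting timeout",
)
print(prompt)
```

Keeping the examples as data makes it easy to swap in 2-5 pairs per task and measure the accuracy/token trade-off described above.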
Request step-by-step reasoning before the final answer. Add "Let's think step by step" (zero-shot) or include example reasoning traces (few-shot). Use for complex problems requiring multi-step logic, mathematical reasoning, or when you need to verify the model's thought process. Reported accuracy gains on analytical tasks are in the 30-50% range, though results vary by model and benchmark.
Example:
Analyze this bug report and determine root cause.
Think step by step:
1. What is the expected behavior?
2. What is the actual behavior?
3. What changed recently that could cause this?
4. What components are involved?
5. What is the most likely root cause?
Bug: "Users can't save drafts after the cache update deployed yesterday"
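The same scaffold can be reused across bug reports by treating the questions as data. A sketch, where the `with_reasoning` helper and step list are illustrative assumptions:

```python
# Prefix a task with a numbered chain-of-thought scaffold.
# Helper name and the step list are illustrative assumptions.
COT_STEPS = [
    "What is the expected behavior?",
    "What is the actual behavior?",
    "What changed recently that could cause this?",
    "What components are involved?",
    "What is the most likely root cause?",
]

def with_reasoning(task, steps, data=""):
    """Ask for step-by-step reasoning before the final answer."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    prompt = f"{task}\nThink step by step:\n{numbered}"
    return f"{prompt}\n{data}" if data else prompt

prompt = with_reasoning(
    "Analyze this bug report and determine root cause.",
    COT_STEPS,
    'Bug: "Users can\'t save drafts after the cache update deployed yesterday"',
)
print(prompt)
```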
Systematically improve prompts through testing and refinement. Start simple, measure performance (accuracy, consistency, token usage), then iterate. Test on diverse inputs including edge cases. Use A/B testing to compare variations. Critical for production prompts where consistency and cost matter.
Example:
Version 1 (Simple): "Summarize this article"
→ Result: Inconsistent length, misses key points
Version 2 (Add constraints): "Summarize in 3 bullet points"
→ Result: Better structure, but still misses nuance
Version 3 (Add reasoning): "Identify the 3 main findings, then summarize each"
→ Result: Consistent, accurate, captures key information
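The iteration loop above can be automated: run each variant over a fixed test set, score the outputs, and keep the winner. The sketch below stubs out the model call and uses output-length spread as a toy consistency metric; both are placeholders for your real API call and real metrics (accuracy, token usage):

```python
# A/B test prompt variants on a fixed input set.
# call_model is a stub; score_consistency is a toy metric (length spread).
def call_model(prompt, article):
    # Placeholder: replace with a real LLM API call.
    return f"Summary of: {article[:20]}"

def score_consistency(outputs):
    """Smaller spread in output length = more consistent (toy metric)."""
    lengths = [len(o) for o in outputs]
    return max(lengths) - min(lengths)

variants = {
    "v1": "Summarize this article",
    "v2": "Summarize in 3 bullet points",
    "v3": "Identify the 3 main findings, then summarize each",
}
articles = ["First test article text...", "Second, longer test article..."]

results = {
    name: score_consistency([call_model(p, a) for a in articles])
    for name, p in variants.items()
}
best = min(results, key=results.get)  # most consistent variant wins
print(results, best)
```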
Build reusable prompt structures with variables, conditional sections, and modular components. Use for multi-turn conversations, role-based interactions, or when the same pattern applies to different inputs. Reduces duplication and ensures consistency across similar tasks.
Example:
# Reusable code review template
template = """
Review this {language} code for {focus_area}.
Code:
{code_block}
Provide feedback on:
{checklist}
"""
# Usage (user_code is a stand-in for the snippet under review)
user_code = 'def get_user(uid): return db.query(f"SELECT * FROM users WHERE id={uid}")'
prompt = template.format(
    language="Python",
    focus_area="security vulnerabilities",
    code_block=user_code,
    checklist="1. SQL injection\n2. XSS risks\n3. Authentication"
)
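The template above covers variable substitution; the conditional sections mentioned earlier can be sketched the same way. The helper below is illustrative, not part of any library:

```python
# Assemble a prompt from modular sections, some of them conditional.
# Function and section wording are illustrative assumptions.
def build_review_prompt(code, focus_areas, include_style=False):
    sections = [
        "Review this code.",
        f"Code:\n{code}",
        "Focus on:\n" + "\n".join(f"- {f}" for f in focus_areas),
    ]
    if include_style:  # conditional section: only added when requested
        sections.append("Also comment on naming and formatting style.")
    return "\n\n".join(sections)

prompt = build_review_prompt("def f(x): return x", ["error handling"])
print(prompt)
```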
Set global behavior and constraints that persist across the conversation. Define the model's role, expertise level, output format, and safety guidelines. Use system prompts for stable instructions that shouldn't change turn-to-turn, freeing up user message tokens for variable content.
Example:
System: You are a senior backend engineer specializing in API design.
Rules:
- Always consider scalability and performance
- Suggest RESTful patterns by default
- Flag security concerns immediately
- Provide code examples in Python
- Use early return pattern
Format responses as:
1. Analysis
2. Recommendation
3. Code example
4. Trade-offs
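In chat-style APIs, the system prompt above would typically be sent once under a dedicated role, with only the variable content in user turns. A minimal sketch using the common role/content message shape (field names may differ by provider):

```python
# Keep stable instructions in one system message; user turns stay small.
# The role/content dict shape is the common chat-API convention.
SYSTEM_PROMPT = (
    "You are a senior backend engineer specializing in API design.\n"
    "Rules:\n"
    "- Always consider scalability and performance\n"
    "- Suggest RESTful patterns by default\n"
    "- Flag security concerns immediately\n"
    "- Provide code examples in Python\n"
    "Format responses as: 1. Analysis, 2. Recommendation, "
    "3. Code example, 4. Trade-offs"
)

def build_messages(user_turns):
    """Prepend the stable system prompt to the variable user turns."""
    return [{"role": "system", "content": SYSTEM_PROMPT}] + [
        {"role": "user", "content": turn} for turn in user_turns
    ]

messages = build_messages(["Design pagination for the /orders endpoint"])
print(messages[0]["role"], len(messages))
```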
Start with simple prompts, add complexity only when needed:
Level 1: Direct instruction
Level 2: Add constraints
Level 3: Add reasoning
Level 4: Add examples
[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]
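That ordering can be enforced in code by assembling the sections in sequence and skipping any that are empty. A sketch, where the `assemble` helper and sample values are illustrative:

```python
# Assemble a prompt in the canonical order:
# system context -> task instruction -> examples -> input data -> output format.
# Helper name and sample content are illustrative assumptions.
def assemble(system, task, examples, input_data, output_format):
    sections = [
        ("System Context", system),
        ("Task Instruction", task),
        ("Examples", examples),
        ("Input Data", input_data),
        ("Output Format", output_format),
    ]
    # Skip empty sections so the same builder serves simple and rich prompts.
    return "\n\n".join(f"[{name}]\n{body}" for name, body in sections if body)

prompt = assemble(
    system="You are a support-ticket triage assistant.",
    task="Classify the ticket and extract its fields.",
    examples='Input: "error 403" -> {"issue": "authentication"}',
    input_data='"Can\'t upload files larger than 10MB"',
    output_format="Respond with a single JSON object.",
)
print(prompt)
```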
Build prompts that gracefully handle failures:
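One common pattern is to give the model an explicit fallback output for inputs it cannot process, and to validate responses before using them. A sketch, in which the fallback schema, the `<ticket text here>` placeholder, and the `parse_response` helper are illustrative assumptions:

```python
import json

# Ask for a well-defined fallback instead of a guess, then validate.
# The fallback schema and helper name are illustrative assumptions.
ROBUST_PROMPT = (
    "Extract fields from the ticket below as JSON with keys "
    '"issue", "error_code", "priority". If the ticket is empty, '
    "off-topic, or ambiguous, return exactly: "
    '{"error": "cannot_process", "reason": "<why>"}\n'
    "Ticket: <ticket text here>"
)

def parse_response(raw):
    """Validate model output; surface failures instead of crashing."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"error": "invalid_json", "reason": raw[:100]}
    if "error" in data:
        return data  # the model signalled a graceful failure
    missing = {"issue", "priority"} - data.keys()
    if missing:
        return {"error": "missing_fields", "reason": sorted(missing)}
    return data

print(parse_response('{"issue": "upload", "error_code": null, "priority": "medium"}'))
```

Validating on the way out means a malformed or refused response becomes a typed error your application can branch on, rather than an exception.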
Weekly Installs: 320
Repository: https://github.com/davila7/claude-code-templates
GitHub Stars: 22.6K
First Seen: Jan 25, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: opencode (267), gemini-cli (255), codex (251), github-copilot (242), claude-code (221), amp (192)
AI Elements: AI-native app component library built on shadcn/ui for quickly building conversational interfaces
56,200 weekly installs
Coding standards and best-practices guide: TypeScript/JavaScript and React development standards
3,600 weekly installs
Deep research skill: 8-phase AI research workflow delivering citation-backed research reports | 199-biotechnologies
3,600 weekly installs
Wallet policy generator | Create security policy rules for EVM and Solana wallets
3,700 weekly installs
Complete-output enforcement: an integrity safeguard for AI code generation | No omitted code
4,100 weekly installs
Clerk authentication skill router: SDK version detection, Next.js patterns, custom UI, B2B organization management
3,800 weekly installs
Angular HTTP data-fetching guide: a tutorial on signal-based httpResource() and resource()
3,700 weekly installs