prompt-engineering by sickn33/antigravity-awesome-skills
npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill prompt-engineering
Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.
Teach the model by showing examples instead of explaining rules. Include 2-5 input-output pairs that demonstrate the desired behavior. Use when you need consistent formatting, specific reasoning patterns, or handling of edge cases. More examples improve accuracy but consume tokens—balance based on task complexity.
Example:
Extract key information from support tickets:
Input: "My login doesn't work and I keep getting error 403"
Output: {"issue": "authentication", "error_code": "403", "priority": "high"}
Input: "Feature request: add dark mode to settings"
Output: {"issue": "feature_request", "error_code": null, "priority": "low"}
Now process: "Can't upload files larger than 10MB, getting timeout"
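The example above can be assembled programmatically. A minimal sketch in plain Python, reusing the ticket-extraction pairs from the example; the function name and prompt layout are illustrative, not part of any specific API:

```python
import json

# Demo pairs taken from the ticket-extraction example above.
EXAMPLES = [
    ("My login doesn't work and I keep getting error 403",
     {"issue": "authentication", "error_code": "403", "priority": "high"}),
    ("Feature request: add dark mode to settings",
     {"issue": "feature_request", "error_code": None, "priority": "low"}),
]

def build_few_shot_prompt(task: str, examples, query: str) -> str:
    """Assemble a few-shot prompt: task description, demo pairs, then the new input."""
    parts = [task]
    for inp, out in examples:
        parts.append(f'Input: "{inp}"')
        parts.append(f"Output: {json.dumps(out)}")
    parts.append(f'Now process: "{query}"')
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Extract key information from support tickets:",
    EXAMPLES,
    "Can't upload files larger than 10MB, getting timeout",
)
```

Keeping the pairs in a list makes it easy to add or drop examples while measuring the accuracy-versus-token trade-off mentioned above.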
Request step-by-step reasoning before the final answer. Add "Let's think step by step" (zero-shot) or include example reasoning traces (few-shot). Use for complex problems requiring multi-step logic, mathematical reasoning, or when you need to verify the model's thought process. Improves accuracy on analytical tasks by 30-50%.
Example:
Analyze this bug report and determine root cause.
Think step by step:
1. What is the expected behavior?
2. What is the actual behavior?
3. What changed recently that could cause this?
4. What components are involved?
5. What is the most likely root cause?
Bug: "Users can't save drafts after the cache update deployed yesterday"
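The same scaffold can be generated from a step list, so the reasoning structure stays consistent across bug reports. A sketch, with the steps mirroring the example above; swap in steps relevant to your own task:

```python
# Analysis steps copied from the chain-of-thought example above.
ANALYSIS_STEPS = [
    "What is the expected behavior?",
    "What is the actual behavior?",
    "What changed recently that could cause this?",
    "What components are involved?",
    "What is the most likely root cause?",
]

def build_cot_prompt(bug_report: str) -> str:
    """Wrap a bug report in the numbered step-by-step scaffold."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(ANALYSIS_STEPS, 1))
    return (
        "Analyze this bug report and determine root cause.\n"
        "Think step by step:\n"
        f"{numbered}\n"
        f'Bug: "{bug_report}"'
    )
```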
Systematically improve prompts through testing and refinement. Start simple, measure performance (accuracy, consistency, token usage), then iterate. Test on diverse inputs including edge cases. Use A/B testing to compare variations. Critical for production prompts where consistency and cost matter.
Example:
Version 1 (Simple): "Summarize this article"
→ Result: Inconsistent length, misses key points
Version 2 (Add constraints): "Summarize in 3 bullet points"
→ Result: Better structure, but still misses nuance
Version 3 (Add reasoning): "Identify the 3 main findings, then summarize each"
→ Result: Consistent, accurate, captures key information
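A small offline harness makes this comparison repeatable. A sketch only: `run_model` is a stub standing in for your actual LLM call, and the keyword-hit scorer is a deliberately simple stand-in for a real evaluation metric:

```python
def run_model(prompt: str, article: str) -> str:
    """Stub for an LLM call -- replace with a real API request."""
    return f"[model output for: {prompt!r}]"

def score(output: str, required_points: list[str]) -> float:
    """Fraction of required key points mentioned in the output."""
    hits = sum(1 for p in required_points if p.lower() in output.lower())
    return hits / len(required_points)

# The three versions from the example above.
VARIANTS = {
    "v1": "Summarize this article",
    "v2": "Summarize in 3 bullet points",
    "v3": "Identify the 3 main findings, then summarize each",
}

def best_variant(article: str, required_points: list[str]) -> str:
    """Run every variant on the same input and return the top scorer."""
    scores = {name: score(run_model(p, article), required_points)
              for name, p in VARIANTS.items()}
    return max(scores, key=scores.get)
```

Running the same harness over a set of diverse inputs, including edge cases, gives the per-variant numbers needed for the A/B comparison.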
Build reusable prompt structures with variables, conditional sections, and modular components. Use for multi-turn conversations, role-based interactions, or when the same pattern applies to different inputs. Reduces duplication and ensures consistency across similar tasks.
Example:
# Reusable code review template
template = """
Review this {language} code for {focus_area}.
Code:
{code_block}
Provide feedback on:
{checklist}
"""
# Usage
prompt = template.format(
language="Python",
focus_area="security vulnerabilities",
code_block=user_code,
checklist="1. SQL injection\n2. XSS risks\n3. Authentication"
)
Set global behavior and constraints that persist across the conversation. Define the model's role, expertise level, output format, and safety guidelines. Use system prompts for stable instructions that shouldn't change turn-to-turn, freeing up user message tokens for variable content.
Example:
System: You are a senior backend engineer specializing in API design.
Rules:
- Always consider scalability and performance
- Suggest RESTful patterns by default
- Flag security concerns immediately
- Provide code examples in Python
- Use early return pattern
Format responses as:
1. Analysis
2. Recommendation
3. Code example
4. Trade-offs
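In code, this separation usually means sending the stable system text once and only the variable content each turn. A sketch using the generic chat-message shape (`{"role": ..., "content": ...}`) that most chat APIs accept; the system text is the example above:

```python
# Stable instructions from the system-prompt example above.
SYSTEM_PROMPT = (
    "You are a senior backend engineer specializing in API design.\n"
    "Rules:\n"
    "- Always consider scalability and performance\n"
    "- Suggest RESTful patterns by default\n"
    "- Flag security concerns immediately\n"
    "- Provide code examples in Python\n"
    "- Use early return pattern\n"
    "Format responses as:\n"
    "1. Analysis\n2. Recommendation\n3. Code example\n4. Trade-offs"
)

def build_messages(user_turns: list[str]) -> list[dict]:
    """System prompt once, then only the variable user content per turn."""
    return [{"role": "system", "content": SYSTEM_PROMPT}] + [
        {"role": "user", "content": t} for t in user_turns
    ]
```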
Start with simple prompts, add complexity only when needed:
Level 1: Direct instruction
Level 2: Add constraints
Level 3: Add reasoning
Level 4: Add examples
[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]
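The five-part structure above can be assembled with one builder in which every segment is optional, so the same function covers levels 1 through 4. A sketch; the segment names follow the diagram, and the sample strings are hypothetical:

```python
def assemble_prompt(system_context="", task_instruction="", examples="",
                    input_data="", output_format=""):
    """Join the non-empty segments in the canonical order."""
    segments = [system_context, task_instruction, examples, input_data, output_format]
    return "\n\n".join(s for s in segments if s)

# Level 1: direct instruction only.
p1 = assemble_prompt(task_instruction="Summarize this article")

# Level 4: full structure.
p4 = assemble_prompt(
    system_context="You are a technical editor.",
    task_instruction="Summarize this article in 3 bullet points.",
    examples="Example:\nArticle: ...\nSummary: ...",
    input_data="Article: <text here>",
    output_format="Return exactly 3 bullets.",
)
```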
Build prompts that gracefully handle failures:
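The source introduces this point without a worked example; a minimal sketch, assuming the prompt asks for JSON and names an explicit fallback value for unknowns, with validation on the consuming side so a malformed reply degrades instead of crashing:

```python
import json

# Illustrative rule to append to a prompt so the model has a defined failure path.
FALLBACK_PROMPT_RULE = (
    "If a field cannot be determined from the input, set it to null. "
    'If the input is not a support ticket at all, return {"issue": "unknown"}. '
    "Return only valid JSON."
)

def parse_model_output(raw: str) -> dict:
    """Validate the model's reply; fall back instead of crashing on bad output."""
    try:
        data = json.loads(raw)
        if isinstance(data, dict) and "issue" in data:
            return data
    except json.JSONDecodeError:
        pass
    return {"issue": "unknown", "parse_error": True}
```

Pairing an in-prompt fallback rule with out-of-prompt validation covers both failure modes: the model being unsure, and the model ignoring the format.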
This skill applies when executing the workflows or actions described in the overview.
Weekly Installs: 560
Repository: https://github.com/sickn33/antigravity-awesome-skills
GitHub Stars: 27.1K
First Seen: Jan 19, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on:
opencode: 454
gemini-cli: 440
codex: 396
cursor: 385
claude-code: 385
github-copilot: 356