ai-wrapper-product by sickn33/antigravity-awesome-skills
npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill ai-wrapper-product
Role: AI Product Architect
You know AI wrappers get a bad rap, but the good ones solve real problems. You build products where AI is the engine, not the gimmick. You understand prompt engineering is product development. You balance costs with user experience. You create AI products people actually pay for and use daily.
Building products around AI APIs
When to use: When designing an AI-powered product
## AI Product Architecture
### The Wrapper Stack
```
User Input
  ↓
Input Validation + Sanitization
  ↓
Prompt Template + Context
  ↓
AI API (OpenAI/Anthropic/etc.)
  ↓
Output Parsing + Validation
  ↓
User-Friendly Response
```
### Basic Implementation
```javascript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function generateContent(userInput, context) {
  // 1. Validate input
  if (!userInput || userInput.length > 5000) {
    throw new Error('Invalid input');
  }

  // 2. Build prompt
  const systemPrompt = `You are a ${context.role}.
Always respond in ${context.format}.
Tone: ${context.tone}`;

  // 3. Call API
  const response = await anthropic.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 1000,
    system: systemPrompt,
    messages: [{ role: 'user', content: userInput }],
  });

  // 4. Parse and validate output
  const output = response.content[0].text;
  return parseOutput(output);
}
```
| Model | Cost | Speed | Quality | Use Case |
|---|---|---|---|---|
| GPT-4o | $$$ | Fast | Best | Complex tasks |
| GPT-4o-mini | $ | Fastest | Good | Most tasks |
| Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
| Claude 3 Haiku | $ | Fastest | Good | High volume |
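One way to act on the table above is a small routing helper that picks a model tier per request instead of hardcoding one model. The model IDs and the length threshold below are illustrative assumptions, not fixed recommendations:

```javascript
// Route each request to a model tier. Model IDs and the 2000-char
// threshold are illustrative assumptions.
const MODEL_TIERS = {
  fast: 'claude-3-haiku-20240307',        // cheap, high volume
  balanced: 'claude-3-5-sonnet-20240620', // stronger, pricier
};

function pickModel({ inputLength, complex = false }) {
  // Long or explicitly complex requests get the stronger model;
  // everything else goes to the fast, cheap tier.
  if (complex || inputLength > 2000) return MODEL_TIERS.balanced;
  return MODEL_TIERS.fast;
}
```

The same shape extends to OpenAI tiers (e.g. GPT-4o vs GPT-4o-mini) by swapping the IDs.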
### Prompt Engineering for Products
Production-grade prompt design
**When to use**: When building AI product prompts
### Prompt Template Pattern
```javascript
const promptTemplates = {
  emailWriter: {
    system: `You are an expert email writer.
Write professional, concise emails.
Match the requested tone.
Never include placeholder text.`,
    user: (input) => `Write an email:
Purpose: ${input.purpose}
Recipient: ${input.recipient}
Tone: ${input.tone}
Key points: ${input.points.join(', ')}
Length: ${input.length} sentences`,
  },
};

// Force structured output
const systemPrompt = `
Always respond with valid JSON in this format:
{
  "title": "string",
  "content": "string",
  "suggestions": ["string"]
}
Never include any text outside the JSON.
`;

// Parse with fallback
function parseAIOutput(text) {
  try {
    return JSON.parse(text);
  } catch {
    // Fallback: extract JSON from the response
    const match = text.match(/\{[\s\S]*\}/);
    if (match) return JSON.parse(match[0]);
    throw new Error('Invalid AI output');
  }
}
```
| Technique | Purpose |
|---|---|
| Examples in prompt | Guide output style |
| Output format spec | Consistent structure |
| Validation | Catch malformed responses |
| Retry logic | Handle failures |
| Fallback models | Reliability |
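The retry and fallback rows can be combined in one wrapper. `callModel` here is any async `(model, input) => output` function you supply; the model IDs and retry count are illustrative assumptions:

```javascript
// Retry the primary model, then route the final attempt to a fallback.
// `callModel`, the model IDs, and `retries` are illustrative assumptions.
async function withRetryAndFallback(callModel, input, {
  primary = 'claude-3-5-sonnet-20240620',
  fallback = 'claude-3-haiku-20240307',
  retries = 2,
} = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    // First `retries` attempts use the primary model; the last one
    // falls back to the cheaper, more available model.
    const model = attempt < retries ? primary : fallback;
    try {
      return await callModel(model, input);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

In production you would typically also inspect the error before retrying, so that validation errors fail fast while transient ones retry.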
### Cost Management
Controlling AI API costs
**When to use**: When building profitable AI products
### Token Economics
```javascript
// Track usage
async function callWithCostTracking(userId, prompt) {
  const response = await anthropic.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 1000,
    messages: [{ role: 'user', content: prompt }],
  });

  // Log usage
  await db.usage.create({
    userId,
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
    cost: calculateCost(response.usage),
    model: 'claude-3-haiku',
  });

  return response;
}

function calculateCost(usage) {
  const rates = {
    'claude-3-haiku': { input: 0.25, output: 1.25 }, // USD per 1M tokens
  };
  const rate = rates['claude-3-haiku'];
  return (usage.input_tokens * rate.input +
          usage.output_tokens * rate.output) / 1_000_000;
}
```
| Strategy | Savings |
|---|---|
| Use cheaper models | 10-50x |
| Limit output tokens | Variable |
| Cache common queries | High |
| Batch similar requests | Medium |
| Truncate input | Variable |
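A sketch of the caching row: wrap the model call in an in-memory memoizer keyed by model and prompt. The key scheme and eviction policy are simplifying assumptions; production code would typically hash the full parameters and add a TTL:

```javascript
// Memoize model calls so identical queries never hit the API twice.
// The key format and FIFO eviction are simplifying assumptions.
function withCache(callModel, { maxEntries = 1000 } = {}) {
  const cache = new Map();
  return async (model, prompt) => {
    const key = `${model}:${prompt}`;
    if (cache.has(key)) return cache.get(key);
    const result = await callModel(model, prompt);
    if (cache.size >= maxEntries) {
      // Evict the oldest entry (Map preserves insertion order).
      cache.delete(cache.keys().next().value);
    }
    cache.set(key, result);
    return result;
  };
}
```

This only pays off for deterministic, repeatable prompts (temperature 0 or template-driven queries); personalized prompts rarely repeat.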
```javascript
// Enforce per-user monthly limits before each call.
// db.usage.sum is an ORM-style aggregate, shown schematically.
async function checkUsageLimits(userId) {
  const usage = await db.usage.sum({
    where: {
      userId,
      createdAt: { gte: startOfMonth() },
    },
  });
  const limits = await getUserLimits(userId);
  if (usage.cost >= limits.monthlyCost) {
    throw new Error('Monthly limit reached');
  }
  return true;
}
```
## Anti-Patterns
### ❌ Thin Wrapper Syndrome
**Why bad**: No differentiation. Users can just use ChatGPT directly. No pricing power. Easy to replicate.
**Instead**: Add domain expertise. Perfect the UX for a specific task. Integrate into workflows. Post-process outputs.
### ❌ Ignoring Costs Until Scale
**Why bad**: Surprise bills. Negative unit economics. Can't price properly. The business isn't viable.
**Instead**: Track every API call. Know your cost per user. Set usage limits. Price with margin.
### ❌ No Output Validation
**Why bad**: AI hallucinates. Inconsistent formatting. Bad user experience. Trust issues.
**Instead**: Validate all outputs. Parse structured responses. Have fallback handling. Post-process for consistency.
## ⚠️ Sharp Edges
| Issue | Severity | Solution |
|-------|----------|----------|
| AI API costs spiral out of control | High | Controlling AI Costs |
| App breaks when hitting API rate limits | High | Handling Rate Limits |
| AI gives wrong or made-up information | High | Handling Hallucinations |
| AI responses too slow for good UX | Medium | Improving AI Latency |
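For the rate-limit row above, the usual fix is exponential backoff with jitter on 429 responses. The `err.status === 429` check matches how official SDKs commonly surface HTTP status, but treat the exact error shape as an assumption for your client:

```javascript
// Exponential backoff with jitter for rate-limit (429) errors.
// The status check and delay bounds are illustrative assumptions.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withBackoff(fn, { maxAttempts = 5, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const rateLimited = err && err.status === 429;
      if (!rateLimited || attempt >= maxAttempts - 1) throw err;
      // Double the delay each attempt and add random jitter so
      // concurrent clients don't retry in lockstep.
      const delay = baseMs * 2 ** attempt + Math.random() * baseMs;
      await sleep(delay);
    }
  }
}
```

Non-429 errors are rethrown immediately so genuine bugs fail fast instead of being masked by retries.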
## Related Skills
Works well with: `llm-architect`, `micro-saas-launcher`, `frontend`, `backend`
## When to Use
Use this skill to execute the workflow or actions described in the overview.
Weekly Installs: 360
GitHub Stars: 27.4K
First Seen: Jan 19, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (297), claude-code (288), gemini-cli (285), codex (259), cursor (251), antigravity (247)