critique by pbakaus/impeccable
npx skills add https://github.com/pbakaus/impeccable --skill critique
Invoke /frontend-design — it contains design principles, anti-patterns, and the Context Gathering Protocol. Follow the protocol before proceeding — if no design context exists yet, you MUST run /teach-impeccable first. Additionally gather: what the interface is trying to accomplish.
Conduct a holistic design critique, evaluating whether the interface actually works — not just technically, but as a designed experience. Think like a design director giving feedback.
Evaluate the interface across these dimensions:
This is the most important check. Does this look like every other AI-generated interface from 2024–2025?
Review the design against ALL the DON'T guidelines in the frontend-design skill — they are the fingerprints of AI-generated work. Check for the AI color palette, gradient text, dark mode with glowing accents, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells.
The test: If you showed this to someone and said "AI made this," would they believe you immediately? If yes, that's the problem.
Consult cognitive-load for the working memory rule and the 8-item checklist.
Structure your feedback as a design director would:
Consult heuristics-scoring.
Score each of Nielsen's 10 heuristics 0–4. Present as a table:
| # | Heuristic | Score | Finding |
|---|---|---|---|
| 1 | Visibility of System Status | ? | [specific finding or "—" if solid] |
| 2 | Match System / Real World | ? | |
| 3 | User Control and Freedom | ? | |
| 4 | Consistency and Standards | ? | |
| 5 | Error Prevention | ? | |
| 6 | Recognition Rather Than Recall | ? | |
| 7 | Flexibility and Efficiency | ? | |
| 8 | Aesthetic and Minimalist Design | ? | |
| 9 | Error Recovery | ? | |
| 10 | Help and Documentation | ? | |
| Total | | ??/40 | [Rating band] |
Be honest with scores. A 4 means genuinely excellent. Most real interfaces score 20–32.
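To make the totals concrete, here is a minimal sketch of the arithmetic, assuming hypothetical band thresholds; the authoritative band definitions live in the heuristics-scoring reference:

```typescript
// Sketch: sum the ten heuristic scores and map the total to a band.
// Band labels and thresholds are illustrative assumptions, not the
// official bands from heuristics-scoring.

type Score = 0 | 1 | 2 | 3 | 4;

function totalScore(scores: Score[]): number {
  return scores.reduce((sum, s) => sum + s, 0);
}

function ratingBand(total: number): string {
  if (total >= 33) return "Genuinely excellent";    // assumed cutoff
  if (total >= 20) return "Typical real interface"; // most score 20-32
  return "Serious usability work needed";           // assumed cutoff
}

// Example: one score per heuristic, in the table's order.
const scores: Score[] = [3, 2, 1, 3, 2, 3, 1, 2, 2, 3];
const total = totalScore(scores);
console.log(`${total}/40: ${ratingBand(total)}`);
```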
Start here. Pass/fail: Does this look AI-generated? List specific tells from the skill's Anti-Patterns section. Be brutally honest.
A brief gut reaction — what works, what doesn't, and the single biggest opportunity.
Highlight 2–3 things done well. Be specific about why they work.
The 3–5 most impactful design problems, ordered by importance.
For each issue, tag with P0–P3 severity (consult heuristics-scoring for severity definitions).
Consult personas.
Auto-select 2–3 personas most relevant to this interface type (use the selection table in the reference). If .github/copilot-instructions.md contains a ## Design Context section from teach-impeccable, also generate 1–2 project-specific personas from the audience/brand info.
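As a sketch of that context check, assuming a plain file read (the path and heading come from the instructions above; the detection logic itself is an assumption):

```typescript
import { existsSync, readFileSync } from "node:fs";

// Sketch: check whether teach-impeccable has written a Design Context
// section into the Copilot instructions file. The file path and heading
// come from the skill text; everything else is illustrative.
function hasDesignContext(
  path = ".github/copilot-instructions.md"
): boolean {
  return (
    existsSync(path) &&
    readFileSync(path, "utf8").includes("## Design Context")
  );
}

// true  -> also generate 1-2 project-specific personas from audience/brand info
// false -> stick to the 2-3 personas from the selection table
```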
For each selected persona, walk through the primary user action and list specific red flags found:
Alex (Power User): No keyboard shortcuts detected. Form requires 8 clicks for primary action. Forced modal onboarding. ⚠️ High abandonment risk.
Jordan (First-Timer): Icon-only nav in sidebar. Technical jargon in error messages ("404 Not Found"). No visible help. ⚠️ Will abandon at step 2.
Be specific — name the exact elements and interactions that fail each persona. Don't write generic persona descriptions; write what broke for them.
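If it helps to keep the walkthroughs consistent, each entry can be thought of as a small record. The shape below is a hypothetical illustration derived from the examples above, not something the skill prescribes:

```typescript
// Hypothetical shape for one persona walkthrough entry; the fields
// mirror the Alex/Jordan examples but are not part of the skill.
interface PersonaFinding {
  persona: string;       // e.g. "Alex (Power User)"
  primaryAction: string; // the flow that was walked through
  redFlags: string[];    // exact elements/interactions that failed
  risk: string;          // e.g. "High abandonment risk"
}

const alex: PersonaFinding = {
  persona: "Alex (Power User)",
  primaryAction: "Submit the primary form",
  redFlags: [
    "No keyboard shortcuts detected",
    "Form requires 8 clicks for primary action",
    "Forced modal onboarding",
  ],
  risk: "High abandonment risk",
};
```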
Quick notes on smaller issues worth addressing.
Remember:
After presenting findings, use targeted questions based on what was actually found. Ask the user directly to clarify what you cannot infer. These answers will shape the action plan.
Ask questions along these lines (adapt to the specific findings — do NOT ask generic questions):
Priority direction: Based on the issues found, ask which category matters most to the user right now. For example: "I found problems with visual hierarchy, color usage, and information overload. Which area should we tackle first?" Offer the top 2–3 issue categories as options.
Design intent: If the critique found a tonal mismatch, ask whether it was intentional. For example: "The interface feels clinical and corporate. Is that the intended tone, or should it feel warmer/bolder/more playful?" Offer 2–3 tonal directions as options based on what would fix the issues found.
Scope: Ask how much the user wants to take on. For example: "I found N issues. Want to address everything, or focus on the top 3?" Offer scope options like "Top 3 only", "All issues", "Critical issues only".
Constraints (optional — only ask if relevant): If the findings touch many areas, ask if anything is off-limits. For example: "Should any sections stay as-is?" This prevents the plan from touching things the user considers done.
Rules for questions:
After receiving the user's answers, present a prioritized action summary reflecting the user's priorities and scope from Phase 3.
List recommended commands in priority order, based on the user's answers:
/command-name — Brief description of what to fix (specific context from critique findings)
/command-name — Brief description (specific context)
...
Rules for recommendations:
/polish as the final step if any fixes were recommended.
After presenting the summary, tell the user:
You can ask me to run these one at a time, all at once, or in any order you prefer.
Re-run /critique after fixes to see your score improve.
Weekly Installs: 28.7K
Repository: github.com/pbakaus/impeccable
GitHub Stars: 13.4K
First Seen: Mar 4, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: codex 28.0K, opencode 27.7K, github-copilot 27.7K, gemini-cli 27.6K, cursor 27.6K, amp 27.6K