macos-cleaner by daymade/claude-code-skills
npx skills add https://github.com/daymade/claude-code-skills --skill macos-cleaner

Intelligently analyze macOS disk usage and provide actionable cleanup recommendations to reclaim storage space. This skill follows a safety-first philosophy: analyze thoroughly, present clear findings, and require explicit user confirmation before executing any deletion.
Target users: people with basic technical knowledge who understand file systems but need guidance on what is safe to delete on macOS.
Never execute destructive operations (rm -rf, mo clean, etc.) without explicit user confirmation. No shortcuts, no workarounds. ABSOLUTE PROHIBITIONS:
- Never run docker image prune, docker volume prune, docker system prune, or ANY prune-family command (exception: docker builder prune is safe; build cache contains only intermediate layers, never user data)
- Never run docker container prune (stopped containers may be restarted at any time)
- Never run rm -rf on user directories without explicit confirmation
- Never run mo clean without a --dry-run preview first
- Never append --help to Mole commands (only mo --help is safe)

User reports disk space issues
↓
Quick Diagnosis
↓
    ┌──────┴──────┐
    │             │
Immediate      Deep Analysis
 Cleanup      (continue below)
    │             │
    └──────┬──────┘
↓
Present Findings
↓
User Confirms
↓
Execute Cleanup
↓
Verify Results
Primary tool: use Mole for disk analysis. It provides comprehensive, categorized results.
# Check Mole installation and version
which mo && mo --version
# If not installed
brew install tw93/tap/mole
# Check for updates (Mole updates frequently)
brew info tw93/tap/mole | head -5
# Upgrade if outdated
brew upgrade tw93/tap/mole
IMPORTANT: use mo analyze as the primary analysis tool, NOT mo clean --dry-run.
| Command | Purpose | When to Use |
|---|---|---|
| mo analyze | Interactive disk usage explorer (TUI tree view) | PRIMARY: understanding what is consuming space |
| mo clean --dry-run | Preview cleanup categories | SECONDARY: only after mo analyze, to see a cleanup preview |
Why prefer mo analyze:
IMPORTANT: Mole requires a TTY. Always use tmux from Claude Code.
CRITICAL TIMING NOTE: home directory scans are slow (5-10 minutes or longer for large directories). Inform the user upfront and wait patiently.
# Create tmux session
tmux new-session -d -s mole -x 120 -y 40
# Run disk analysis (PRIMARY tool - interactive TUI)
tmux send-keys -t mole 'mo analyze' Enter
# Wait for scan - BE PATIENT!
# Home directory scanning typically takes 5-10 minutes
# Report progress to user regularly
sleep 60 && tmux capture-pane -t mole -p
# Navigate the TUI with arrow keys
tmux send-keys -t mole Down # Move to next item
tmux send-keys -t mole Enter # Expand/select item
tmux send-keys -t mole 'q' # Quit when done
Alternative: cleanup preview (use AFTER mo analyze)
# Run dry-run preview (SAFE - no deletion)
tmux send-keys -t mole 'mo clean --dry-run' Enter
# Wait for scan (report progress to user every 30 seconds)
# Be patient! Large directories take 5-10 minutes
sleep 30 && tmux capture-pane -t mole -p
Report scan progress to the user regularly:
📊 Disk Analysis in Progress...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⏱️ Elapsed: 2 minutes
Current status:
✅ Applications: 49.5 GB (complete)
✅ System Library: 10.3 GB (complete)
⏳ Home: scanning... (may take 5-10 minutes)
⏳ App Library: pending
I'm waiting patiently for the scan to complete.
Will report again in 30 seconds...
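A single fixed sleep can under- or over-shoot the scan time. A small polling loop keeps updates flowing at a steady cadence (a sketch; poll_status is a hypothetical helper, not part of Mole):

```shell
#!/bin/sh
# Run a status command every <interval> seconds, <tries> times,
# so progress can be relayed to the user at a steady cadence.
poll_status() {
  cmd="$1"; tries="${2:-20}"; interval="${3:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    echo "--- progress check $i/$tries ---"
    eval "$cmd"
    i=$((i + 1))
    if [ "$i" -le "$tries" ]; then sleep "$interval"; fi
  done
}

# With the Mole session from above (capture the last screen lines):
# poll_status 'tmux capture-pane -t mole -p | tail -5' 20 30
```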
After the scan completes, present structured results:
📊 Disk Space Analysis (via Mole)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Free space: 27 GB
🧹 Recoverable space (dry-run preview):
➤ User Essentials
• User app caches: 16.67 GB
• User app logs: 102.3 MB
• Trash: 642.9 MB
➤ Browser Caches
• Chrome cache: 1.90 GB
• Safari cache: 4 KB
➤ Developer Tools
• uv cache: 9.96 GB
• npm cache: (detected)
• Docker cache: (detected)
• Homebrew cache: (detected)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total recoverable: ~30 GB
⚠️ This was a dry-run preview. No files were deleted.
Scan the following categories systematically. See references/cleanup_targets.md for detailed explanations.
Locations to analyze:
- ~/Library/Caches/* - user application caches
- /Library/Caches/* - system-wide caches (requires sudo)
- ~/Library/Logs/* - application logs
- /var/log/* - system logs (requires sudo)

Analysis script:
scripts/analyze_caches.py --user-only
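If the helper script is unavailable, a read-only du pass gives a comparable ranking (a sketch; rank_caches is a hypothetical helper, and sizes are reported in KB for portability):

```shell
#!/bin/sh
# Rank the child directories of a cache root by size, largest first.
# Read-only: nothing is deleted.
rank_caches() {
  root="$1"; count="${2:-20}"
  # -s summarizes each entry; -k reports KB so sort -n stays portable
  du -sk "$root"/* 2>/dev/null | sort -rn | head -n "$count"
}

# Example (safe to run):
# rank_caches ~/Library/Caches 10
```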
Safety level: 🟢 Generally safe to delete (apps regenerate caches)
Exceptions to preserve:
Locations to analyze:
- ~/Library/Application Support/* - application data
- ~/Library/Preferences/* - preference files
- ~/Library/Containers/* - sandboxed app data

Analysis approach:
- Enumerate apps installed in /Applications
- Cross-reference against ~/Library/Application Support

Analysis script:
scripts/find_app_remnants.py
Safety level: 🟡 Caution required
Analysis script:
scripts/analyze_large_files.py --threshold 100MB --path ~
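As a fallback when the script is not on hand, find can run the same read-only sweep (a sketch; find_large is illustrative, and the 100 MB threshold mirrors the script's default):

```shell
#!/bin/sh
# List files over a size threshold under a root, largest first (KB).
# Read-only: nothing is deleted.
find_large() {
  root="${1:-$HOME}"; threshold="${2:-+100M}"
  find "$root" -type f -size "$threshold" -exec du -k {} + 2>/dev/null \
    | sort -rn | head -n 20
}

# Example: find_large ~ +100M
```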
Find duplicates (optional, resource-intensive):
# Use fdupes if installed
if command -v fdupes &> /dev/null; then
fdupes -r ~/Documents ~/Downloads
fi
Present findings:
📦 Large files (>100 MB):
━━━━━━━━━━━━━━━━━━━━━━━━
1. movie.mp4 4.2 GB ~/Downloads
2. dataset.csv 1.8 GB ~/Documents/data
3. old_backup.zip 1.5 GB ~/Desktop
...
🔁 Duplicate files:
- screenshot.png (3 copies) 15 MB each
- document_v1.docx (2 copies) 8 MB each
Safety level: 🟡 User judgment required
Targets:
- node_modules and npm caches
- __pycache__ and venv directories
- .git folders in archived projects

Analysis script:
scripts/analyze_dev_env.py
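A manual equivalent for spotting rebuildable artifact directories (a sketch; artifact_dirs is illustrative, and ~/Workspace is an assumed project root, adjust to your layout):

```shell
#!/bin/sh
# Locate rebuildable artifact dirs (node_modules, __pycache__, .git)
# under a project root and report their sizes in KB, largest first.
# Read-only: nothing is deleted.
artifact_dirs() {
  root="$1"
  find "$root" -type d \
    \( -name node_modules -o -name __pycache__ -o -name .git \) \
    -prune -print0 2>/dev/null | xargs -0 du -sk 2>/dev/null | sort -rn
}

# Example: artifact_dirs ~/Workspace
```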
Example findings:
🐳 Docker resources:
- Unused images: 12 GB
- Stopped containers: 2 GB
- Build cache: 8 GB
- Orphaned volumes: 3 GB
Total potential: 25 GB
📦 Package managers:
- Homebrew cache: 5 GB
- npm cache: 3 GB
- pip cache: 1 GB
Total potential: 9 GB
🗂️ Old projects:
- archived-project-2022/.git 500 MB
- old-prototype/.git 300 MB
Cleanup commands (require confirmation):
# Homebrew cleanup (safe)
brew cleanup -s
# npm _npx only (safe - temporary packages)
rm -rf ~/.npm/_npx
# pip cache (use with caution)
pip cache purge
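Since every command in this category needs explicit sign-off, a minimal confirmation gate can enforce the rule (a sketch; confirm_then is a hypothetical wrapper that defaults to "no"):

```shell
#!/bin/sh
# Run a command only after an explicit yes; anything else skips it.
confirm_then() {
  printf 'About to run: %s\nProceed? [y/N] ' "$*"
  read -r reply
  case "$reply" in
    y|Y|yes|YES) "$@" ;;
    *) echo "Skipped." ;;
  esac
}

# Example: confirm_then brew cleanup -s
```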
Docker cleanup - SPECIAL HANDLING REQUIRED:
⚠️ NEVER use these commands:
# ❌ DANGEROUS - deletes ALL volumes without confirmation
docker volume prune -f
docker system prune -a --volumes
✅ Correct approach - per-volume confirmation:
# 1. List all volumes
docker volume ls
# 2. Identify which project each volume belongs to
docker volume inspect <volume_name>
# 3. Ask the user to confirm EACH project they want to delete
# Example: "Do you want to delete all volumes for the 'ragflow' project?"
# 4. Delete specific volumes only after confirmation
docker volume rm ragflow_mysql_data ragflow_redis_data
Safety level: 🟢 for Homebrew/npm cleanup; 🔴 Docker volumes require per-project confirmation
Use an agent team to analyze Docker resources in parallel for comprehensive coverage:
Agent 1 - Images:
# List all images sorted by size
docker images --format "table {{.ID}}\t{{.Repository}}:{{.Tag}}\t{{.Size}}\t{{.CreatedSince}}" | sort -k3 -h -r
# Identify dangling images (no tag)
docker images -f "dangling=true" --format "{{.ID}}\t{{.Size}}\t{{.CreatedSince}}"
# For each image, check if any container references it
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"
Agent 2 - Containers and Volumes:
# All containers with status
docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"
# All volumes with size
docker system df -v | grep -A 1000 "VOLUME NAME"
# Identify dangling volumes
docker volume ls -f dangling=true
# For each volume, check which container uses it
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}"
Agent 3 - System Level:
# Docker disk usage summary
docker system df
# Build cache
docker builder du
# Container logs size
for c in $(docker ps -a --format "{{.Names}}"); do
echo "$c: $(docker inspect --format='{{.LogPath}}' $c | xargs ls -lh 2>/dev/null | awk '{print $5}')"
done
Version management awareness: identify version-managed images (e.g., Supabase images managed by its CLI). When a newer version is confirmed running, older versions are safe to remove. Watch for Docker Compose naming conventions (dash vs underscore).
OrbStack users have additional considerations.
data.img.raw is a sparse file:
# Logical size (can show 8TB+, meaningless)
ls -lh ~/Library/OrbStack/data/data.img.raw
# Actual disk usage (this is what matters)
du -h ~/Library/OrbStack/data/data.img.raw
The difference between logical and actual size is normal. Only actual usage matters.
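Both numbers can be pulled in one helper (a sketch; sparse_report is illustrative, trying BSD stat on macOS with a GNU stat fallback):

```shell
#!/bin/sh
# Report a file's logical size vs its actual on-disk usage.
sparse_report() {
  f="$1"
  # BSD stat (macOS) first, GNU stat as fallback
  logical=$(stat -f '%z' "$f" 2>/dev/null || stat -c '%s' "$f")
  actual_kb=$(du -k "$f" | cut -f1)
  echo "logical=${logical} bytes, actual=$((actual_kb * 1024)) bytes"
}

# Example: sparse_report ~/Library/OrbStack/data/data.img.raw
```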
Post-cleanup disk reclamation: after cleaning Docker objects inside OrbStack, data.img.raw does NOT shrink automatically. Instruct the user to open OrbStack Settings → "Reclaim disk space" to compact the sparse file.
OrbStack logs: typically 1-2 MB total (~/Library/OrbStack/log/). Not worth cleaning.
Before deleting ANY Docker object, perform independent verification.
For images:
# Verify no container (running or stopped) references the image
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"
# If empty → safe to delete with: docker rmi <IMAGE_ID>
For volumes:
# Verify no container mounts the volume
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}"
# If empty → check if database volume (see below)
# If not database → safe to delete with: docker volume rm <VOLUME_NAME>
Database volume red-flag rule: if a volume name contains mysql, postgres, redis, mongo, or mariadb, content inspection is MANDATORY:
# Inspect database volume contents with temporary container
docker run --rm -v <VOLUME_NAME>:/data alpine ls -la /data
docker run --rm -v <VOLUME_NAME>:/data alpine du -sh /data/*
Only delete after the user confirms the data is not needed.
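The name-based red flag can be automated as a guard before any deletion (a sketch; is_database_volume is a hypothetical helper covering the patterns listed above):

```shell
#!/bin/sh
# Flag volume names that look like they hold database data.
# Returns 0 (red flag) on a match, 1 otherwise.
is_database_volume() {
  case "$1" in
    *mysql*|*postgres*|*redis*|*mongo*|*mariadb*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example gate before deletion:
# if is_database_volume "$vol"; then
#   echo "RED FLAG: inspect $vol contents before deleting"
# fi
```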
Mole (https://github.com/tw93/Mole) is a command-line (CLI) tool for comprehensive macOS cleanup. It provides interactive, terminal-based analysis and cleanup for caches, logs, developer tools, and more.
CRITICAL REQUIREMENTS:
- Mole requires a TTY; always run it inside tmux when working from Claude Code or scripts.
- Only mo --help is safe. Do NOT append --help to other commands.

Installation check and upgrade:
# Check if installed and get version
which mo && mo --version
# If not installed
brew install tw93/tap/mole
# Check for updates
brew info tw93/tap/mole | head -5
# Upgrade if needed
brew upgrade tw93/tap/mole
Using Mole with tmux (REQUIRED for Claude Code):
# Create tmux session for TTY environment
tmux new-session -d -s mole -x 120 -y 40
# Run analysis (safe, read-only)
tmux send-keys -t mole 'mo analyze' Enter
# Wait for scan (be patient - can take 5-10 minutes for large directories)
sleep 60
# Capture results
tmux capture-pane -t mole -p
# Cleanup when done
tmux kill-session -t mole
Available commands (from mo --help):
| Command | Safety | Description |
|---|---|---|
| mo --help | ✅ Safe | View all commands (the ONLY safe help) |
| mo analyze | ✅ Safe | Disk usage explorer (read-only) |
| mo status | ✅ Safe | System health monitor |
| mo clean --dry-run | ✅ Safe | Preview cleanup (no deletion) |
| mo clean | ⚠️ DANGEROUS | Actually deletes files |
| mo purge | ⚠️ DANGEROUS | Removes project artifacts |
| mo uninstall | ⚠️ DANGEROUS | Removes applications |
Reference guide: see references/mole_integration.md for the detailed tmux workflow and troubleshooting.
CRITICAL: comprehensive analysis requires multi-layer exploration, not just a top-level scan. This section documents the proven workflow for navigating Mole's TUI.
# Create session
tmux new-session -d -s mole -x 120 -y 40
# Start analysis
tmux send-keys -t mole 'mo analyze' Enter
# Wait for initial scan
sleep 8 && tmux capture-pane -t mole -p
# Navigation keys (send via tmux)
tmux send-keys -t mole Enter # Enter/expand selected directory
tmux send-keys -t mole Left # Go back to parent directory
tmux send-keys -t mole Down # Move to next item
tmux send-keys -t mole Up # Move to previous item
tmux send-keys -t mole 'q' # Quit TUI
# Capture current view
tmux capture-pane -t mole -p
Step 1: Top-level overview
# Start mo analyze, wait for initial menu
tmux send-keys -t mole 'mo analyze' Enter
sleep 8 && tmux capture-pane -t mole -p
# Example output:
# 1. Home 289.4 GB (58.5%)
# 2. App Library 145.2 GB (29.4%)
# 3. Applications 49.5 GB (10.0%)
# 4. System Library 10.3 GB (2.1%)
Step 2: Enter the largest directory (Home)
tmux send-keys -t mole Enter
sleep 10 && tmux capture-pane -t mole -p
# Example output:
# 1. Library 144.4 GB (49.9%)
# 2. Workspace 52.0 GB (18.0%)
# 3. .cache 19.3 GB (6.7%)
# 4. Applications 17.0 GB (5.9%)
# ...
Step 3: Drill into specific directories
# Go to .cache (3rd item: Down Down Enter)
tmux send-keys -t mole Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# Example output:
# 1. uv 10.3 GB (55.6%)
# 2. modelscope 5.5 GB (29.5%)
# 3. huggingface 887.8 MB (4.7%)
Step 4: Navigate back and explore another branch
# Go back to parent
tmux send-keys -t mole Left
sleep 2
# Navigate to different directory
tmux send-keys -t mole Down Down Down Down Enter # Go to .npm
sleep 5 && tmux capture-pane -t mole -p
Step 5: Deep-dive into Library
# Back to Home, then into Library
tmux send-keys -t mole Left
tmux send-keys -t mole Up Up Up Up Up Up Enter # Go to Library
sleep 10 && tmux capture-pane -t mole -p
# Example output:
# 1. Application Support 37.1 GB
# 2. Containers 35.4 GB
# 3. Developer 17.8 GB ← Xcode is here
# 4. Caches 8.2 GB
For comprehensive analysis, follow this exploration tree:
mo analyze
├── Home (Enter)
│ ├── Library (Enter)
│ │ ├── Developer (Enter) → Xcode/DerivedData, iOS DeviceSupport
│ │ ├── Caches (Enter) → Playwright, JetBrains, etc.
│ │ └── Application Support (Enter) → App data
│ ├── .cache (Enter) → uv, modelscope, huggingface
│ ├── .npm (Enter) → _cacache, _npx
│ ├── Downloads (Enter) → Large files to review
│ ├── .Trash (Enter) → Confirm trash contents
│ └── miniconda3/other dev tools (Enter) → Check last used time
├── App Library → Usually overlaps with ~/Library
└── Applications → Installed apps
| Directory | Scan Time | Notes |
|---|---|---|
| Top-level menu | 5-8 seconds | Fast |
| Home directory | 5-10 minutes | Large; be patient |
| ~/Library | 3-5 minutes | Many small files |
| Subdirectories | 2-30 seconds | Varies by size |
# 1. Create session
tmux new-session -d -s mole -x 120 -y 40
# 2. Start analysis and get overview
tmux send-keys -t mole 'mo analyze' Enter
sleep 8 && tmux capture-pane -t mole -p
# 3. Enter Home
tmux send-keys -t mole Enter
sleep 10 && tmux capture-pane -t mole -p
# 4. Enter .cache to see dev caches
tmux send-keys -t mole Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# 5. Back to Home, then to .npm
tmux send-keys -t mole Left
sleep 2
tmux send-keys -t mole Down Down Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# 6. Back to Home, enter Library
tmux send-keys -t mole Left
sleep 2
tmux send-keys -t mole Up Up Up Up Up Up Enter
sleep 10 && tmux capture-pane -t mole -p
# 7. Enter Developer to see Xcode
tmux send-keys -t mole Down Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# 8. Enter Xcode
tmux send-keys -t mole Enter
sleep 5 && tmux capture-pane -t mole -p
# 9. Enter DerivedData to see projects
tmux send-keys -t mole Enter
sleep 5 && tmux capture-pane -t mole -p
# 10. Cleanup
tmux kill-session -t mole
After multi-layer exploration, you will discover:
CRITICAL: the following items are often suggested for cleanup but should NOT be deleted in most cases. The value they provide outweighs the space they consume.
| Item | Size | Why NOT to Delete | Real Impact of Deletion |
|---|---|---|---|
| Xcode DerivedData | 10+ GB | Build cache saves 10-30 min per full rebuild | Next build takes 10-30 minutes longer |
| npm _cacache | 5+ GB | Downloaded packages cached locally | npm install redownloads everything (30 min-2 hr in China) |
| ~/.cache/uv | 10+ GB | Python package cache | Every Python project reinstalls deps from PyPI |
| Playwright browsers | 3-4 GB | Browser binaries for automated testing | Redownload 2 GB+ each time (30 min-1 hr) |
| iOS DeviceSupport | 2-3 GB | Required for device debugging | Redownloaded from Apple when a device connects |
| Docker stopped containers | <500 MB | May be restarted at any time via docker start | Container state lost; must be recreated |
| ~/.cache/huggingface | Varies | AI model cache | Redownload large models (hours) |
| ~/.cache/modelscope | Varies | AI model cache (China) | Same as above |
| JetBrains caches | 1+ GB | IDE indexes and caches | IDE takes 5-10 minutes to reindex |
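When weighing one of these caches, its last-modified time is a quick (if imperfect) proxy for whether it is still earning its keep (a sketch; last_touched is a hypothetical helper that tries BSD stat on macOS, then GNU stat):

```shell
#!/bin/sh
# Print when a path was last modified, a rough proxy for last use.
last_touched() {
  # BSD stat (macOS) first, then GNU stat as fallback
  stat -f '%Sm %N' -t '%Y-%m-%d' "$1" 2>/dev/null || stat -c '%y %n' "$1"
}

# Example: last_touched ~/.cache/uv
```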
The vanity trap: showing "Cleaned 50 GB!" feels good, but:
The right mindset: "I found 50 GB of caches. Here is why most of them are actually valuable and should be kept..."
| Item | Why Safe | Impact |
|---|---|---|
| Trash | User already deleted these files | None - user's own decision |
| Homebrew old versions | Replaced by newer versions | Rare: cannot roll back to an old version |
| npm _npx | Temporary npx executions | Minor: npx re-downloads on next use |
| Orphaned app remnants | App already uninstalled | None - the app no longer exists |
| Specific unused Docker volumes | Projects confirmed abandoned | None - if truly abandoned |
Every cleanup report MUST follow this format and include impact analysis:
## Disk Analysis Report
### Classification Legend
| Symbol | Meaning |
|--------|---------|
| 🟢 | **Absolutely safe** - no negative impact, truly unused |
| 🟡 | **Trade-off required** - useful cache, deletion has a cost |
| 🔴 | **Do not delete** - contains valuable data or is actively used |
### Findings
| Item | Size | Classification | What It Is | Impact If Deleted |
|------|------|----------------|------------|-------------------|
| Trash | 643 MB | 🟢 | Files you deleted | None |
| npm _npx | 2.1 GB | 🟢 | Temp npx packages | Minor redownload |
| npm _cacache | 5 GB | 🟡 | Package cache | 30 min-2 hr redownload |
| DerivedData | 10 GB | 🟡 | Xcode build cache | 10-30 min rebuild |
| Docker volumes | 11 GB | 🔴 | Project databases | **DATA LOSS** |
### Recommendation
Only items marked 🟢 are recommended for cleanup.
Items marked 🟡 require your judgment based on usage patterns.
Items marked 🔴 require explicit per-item confirmation.
Docker reports MUST list every individual object, not just categories:
#### Dangling Images (no tag, no container references)
| Image ID | Size | Created | Safe? |
|----------|------|---------|-------|
| a02c40cc28df | 884 MB | 2 months ago | ✅ No container uses it |
| 555434521374 | 231 MB | 3 months ago | ✅ No container uses it |
#### Stopped Containers
| Name | Image | Status | Size |
|------|-------|--------|------|
| ragflow-mysql | mysql:8.0 | Exited 2 weeks ago | 1.2 GB |
#### Volumes
| Volume | Size | Mounted By | Contains |
|--------|------|------------|----------|
| ragflow_mysql_data | 1.8 GB | ragflow-mysql | MySQL databases |
| redis_data | 500 MB | (none - dangling) | Redis dump |
#### 🔴 Database Volumes Requiring Inspection
| Volume | Inspected Contents | User Decision |
|--------|--------------------|---------------|
| ragflow_mysql_data | 8 databases, 45 tables | Still needed? |
After multi-layer exploration, present findings using this proven template:
## 📊 Disk Space Deep-Analysis Report
**Analysis date**: YYYY-MM-DD
**Tools used**: Mole CLI + multi-layer directory exploration
**Principle**: safety first, value over vanity
---
### Overview
| Area | Total Usage | Key Finding |
|------|--------|----------|
| **Home** | XXX GB | Library accounts for about half (XXX GB) |
| **App Library** | XXX GB | Overlaps with Home/Library totals |
| **Applications** | XXX GB | Application bundles themselves |
---
### 🟢 Absolutely Safe to Delete (~X.X GB)
| Item | Size | Location | Impact After Deletion | Cleanup Command |
|------|------|------|-----------|---------|
| **Trash** | XXX MB | ~/.Trash | None - files you already chose to delete | Empty Trash |
| **npm _npx** | X.X GB | ~/.npm/_npx | Next npx command re-downloads | `rm -rf ~/.npm/_npx` |
| **Homebrew old versions** | XX MB | /opt/homebrew | None - replaced by newer versions | `brew cleanup --prune=0` |
**Trash contents preview**:
- [list the main files]
---
### 🟡 Items That Need Your Confirmation
#### 1. [Item name] (X.X GB) - [status description]
| Subdirectory | Size | Last Used |
|--------|------|----------|
| [subdir 1] | X.X GB | >X months |
| [subdir 2] | X.X GB | >X months |
**Question**: [question the user must answer]
---
#### 2. Old Files in Downloads (X.X GB)
| File/Directory | Size | Age | Recommendation |
|-----------|------|------|------|
| [file 1] | X.X GB | - | [recommendation] |
| [file 2] | XXX MB | >X months | [recommendation] |
**Recommendation**: review Downloads manually and delete files no longer needed.
---
#### 3. Volumes from Inactive Docker Projects
| Project Prefix | Likely Data | Needs Your Confirmation |
|---------|--------------|-----------|
| `project1_*` | MySQL, Redis | Still in use? |
| `project2_*` | Postgres | Still in use? |
**Note**: I will not use `docker volume prune -f`; I will only delete a specific project's volumes after you confirm.
---
### 🔴 Items NOT Recommended for Deletion (valuable caches)
| Item | Size | Why Keep It |
|------|------|-------------|
| **Xcode DerivedData** | XX GB | Build cache for [project]; next build takes X extra minutes if deleted |
| **npm _cacache** | X.X GB | All previously downloaded npm packages; deleting forces a full redownload |
| **~/.cache/uv** | XX GB | Python package cache; redownloading is slow on China networks |
| [other valuable caches] | X.X GB | [reason to keep] |
---
### 📋 Other Findings
| Item | Size | Notes |
|------|------|------|
| **OrbStack/Docker** | XX GB | Normal VM/container footprint |
| [other findings] | X.X GB | [notes] |
---
### ✅ Recommended Actions
**Execute immediately** (no confirmation needed):
```bash
# 1. Empty Trash (XXX MB)
# Manual: Finder → Empty Trash
# 2. npm _npx (X.X GB)
rm -rf ~/.npm/_npx
# 3. Homebrew old versions (XX MB)
brew cleanup --prune=0
```
Estimated space freed: ~X.X GB
Execute after your confirmation:
### Report Quality Checklist
Before presenting the report, verify:
- [ ] Every item has an "impact if deleted" explanation
- [ ] 🟢 items are genuinely safe (Trash, _npx, old versions)
- [ ] 🟡 items include what the user needs to decide (age, usage patterns)
- [ ] 🔴 items explain why they should be kept
- [ ] Docker volumes are listed per project and confirmed individually
Intelligently analyze macOS disk usage and provide actionable cleanup recommendations to reclaim storage space. This skill follows a safety-first philosophy : analyze thoroughly, present clear findings, and require explicit user confirmation before executing any deletions.
Target users : Users with basic technical knowledge who understand file systems but need guidance on what's safe to delete on macOS.
rm -rf, mo clean, etc.) without explicit user confirmation. No shortcuts, no workarounds.ABSOLUTE PROHIBITIONS:
docker image prune, docker volume prune, docker system prune, or ANY prune-family command (exception: docker builder prune is safe — build cache contains only intermediate layers, never user data)docker container prune — stopped containers may be restarted at any timerm -rf on user directories without explicit confirmationmo clean without --dry-run preview first--help to Mole commands (only is safe)User reports disk space issues
↓
Quick Diagnosis
↓
┌──────┴──────┐
│ │
Immediate Deep Analysis
Cleanup (continue below)
│ │
└──────┬──────┘
↓
Present Findings
↓
User Confirms
↓
Execute Cleanup
↓
Verify Results
Primary tool : Use Mole for disk analysis. It provides comprehensive, categorized results.
# Check Mole installation and version
which mo && mo --version
# If not installed
brew install tw93/tap/mole
# Check for updates (Mole updates frequently)
brew info tw93/tap/mole | head -5
# Upgrade if outdated
brew upgrade tw93/tap/mole
IMPORTANT : Use mo analyze as the primary analysis tool, NOT mo clean --dry-run.
| Command | Purpose | Use When |
|---|---|---|
mo analyze | Interactive disk usage explorer (TUI tree view) | PRIMARY : Understanding what's consuming space |
mo clean --dry-run | Preview cleanup categories | SECONDARY : Only after mo analyze to see cleanup preview |
Why prefermo analyze:
IMPORTANT : Mole requires TTY. Always use tmux from Claude Code.
CRITICAL TIMING NOTE : Home directory scans are SLOW (5-10 minutes or longer for large directories). Inform user upfront and wait patiently.
# Create tmux session
tmux new-session -d -s mole -x 120 -y 40
# Run disk analysis (PRIMARY tool - interactive TUI)
tmux send-keys -t mole 'mo analyze' Enter
# Wait for scan - BE PATIENT!
# Home directory scanning typically takes 5-10 minutes
# Report progress to user regularly
sleep 60 && tmux capture-pane -t mole -p
# Navigate the TUI with arrow keys
tmux send-keys -t mole Down # Move to next item
tmux send-keys -t mole Enter # Expand/select item
tmux send-keys -t mole 'q' # Quit when done
Alternative: Cleanup preview (use AFTER mo analyze)
# Run dry-run preview (SAFE - no deletion)
tmux send-keys -t mole 'mo clean --dry-run' Enter
# Wait for scan (report progress to user every 30 seconds)
# Be patient! Large directories take 5-10 minutes
sleep 30 && tmux capture-pane -t mole -p
Report scan progress to user regularly:
📊 Disk Analysis in Progress...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⏱️ Elapsed: 2 minutes
Current status:
✅ Applications: 49.5 GB (complete)
✅ System Library: 10.3 GB (complete)
⏳ Home: scanning... (this may take 5-10 minutes)
⏳ App Library: pending
I'm waiting patiently for the scan to complete.
Will report again in 30 seconds...
After scan completes, present structured results:
📊 Disk Space Analysis (via Mole)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Free space: 27 GB
🧹 Recoverable Space (dry-run preview):
➤ User Essentials
• User app cache: 16.67 GB
• User app logs: 102.3 MB
• Trash: 642.9 MB
➤ Browser Caches
• Chrome cache: 1.90 GB
• Safari cache: 4 KB
➤ Developer Tools
• uv cache: 9.96 GB
• npm cache: (detected)
• Docker cache: (detected)
• Homebrew cache: (detected)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total recoverable: ~30 GB
⚠️ This was a dry-run preview. No files were deleted.
Scan the following categories systematically. Reference references/cleanup_targets.md for detailed explanations.
Locations to analyze:
~/Library/Caches/* - User application caches/Library/Caches/* - System-wide caches (requires sudo)~/Library/Logs/* - Application logs/var/log/* - System logs (requires sudo)Analysis script:
scripts/analyze_caches.py --user-only
Safety level : 🟢 Generally safe to delete (apps regenerate caches)
Exceptions to preserve:
Locations to analyze:
~/Library/Application Support/* - App data~/Library/Preferences/* - Preference files~/Library/Containers/* - Sandboxed app dataAnalysis approach:
/Applications~/Library/Application SupportAnalysis script:
scripts/find_app_remnants.py
Safety level : 🟡 Caution required
Analysis script:
scripts/analyze_large_files.py --threshold 100MB --path ~
Find duplicates (optional, resource-intensive):
# Use fdupes if installed
if command -v fdupes &> /dev/null; then
fdupes -r ~/Documents ~/Downloads
fi
Present findings:
📦 Large Files (>100MB):
━━━━━━━━━━━━━━━━━━━━━━━━
1. movie.mp4 4.2 GB ~/Downloads
2. dataset.csv 1.8 GB ~/Documents/data
3. old_backup.zip 1.5 GB ~/Desktop
...
🔁 Duplicate Files:
- screenshot.png (3 copies) 15 MB each
- document_v1.docx (2 copies) 8 MB each
Safety level : 🟡 User judgment required
Targets:
node_modules, npm cache__pycache__, venv.git folders in archived projectsAnalysis script:
scripts/analyze_dev_env.py
Example findings:
🐳 Docker Resources:
- Unused images: 12 GB
- Stopped containers: 2 GB
- Build cache: 8 GB
- Orphaned volumes: 3 GB
Total potential: 25 GB
📦 Package Managers:
- Homebrew cache: 5 GB
- npm cache: 3 GB
- pip cache: 1 GB
Total potential: 9 GB
🗂️ Old Projects:
- archived-project-2022/.git 500 MB
- old-prototype/.git 300 MB
Cleanup commands (require confirmation):
# Homebrew cleanup (safe)
brew cleanup -s
# npm _npx only (safe - temporary packages)
rm -rf ~/.npm/_npx
# pip cache (use with caution)
pip cache purge
Docker cleanup - SPECIAL HANDLING REQUIRED:
⚠️ NEVER use these commands:
# ❌ DANGEROUS - deletes ALL volumes without confirmation
docker volume prune -f
docker system prune -a --volumes
✅ Correct approach - per-volume confirmation:
# 1. List all volumes
docker volume ls
# 2. Identify which projects each volume belongs to
docker volume inspect <volume_name>
# 3. Ask user to confirm EACH project they want to delete
# Example: "Do you want to delete all volumes for 'ragflow' project?"
# 4. Delete specific volumes only after confirmation
docker volume rm ragflow_mysql_data ragflow_redis_data
Safety level : 🟢 Homebrew/npm cleanup, 🔴 Docker volumes require per-project confirmation
Use agent team to analyze Docker resources in parallel for comprehensive coverage:
Agent 1 — Images :
# List all images sorted by size
docker images --format "table {{.ID}}\t{{.Repository}}:{{.Tag}}\t{{.Size}}\t{{.CreatedSince}}" | sort -k3 -h -r
# Identify dangling images (no tag)
docker images -f "dangling=true" --format "{{.ID}}\t{{.Size}}\t{{.CreatedSince}}"
# For each image, check if any container references it
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"
Agent 2 — Containers and Volumes :
# All containers with status
docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"
# All volumes with size
docker system df -v | grep -A 1000 "VOLUME NAME"
# Identify dangling volumes
docker volume ls -f dangling=true
# For each volume, check which container uses it
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}"
Agent 3 — System Level :
# Docker disk usage summary
docker system df
# Build cache
docker builder du
# Container logs size
for c in $(docker ps -a --format "{{.Names}}"); do
echo "$c: $(docker inspect --format='{{.LogPath}}' $c | xargs ls -lh 2>/dev/null | awk '{print $5}')"
done
Version Management Awareness : Identify version-managed images (e.g., Supabase managed by CLI). When newer versions are confirmed running, older versions are safe to remove. Pay attention to Docker Compose naming conventions (dash vs underscore).
OrbStack users have additional considerations.
data.img.raw is a Sparse File :
# Logical size (can show 8TB+, meaningless)
ls -lh ~/Library/OrbStack/data/data.img.raw
# Actual disk usage (this is what matters)
du -h ~/Library/OrbStack/data/data.img.raw
The logical vs actual size difference is normal. Only actual usage counts.
Post-Cleanup: Reclaim Disk Space : After cleaning Docker objects inside OrbStack, data.img.raw does NOT shrink automatically. Instruct user: Open OrbStack Settings → "Reclaim disk space" to compact the sparse file.
OrbStack Logs : Typically 1-2 MB total (~/Library/OrbStack/log/). Not worth cleaning.
Before deleting ANY Docker object, perform independent verification.
For Images :
# Verify no container (running or stopped) references the image
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"
# If empty → safe to delete with: docker rmi <IMAGE_ID>
For Volumes :
# Verify no container mounts the volume
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}"
# If empty → check if database volume (see below)
# If not database → safe to delete with: docker volume rm <VOLUME_NAME>
Database Volume Red Flag Rule : If volume name contains mysql, postgres, redis, mongo, or mariadb, MANDATORY content inspection:
# Inspect database volume contents with temporary container
docker run --rm -v <VOLUME_NAME>:/data alpine ls -la /data
docker run --rm -v <VOLUME_NAME>:/data alpine du -sh /data/*
Only delete after user confirms the data is not needed.
Mole (https://github.com/tw93/Mole) is a command-line interface (CLI) tool for comprehensive macOS cleanup. It provides interactive terminal-based analysis and cleanup for caches, logs, developer tools, and more.
CRITICAL REQUIREMENTS:
tmux when running from Claude Code or scripts.mo --help is safe. Do NOT append --help to other commands.Installation check and upgrade:
# Check if installed and get version
which mo && mo --version
# If not installed
brew install tw93/tap/mole
# Check for updates
brew info tw93/tap/mole | head -5
# Upgrade if needed
brew upgrade tw93/tap/mole
Using Mole with tmux (REQUIRED for Claude Code):
# Create tmux session for TTY environment
tmux new-session -d -s mole -x 120 -y 40
# Run analysis (safe, read-only)
tmux send-keys -t mole 'mo analyze' Enter
# Wait for scan (be patient - can take 5-10 minutes for large directories)
sleep 60
# Capture results
tmux capture-pane -t mole -p
# Cleanup when done
tmux kill-session -t mole
Available commands (frommo --help):
| Command | Safety | Description |
|---|---|---|
mo --help | ✅ Safe | View all commands (ONLY safe help) |
mo analyze | ✅ Safe | Disk usage explorer (read-only) |
mo status | ✅ Safe | System health monitor |
mo clean --dry-run | ✅ Safe | Preview cleanup (no deletion) |
mo clean | ⚠️ DANGEROUS |
Reference guide: See references/mole_integration.md for detailed tmux workflow and troubleshooting.
CRITICAL : For comprehensive analysis, you MUST perform multi-layer exploration, not just top-level scans. This section documents the proven workflow for navigating Mole's TUI.
# Create session
tmux new-session -d -s mole -x 120 -y 40
# Start analysis
tmux send-keys -t mole 'mo analyze' Enter
# Wait for initial scan
sleep 8 && tmux capture-pane -t mole -p
# Navigation keys (send via tmux)
tmux send-keys -t mole Enter # Enter/expand selected directory
tmux send-keys -t mole Left # Go back to parent directory
tmux send-keys -t mole Down # Move to next item
tmux send-keys -t mole Up # Move to previous item
tmux send-keys -t mole 'q' # Quit TUI
# Capture current view
tmux capture-pane -t mole -p
Step 1: Top-level overview
# Start mo analyze, wait for initial menu
tmux send-keys -t mole 'mo analyze' Enter
sleep 8 && tmux capture-pane -t mole -p
# Example output:
# 1. Home 289.4 GB (58.5%)
# 2. App Library 145.2 GB (29.4%)
# 3. Applications 49.5 GB (10.0%)
# 4. System Library 10.3 GB (2.1%)
Step 2: Enter largest directory (Home)
tmux send-keys -t mole Enter
sleep 10 && tmux capture-pane -t mole -p
# Example output:
# 1. Library 144.4 GB (49.9%)
# 2. Workspace 52.0 GB (18.0%)
# 3. .cache 19.3 GB (6.7%)
# 4. Applications 17.0 GB (5.9%)
# ...
Step 3: Drill into specific directories
# Go to .cache (3rd item: Down Down Enter)
tmux send-keys -t mole Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# Example output:
# 1. uv 10.3 GB (55.6%)
# 2. modelscope 5.5 GB (29.5%)
# 3. huggingface 887.8 MB (4.7%)
Step 4: Navigate back and explore another branch
# Go back to parent
tmux send-keys -t mole Left
sleep 2
# Navigate to different directory
tmux send-keys -t mole Down Down Down Down Enter # Go to .npm
sleep 5 && tmux capture-pane -t mole -p
Step 5: Deep dive into Library
# Back to Home, then into Library
tmux send-keys -t mole Left
tmux send-keys -t mole Up Up Up Up Up Up Enter # Go to Library
sleep 10 && tmux capture-pane -t mole -p
# Example output:
# 1. Application Support 37.1 GB
# 2. Containers 35.4 GB
# 3. Developer 17.8 GB ← Xcode is here
# 4. Caches 8.2 GB
For comprehensive analysis, follow this exploration tree:
mo analyze
├── Home (Enter)
│ ├── Library (Enter)
│ │ ├── Developer (Enter) → Xcode/DerivedData, iOS DeviceSupport
│ │ ├── Caches (Enter) → Playwright, JetBrains, etc.
│ │ └── Application Support (Enter) → App data
│ ├── .cache (Enter) → uv, modelscope, huggingface
│ ├── .npm (Enter) → _cacache, _npx
│ ├── Downloads (Enter) → Large files to review
│ ├── .Trash (Enter) → Confirm trash contents
│ └── miniconda3/other dev tools (Enter) → Check last used time
├── App Library → Usually overlaps with ~/Library
└── Applications → Installed apps
| Directory | Scan Time | Notes |
|---|---|---|
| Top-level menu | 5-8 seconds | Fast |
| Home directory | 5-10 minutes | Large, be patient |
| ~/Library | 3-5 minutes | Many small files |
| Subdirectories | 2-30 seconds | Varies by size |
# 1. Create session
tmux new-session -d -s mole -x 120 -y 40
# 2. Start analysis and get overview
tmux send-keys -t mole 'mo analyze' Enter
sleep 8 && tmux capture-pane -t mole -p
# 3. Enter Home
tmux send-keys -t mole Enter
sleep 10 && tmux capture-pane -t mole -p
# 4. Enter .cache to see dev caches
tmux send-keys -t mole Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# 5. Back to Home, then to .npm
tmux send-keys -t mole Left
sleep 2
tmux send-keys -t mole Down Down Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# 6. Back to Home, enter Library
tmux send-keys -t mole Left
sleep 2
tmux send-keys -t mole Up Up Up Up Up Up Enter
sleep 10 && tmux capture-pane -t mole -p
# 7. Enter Developer to see Xcode
tmux send-keys -t mole Down Down Down Enter
sleep 5 && tmux capture-pane -t mole -p
# 8. Enter Xcode
tmux send-keys -t mole Enter
sleep 5 && tmux capture-pane -t mole -p
# 9. Enter DerivedData to see projects
tmux send-keys -t mole Enter
sleep 5 && tmux capture-pane -t mole -p
# 10. Cleanup
tmux kill-session -t mole
After multi-layer exploration, you will discover:
CRITICAL : The following items are often suggested for cleanup but should NOT be deleted in most cases. They provide significant value that outweighs the space they consume.
| Item | Size | Why NOT to Delete | Real Impact of Deletion |
|---|---|---|---|
| Xcode DerivedData | 10+ GB | Build cache saves 10-30 min per full rebuild | Next build takes 10-30 minutes longer |
| npm _cacache | 5+ GB | Downloaded packages cached locally | npm install redownloads everything (30min-2hr in China) |
| ~/.cache/uv | 10+ GB | Python package cache | Every Python project reinstalls deps from PyPI |
| Playwright browsers | 3-4 GB | Browser binaries for automation testing | Redownload 2GB+ each time (30min-1hr) |
| iOS DeviceSupport | 2-3 GB | Required for device debugging |
The vanity trap : Showing "Cleaned 50GB!" feels good but:
The right mindset : "I found 50GB of caches. Here's why most of them are actually valuable and should be kept..."
| Item | Why Safe | Impact |
|---|---|---|
| Trash | User already deleted these files | None - user's decision |
| Homebrew old versions | Replaced by newer versions | Rare: can't rollback to old version |
| npm _npx | Temporary npx executions | Minor: npx re-downloads on next use |
| Orphaned app remnants | App already uninstalled | None - app doesn't exist |
| Specific unused Docker volumes | Projects confirmed abandoned | None - if truly abandoned |
Every cleanup report MUST follow this format with impact analysis:
## Disk Analysis Report
### Classification Legend
| Symbol | Meaning |
|--------|---------|
| 🟢 | **Absolutely Safe** - No negative impact, truly unused |
| 🟡 | **Trade-off Required** - Useful cache, deletion has cost |
| 🔴 | **Do Not Delete** - Contains valuable data or actively used |
### Findings
| Item | Size | Classification | What It Is | Impact If Deleted |
|------|------|----------------|------------|-------------------|
| Trash | 643 MB | 🟢 | Files you deleted | None |
| npm _npx | 2.1 GB | 🟢 | Temp npx packages | Minor redownload |
| npm _cacache | 5 GB | 🟡 | Package cache | 30min-2hr redownload |
| DerivedData | 10 GB | 🟡 | Xcode build cache | 10-30min rebuild |
| Docker volumes | 11 GB | 🔴 | Project databases | **DATA LOSS** |
### Recommendation
Only items marked 🟢 are recommended for cleanup.
Items marked 🟡 require your judgment based on usage patterns.
Items marked 🔴 require explicit confirmation per-item.
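The Findings table can also be generated programmatically, so every row is forced to carry a classification and an impact statement. A hedged sketch (these function names are illustrative, not part of this skill's `scripts/`):

```python
# Map classification levels to the legend symbols used in the report
LEGEND = {"green": "🟢", "yellow": "🟡", "red": "🔴"}

def finding_row(item, size, level, what, impact):
    """One markdown row for the Findings table (level: green/yellow/red)."""
    return f"| {item} | {size} | {LEGEND[level]} | {what} | {impact} |"

def render_findings(findings):
    """Render the full Findings table from (item, size, level, what, impact) tuples."""
    header = ("| Item | Size | Classification | What It Is | Impact If Deleted |\n"
              "|------|------|----------------|------------|-------------------|")
    return "\n".join([header] + [finding_row(*f) for f in findings])
```

Because `finding_row` requires an `impact` argument, it is impossible to emit a row that skips the "Impact If Deleted" column the checklist below demands.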
Docker reports MUST list every individual object, not just categories:
#### Dangling Images (no tag, no container references)
| Image ID | Size | Created | Safe? |
|----------|------|---------|-------|
| a02c40cc28df | 884 MB | 2 months ago | ✅ No container uses it |
| 555434521374 | 231 MB | 3 months ago | ✅ No container uses it |
#### Stopped Containers
| Name | Image | Status | Size |
|------|-------|--------|------|
| ragflow-mysql | mysql:8.0 | Exited 2 weeks ago | 1.2 GB |
#### Volumes
| Volume | Size | Mounted By | Contains |
|--------|------|------------|----------|
| ragflow_mysql_data | 1.8 GB | ragflow-mysql | MySQL databases |
| redis_data | 500 MB | (none - dangling) | Redis dump |
#### 🔴 Database Volumes Requiring Inspection
| Volume | Inspected Contents | User Decision |
|--------|--------------------|---------------|
| ragflow_mysql_data | 8 databases, 45 tables | Still need? |
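The per-object rows can be built from Docker's machine-readable output: `docker images --filter dangling=true --format '{{.ID}}\t{{.Size}}\t{{.CreatedSince}}'` emits one tab-separated line per dangling image. A sketch that turns captured output into the table above (parser and renderer names are illustrative):

```python
def parse_dangling_images(output: str):
    """Parse tab-separated docker-images output (ID, Size, CreatedSince)
    into one dict per dangling image for the report table."""
    rows = []
    for line in output.strip().splitlines():
        image_id, size, created = line.split("\t")
        rows.append({"id": image_id, "size": size, "created": created,
                     "safe": "✅ No container uses it"})
    return rows

def render_image_table(rows):
    """Render the 'Dangling Images' markdown table, one row per object."""
    lines = ["| Image ID | Size | Created | Safe? |",
             "|----------|------|---------|-------|"]
    lines += [f"| {r['id']} | {r['size']} | {r['created']} | {r['safe']} |"
              for r in rows]
    return "\n".join(lines)
```

Listing objects individually is the point: the user approves or rejects each image, instead of a blanket `prune`.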
After multi-layer exploration, present findings using this proven template:
## 📊 Disk Space Deep-Analysis Report
**Analysis date**: YYYY-MM-DD
**Tools used**: Mole CLI + multi-layer directory exploration
**Principles**: Safety first; value over vanity
---
### Overview
| Area | Total Usage | Key Finding |
|------|--------|----------|
| **Home** | XXX GB | Library accounts for half (XXX GB) |
| **App Library** | XXX GB | Overlaps with Home/Library totals |
| **Applications** | XXX GB | Application bundles themselves |
---
### 🟢 Absolutely safe to delete (approx. X.X GB)
| Item | Size | Location | Impact If Deleted | Cleanup Command |
|------|------|------|-----------|---------|
| **Trash** | XXX MB | ~/.Trash | None - files you already decided to delete | Empty Trash |
| **npm _npx** | X.X GB | ~/.npm/_npx | Next `npx` run redownloads | `rm -rf ~/.npm/_npx` |
| **Old Homebrew versions** | XX MB | /opt/homebrew | None - replaced by newer versions | `brew cleanup --prune=0` |
**Trash contents preview**:
- [list the main files]
---
### 🟡 Items that need your confirmation
#### 1. [Item name] (X.X GB) - [status description]
| Subdirectory | Size | Last Used |
|--------|------|----------|
| [subdirectory 1] | X.X GB | >X months |
| [subdirectory 2] | X.X GB | >X months |
**Question**: [question the user needs to answer]
---
#### 2. Old files in Downloads (X.X GB)
| File/Directory | Size | Age | Recommendation |
|-----------|------|------|------|
| [file 1] | X.X GB | - | [recommendation] |
| [file 2] | XXX MB | >X months | [recommendation] |
**Recommendation**: Review Downloads manually and delete files you no longer need.
---
#### 3. Volumes from disused Docker projects
| Project Prefix | Likely Data | Your Confirmation Needed |
|---------|--------------|-----------|
| `project1_*` | MySQL, Redis | Still in use? |
| `project2_*` | Postgres | Still in use? |
**Note**: I will not run `docker volume prune -f`; I will only delete specific projects' volumes after you confirm.
---
### 🔴 Items not recommended for deletion (valuable caches)
| Item | Size | Why Keep It |
|------|------|-------------|
| **Xcode DerivedData** | XX GB | Build cache for [project]; the next build takes X minutes without it |
| **npm _cacache** | X.X GB | Every npm package ever downloaded; deletion forces full redownload |
| **~/.cache/uv** | XX GB | Python package cache; redownloading is very slow on Chinese networks |
| [other valuable caches] | X.X GB | [reason to keep] |
---
### 📋 Other findings
| Item | Size | Notes |
|------|------|------|
| **OrbStack/Docker** | XX GB | Normal VM/container usage |
| [other findings] | X.X GB | [notes] |
---
### ✅ Recommended actions
**Execute immediately** (no confirmation needed):
```bash
# 1. Empty the Trash (XXX MB)
#    Manual: Finder → Empty Trash

# 2. npm _npx (X.X GB)
rm -rf ~/.npm/_npx

# 3. Old Homebrew versions (XX MB)
brew cleanup --prune=0
```
**Estimated space freed**: ~X.X GB

**Execute after your confirmation**:
### Report Quality Checklist
Before presenting the report, verify:
- [ ] Every item has "Impact If Deleted" explanation
- [ ] 🟢 items are truly safe (Trash, _npx, old versions)
- [ ] 🟡 items require user decision (age info, usage patterns)
- [ ] 🔴 items explain WHY they should be kept
- [ ] Docker volumes listed by project, not blanket prune
- [ ] Network environment considered (China = slow redownload)
- [ ] No recommendations to delete useful caches just to inflate numbers
- [ ] Clear action items with exact commands
## Step 4: Present Recommendations
Format findings into actionable recommendations with risk levels:
```markdown
# macOS Cleanup Recommendations
## Summary
Total space recoverable: ~XX GB
Current usage: XX%
## Recommended Actions
### 🟢 Safe to Execute (Low Risk)
These are safe to delete and will be regenerated as needed:
1. **Empty Trash** (~12 GB)
- Location: ~/.Trash
- Command: `rm -rf ~/.Trash/*`
2. **Clear System Caches** (~45 GB)
- Location: ~/Library/Caches
- Command: `rm -rf ~/Library/Caches/*`
- Note: Apps may be slightly slower on next launch
3. **Remove Homebrew Cache** (~5 GB)
- Command: `brew cleanup -s`
### 🟡 Review Recommended (Medium Risk)
Review these items before deletion:
1. **Large Downloads** (~38 GB)
- Location: ~/Downloads
- Action: Manually review and delete unneeded files
- Files: [list top 10 largest files]
2. **Application Remnants** (~8 GB)
- Apps: [list detected uninstalled apps]
- Locations: [list paths]
- Action: Confirm apps are truly uninstalled before deleting data
### 🔴 Keep Unless Certain (High Risk)
Only delete if you know what you're doing:
1. **Docker Volumes** (~3 GB)
- May contain important data
- Review with: `docker volume ls`
2. **Time Machine Local Snapshots** (~XX GB)
- Automatic backups, will be deleted when space needed
- Command to check: `tmutil listlocalsnapshots /`
```

**CRITICAL**: Never execute deletions without explicit user confirmation.
Interactive confirmation flow:
```python
# Example from scripts/safe_delete.py
def confirm_delete(path: str, size: str, description: str) -> bool:
    """
    Ask user to confirm deletion.

    Args:
        path: File/directory path
        size: Human-readable size
        description: What this file/directory is

    Returns:
        True if user confirms, False otherwise
    """
    print(f"\n🗑️ Confirm Deletion")
    print(f"━━━━━━━━━━━━━━━━━━")
    print(f"Path: {path}")
    print(f"Size: {size}")
    print(f"Description: {description}")
    response = input("\nDelete this item? [y/N]: ").strip().lower()
    return response == 'y'
```
For batch operations:
```python
def batch_confirm(items: list) -> list:
    """
    Show all items, ask for batch confirmation.

    Returns list of items user approved.
    """
    print("\n📋 Items to Delete:")
    print("━━━━━━━━━━━━━━━━━━")
    for i, item in enumerate(items, 1):
        print(f"{i}. {item['path']} ({item['size']})")
    print("\nOptions:")
    print("  'all'   - Delete all items")
    print("  '1,3,5' - Delete specific items by number")
    print("  'none'  - Cancel")
    response = input("\nYour choice: ").strip().lower()
    if response == 'none':
        return []
    elif response == 'all':
        return items
    else:
        # Parse numbers; treat unparseable input as a cancel (fail safe)
        try:
            indices = [int(x.strip()) - 1 for x in response.split(',')]
        except ValueError:
            return []
        return [items[i] for i in indices if 0 <= i < len(items)]
```
After cleanup, verify the results and report back:
```bash
# Compare before/after
df -h /

# Calculate space recovered
# (handled by scripts/cleanup_report.py)
```
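The before/after arithmetic that `scripts/cleanup_report.py` handles can be sketched in a few lines (this is an illustration of the calculation, not that script's actual implementation; real samples would come from `shutil.disk_usage("/")`):

```python
def recovered_report(before_used: int, after_used: int, total: int) -> str:
    """Summarize space recovered between two disk-usage samples (all in bytes)."""
    gb = 1024 ** 3
    freed = before_used - after_used
    return (f"Before: {before_used // gb} GB used ({100 * before_used // total}%)\n"
            f"After:  {after_used // gb} GB used ({100 * after_used // total}%)\n"
            f"Recovered: {freed // gb} GB")
```

Sampling once before cleanup and once after, then diffing, avoids double-counting space that macOS reclaims on its own (e.g. purgeable snapshots).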
Report format:
```
✅ Cleanup Complete!

Before: 450 GB used (90%)
After:  385 GB used (77%)
━━━━━━━━━━━━━━━━━━━━━━━━
Recovered: 65 GB

Breakdown:
- System caches: 45 GB
- Downloads: 12 GB
- Homebrew cache: 5 GB
- Application remnants: 3 GB

⚠️ Notes:
- Some applications may take longer to launch on first run
- Deleted items cannot be recovered unless you have a Time Machine backup
- Consider running this cleanup monthly

💡 Maintenance Tips:
- Set up automatic Homebrew cleanup: `brew cleanup` weekly
- Review the Downloads folder monthly
- Enable "Empty Trash Automatically" in Finder preferences
```
During image analysis, if you discover oversized images, suggest multi-stage build optimization:
```dockerfile
# Before: 884 MB (full build environment in final image)
FROM node:20
WORKDIR /app
COPY . .
RUN npm ci && npm run build
CMD ["node", "dist/index.js"]
```

```dockerfile
# After: ~150 MB (only runtime in final image)
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```
Key techniques: multi-stage builds, slim/alpine base images, .dockerignore, layer ordering.
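To find candidates for this optimization, image sizes from `docker images --format '{{.Repository}}:{{.Tag}}\t{{.Size}}'` can be filtered against a threshold. A hedged sketch (the 500 MB cutoff is an illustrative choice, not a Docker default):

```python
def size_to_mb(size: str) -> float:
    """Convert docker's human-readable size ('884MB', '1.2GB') to megabytes."""
    units = {"KB": 1 / 1024, "MB": 1.0, "GB": 1024.0}
    for unit, factor in units.items():
        if size.upper().endswith(unit):
            return float(size[:-len(unit)]) * factor
    return float(size.rstrip("B")) / (1024 * 1024)  # bare bytes, e.g. '0B'

def oversized(lines, threshold_mb=500):
    """Flag images above the threshold as multi-stage-build candidates."""
    hits = []
    for line in lines:
        name, size = line.split("\t")
        if size_to_mb(size) >= threshold_mb:
            hits.append((name, size))
    return hits
```

Flagged images are suggestions for a Dockerfile review, never deletion targets.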
Never delete these without explicit user instruction:
- `~/Documents`, `~/Desktop`, `~/Pictures` content

These operations require elevated privileges; ask the user to run the commands manually:

- `/Library/Caches` (system-wide)
- `/var/log` (system logs)
- `/private/var/folders` (system temp)

Example prompt:
```
⚠️ This operation requires administrator privileges.
Please run this command manually:

    sudo rm -rf /Library/Caches/*

You'll be asked for your password.
```
Before executing any cleanup >10GB, recommend:
```
💡 Safety Tip:
Before cleaning XX GB, consider creating a Time Machine backup.

Quick backup check:
    tmutil latestbackup

If no recent backup, run:
    tmutil startbackup
```
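The >10 GB backup rule above is easy to encode as a guard. A minimal sketch (the helper name is illustrative; a real check would also shell out to `tmutil latestbackup`, which exists only on macOS):

```python
# Threshold from the safety rule: recommend a backup before any cleanup >10 GB
TEN_GB = 10 * 1024 ** 3

def should_recommend_backup(planned_cleanup_bytes: int) -> bool:
    """True when the planned cleanup is large enough to warrant a
    Time Machine backup recommendation first."""
    return planned_cleanup_bytes > TEN_GB
```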
macOS may block deletion of certain system files due to SIP (System Integrity Protection).
**Solution**: Don't force it. These protections exist for security.
If an app misbehaves after its cache was deleted (rare but possible): restart the app, and it will regenerate the caches it needs.
**Prevention**: Always list and inspect Docker volumes before cleanup:

```bash
docker volume ls
docker volume inspect <volume_name>
```
Bundled scripts (`scripts/`):
- `analyze_caches.py` - Scan and categorize cache directories
- `find_app_remnants.py` - Detect orphaned application data
- `analyze_large_files.py` - Find large files with smart filtering
- `analyze_dev_env.py` - Scan development environment resources
- `safe_delete.py` - Interactive deletion with confirmation
- `cleanup_report.py` - Generate before/after reports

Reference documents:
- `cleanup_targets.md` - Detailed explanations of each cleanup target
- `mole_integration.md` - How to use Mole alongside this skill
- `safety_rules.md` - Comprehensive list of what never to delete

User request: "My Mac is running out of space, can you help?"
Workflow:
- `rm -rf ~/Library/Caches/*`

User request: "I'm a developer and my disk is full"
Workflow:
- `scripts/analyze_dev_env.py`

User request: "What's taking up so much space?"
Workflow:
- `scripts/analyze_large_files.py --threshold 100MB`

In these cases, explain limitations and suggest alternatives.
### Dangerous Mole Commands (never run)
| Command | Risk | Effect |
|---------|------|--------|
| `mo clean` | ⚠️ DANGEROUS | Actually deletes files |
| `mo purge` | ⚠️ DANGEROUS | Removes project artifacts |
| `mo uninstall` | ⚠️ DANGEROUS | Removes applications |

Of Mole's commands, only `mo --help`, `mo analyze`, and `mo clean --dry-run` are safe to run.