disk-space-cleanup by colonelpanic8/dotfiles
npx skills add https://github.com/colonelpanic8/dotfiles --skill disk-space-cleanup
Reclaim disk space with a safety-first workflow: investigate first, run obvious low-risk cleanup wins, then do targeted analysis for larger opportunities.
Workflow:
Quick wins first (nix-collect-garbage, container prune, Cargo artifacts).
Investigate remaining large directories with ncdu/du.
Dig into /nix/store roots when large toolchains still persist.
Run a quick baseline before deleting anything:
df -h /
df -h /home
df -h /nix
Optionally add a quick home-level size snapshot:
du -xh --max-depth=1 "$HOME" 2>/dev/null | sort -h
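The baseline can also be captured to a file so the "after" state is easy to diff. A minimal sketch; the `snapshot_usage` helper name is an assumption, not part of the skill:

```shell
# Hypothetical helper (not from the skill): save df output for the given
# mount points to a temp file and print that file's path, so post-cleanup
# runs can be diffed against the baseline.
snapshot_usage() {
  local out
  out=$(mktemp)
  df -h "$@" > "$out" 2>/dev/null
  echo "$out"
}

# Possible usage:
#   base=$(snapshot_usage / /home /nix)
#   ... run cleanup steps ...
#   diff "$base" <(df -h / /home /nix)
```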
Use these first when the user wants fast, low-effort reclaiming:
sudo -n nix-collect-garbage -d
sudo -n docker system prune -a
sudo -n podman system prune -a
Notes:
Add --volumes only when the user approves deleting unused volumes.
Re-check free space after each command to show impact.
Prefer sudo -n first so cleanup runs fail fast instead of hanging on password prompts.
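The "re-check free space after each command" step can be made concrete with a small wrapper that snapshots free space before and after one cleanup step. A sketch; the `report_reclaimed` name and MiB reporting are assumptions:

```shell
# Hypothetical wrapper (not from the skill): run one cleanup command and
# report how much space it freed on a given mount point.
report_reclaimed() {
  local mount=$1; shift
  local before after
  before=$(df -Pk "$mount" | awk 'NR==2 {print $4}')  # available KiB before
  "$@"
  after=$(df -Pk "$mount" | awk 'NR==2 {print $4}')   # available KiB after
  echo "freed $(( (after - before) / 1024 )) MiB on $mount"
}

# Possible usage:
#   report_reclaimed / sudo -n nix-collect-garbage -d
#   report_reclaimed / sudo -n docker system prune -a
```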
If root is still tight after these, run app cache cleaners before proposing raw rm -rf:
uv cache clean
pip cache purge
yarn cache clean
npm cache clean --force
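Not every tool is installed on every machine, so the cleaners can be gated on availability. A sketch; the `run_if_present` helper is an assumption, not part of the skill:

```shell
# Hypothetical guard (not from the skill): run a command only when its
# executable exists on PATH, so a missing tool does not abort the pass.
run_if_present() {
  command -v "$1" >/dev/null 2>&1 && "$@"
}

# Possible usage:
#   run_if_present uv cache clean
#   run_if_present pip cache purge
#   run_if_present yarn cache clean
#   run_if_present npm cache clean --force
```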
Target common roots first: ~/Projects and ~/code.
Use cargo-sweep in dry-run mode before deleting:
nix run nixpkgs#cargo-sweep -- sweep -d -r -t 30 ~/Projects ~/code
Then perform deletion:
nix run nixpkgs#cargo-sweep -- sweep -r -t 30 ~/Projects ~/code
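The dry-run-then-delete pattern can be kept safe with an explicit confirmation gate between the two commands. A sketch; the `confirm` helper is an assumption, not part of cargo-sweep:

```shell
# Hypothetical prompt helper (not from the skill): ask before a destructive
# step; anything but y/Y counts as "no".
confirm() {
  printf '%s [y/N] ' "$1" >&2
  read -r reply
  [ "$reply" = y ] || [ "$reply" = Y ]
}

# Possible usage:
#   nix run nixpkgs#cargo-sweep -- sweep -d -r -t 30 ~/Projects ~/code
#   confirm "Apply this sweep?" &&
#     nix run nixpkgs#cargo-sweep -- sweep -r -t 30 ~/Projects ~/code
```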
Alternative for toolchain churn cleanup:
nix run nixpkgs#cargo-sweep -- sweep -r -i ~/Projects ~/code
Recommended sequence:
Use -t 30 first for age-based stale builds.
Dry-run with -i next.
Apply -i when the dry-run shows significant reclaimable space.
Investigation with ncdu and du
Avoid mounted or remote filesystems when profiling space. Load ignore patterns from references/ignore-paths.md.
Use one-filesystem scans to avoid crossing mounts:
ncdu -x "$HOME"
sudo ncdu -x /
When excluding known noisy mountpoints:
ncdu -x --exclude "$HOME/keybase" "$HOME"
sudo ncdu -x --exclude /keybase --exclude /var/lib/railbird /
If ncdu is missing, use:
nix run nixpkgs#ncdu -- -x "$HOME"
For quick, non-blocking triage on very large trees, prefer bounded probes:
timeout 30s du -xh --max-depth=1 "$HOME/.cache" 2>/dev/null | sort -h
timeout 30s du -xh --max-depth=1 "$HOME/.local/share" 2>/dev/null | sort -h
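The bounded probes generalize to a small loop over candidate directories. A sketch; the `probe_dirs` helper name and the ten-entry cap are assumptions:

```shell
# Hypothetical helper (not from the skill): print the ten largest first-level
# entries of each argument directory, capped at 30s of scanning per tree.
probe_dirs() {
  local dir
  for dir in "$@"; do
    [ -d "$dir" ] || continue
    echo "== $dir =="
    timeout 30s du -xh --max-depth=1 "$dir" 2>/dev/null | sort -h | tail -n 10
  done
}

# Possible usage:
#   probe_dirs "$HOME/.cache" "$HOME/.local/share"
```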
Machine-specific heavy hitters seen in practice:
~/.cache/uv can exceed 20G and is reclaimable with uv cache clean.
~/.cache/spotify can exceed 10G; treat as optional app-cache cleanup.
~/.local/share/Trash can exceed several GB; empty only with user approval.
/nix/store Deep Dive
When /nix/store is still large after GC, inspect root causes instead of deleting random paths.
Useful commands:
nix path-info -Sh /nix/store/* 2>/dev/null | sort -h | tail -n 50
nix-store --gc --print-roots
Avoid du -sh /nix/store as a first diagnostic; it can be very slow on large stores.
For repeated GHC/Rust toolchain copies:
nix path-info -Sh /nix/store/* 2>/dev/null | rg '(ghc|rustc|rust-std|cargo)'
nix-store --gc --print-roots | rg '(ghc|rust)'
Resolve why a path is retained:
/home/imalison/dotfiles/dotfiles/lib/functions/find_store_path_gc_roots /nix/store/<store-path>
nix why-depends <consumer-store-path> <dependency-store-path>
Common retention pattern on this machine:
Many .direnv/flake-profile-* symlinks under ~/Projects and worktrees keep nix-shell-env/ghc-shell-* roots alive.
find_store_path_gc_roots is especially useful for proving GHC retention: many large ghc-9.10.3-with-packages paths are unique per project, while the base ghc-9.10.3 and docs paths are shared.
Quantify before acting:
find ~/Projects -type l -path '*/.direnv/flake-profile-*' | wc -l
find ~/Projects -type d -name .direnv | wc -l
nix-store --gc --print-roots | rg '/\.direnv/flake-profile-' | awk -F' -> ' '{print $1 "|" $2}' \
  | while IFS='|' read -r root target; do
      nix-store -qR "$target" | rg '^/nix/store/.+-ghc-[0-9]'
    done | sort | uniq -c | sort -nr | head
If counts are high and the projects are inactive, propose targeted .direnv cleanup for user confirmation.
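Candidate selection can be scripted without deleting anything. A sketch; the `stale_direnvs` helper and the 60-day threshold are assumptions, and the age check uses the .direnv directory's own mtime as an inactivity heuristic:

```shell
# Hypothetical helper (not from the skill): list .direnv directories under a
# root whose mtime is older than N days -- candidates only, nothing deleted.
stale_direnvs() {
  local root=$1 days=$2
  find "$root" -type d -name .direnv -mtime "+$days" -print 2>/dev/null
}

# Possible usage (each candidate is removed only after explicit approval):
#   stale_direnvs ~/Projects 60
```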
Prefer purpose-built cleaners (nix, docker, podman, cargo-sweep) over rm -rf.
Treat this skill as a living playbook.
After each disk cleanup task:
Add newly discovered noisy mount points or paths to references/ignore-paths.md.
Record new machine-specific heavy hitters in SKILL.md.