neuropixels-analysis by k-dense-ai/claude-scientific-skills

npx skills add https://github.com/k-dense-ai/claude-scientific-skills --skill neuropixels-analysis
Comprehensive toolkit for analyzing Neuropixels high-density neural recordings using current best practices from SpikeInterface, Allen Institute, and International Brain Laboratory (IBL). Supports the full workflow from raw data to publication-ready curated units.
This skill should be used when analyzing Neuropixels recordings from any of the following probe types:
| Probe | Electrodes | Channels | Notes |
|---|---|---|---|
| Neuropixels 1.0 | 960 | 384 | Requires phase_shift correction |
| Neuropixels 2.0 (single) | 1280 | 384 | Denser geometry |
| Neuropixels 2.0 (4-shank) | 5120 | 384 | Multi-region recording |
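The probe table above drives a practical decision: Neuropixels 1.0 needs the `phase_shift` correction while 2.0 probes do not. A minimal sketch capturing that lookup (the key names and helper are illustrative, not part of the toolkit's API):

```python
# Probe specs from the table above; used to pick preprocessing steps.
PROBES = {
    "np1.0":        {"electrodes": 960,  "channels": 384, "needs_phase_shift": True},
    "np2.0-single": {"electrodes": 1280, "channels": 384, "needs_phase_shift": False},
    "np2.0-4shank": {"electrodes": 5120, "channels": 384, "needs_phase_shift": False},
}

def preprocessing_steps(probe: str) -> list:
    """Return the ordered preprocessing steps for a given probe type."""
    steps = ["highpass_filter"]
    if PROBES[probe]["needs_phase_shift"]:
        # NP 1.0 ADCs sample channels with a small per-channel delay
        steps.append("phase_shift")
    steps += ["detect_bad_channels", "common_reference"]
    return steps
```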
| Format | Extension | Reader |
| --- | --- | --- |
| SpikeGLX | .ap.bin, .lf.bin, .meta | si.read_spikeglx() |
| Open Ephys | .continuous, .oebin | si.read_openephys() |
| NWB | .nwb | si.read_nwb() |
import spikeinterface.full as si
import neuropixels_analysis as npa
# Configure parallel processing
job_kwargs = dict(n_jobs=-1, chunk_duration='1s', progress_bar=True)
si.set_global_job_kwargs(**job_kwargs)  # apply to all parallel SpikeInterface operations
# SpikeGLX (most common)
recording = si.read_spikeglx('/path/to/data', stream_id='imec0.ap')
# Open Ephys (common for many labs)
recording = si.read_openephys('/path/to/Record_Node_101/')
# Check available streams
streams, ids = si.get_neo_streams('spikeglx', '/path/to/data')
print(streams) # ['imec0.ap', 'imec0.lf', 'nidq']
# For testing with subset of data
recording = recording.frame_slice(0, int(60 * recording.get_sampling_frequency()))
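`frame_slice` works in sample frames, not seconds, which is why the snippet multiplies by the sampling rate. A tiny helper makes that conversion explicit (stdlib-only sketch):

```python
def seconds_to_frames(seconds: float, sampling_frequency: float) -> int:
    """Convert a duration in seconds to a frame count at the given sampling rate."""
    return int(round(seconds * sampling_frequency))

# e.g. the first 60 s of a 30 kHz Neuropixels AP stream
n_frames = seconds_to_frames(60, 30_000.0)  # 1_800_000 frames
```

With this helper the test-subset line reads as `recording.frame_slice(0, seconds_to_frames(60, recording.get_sampling_frequency()))`.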
# Run full analysis pipeline
results = npa.run_pipeline(
recording,
output_dir='output/',
sorter='kilosort4',
curation_method='allen',
)
# Access results
sorting = results['sorting']
metrics = results['metrics']
labels = results['labels']
# Recommended preprocessing chain
rec = si.highpass_filter(recording, freq_min=400)
rec = si.phase_shift(rec) # Required for Neuropixels 1.0
bad_ids, _ = si.detect_bad_channels(rec)
rec = rec.remove_channels(bad_ids)
rec = si.common_reference(rec, operator='median')
# Or use our wrapper
rec = npa.preprocess(recording)
# Check for drift (always do this!)
motion_info = npa.estimate_motion(rec, preset='kilosort_like')
npa.plot_drift(rec, motion_info, output='drift_map.png')
# Apply correction if needed
if motion_info['motion'].max() > 10: # microns
rec = npa.correct_motion(rec, preset='nonrigid_accurate')
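The drift check above reduces to a threshold decision: correct only when estimated peak drift exceeds ~10 µm. As a sketch (the cutoff and preset name come from the snippet above; the helper itself is illustrative):

```python
def choose_motion_preset(peak_drift_um: float, threshold_um: float = 10.0):
    """Decide whether motion correction is worth running, per the heuristic above."""
    if peak_drift_um <= threshold_um:
        return None  # drift below threshold: skip correction
    return "nonrigid_accurate"  # better for severe drift, at higher compute cost
```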
# Kilosort4 (recommended, requires GPU)
sorting = si.run_sorter('kilosort4', rec, folder='ks4_output')
# CPU alternatives
sorting = si.run_sorter('tridesclous2', rec, folder='tdc2_output')
sorting = si.run_sorter('spykingcircus2', rec, folder='sc2_output')
sorting = si.run_sorter('mountainsort5', rec, folder='ms5_output')
# List the sorters installed in this environment
print(si.installed_sorters())
# Create analyzer and compute all extensions
analyzer = si.create_sorting_analyzer(sorting, rec, sparse=True)
analyzer.compute('random_spikes', max_spikes_per_unit=500)
analyzer.compute('waveforms', ms_before=1.0, ms_after=2.0)
analyzer.compute('templates', operators=['average', 'std'])
analyzer.compute('spike_amplitudes')
analyzer.compute('correlograms', window_ms=50.0, bin_ms=1.0)
analyzer.compute('unit_locations', method='monopolar_triangulation')
analyzer.compute('quality_metrics')
metrics = analyzer.get_extension('quality_metrics').get_data()
# Allen Institute criteria (conservative)
good_units = metrics.query("""
presence_ratio > 0.9 and
isi_violations_ratio < 0.5 and
amplitude_cutoff < 0.1
""").index.tolist()
# Or use automated curation
labels = npa.curate(metrics, method='allen') # 'allen', 'ibl', 'strict'
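`npa.curate` presumably applies threshold sets like the Allen criteria shown above. A minimal pandas reimplementation of that idea, to make the logic concrete (the thresholds mirror the query above; the `"good"`/`"noise"` labels and function name are illustrative, not the toolkit's actual output format):

```python
import pandas as pd

# Threshold set mirroring the Allen-style query above (illustrative).
CRITERIA = {
    "allen": dict(presence_ratio=0.9, isi_violations_ratio=0.5, amplitude_cutoff=0.1),
}

def label_units(metrics: pd.DataFrame, method: str = "allen") -> pd.Series:
    """Label each unit 'good' or 'noise' by simple metric thresholds."""
    c = CRITERIA[method]
    good = (
        (metrics["presence_ratio"] > c["presence_ratio"])
        & (metrics["isi_violations_ratio"] < c["isi_violations_ratio"])
        & (metrics["amplitude_cutoff"] < c["amplitude_cutoff"])
    )
    return good.map({True: "good", False: "noise"})
```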
When using this skill with Claude Code, Claude can directly analyze waveform plots and provide expert curation decisions. For programmatic API access:
from anthropic import Anthropic
# Setup API client
client = Anthropic()
# Analyze uncertain units visually
uncertain = metrics.query('snr > 3 and snr < 8').index.tolist()
for unit_id in uncertain:
result = npa.analyze_unit_visually(analyzer, unit_id, api_client=client)
print(f"Unit {unit_id}: {result['classification']}")
print(f" Reasoning: {result['reasoning'][:100]}...")
Claude Code Integration: When running within Claude Code, ask Claude to examine waveform/correlogram plots directly; no API setup is required.
# Generate comprehensive HTML report with visualizations
report_dir = npa.generate_analysis_report(results, 'output/')
# Opens report.html with summary stats, figures, and unit table
# Print formatted summary to console
npa.print_analysis_summary(results)
# Export to Phy for manual review
si.export_to_phy(analyzer, output_folder='phy_export/',
compute_pc_features=True, compute_amplitudes=True)
# Export to NWB
from spikeinterface.exporters import export_to_nwb
export_to_nwb(rec, sorting, 'output.nwb')
# Save quality metrics
metrics.to_csv('quality_metrics.csv')
rec.save(folder='preprocessed/')

Key tuning parameters:
- freq_min: Highpass cutoff (300-400 Hz typical)
- detect_threshold: Bad channel detection sensitivity
- preset: 'kilosort_like' (fast) or 'nonrigid_accurate' (better for severe drift)
- batch_size: Samples per batch (30000 default)
- nblocks: Number of drift blocks (increase for long recordings)
- Th_learned: Detection threshold (lower = more spikes)
- snr_threshold: Signal-to-noise cutoff (3-5 typical)
- isi_violations_ratio: Refractory violations (0.01-0.5)
- presence_ratio: Recording coverage (0.5-0.95)

Automated preprocessing script:
python scripts/preprocess_recording.py /path/to/data --output preprocessed/
Run spike sorting:
python scripts/run_sorting.py preprocessed/ --sorter kilosort4 --output sorting/
Compute quality metrics and apply curation:
python scripts/compute_metrics.py sorting/ preprocessed/ --output metrics/ --curation allen
Export to Phy for manual curation:
python scripts/export_to_phy.py metrics/analyzer --output phy_export/
Complete analysis template. Copy and customize:
cp assets/analysis_template.py my_analysis.py
# Edit parameters and run
python my_analysis.py
Detailed step-by-step workflow with explanations for each stage.
Quick function reference organized by module.
Comprehensive visualization guide for publication-quality figures.
| Topic | Reference |
|---|---|
| Full workflow | references/standard_workflow.md |
| API reference | references/api_reference.md |
| Plotting guide | references/plotting_guide.md |
| Preprocessing | references/PREPROCESSING.md |
| Spike sorting | references/SPIKE_SORTING.md |
| Motion correction | references/MOTION_CORRECTION.md |
| Quality metrics | |
| Automated curation | references/AUTOMATED_CURATION.md |
| AI-assisted curation | references/AI_CURATION.md |
| Waveform analysis | references/ANALYSIS.md |
# Core packages
pip install spikeinterface[full] probeinterface neo
# Spike sorters
pip install kilosort # Kilosort4 (GPU required)
pip install spykingcircus # SpykingCircus2 (CPU)
pip install mountainsort5 # Mountainsort5 (CPU)
# Our toolkit
pip install neuropixels-analysis
# Optional: AI curation
pip install anthropic
# Optional: IBL tools
pip install ibl-neuropixel ibllib
project/
├── raw_data/
│ └── recording_g0/
│ └── recording_g0_imec0/
│ ├── recording_g0_t0.imec0.ap.bin
│ └── recording_g0_t0.imec0.ap.meta
├── preprocessed/ # Saved preprocessed recording
├── motion/ # Motion estimation results
├── sorting_output/ # Spike sorter output
├── analyzer/ # SortingAnalyzer (waveforms, metrics)
├── phy_export/ # For manual curation
├── ai_curation/ # AI analysis reports
└── results/
├── quality_metrics.csv
├── curation_labels.json
└── output.nwb
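The project layout above can be created up front so each stage has a place to write. A small idempotent sketch using `pathlib` (the folder names come from the tree above; `make_project` is illustrative):

```python
from pathlib import Path

# Subfolders from the layout above; raw_data/ comes from the acquisition system.
SUBFOLDERS = [
    "preprocessed", "motion", "sorting_output",
    "analyzer", "phy_export", "ai_curation", "results",
]

def make_project(root: str) -> Path:
    """Create the standard analysis folder layout under `root` (safe to re-run)."""
    root_path = Path(root)
    for sub in SUBFOLDERS:
        (root_path / sub).mkdir(parents=True, exist_ok=True)
    return root_path
```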
Weekly Installs
54
Repository
GitHub Stars
17.3K
First Seen
Jan 20, 2026
Security Audits
Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on
opencode (47)
codex (46)
gemini-cli (45)
cursor (44)
claude-code (43)
github-copilot (42)