neuropixels-analysis by davila7/claude-code-templates
npx skills add https://github.com/davila7/claude-code-templates --skill neuropixels-analysis
Comprehensive toolkit for analyzing Neuropixels high-density neural recordings using current best practices from SpikeInterface, Allen Institute, and International Brain Laboratory (IBL). Supports the full workflow from raw data to publication-ready curated units.
This skill should be used when:
| Probe | Electrodes | Channels | Notes |
|---|---|---|---|
| Neuropixels 1.0 | 960 | 384 | Requires phase_shift correction |
| Neuropixels 2.0 (single) | 1280 | 384 | Denser geometry |
| Neuropixels 2.0 (4-shank) | 5120 | 384 | Multi-region recording |
| Format | Extension | Reader |
| --- | --- | --- |
| SpikeGLX | .ap.bin, .lf.bin, .meta | si.read_spikeglx() |
| Open Ephys | .continuous, .oebin | si.read_openephys() |
| NWB | .nwb | si.read_nwb() |
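As a quick illustration of the table above, a file's extension is enough to pick the right reader. The helper and mapping below are hypothetical (not part of SpikeInterface or neuropixels-analysis):

```python
from pathlib import Path

# Hypothetical extension-to-reader dispatch, mirroring the table above.
READERS = {
    '.bin': 'read_spikeglx',        # SpikeGLX .ap.bin / .lf.bin
    '.meta': 'read_spikeglx',
    '.continuous': 'read_openephys',
    '.oebin': 'read_openephys',
    '.nwb': 'read_nwb',
}

def reader_for(path):
    """Return the SpikeInterface reader name for a data file, or None."""
    return READERS.get(Path(path).suffix)

print(reader_for('run_g0_t0.imec0.ap.bin'))  # read_spikeglx
print(reader_for('session.nwb'))             # read_nwb
```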
```python
import spikeinterface.full as si
import neuropixels_analysis as npa

# Configure parallel processing
job_kwargs = dict(n_jobs=-1, chunk_duration='1s', progress_bar=True)

# SpikeGLX (most common)
recording = si.read_spikeglx('/path/to/data', stream_id='imec0.ap')

# Open Ephys (common for many labs)
recording = si.read_openephys('/path/to/Record_Node_101/')

# Check available streams
streams, ids = si.get_neo_streams('spikeglx', '/path/to/data')
print(streams)  # ['imec0.ap', 'imec0.lf', 'nidq']

# For testing with a subset of the data
recording = recording.frame_slice(0, int(60 * recording.get_sampling_frequency()))
```
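Note that frame_slice takes sample (frame) indices, not seconds, which is why the call above multiplies by the sampling rate. A minimal helper (hypothetical, for illustration):

```python
def seconds_to_frames(t_seconds, sampling_hz):
    """Convert a duration in seconds to a sample (frame) count."""
    return int(round(t_seconds * sampling_hz))

# The Neuropixels AP band is sampled at 30 kHz, so 60 s of data spans:
n_frames = seconds_to_frames(60, 30_000)
print(n_frames)  # 1800000
```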
```python
# Run the full analysis pipeline
results = npa.run_pipeline(
    recording,
    output_dir='output/',
    sorter='kilosort4',
    curation_method='allen',
)

# Access results
sorting = results['sorting']
metrics = results['metrics']
labels = results['labels']
```
```python
# Recommended preprocessing chain
rec = si.highpass_filter(recording, freq_min=400)
rec = si.phase_shift(rec)  # Required for Neuropixels 1.0
bad_ids, _ = si.detect_bad_channels(rec)
rec = rec.remove_channels(bad_ids)
rec = si.common_reference(rec, operator='median')

# Or use our wrapper
rec = npa.preprocess(recording)
```
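The common-reference step subtracts, at each sample, the median across channels, which removes artifacts shared by the whole probe. A toy numpy sketch of the idea (not SpikeInterface's internals, which operate lazily and chunk-wise):

```python
import numpy as np

def common_median_reference(traces):
    """Subtract the across-channel median at each sample.

    Toy version of si.common_reference(..., operator='median');
    traces has shape (n_samples, n_channels)."""
    return traces - np.median(traces, axis=1, keepdims=True)

# A shared artifact on all channels cancels; channel-specific signal survives.
t = np.array([[10.0, 10.0, 10.0],
              [ 0.0,  0.0,  0.0],
              [ 5.0,  5.0,  8.0],   # 5 uV common offset + 3 uV on one channel
              [ 2.0,  2.0,  2.0]])
out = common_median_reference(t)
print(out[2])  # [0. 0. 3.]
```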
```python
# Check for drift (always do this!)
motion_info = npa.estimate_motion(rec, preset='kilosort_like')
npa.plot_drift(rec, motion_info, output='drift_map.png')

# Apply correction if needed
if motion_info['motion'].max() > 10:  # microns
    rec = npa.correct_motion(rec, preset='nonrigid_accurate')
```
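To build intuition for what the motion estimate measures, here is a deliberately simplified rigid-drift estimate on synthetic data: bin spikes in time and track the mean spike depth per bin. Real estimators (including npa.estimate_motion) use activity histograms and registration, not this shortcut:

```python
import numpy as np

def rigid_drift_estimate(spike_times, spike_depths, bin_s=1.0):
    """Toy rigid drift trace: mean spike depth per time bin,
    relative to the first bin. Illustration only."""
    edges = np.arange(0.0, spike_times.max() + bin_s, bin_s)
    idx = np.digitize(spike_times, edges) - 1
    centroids = np.array([spike_depths[idx == b].mean()
                          for b in range(len(edges) - 1)])
    return centroids - centroids[0]

# Synthetic recording drifting upward at 2 um/s for 10 s:
rng = np.random.default_rng(0)
times = rng.uniform(0, 10, 5000)
depths = 1000 + 2.0 * times + rng.normal(0, 5, 5000)
drift = rigid_drift_estimate(times, depths)
print(drift[-1])  # accumulated drift in um, roughly 2 um/s * 9 s
```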
```python
# Kilosort4 (recommended, requires GPU)
sorting = si.run_sorter('kilosort4', rec, folder='ks4_output')

# CPU alternatives
sorting = si.run_sorter('tridesclous2', rec, folder='tdc2_output')
sorting = si.run_sorter('spykingcircus2', rec, folder='sc2_output')
sorting = si.run_sorter('mountainsort5', rec, folder='ms5_output')

# Check available sorters
print(si.installed_sorters())
```
```python
# Create analyzer and compute all extensions
analyzer = si.create_sorting_analyzer(sorting, rec, sparse=True)
analyzer.compute('random_spikes', max_spikes_per_unit=500)
analyzer.compute('waveforms', ms_before=1.0, ms_after=2.0)
analyzer.compute('templates', operators=['average', 'std'])
analyzer.compute('spike_amplitudes')
analyzer.compute('correlograms', window_ms=50.0, bin_ms=1.0)
analyzer.compute('unit_locations', method='monopolar_triangulation')
analyzer.compute('quality_metrics')

metrics = analyzer.get_extension('quality_metrics').get_data()
```
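One of these metrics, ISI violations, counts inter-spike intervals shorter than the neuron's refractory period; a high value suggests the cluster mixes spikes from more than one cell. A toy sketch of the idea (a simplification: SpikeInterface's isi_violations_ratio additionally normalizes by the rate expected by chance):

```python
import numpy as np

def refractory_violation_fraction(spike_times_s, refractory_ms=1.5):
    """Fraction of inter-spike intervals shorter than the refractory period."""
    isis = np.diff(np.sort(spike_times_s))
    return float(np.mean(isis < refractory_ms / 1000.0))

# One of the four intervals below (0.5 ms) violates a 1.5 ms refractory period.
spikes = np.array([0.010, 0.020, 0.0205, 0.050, 0.100])
print(refractory_violation_fraction(spikes))  # 0.25
```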
```python
# Allen Institute criteria (conservative)
good_units = metrics.query("""
    presence_ratio > 0.9 and
    isi_violations_ratio < 0.5 and
    amplitude_cutoff < 0.1
""").index.tolist()

# Or use automated curation
labels = npa.curate(metrics, method='allen')  # 'allen', 'ibl', 'strict'
```
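A rule-based curation pass of this kind reduces to thresholding a metrics DataFrame. The sketch below is a hypothetical re-implementation in the spirit of npa.curate with the Allen-style thresholds from the query above, not the package's actual code:

```python
import pandas as pd

# Thresholds taken from the Allen-style query above.
ALLEN = dict(presence_ratio=0.9, isi_violations_ratio=0.5, amplitude_cutoff=0.1)

def curate_allen(metrics, thr=ALLEN):
    """Label each unit 'good' or 'mua' from three quality metrics."""
    ok = ((metrics['presence_ratio'] > thr['presence_ratio'])
          & (metrics['isi_violations_ratio'] < thr['isi_violations_ratio'])
          & (metrics['amplitude_cutoff'] < thr['amplitude_cutoff']))
    return ok.map({True: 'good', False: 'mua'})

# Three toy units: only the first passes all three criteria.
demo = pd.DataFrame({
    'presence_ratio':       [0.99, 0.95, 0.60],
    'isi_violations_ratio': [0.10, 0.80, 0.05],
    'amplitude_cutoff':     [0.02, 0.05, 0.01],
}, index=[1, 2, 3])
print(curate_allen(demo).tolist())  # ['good', 'mua', 'mua']
```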
When using this skill with Claude Code, Claude can directly analyze waveform plots and provide expert curation decisions. For programmatic API access:
```python
from anthropic import Anthropic

# Set up the API client
client = Anthropic()

# Analyze uncertain units visually
uncertain = metrics.query('snr > 3 and snr < 8').index.tolist()
for unit_id in uncertain:
    result = npa.analyze_unit_visually(analyzer, unit_id, api_client=client)
    print(f"Unit {unit_id}: {result['classification']}")
    print(f"  Reasoning: {result['reasoning'][:100]}...")
```
Claude Code Integration: When running within Claude Code, ask Claude to examine waveform/correlogram plots directly; no API setup is required.
```python
# Generate a comprehensive HTML report with visualizations
report_dir = npa.generate_analysis_report(results, 'output/')
# Opens report.html with summary stats, figures, and a unit table

# Print a formatted summary to the console
npa.print_analysis_summary(results)

# Export to Phy for manual review
si.export_to_phy(analyzer, output_folder='phy_export/',
                 compute_pc_features=True, compute_amplitudes=True)

# Export to NWB
from spikeinterface.exporters import export_to_nwb
export_to_nwb(rec, sorting, 'output.nwb')

# Save quality metrics and the preprocessed recording
metrics.to_csv('quality_metrics.csv')
rec.save(folder='preprocessed/')
```

Key parameters:
- freq_min: highpass cutoff (300-400 Hz typical)
- detect_threshold: bad-channel detection sensitivity
- preset: 'kilosort_like' (fast) or 'nonrigid_accurate' (better for severe drift)
- batch_size: samples per batch (default 30000)
- nblocks: number of drift blocks (increase for long recordings)
- Th_learned: detection threshold (lower = more spikes)
- snr_threshold: signal-to-noise cutoff (3-5 typical)
- isi_violations_ratio: refractory-period violations (0.01-0.5)
- presence_ratio: recording coverage (0.5-0.95)

Automated preprocessing script:
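Sorter-specific parameters such as these are typically forwarded to run_sorter as keyword arguments. A small sketch (the values are illustrative examples, not recommendations):

```python
# Illustrative Kilosort4 parameter overrides (names from the parameter list;
# values are examples only).
ks4_params = dict(batch_size=30_000, nblocks=5, Th_learned=8)

# They would be forwarded as keyword arguments, e.g.:
# sorting = si.run_sorter('kilosort4', rec, folder='ks4_output', **ks4_params)
print(sorted(ks4_params))  # ['Th_learned', 'batch_size', 'nblocks']
```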
```shell
python scripts/preprocess_recording.py /path/to/data --output preprocessed/
```
Run spike sorting:
```shell
python scripts/run_sorting.py preprocessed/ --sorter kilosort4 --output sorting/
```
Compute quality metrics and apply curation:
```shell
python scripts/compute_metrics.py sorting/ preprocessed/ --output metrics/ --curation allen
```
Export to Phy for manual curation:
```shell
python scripts/export_to_phy.py metrics/analyzer --output phy_export/
```
Complete analysis template. Copy and customize:
```shell
cp assets/analysis_template.py my_analysis.py
# Edit parameters and run
python my_analysis.py
```
Detailed step-by-step workflow with explanations for each stage.
Quick function reference organized by module.
Comprehensive visualization guide for publication-quality figures.
| Topic | Reference |
|---|---|
| Full workflow | reference/standard_workflow.md |
| API reference | reference/api_reference.md |
| Plotting guide | reference/plotting_guide.md |
| Preprocessing | PREPROCESSING.md |
| Spike sorting | SPIKE_SORTING.md |
| Motion correction | MOTION_CORRECTION.md |
| Quality metrics | QUALITY_METRICS.md |
| Automated curation | AUTOMATED_CURATION.md |
| AI-assisted curation | AI_CURATION.md |
| Waveform analysis | ANALYSIS.md |
```shell
# Core packages
pip install spikeinterface[full] probeinterface neo

# Spike sorters
pip install kilosort          # Kilosort4 (GPU required)
pip install spykingcircus     # SpykingCircus2 (CPU)
pip install mountainsort5     # Mountainsort5 (CPU)

# Our toolkit
pip install neuropixels-analysis

# Optional: AI curation
pip install anthropic

# Optional: IBL tools
pip install ibl-neuropixel ibllib
```
```
project/
├── raw_data/
│   └── recording_g0/
│       └── recording_g0_imec0/
│           ├── recording_g0_t0.imec0.ap.bin
│           └── recording_g0_t0.imec0.ap.meta
├── preprocessed/      # Saved preprocessed recording
├── motion/            # Motion estimation results
├── sorting_output/    # Spike sorter output
├── analyzer/          # SortingAnalyzer (waveforms, metrics)
├── phy_export/        # For manual curation
├── ai_curation/       # AI analysis reports
└── results/
    ├── quality_metrics.csv
    ├── curation_labels.json
    └── output.nwb
```
Weekly Installs: 139
GitHub Stars: 23.4K
First Seen: Jan 21, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: claude-code (123), opencode (115), gemini-cli (108), cursor (108), antigravity (103), codex (98)