npx skills add https://github.com/plyght/tonejs-skill --skill tonejs
Build interactive music applications in the browser using the Web Audio API through Tone.js's high-level abstractions.
Use Tone.js when building synthesizers, sequencers, effect chains, or any interactive audio in the browser.
The AudioContext must be started from a user interaction (browser requirement):
import * as Tone from "tone";
// ALWAYS call Tone.start() from user interaction
document.querySelector("button").addEventListener("click", async () => {
await Tone.start();
console.log("Audio context ready");
// Now safe to play audio
});
All audio nodes connect in a graph leading to Tone.Destination (the speakers):
// Basic connection
const synth = new Tone.Synth().toDestination();
// Chain through effects (separate synth so the names don't collide)
const chainSynth = new Tone.Synth();
const filter = new Tone.Filter(400, "lowpass");
const feedbackDelay = new Tone.FeedbackDelay(0.125, 0.5);
chainSynth.chain(filter, feedbackDelay, Tone.Destination);
// Parallel routing (split one source into two paths)
const reverb = new Tone.Reverb().toDestination();
const delay = new Tone.Delay(0.2).toDestination();
chainSynth.connect(reverb);
chainSynth.connect(delay);
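None of this routing needs sound to reason about: chain() is just repeated connect() calls down a list, and parallel routing is one node with multiple outputs. A toy model in plain JavaScript (ToyNode is an invented stand-in, not a Tone.js class) that captures only the graph topology:

```javascript
// Toy model of an audio-node graph -- illustrates topology only, no sound.
class ToyNode {
  constructor(name) {
    this.name = name;
    this.outputs = []; // nodes this one feeds into
  }
  connect(node) {
    this.outputs.push(node);
    return node;
  }
  // chain(a, b, c) is sugar for this.connect(a); a.connect(b); b.connect(c)
  chain(...nodes) {
    let prev = this;
    for (const node of nodes) prev = prev.connect(node);
    return this;
  }
}

const destination = new ToyNode("destination");
const synth = new ToyNode("synth");
const filter = new ToyNode("filter");
const delay = new ToyNode("delay");

// Serial: synth -> filter -> delay -> destination
synth.chain(filter, delay, destination);

// Parallel: the same source also feeds a second path
const reverb = new ToyNode("reverb");
synth.connect(reverb);

console.log(synth.outputs.map((n) => n.name)); // synth feeds both filter and reverb
```

The same shape holds in real Tone.js: a node can fan out to any number of destinations, and chain() only wires a serial path.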
Tone.js abstracts time in musical notation:
"4n" = quarter note
"8n" = eighth note
"2m" = two measures
"8t" = eighth note triplet
CRITICAL: Always use the time parameter passed to callbacks:
// CORRECT - sample-accurate timing
const loop = new Tone.Loop((time) => {
synth.triggerAttackRelease("C4", "8n", time);
}, "4n");
// WRONG - JavaScript timing is imprecise
const loop = new Tone.Loop(() => {
synth.triggerAttackRelease("C4", "8n"); // Will drift
}, "4n");
The global timekeeper for synchronized events:
// Schedule events on the Transport
const loop = new Tone.Loop((time) => {
synth.triggerAttackRelease("C4", "8n", time);
}, "4n").start(0);
// Control the Transport
Tone.Transport.start();
Tone.Transport.stop();
Tone.Transport.pause();
Tone.Transport.bpm.value = 120; // Set tempo
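These notation values resolve to seconds through the Transport's BPM. A minimal sketch of that arithmetic in plain JavaScript, assuming 4/4 time (notationToSeconds is a hypothetical helper, not part of Tone.js):

```javascript
// Convert a notation string to seconds at a given BPM (4/4 time assumed).
// "4n" quarter note, "8n" eighth, "2m" two measures, "8t" eighth triplet.
function notationToSeconds(notation, bpm) {
  const quarter = 60 / bpm; // one beat in seconds
  const match = notation.match(/^(\d+)(n|m|t)$/);
  if (!match) throw new Error(`unsupported notation: ${notation}`);
  const value = Number(match[1]);
  switch (match[2]) {
    case "n": // a 1/value note: "4n" is one beat, "8n" half a beat
      return quarter * (4 / value);
    case "m": // value measures of 4 beats each
      return quarter * 4 * value;
    case "t": // triplet: 2/3 of the corresponding straight note
      return quarter * (4 / value) * (2 / 3);
  }
}

console.log(notationToSeconds("4n", 120)); // 0.5 s per quarter note at 120 BPM
console.log(notationToSeconds("8n", 120)); // 0.25
console.log(notationToSeconds("2m", 120)); // 4 (two 4/4 measures)
```

This is why changing Tone.Transport.bpm retimes every scheduled event: all notation durations are relative to the beat, not fixed in seconds.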
Play notes with triggerAttackRelease:
import * as Tone from "tone";
const synth = new Tone.Synth().toDestination();
button.addEventListener("click", async () => {
await Tone.start();
// Play C4 for an eighth note
synth.triggerAttackRelease("C4", "8n");
});
Use PolySynth to wrap a monophonic synth for chords:
const polySynth = new Tone.PolySynth(Tone.Synth).toDestination();
// Play a chord
polySynth.triggerAttack(["C4", "E4", "G4"]);
// Release specific notes
polySynth.triggerRelease(["E4"], "+1");
Load audio files with Player or Sampler, and await the Tone.loaded() promise before playback:
const player = new Tone.Player("https://example.com/audio.mp3").toDestination();
await Tone.loaded();
player.start();
// For multi-sample instruments
const sampler = new Tone.Sampler({
urls: {
C4: "C4.mp3",
"D#4": "Ds4.mp3",
"F#4": "Fs4.mp3",
},
baseUrl: "https://example.com/samples/",
}).toDestination();
await Tone.loaded();
sampler.triggerAttackRelease(["C4", "E4"], 1);
Use Tone.Loop or Tone.Sequence for repeating patterns:
const synth = new Tone.Synth().toDestination();
const loop = new Tone.Loop((time) => {
synth.triggerAttackRelease("C4", "8n", time);
}, "4n").start(0);
await Tone.start();
Tone.Transport.start();
Build an effects chain with wet/dry control:
const synth = new Tone.Synth();
const distortion = new Tone.Distortion(0.4);
const reverb = new Tone.Reverb({
decay: 2.5,
wet: 0.5, // 50% effect, 50% dry
});
synth.chain(distortion, reverb, Tone.Destination);
Signal parameters (e.g. frequency, volume) can be automated with rampTo, linearRampTo, and exponentialRampTo:
const osc = new Tone.Oscillator(440, "sine").toDestination();
osc.start();
// Ramp frequency to 880 Hz over 2 seconds
osc.frequency.rampTo(880, 2);
// Set value at specific time
osc.frequency.setValueAtTime(440, "+4");
// Exponential ramp (better for frequency)
osc.frequency.exponentialRampTo(220, 1, "+4");
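The comment about exponential ramps can be made concrete: pitch perception is logarithmic, so a sweep that sounds even should cover equal musical intervals per unit time. A quick numeric comparison of the two ramp shapes (plain arithmetic, not Tone.js internals):

```javascript
// Midpoint of a 440 -> 880 Hz sweep (one octave) under each ramp shape.
const f0 = 440;
const f1 = 880;

// Linear ramp: halfway in time is halfway in Hz.
// 660 Hz is a perfect fifth above 440 -- most of the perceived
// pitch motion happens in the first half of the sweep.
const linearMid = f0 + (f1 - f0) / 2; // 660

// Exponential ramp: halfway in time is halfway in pitch.
// 440 * sqrt(2) is exactly half an octave (a tritone) above 440.
const expMid = f0 * Math.sqrt(f1 / f0); // ~622.25

console.log(linearMid, expMid.toFixed(2));
```

The exponential ramp moves through equal pitch intervals per unit time, which is why it sounds smoother for frequency sweeps.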
Use Tone.Draw.schedule() for visual updates synchronized to audio events:
const loop = new Tone.Loop((time) => {
synth.triggerAttackRelease("C4", "8n", time);
// Schedule visual update
Tone.Draw.schedule(() => {
element.classList.add("active");
}, time);
}, "4n");
A Tone.Sequence steps through an array of values:
const synth = new Tone.Synth().toDestination();
const notes = ["C4", "D4", "E4", "G4"];
const seq = new Tone.Sequence(
(time, note) => {
synth.triggerAttackRelease(note, "8n", time);
},
notes,
"8n"
).start(0);
Tone.Transport.start();
Randomize events inside the loop callback:
const loop = new Tone.Loop((time) => {
if (Math.random() > 0.5) {
synth.triggerAttackRelease("C4", "8n", time);
}
}, "8n");
Modulate parameters with an LFO:
const filter = new Tone.Filter(1000, "lowpass").toDestination();
const lfo = new Tone.LFO(4, 200, 2000); // 4Hz, 200-2000Hz range
lfo.connect(filter.frequency);
lfo.start();
Auditory processing is 10x faster than visual (~25ms vs ~250ms). Sound provides immediate feedback that makes interactions feel responsive. A button that clicks feels faster than one that doesn't, even with identical visual feedback.
Sound communicates emotion instantly. A single tone conveys success, error, or tension better than visual choreography. When audio and visuals tell the same story together, the experience is stronger than either alone.
Less is more. Most interactions should be silent. Reserve sound for moments that matter: confirmations for major actions, errors that can't be overlooked, state transitions, and notifications. Always pair sound with visuals for accessibility - sound enhances, never replaces. Study games for reference - they've perfected informative, emotional, non-intrusive audio feedback.
Good sound design transforms user experience across all platforms - web apps, mobile apps, desktop applications, and games. These principles apply universally whether creating notification sounds, UI feedback, or musical interactions.
Sound uses a universal language understood by everyone. When designing audio:
Ask foundational questions:
Consider context:
Effective notification sounds have these characteristics:
1. Distinguishable
2. Conveys meaning
3. Friendly and appropriate
4. Simple and clean
5. Unobtrusive and repeatable
6. Cuts through noise, not abrasive
For buttons, interactions, and transitions:
1. Use sparingly
2. Volume relative to purpose
3. Synchronization matters
4. Match interaction character
5. Convey depth and movement
1. Start with a sound palette
2. Match sound to purpose
3. Use any sound source
4. Layer for richness
1. Clean audio
2. Frequency filtering
3. Cross-platform design
4. Duration guidelines
5. User control
6. Synchronization precision
Implementation considerations when designing sounds:
MUST call Tone.start() from user interaction. Without this, no audio will play.
// WRONG - will fail silently
Tone.Transport.start();
// CORRECT
button.addEventListener("click", async () => {
await Tone.start();
Tone.Transport.start();
});
Always dispose of nodes when done:
const synth = new Tone.Synth().toDestination();
// When finished
synth.dispose();
// For arrays of instruments
players.forEach((player) => player.dispose());
JavaScript callbacks are NOT precise. Always use the time parameter:
// WRONG - will drift out of sync
setInterval(() => {
synth.triggerAttackRelease("C4", "8n");
}, 250);
// CORRECT - sample-accurate
new Tone.Loop((time) => {
synth.triggerAttackRelease("C4", "8n", time);
}, "4n").start(0);
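The drift is easy to quantify without any audio. In the sketch below (plain JavaScript; the constant 5 ms lateness per tick is an invented stand-in for real event-loop jitter), relative scheduling compounds its error every tick, while absolute times like the time parameter never accumulate any:

```javascript
// Simulate 100 ticks of a 250 ms loop where each callback fires 5 ms late.
const interval = 0.25; // seconds between events
const lateness = 0.005; // hypothetical constant jitter per tick

let relativeClock = 0; // setInterval-style: next tick measured from the last
const scheduled = []; // Tone.Loop-style: absolute times from the start
for (let n = 0; n < 100; n++) {
  relativeClock += interval + lateness; // error compounds every tick
  scheduled.push(n * interval); // time parameter: exact, no accumulation
}

const drift = relativeClock - 100 * interval;
console.log(drift.toFixed(2)); // relative scheduling ends up 0.5 s behind
```

After 100 ticks the relative clock is half a second late, and the error keeps growing linearly, which is exactly the drift the time parameter avoids.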
Wait for samples to load before playing:
const sampler = new Tone.Sampler({
urls: { C4: "piano.mp3" },
baseUrl: "/audio/",
}).toDestination();
// WRONG - may not be loaded yet
sampler.triggerAttack("C4");
// CORRECT
await Tone.loaded();
sampler.triggerAttack("C4");
Basic synths are monophonic (one note at a time):
// Only plays one note
const mono = new Tone.Synth().toDestination();
mono.triggerAttack(["C4", "E4", "G4"]); // Only C4 plays
// Plays all notes
const poly = new Tone.PolySynth(Tone.Synth).toDestination();
poly.triggerAttack(["C4", "E4", "G4"]); // All play
Notes can be specified multiple ways:
synth.triggerAttackRelease("C4", "8n"); // Pitch-octave notation
synth.triggerAttackRelease(440, "8n"); // Frequency in Hz
synth.triggerAttackRelease("A4", "8n"); // A4 = 440Hz
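Pitch-octave names map to Hz through twelve-tone equal temperament: f = 440 × 2^((m − 69) / 12), where m is the MIDI note number. A sketch of that conversion (noteToFreq is a hypothetical helper, not Tone.js's own implementation):

```javascript
// Convert "C4", "D#4", "A4" style names to frequency in Hz.
const SEMITONES = { C: 0, "C#": 1, D: 2, "D#": 3, E: 4, F: 5, "F#": 6,
                    G: 7, "G#": 8, A: 9, "A#": 10, B: 11 };

function noteToFreq(name) {
  const match = name.match(/^([A-G]#?)(-?\d+)$/);
  if (!match) throw new Error(`bad note name: ${name}`);
  // C4 is MIDI 60, so octave n starts at (n + 1) * 12
  const midi = SEMITONES[match[1]] + (Number(match[2]) + 1) * 12;
  return 440 * Math.pow(2, (midi - 69) / 12); // A4 (MIDI 69) = 440 Hz
}

console.log(noteToFreq("A4")); // 440
console.log(noteToFreq("C4").toFixed(2)); // 261.63
```

This is why the three calls above are interchangeable: "A4", 440, and the name-to-Hz conversion all land on the same frequency.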
Two timing systems exist:
Tone.now(); // AudioContext time (always running)
Tone.Transport.seconds; // Transport time (starts at 0)
// Schedule on AudioContext
synth.triggerAttackRelease("C4", "8n", Tone.now() + 1);
// Schedule on Transport
Tone.Transport.schedule((time) => {
synth.triggerAttackRelease("C4", "8n", time);
}, "1m");
ToneAudioNode (base class)
├── Source (audio generators)
│ ├── Oscillator, Player, Noise
│ └── Instrument
│ ├── Synth, FMSynth, AMSynth
│ ├── Sampler
│ └── PolySynth
├── Effect (audio processors)
│ ├── Filter, Delay, Reverb
│ ├── Distortion, Chorus, Phaser
│ └── PitchShift, FrequencyShifter
├── Component (building blocks)
│ ├── Envelope, Filter, LFO
│ └── Channel, Volume, Panner
└── Signal (parameter automation)
├── Signal, Add, Multiply
└── Scale, WaveShaper
Tone.Synth - Basic single-oscillator synth
Tone.FMSynth - Frequency modulation synthesis
Tone.AMSynth - Amplitude modulation synthesis
Tone.MonoSynth - Monophonic with filter and envelope
Tone.DuoSynth - Two-voice synth
Tone.MembraneSynth - Percussive synth
Tone.MetalSynth - Metallic sounds
Tone.NoiseSynth - Noise-based synthesis
Tone.PluckSynth - Plucked string model
Tone.Filter - Lowpass, highpass, bandpass, etc.
Tone.Reverb - Convolution reverb
Tone.Delay / Tone.FeedbackDelay - Echo effects
Tone.Distortion - Waveshaping distortion
Tone.Chorus - Chorus effect
Tone.Phaser - Phaser effect
Tone.PitchShift - Real-time pitch shifting
Tone.Compressor - Dynamic range compression
Tone.Limiter - Brick wall limiter
"4n" - Quarter note
"8n" - Eighth note
"16n" - Sixteenth note
"2m" - Two measures
"8t" - Eighth note triplet
"1:0:0" - Bars:Beats:Sixteenths
0.5 - Seconds (number)
Weekly Installs
114
Repository
GitHub Stars
1
First Seen
Jan 23, 2026
Security Audits
Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass
Installed on
opencode: 110
gemini-cli: 109
codex: 109
cursor: 108
github-copilot: 108
amp: 105
Tone.PolySynth - Polyphonic wrapper
Tone.Sampler - Multi-sample instrument