AndiSynth Web Audio Synthesizer v2.0
Signal Flow
OSCILLATOR (OscillatorNode) → FILTER (BiquadFilterNode) → GAIN (GainNode) → 🔊 output
Under the Hood

The Web Audio API is built around the concept of an AudioContext, which serves as the central hub for all audio operations in the browser. When you create an AudioContext, the browser allocates a dedicated, high-priority audio processing thread that runs independently of the main JavaScript thread. This architecture ensures that audio rendering continues smoothly even if the UI is busy handling other tasks, delivering sample-accurate timing at the hardware's native sample rate (typically 44,100 or 48,000 samples per second).

Audio processing in the Web Audio API follows a node graph paradigm. You create individual audio nodes—oscillators, filters, gain controls, analysers—and connect them together to form a signal chain. Each node performs a specific DSP (digital signal processing) operation on the audio stream passing through it. The final node in any chain connects to the AudioContext's destination, which represents the system's audio output (speakers or headphones). This modular approach mirrors the architecture of hardware modular synthesizers, where physical cables patch one module's output into another module's input.
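The node-graph wiring described above can be sketched as a small function. This is an illustrative sketch, not this synthesizer's actual source: the function name `buildVoice` and the parameter values are assumptions, while `createOscillator`, `createBiquadFilter`, `createGain`, `connect`, and `destination` are real Web Audio API members.

```javascript
// Sketch of an oscillator → filter → gain → destination chain.
// Assumes a Web Audio AudioContext (browser environment).
function buildVoice(ctx) {
  const osc = ctx.createOscillator();      // sound source
  const filter = ctx.createBiquadFilter(); // tone shaping
  const gain = ctx.createGain();           // volume / envelope stage

  osc.type = "sawtooth";
  osc.frequency.value = 261.63;            // C4
  filter.type = "lowpass";
  filter.frequency.value = 2000;

  // Patch the modules together, like cables on a modular synth.
  osc.connect(filter);
  filter.connect(gain);
  gain.connect(ctx.destination);

  return { osc, filter, gain };
}

// Usage (in a browser):
// const ctx = new AudioContext();
// const voice = buildVoice(ctx);
// voice.osc.start();
```

Note that `connect()` only establishes routing; no sound is produced until `osc.start()` is called.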

One of the most powerful aspects of the Web Audio API is its support for real-time parameter automation. Rather than setting a parameter to a fixed value, you can schedule precise changes over time using methods like linearRampToValueAtTime() and exponentialRampToValueAtTime(). These scheduled changes execute on the audio thread with sub-millisecond accuracy, enabling effects like smooth filter sweeps, volume fades, and the attack-decay-sustain-release envelopes that give synthesized sounds their characteristic shape.
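A filter sweep scheduled with these automation methods might look like the sketch below. The function name and the default values are illustrative assumptions; `cancelScheduledValues`, `setValueAtTime`, and `exponentialRampToValueAtTime` are real `AudioParam` methods, and `now` is a time in seconds on the AudioContext clock (`ctx.currentTime`).

```javascript
// Sketch of scheduled parameter automation on any AudioParam,
// e.g. filter.frequency. Times are in seconds on the audio clock.
function scheduleFilterSweep(param, now, from = 200, to = 4000, duration = 2.0) {
  param.cancelScheduledValues(now); // clear any pending automation
  param.setValueAtTime(from, now);  // anchor the ramp's start point
  // An exponential ramp suits frequency sweeps (pitch perception is
  // logarithmic), but both endpoints must be strictly positive.
  param.exponentialRampToValueAtTime(to, now + duration);
}
```

Anchoring the start value with `setValueAtTime()` before the ramp matters: a ramp method only defines the endpoint, and without an explicit start event the ramp begins from the previous scheduled value, which may not be what you expect.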

An oscillator is the fundamental sound source in any synthesizer. The Web Audio API's OscillatorNode generates a periodic waveform at a specified frequency, and it supports four built-in waveform types, each with a distinct harmonic profile. A sine wave is the purest tone possible—it contains only the fundamental frequency with no overtones. It sounds smooth and flute-like, making it ideal for sub-bass layers and clean tonal references. A square wave contains only odd-numbered harmonics (1st, 3rd, 5th, 7th, ...) at amplitudes of 1/n, giving it a hollow, reedy quality reminiscent of clarinets or chiptune music.

A sawtooth wave is the richest of the standard waveforms, containing all harmonics (both odd and even) at amplitudes of 1/n. This dense harmonic content produces a bright, buzzy sound that forms the backbone of classic analog synth patches—it is the starting point for the iconic Moog bass sound and lush string pads. Because it contains every harmonic, a sawtooth wave responds dramatically to filtering: sweeping a low-pass filter across a sawtooth creates the characteristic "wah" effect heard in countless electronic tracks. A triangle wave sits between the sine and square in terms of harmonic complexity. Like the square wave, it contains only odd harmonics, but their amplitudes fall off as 1/n², resulting in a warmer, softer sound that works well for mellow leads and gentle bass tones.
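The harmonic recipes of the four waveforms can be captured in a small pure function. This is a math sketch of the amplitude rules stated above (ignoring phase and overall scaling); the function name is an illustrative assumption, not part of the Web Audio API.

```javascript
// Amplitude of harmonic n for each standard waveform (fundamental = n 1).
function harmonicAmplitude(wave, n) {
  switch (wave) {
    case "sine":     return n === 1 ? 1 : 0;               // fundamental only
    case "square":   return n % 2 === 1 ? 1 / n : 0;       // odd harmonics, 1/n
    case "sawtooth": return 1 / n;                         // all harmonics, 1/n
    case "triangle": return n % 2 === 1 ? 1 / (n * n) : 0; // odd harmonics, 1/n²
    default: throw new Error(`unknown waveform: ${wave}`);
  }
}
```

Comparing the triangle and square at the 7th harmonic (1/49 vs 1/7) makes the difference in brightness concrete: the triangle's upper harmonics are far weaker, which is why it sounds so much softer.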

Beyond these four primitives, the Web Audio API allows you to define custom waveforms using createPeriodicWave(), where you specify the amplitude and phase of each harmonic directly via Fourier coefficients. This opens the door to additive synthesis, organ-style tone generation, and wavetable-based sound design. In this synthesizer, changing the waveform type immediately alters the raw harmonic material that then flows through the filter and envelope stages, so the waveform selection is the single most impactful tone-shaping decision in the signal chain.
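Building the Fourier coefficient arrays for `createPeriodicWave()` might look like this sketch for a band-limited sawtooth (every harmonic at amplitude 1/n, matching the recipe above). The helper name is an illustrative assumption; `createPeriodicWave` and `setPeriodicWave` are real Web Audio API methods that take cosine (`real`) and sine (`imag`) term arrays, with index 0 being the DC offset.

```javascript
// Fourier coefficients for a band-limited sawtooth wave.
// real[] holds cosine terms, imag[] holds sine terms; index 0 is DC.
function sawtoothCoefficients(numHarmonics) {
  const real = new Float32Array(numHarmonics + 1); // all cosine terms zero
  const imag = new Float32Array(numHarmonics + 1);
  for (let n = 1; n <= numHarmonics; n++) {
    imag[n] = 1 / n; // every harmonic at amplitude 1/n
  }
  return { real, imag };
}

// Usage (in a browser):
// const { real, imag } = sawtoothCoefficients(32);
// osc.setPeriodicWave(ctx.createPeriodicWave(real, imag));
```

Limiting the harmonic count also controls aliasing: harmonics above half the sample rate fold back as inharmonic noise, so a band-limited table is often preferable to an ideal waveform.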

Filters are subtractive synthesis tools that sculpt a sound by attenuating frequencies above, below, or around a cutoff point. The Web Audio API's BiquadFilterNode supports several filter types. A low-pass filter allows frequencies below the cutoff to pass through while progressively attenuating higher frequencies—this is the most common filter in synthesis, used to tame brightness and create warmth. A high-pass filter does the opposite, removing low-frequency content to thin out a sound or eliminate rumble. A band-pass filter combines both behaviors, allowing only a narrow band of frequencies around the cutoff to pass, which creates a focused, vocal-like quality. Each filter type also has a Q (resonance) parameter that boosts frequencies right at the cutoff point, adding emphasis and character—at very high Q values the filter rings audibly at the cutoff, approaching the self-oscillation of a driven analog filter.
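For a band-pass filter, Q and bandwidth are directly related: Q is the center frequency divided by the -3 dB bandwidth, so a higher Q means a narrower band. The helper below is a pure-math sketch of that relation; the function name is an illustrative assumption, while `type`, `frequency`, and `Q` are real BiquadFilterNode properties.

```javascript
// Q = centerFrequency / bandwidth for a band-pass filter:
// higher Q → narrower band around the center frequency.
function bandpassQ(centerHz, bandwidthHz) {
  return centerHz / bandwidthHz;
}

// Usage (in a browser):
// filter.type = "bandpass";
// filter.frequency.value = 1000;
// filter.Q.value = bandpassQ(1000, 100); // Q = 10 → a tight, vocal band
```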

The ADSR envelope (Attack, Decay, Sustain, Release) defines how a sound's amplitude evolves over time from the moment a note is triggered to the moment it fades to silence. Attack is the time it takes for the sound to rise from zero to its peak level after a key is pressed—a short attack (under 10ms) produces a percussive, immediate onset, while a long attack (500ms or more) creates a gradual swell like a bowed string. Decay is the time it takes to fall from the peak level down to the sustain level, adding a transient "punch" at the start of the note. Sustain is not a time value but a level—it sets the steady-state amplitude that persists as long as the key is held down.

Release controls how long the sound takes to fade from the sustain level to silence after the key is released. A short release (50-100ms) makes notes stop cleanly, suitable for staccato playing, while a long release (1-2 seconds) creates a lingering, reverberant tail. In this synthesizer, the ADSR envelope is implemented using the Web Audio API's parameter automation: linearRampToValueAtTime() handles the attack and decay phases, setValueAtTime() locks in the sustain level, and exponentialRampToValueAtTime() provides the natural-sounding exponential fade of the release phase, targeting a small non-zero floor, since an exponential ramp can never reach exactly zero. Together, the filter and ADSR envelope transform the raw oscillator output into an expressive, dynamic instrument.
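The ADSR scheduling described above can be sketched as a single function applied to a GainNode's `gain` AudioParam. The function name, parameter names, and the floor constant are illustrative assumptions (not necessarily this synthesizer's actual code); the three scheduling methods are the real AudioParam API named in the text.

```javascript
// Sketch of an ADSR envelope on a gain AudioParam.
// noteOn/noteOff are times in seconds on the audio clock; the small
// floor is needed because an exponential ramp cannot target exactly zero.
function applyADSR(gainParam, noteOn, noteOff, { attack, decay, sustain, release }) {
  const FLOOR = 0.0001;
  gainParam.setValueAtTime(0, noteOn);                                 // start silent
  gainParam.linearRampToValueAtTime(1, noteOn + attack);               // attack to peak
  gainParam.linearRampToValueAtTime(sustain, noteOn + attack + decay); // decay to sustain
  gainParam.setValueAtTime(sustain, noteOff);                          // hold until release
  gainParam.exponentialRampToValueAtTime(FLOOR, noteOff + release);    // release tail
}

// Usage (in a browser), assuming `gain` from the voice-building sketch:
// const now = ctx.currentTime;
// applyADSR(gain.gain, now, now + 1.0,
//           { attack: 0.01, decay: 0.15, sustain: 0.7, release: 0.3 });
```

In a real synth, `noteOff` is usually not known in advance; the attack/decay events are scheduled on key-down and the release events on key-up, with `cancelScheduledValues()` guarding against overlapping envelopes on the same voice.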