Ribbit uses WebAssembly (WASM) to run high-performance digital signal processing (DSP) code in web browsers. The implementation uses Emscripten to compile C++ DSP libraries into WebAssembly, providing near-native performance for audio encoding/decoding operations.
The WebAssembly implementation consists of two files:

- ribbit.js (16KB) - Emscripten-generated JavaScript loader and runtime
- ribbit.wasm (103KB) - Compiled WebAssembly binary containing DSP algorithms

The Ribbit protocol caches a unique 80-bit Message ID for each transmission to handle deduplication and metadata. The packed ID consists of:
| Component | Bits | Description |
|---|---|---|
| Callsign | 48 | Alphanumeric sender identifier (8 chars × 6 bits) |
| Timestamp | 31 | Unix timestamp packed with custom epoch |
| Emergency | 1 | High-priority flag (0 = Normal, 1 = Emergency) |
| Total | 80 | Unique message identifier |
This 80-bit structure is critical for the “Contest Mode” and general message handling, ensuring that repeat transmissions of the same message are identified correctly.
In the Web UI, this ID is presented as a 20-character hexadecimal string (e.g., 4B4F36...), letting operators visually verify message uniqueness and trace specific transmissions in logs.
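The bit packing described above can be sketched as follows. The 6-bit character table, field order, and epoch handling here are illustrative assumptions, not the actual Ribbit layout; only the field widths (48 + 31 + 1 = 80 bits) come from the table above.

```javascript
// Sketch: packing an 80-bit Message ID into a 20-character hex string.
// CHARSET is a hypothetical 6-bit alphabet chosen for illustration.
const CHARSET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-/.";

function packMessageId(callsign, timestamp, emergency) {
  let id = 0n;
  // 48 bits: 8 characters x 6 bits each
  for (const ch of callsign.padEnd(8, " ").slice(0, 8)) {
    id = (id << 6n) | BigInt(Math.max(0, CHARSET.indexOf(ch)));
  }
  // 31 bits: timestamp (custom-epoch adjustment assumed done by the caller)
  id = (id << 31n) | BigInt(timestamp & 0x7fffffff);
  // 1 bit: emergency flag
  id = (id << 1n) | (emergency ? 1n : 0n);
  return id.toString(16).toUpperCase().padStart(20, "0"); // 80 bits = 20 hex chars
}
```

Because the emergency flag occupies the lowest bit, two otherwise identical transmissions still produce distinct IDs when one is flagged as emergency.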
You can verify the Message ID generation and test the full message format using the Message Format Demo.
Location: web/message_format_demo.html
> [!IMPORTANT]
> Local Server Required: Due to browser security restrictions on WebAssembly and file access, you cannot open this file directly (e.g., file://...). You must run a local HTTP server.
How to Run:
python3 -m http.server 8000
Then open http://localhost:8000/web/message_format_demo.html in your browser.
The following diagram illustrates how audio data flows from the browser’s microphone input through the WebAssembly/Ribbit processing pipeline to decode messages.
graph TD
subgraph "Browser / JavaScript Layer"
Mic[Microphone Input] -->|Audio Stream| WebAudio[Web Audio API]
WebAudio -->|"Float32 Array (2048 samples)"| JS_Feed["JS: Feed Buffer"]
JS_Feed -->|Write to WASM Heap| WASM_Mem["WASM Memory: feed[]"]
end
subgraph "WebAssembly Interface (ribbit.cc)"
WASM_Mem -->|Call _digestFeedOptimized| Digest["digestFeedOptimized()"]
Digest -->|Accumulate| Overflow["Overflow Buffer"]
Overflow -->|"Chunk Ready? (160 samples)"| FeedFunc["feedDecoder()"]
end
subgraph "DSP Core (C++)"
FeedFunc -->|"Call decoder->feed()"| DSP_Feed["Decoder::feed()"]
DSP_Feed -->|Signal Processing| DSP_Algo{"Message Detected?"}
DSP_Algo -- No --> Wait["Wait for more samples"]
DSP_Algo -- Yes --> Fetch["Decoder::fetch()"]
end
subgraph "Message Extraction"
Fetch -->|Write Bytes| Payload["WASM Memory: payload[]"]
Payload -->|Callback| JS_Callback["JS: fetchDecoded()"]
JS_Callback -->|UTF8ToString| UserMsg["User Interface: Display Message"]
end
classDef cpp fill:#f9f,stroke:#333,stroke-width:2px;
classDef js fill:#ff9,stroke:#333,stroke-width:2px;
class Digest,FeedFunc,Overflow,Payload cpp;
class Mic,WebAudio,JS_Feed,JS_Callback,UserMsg js;
- digestFeedOptimized(): The main entry point for audio data. It handles the mismatch between Web Audio API buffer sizes (typically 2048 or 4096 samples) and the Ribbit decoder's internal requirement (160 samples) by buffering incoming data into an overflow array.
- feedDecoder(): Triggered automatically by digestFeedOptimized whenever 160 samples (20 ms at 8 kHz) have accumulated. This ensures the DSP core receives a consistent stream of data.
- Decoder::feed(): The core C++ signal-processing function. It runs the FFT and demodulation algorithms and returns true only when a complete message has been successfully detected and decoded.
- fetchDecoded(): A JavaScript callback triggered immediately upon message detection. It notifies the web app to read the payload buffer and display the message.

Always use the Emscripten Module API - never load WASM directly:
<!-- Include the Emscripten-generated JavaScript -->
<script src="scripts/ribbit.js"></script>
<script src="scripts/your_app.js"></script>
// your_app.js - Correct loading
let Module;
// Set up required callbacks before loading
window.encoderCreated = () => {
console.log("Encoder initialized");
};
window.decoderCreated = () => {
console.log("Decoder initialized");
};
// Load the module
Module().then((moduleInstance) => {
console.log("✓ WASM module loaded successfully");
// Now you can use the module
moduleInstance._createEncoder();
moduleInstance._createDecoder();
}).catch((error) => {
console.error("✗ WASM loading failed:", error);
});
// DON'T DO THIS - Direct WASM instantiation fails
const response = await fetch('./scripts/ribbit.wasm');
const wasmBinary = await response.arrayBuffer();
const result = await WebAssembly.instantiate(wasmBinary, {
env: { /* imports */ },
wasi_snapshot_preview1: { /* imports */ }
});
// Load the module
const module = await Module();
// Create encoder and decoder
module._createEncoder();
module._destroyEncoder(); // Cleanup when done
module._createDecoder();
module._destroyDecoder(); // Cleanup when done
// Allocate and free memory
const ptr = module._malloc(sizeInBytes);
module._free(ptr);
// Access memory buffers
const buffer = module.HEAP8.subarray(ptr, ptr + length);
// Initialize encoder with message data
module._initEncoder(messagePtr, messageLength);
// Render the encoded audio signal
module._readEncoder();
// Locate the signal buffer
const signalPtr = module._signal_pointer();
const signalLength = module._signal_length();
// Access signal buffer
const signalBuffer = module.HEAPF32.subarray(
signalPtr / 4, // Float32 offset
(signalPtr + signalLength * 4) / 4
);
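The encoder snippets above can be wrapped into a single helper. This is a sketch assuming the exported names shown in this section; encodeMessage itself is a hypothetical wrapper, not part of the Ribbit API.

```javascript
// Sketch: encode a text message and return the audio signal as a Float32Array.
function encodeMessage(module, text) {
  // Allocate a NUL-terminated UTF-8 copy of the message in the WASM heap
  const maxBytes = module.lengthBytesUTF8(text) + 1;
  const msgPtr = module._malloc(maxBytes);
  module.stringToUTF8(text, msgPtr, maxBytes);
  // Initialize the encoder and render the signal into the signal buffer
  module._initEncoder(msgPtr, maxBytes - 1);
  module._readEncoder();
  const signalPtr = module._signal_pointer();
  const signalLength = module._signal_length();
  // Copy out of the WASM heap so the data survives later _malloc/_free calls
  const signal = module.HEAPF32.slice(signalPtr / 4, signalPtr / 4 + signalLength);
  module._free(msgPtr);
  return signal;
}
```

Copying with slice (rather than keeping a subarray view) matters here: heap views can be invalidated if WASM memory grows, and the buffer contents can be overwritten by the next encode.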
// Feed audio data to decoder
module._feedDecoder(audioPtr, audioLength);
// Process buffered audio
const result = module._digestFeed();
// Get decoded message
const messagePtr = module._message_pointer();
const messageLength = module._message_length();
const message = module.UTF8ToString(messagePtr, messageLength);
// Use optimized digest function when available
const result = module._digestFeedOptimized();
// Convert between JavaScript strings and C strings
const text = "Hello World";
const maxBytes = module.lengthBytesUTF8(text) + 1; // +1 for the NUL terminator
const cStringPtr = module._malloc(maxBytes);
module.stringToUTF8(text, cStringPtr, maxBytes); // stringToUTF8 writes into pre-allocated memory
const jsString = module.UTF8ToString(cStringPtr);
module._free(cStringPtr); // Don't forget to free allocated strings
| Buffer | Size | Purpose |
|---|---|---|
| Feed Buffer | 2048 samples | Incoming audio chunks |
| Message Buffer | 256 bytes | Decoded text message |
| Signal Buffer | 16384 samples | Encoded audio output |
| Payload Buffer | 256 bytes | Message payload data |
// Different typed array views of the same heap
const heapU8 = module.HEAPU8; // Uint8Array
const heapU32 = module.HEAPU32; // Uint32Array
const heapF32 = module.HEAPF32; // Float32Array
const heapF64 = module.HEAPF64; // Float64Array
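The index arithmetic these views require (byte pointer divided by element size) is easy to get wrong. Here is a minimal standalone illustration using a plain ArrayBuffer in place of the WASM heap; the pointer value is made up for the example.

```javascript
// A plain ArrayBuffer standing in for the WASM heap (illustration only)
const heapBuffer = new ArrayBuffer(1024);
const HEAPU8 = new Uint8Array(heapBuffer);
const HEAPF32 = new Float32Array(heapBuffer);

// Suppose _malloc returned byte offset 64 for a 4-float buffer
const ptr = 64;
HEAPF32.set([0.5, -0.5, 1.0, 0.0], ptr / 4); // Float32 index = byte offset / 4
const view = HEAPF32.subarray(ptr / 4, ptr / 4 + 4);
// The same bytes are also visible through HEAPU8 starting at byte offset 64,
// because every HEAP* view shares one underlying buffer.
```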
Set these global callbacks before loading the module:
window.encoderCreated = () => {
console.log("Encoder ready");
};
window.decoderCreated = () => {
console.log("Decoder ready");
};
Critical Concept: Ribbit requires continuous audio streaming for real-time message detection. Messages can arrive at any time from other users, so your application must maintain a constant audio stream from the microphone to the decoder.
// 1. Request microphone access
const stream = await navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false, // Important: disable for radio audio
noiseSuppression: false, // Important: preserve signal integrity
autoGainControl: false, // Important: maintain original levels
sampleRate: 8000 // Required: Ribbit needs 8000 Hz
}
});
// 2. Create audio context and processing chain
const audioContext = new AudioContext({ sampleRate: 8000 });
const source = audioContext.createMediaStreamSource(stream);
const processor = audioContext.createScriptProcessor(2048, 1, 1); // deprecated but widely supported; AudioWorklet is the modern alternative
// 3. Set up continuous processing callback
processor.onaudioprocess = (event) => {
const inputBuffer = event.inputBuffer;
const audioData = inputBuffer.getChannelData(0); // Get mono channel
// Feed this audio chunk to WASM decoder immediately
feedAudioChunkToDecoder(audioData);
// Check for decoded messages after each chunk
checkForDecodedMessages();
};
// 4. Connect the audio processing chain
source.connect(processor);
processor.connect(audioContext.destination);
// The onaudioprocess callback will fire repeatedly (~4 times per second with a 2048-sample buffer at 8 kHz)
// providing a continuous stream of audio data to the decoder
function feedAudioChunkToDecoder(audioData) {
// Allocate memory for audio chunk
const audioPtr = module._malloc(audioData.length * 4); // Float32 = 4 bytes
module.HEAPF32.set(audioData, audioPtr / 4);
// Feed audio chunk to decoder
module._feedDecoder(audioPtr, audioData.length);
// Process any complete chunks in the decoder buffer
const result = module._digestFeedOptimized();
// Free the allocated memory
module._free(audioPtr);
// Check if a message was decoded
if (result >= 0) {
extractAndProcessMessage();
}
}
// This function is called continuously as audio chunks arrive
// Messages can be detected at any time, not just at chunk boundaries
The decoder uses internal buffers to handle audio chunks:
// Buffer sizes (defined in ribbit.cc)
const FEED_LENGTH = 2048; // Audio chunk size from Web Audio API
const CHUNK_LENGTH = 160; // Fixed decoder input size
// Overflow buffer handles the difference automatically
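The buffering described above can be sketched in plain JavaScript. This is an illustration of the overflow-buffer idea, not the actual ribbit.cc implementation.

```javascript
// Accumulate arbitrary-size Web Audio chunks and emit fixed 160-sample chunks.
const CHUNK_LENGTH = 160;
let overflow = new Float32Array(0);

function digestFeed(samples, emitChunk) {
  // Append new samples to whatever is left over from the previous call
  const merged = new Float32Array(overflow.length + samples.length);
  merged.set(overflow, 0);
  merged.set(samples, overflow.length);
  // Emit as many full 160-sample chunks as possible
  let offset = 0;
  while (merged.length - offset >= CHUNK_LENGTH) {
    emitChunk(merged.subarray(offset, offset + CHUNK_LENGTH));
    offset += CHUNK_LENGTH;
  }
  // Keep the remainder for the next call
  overflow = merged.slice(offset);
}
```

For example, a 2048-sample Web Audio buffer yields twelve 160-sample chunks with 128 samples carried over to the next call (2048 = 12 × 160 + 128).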
Keep onaudioprocess active at all times so that incoming messages can be received.

// Handle audio context state changes
async function ensureAudioContextRunning() {
if (audioContext.state === 'suspended') {
await audioContext.resume();
}
}
// Resume audio when user interacts with the page
document.addEventListener('click', ensureAudioContextRunning);
document.addEventListener('touchstart', ensureAudioContextRunning);
Cause: Trying to load WASM directly instead of using the Emscripten Module API
Solution: Use the Module() function from ribbit.js

Cause: CORS issues or missing files
Solution: Use a local web server and check file paths

Cause: Calling a function without the underscore prefix
Solution: All exported functions need a _ prefix: module._createEncoder()

Cause: Accessing freed memory or buffer overflow
Solution: Check pointer validity and buffer sizes
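Since unmatched _malloc()/_free() pairs are a common source of these memory errors, a small scoped-allocation wrapper can help. A sketch; withMalloc is a hypothetical helper, not part of the Ribbit API.

```javascript
// Sketch: allocate, run a callback with the pointer, and always free.
function withMalloc(module, sizeInBytes, fn) {
  const ptr = module._malloc(sizeInBytes);
  try {
    return fn(ptr);
  } finally {
    module._free(ptr); // freed even if fn throws, preventing leaks
  }
}
```

Usage: `withMalloc(module, audioData.length * 4, (ptr) => { module.HEAPF32.set(audioData, ptr / 4); module._feedDecoder(ptr, audioData.length); });`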
- Verify ribbit.js and ribbit.wasm exist in web/scripts/
- Rebuild with build.bat (Windows) or a manual emcc command
- Run run_tests.sh to start the local server
- Check EXPORTED_FUNCTIONS in the build script
- Match every _malloc() call with a _free()

// Get pointer to internal buffer
const feedPtr = module._feed_pointer();
const feedLength = module._feed_length();
// Create view of the buffer
const feedBuffer = module.HEAPF32.subarray(
feedPtr / 4,
(feedPtr + feedLength * 4) / 4
);
// Modify buffer directly
feedBuffer.set(audioData);
// Set custom callbacks
window.onProgress = (percent) => {
console.log(`Processing: ${percent}%`);
};
window.onError = (message) => {
console.error("DSP Error:", message);
};
// Create Web Audio context
const audioContext = new AudioContext({ sampleRate: 8000 });
// Get encoded signal from WASM
const signalPtr = module._signal_pointer();
const signalLength = module._signal_length();
const signalBuffer = module.HEAPF32.subarray(
signalPtr / 4,
(signalPtr + signalLength * 4) / 4
);
// Play the audio
const audioBuffer = audioContext.createBuffer(1, signalLength, 8000);
audioBuffer.copyToChannel(signalBuffer, 0); // copyToChannel writes data into the AudioBuffer
const source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(audioContext.destination);
source.start();