WebSpeech API — Human-Machine Relations 101
This is a short introduction to sound processing, speech recognition… and a lot of digressions about game development and my own life.
Digital sine waves
I’m trained as a sound engineer. Sound has always fascinated me, alongside computers. Still, I’ve never had enough time to study Digital Signal Processing or learn C++ to code some VSTs* for my DAW**.
Nevertheless, I’ve had plenty of time to work with web code professionally as part of my career, and along the way I’ve been doing a lot of side projects.
One of them was a platformer game based on real-time interpretation of an audio input (e.g., a musical track). The gameplay was simple: the player needs to jump from platform to platform and reach the last one, while the platforms bounce to mimic the sound spectrum.
To achieve that, I used Unity’s built-in FFT*** solution and an implementation tutorial from YouTube. Below, you can find part of the code responsible for gathering data on the audio spectrum.
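In essence, it boils down to a small MonoBehaviour that asks the AudioSource for a fresh FFT snapshot every frame via Unity’s AudioSource.GetSpectrumData. A minimal sketch along those lines (the class and field names here are illustrative, not the exact code from the tutorial) looks roughly like this:

```csharp
using UnityEngine;

// Illustrative sketch only; names are not taken from the original tutorial.
[RequireComponent(typeof(AudioSource))]
public class AudioPeer : MonoBehaviour
{
    private AudioSource _audioSource;

    // 512 frequency bins; GetSpectrumData requires a power of two between 64 and 8192.
    public static float[] _samples = new float[512];

    void Start()
    {
        _audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Fill the array with the current FFT snapshot of whatever the AudioSource is playing.
        _audioSource.GetSpectrumData(_samples, 0, FFTWindow.BlackmanHarris);
    }
}
```

Other scripts (the bouncing platforms, in my case) can then read the _samples array each frame and scale themselves according to the energy in their assigned frequency bands.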
*Virtual Studio Technology a.k.a. audio plug-ins
**Digital Audio Workstation, e.g. Ableton Live, Avid Pro Tools
***Fast Fourier transform