Writing DSP Code

10 comments, last by Ravyne 11 years, 1 month ago

Ravyne, on 08 Mar 2013 - 14:25, said:
Latency is super important in audio because we're far more sensitive to audio latency and audio anomalies than visual ones. Our visual system does a lot of work that we're not even aware of to reconstruct the image we perceive, and it fills in and smooths over incomplete information. In particular, low latency is essential for making audio match the visual info that we perceive in time, and for ensuring gapless playback with dynamic mixing of sound elements. Having written audio playback code on an embedded system, I can attest that playback gaps of even a few CPU cycles are quite noticeable. In rendering you can (and in fact regularly do) miss entire frames of information, but with audio you'll notice even when a 'frame' arrives late.

Yes, but playback gaps can be avoided by having a buffer. Audio response to user input needs to be quick, but that's a different problem from hearing a gap in playback; input latency ties into how precisely we perceive our own actions.

Yes, but the buffer is the rub :)

In order to have quick dynamic response, the buffer is typically very small, maybe a few hundred samples wide, and you've got to have the next one ready before the previous one runs out. You can't just repeat the last frame and catch up later like you can with video. Increase the size of the buffer and you either introduce more latency, or you have to throw it out and recompute everything whenever a new sound comes in, the environment dynamics change, and so on.
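To put a number on that deadline, here's a minimal sketch in C++ (a hypothetical fill routine, not any particular audio API; the 256-sample block and 48ksps rate are just the figures from this thread). At 48ksps, a 256-sample block gives you 256/48000 s, roughly 5.3 ms, to produce the next block, every single time; miss it once and the hardware plays stale or zeroed data, which you hear as a click or gap.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Hypothetical per-block fill routine. A real audio driver would call
    // something like this from its own thread once per block; at 48ksps a
    // 256-sample block must be ready in under ~5.3 ms.
    void fill_block(float* out, std::size_t frames, double& phase)
    {
        const double kTwoPi = 6.28318530717958647692;
        const double step = kTwoPi * 440.0 / 48000.0; // 440 Hz test tone
        for (std::size_t i = 0; i < frames; ++i) {
            out[i] = static_cast<float>(std::sin(phase));
            phase += step;
        }
    }

    int main()
    {
        std::vector<float> block(256); // "a few hundred samples wide"
        double phase = 0.0;
        for (int i = 0; i < 4; ++i)    // stand-in for the driver's callback loop
            fill_block(block.data(), block.size(), phase);
    }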

Buffers never get very big -- even at 30fps video, and 96ksps audio (which is over 2x CD quality) you've got just over 3k samples per video frame. At 60fps video and 48ksps audio, you've got just 800 samples per frame.
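Those per-frame figures are just the sample rate divided by the video frame rate; a two-line check, using nothing beyond the rates already quoted:

    #include <cstdio>

    int main()
    {
        // audio samples produced per video frame = sample rate / frame rate
        std::printf("%.0f\n", 96000.0 / 30.0); // 3200: 96ksps audio, 30fps video
        std::printf("%.0f\n", 48000.0 / 60.0); //  800: 48ksps audio, 60fps video
    }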

throw table_exception("(╯°□°)╯︵ ┻━┻");

