From this discussion it was noted that there are a few problems with network AudioSync generation and transmission:
latency - our audio processing code needs a continuous data stream from the audio source. Any gap, network hiccup, or out-of-order packet becomes visible as random noise in the LED effects.
network latency - we found that the WiFi stack sometimes delivers a bunch of UDP packets in bursts, with 100 ms (up to 250 ms) between bursts. This would be a major problem when transmitting audio samples directly.
For network (non-mic) sources of AudioSync packets (e.g. from a PC or another packet generator), we had discussed on Discord some years back adapting my old RealTime smoothing algorithm to handle AudioSync packets instead of the realtime pixel stream. See the notes for all the details.
The approach would be:
1. Configure your audio playback to be "behind" the stream by a tunable amount, based on the average network latency (say 50-200 ms).
2. Update the audio reactive code to use a small ring buffer to "play back" those 44-byte packets at a regular cadence (cost: being delayed by roughly half the ring-buffer depth). A depth of 5 or 6 frames is usually fine.
The latter is already quite well tuned and was working very well for the realtime protocol. AudioSync packets are even smaller, so the ring buffer should be pretty trivial; a minimal sketch of what it could look like follows below.
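Purely as an illustration, here is a minimal sketch of such a jitter-absorbing ring buffer. The 44-byte packet size comes from the discussion above; everything else (the `AudioSyncRing` name, the `push`/`pop` interface, the drop-oldest policy on overflow) is an assumption and not taken from the actual audio_reactive.h code:

```cpp
#include <cstdint>
#include <cstring>

constexpr size_t PACKET_SIZE = 44;  // AudioSync payload size mentioned above
constexpr size_t RING_DEPTH  = 6;   // 5-6 frames of buffering, ~half of that as added delay

// Hypothetical ring buffer: the UDP receive path pushes packets (possibly in
// bursts), and the effect loop pops one packet per frame at a regular cadence.
class AudioSyncRing {
public:
    // Called whenever a packet arrives from the network.
    void push(const uint8_t* payload) {
        if (count_ >= RING_DEPTH) {
            // Buffer full: drop the oldest frame so playback stays close to "live".
            readIdx_ = (readIdx_ + 1) % RING_DEPTH;
            count_--;
        }
        memcpy(slots_[writeIdx_], payload, PACKET_SIZE);
        writeIdx_ = (writeIdx_ + 1) % RING_DEPTH;
        count_++;
    }

    // Called once per effect frame. Returns false if the buffer ran dry,
    // in which case the caller should reuse the previous frame's data.
    bool pop(uint8_t* out) {
        if (count_ == 0) return false;
        memcpy(out, slots_[readIdx_], PACKET_SIZE);
        readIdx_ = (readIdx_ + 1) % RING_DEPTH;
        count_--;
        return true;
    }

    size_t depth() const { return count_; }

private:
    uint8_t slots_[RING_DEPTH][PACKET_SIZE] = {};
    size_t readIdx_ = 0, writeIdx_ = 0, count_ = 0;
};
```

In this sketch, playback would only start once the buffer is about half full, which is what produces the tunable delay described in step 1; when a burst arrives, the extra packets simply sit in the ring until subsequent frames drain them.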
How complicated do you think it would be to adapt this into audio_reactive.h?