Audio Crackling #180
Spikes in CPU happening every so often, causing glitches in audio.
Worse on startup.
This is the responsible function: …
We spoke to Charlie earlier and he mentioned that he might have noticed glitching occurring more often for …
Right now I'm slicing the function up into sections and measuring the variance between the average time each section runs for and its longest runs. Some issues I've come across: …
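As an illustration of the approach described above, a minimal sketch of sectioned timing with `std::time::Instant` might look like the following. The section names and the `println!` are placeholders for whatever the real audio_server sections and logging turn out to be:

```rust
use std::time::Instant;

// Hypothetical stand-ins for the real sections of the audio callback.
fn section_a() { /* ... */ }
fn section_b() { /* ... */ }

fn process_block() {
    let t0 = Instant::now();
    section_a();
    let t1 = Instant::now();
    section_b();
    let t2 = Instant::now();

    // Per-run duration of each section in nanoseconds.
    let a_ns = (t1 - t0).as_nanos();
    let b_ns = (t2 - t1).as_nanos();

    // In practice these would feed running average/maximum stats per section
    // so that runs far above the average stand out as spikes. Printing here
    // is only a placeholder; a real-time callback would buffer the data.
    println!("section_a: {} ns, section_b: {} ns", a_ns, b_ns);
}

fn main() {
    process_block();
}
```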
Sweet, this looks like a good approach 👍

Not sure if you've come across this yet, but this … Maybe a nice way of logging this data would be to implement …

And yep, I think the largest absolute time variance is likely what we really care about. This is probably how we will find the spikes that were showing up in the Time Profiler graph.

Also, seeing as we're working in such a high-performance domain, it might also be worth timing how long it takes to measure the time between two markers with no other code between them. This will let us know whether the I/O required to read from the system clock is significant enough to affect the profiling. Maybe then we could even subtract the average time taken by the markers themselves from the profiling results, but this probably isn't worth it unless it looks like the markers themselves are affecting the profiling.
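To check the cost of the markers themselves, as suggested above, something along these lines could work; it simply reads the clock twice with nothing in between and averages the gap (the iteration count is arbitrary):

```rust
use std::time::Instant;

fn main() {
    // Two clock reads back to back, repeated many times, to estimate how
    // long reading the system clock itself takes.
    let iters: u32 = 1_000_000;
    let mut total_ns: u128 = 0;
    for _ in 0..iters {
        let a = Instant::now();
        let b = Instant::now();
        total_ns += (b - a).as_nanos();
    }
    println!("average marker overhead: ~{} ns", total_ns / iters as u128);
}
```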
Got it working pretty well using the `line!()` macro. It seems to have a cost of roughly 0.05 nanoseconds, which should not be an issue unless it's being called a lot in a loop. The last piece of the puzzle is working out how to store the data for reading. I want to run a version of audio_server with this timing code overnight and then get some data that we can see logged to a file. This should show pretty clearly which sections are peaking periodically.
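One possible shape for a `line!()`-based marker is sketched below; the `mark!` macro name and the `Vec` used for storage are assumptions for illustration rather than the actual implementation:

```rust
use std::time::Instant;

// Records the time elapsed since the previous marker together with the
// source line number from `line!()`, so each section is identified by the
// line at which its marker appears.
macro_rules! mark {
    ($prev:ident, $log:ident) => {{
        let now = Instant::now();
        $log.push((line!(), (now - $prev).as_nanos()));
        $prev = now;
    }};
}

fn process_block() -> Vec<(u32, u128)> {
    let mut log: Vec<(u32, u128)> = Vec::new();
    let mut prev = Instant::now();
    // ... first section of the function ...
    mark!(prev, log);
    // ... second section of the function ...
    mark!(prev, log);
    // (line, nanoseconds) pairs for this run, ready to be written out.
    log
}

fn main() {
    for (line, ns) in process_block() {
        println!("line {}: {} ns", line, ns);
    }
}
```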
I think there is a CSV crate for generating tables that you can read in Google Docs or Excel, but I haven't had a play with it yet.
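If the `csv` crate from crates.io turns out to be the way to go, a minimal sketch of writing the collected pairs out could look like this; the file name and column headers are just examples:

```rust
use std::error::Error;

// Writes (line, nanoseconds) pairs to a CSV file that can be opened
// directly in Google Sheets or Excel. Assumes `csv = "1"` in Cargo.toml.
fn write_timings(rows: &[(u32, u128)]) -> Result<(), Box<dyn Error>> {
    let mut wtr = csv::Writer::from_path("timings.csv")?;
    wtr.write_record(&["line", "nanos"])?;
    for (line, nanos) in rows {
        wtr.write_record(&[line.to_string(), nanos.to_string()])?;
    }
    wtr.flush()?;
    Ok(())
}

fn main() -> Result<(), Box<dyn Error>> {
    // Dummy example rows, purely to show usage.
    write_timings(&[(42, 1_250), (57, 980)])
}
```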
Ok, finally got something to show. This section often seems to run for over 100% of the average time of the entire function.
Just ran a timing run on my Mac with more timers, so it won't be the most accurate, but it shows this section as using the most time.
Some other things that might be worth tracking simultaneously are:

- the number of sounds
- the number of channels per sound
- the number of target speakers per sound

as each of these contributes to the number of iterations in these loops. E.g. if this mixing loop that you highlight in your last comment is indeed the bottleneck, it may be because one or more sounds and their channels very occasionally come within range of many more speakers and require many more iterations. I'm not sure this reflects the behaviour though, which is more like a very occasional click/pop rather than longer periods of glitching as sounds travel through larger groups of speakers. It might be worth experimenting by reducing the …
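A rough sketch of collecting those counts per processed block is shown below; the `Sound` struct here is a placeholder invented for the example, not the real audio_server type:

```rust
// Placeholder type purely for this sketch; the real sound type will differ.
struct Sound {
    channels: Vec<Vec<f32>>,     // per-channel sample buffers
    target_speakers: Vec<usize>, // indices of speakers in range
}

#[derive(Debug, Default)]
struct BlockStats {
    sounds: usize,
    channels: usize,
    target_speakers: usize,
}

// Collects the three counts for one processed block so they can be logged
// alongside the timing data and lined up against the spikes.
fn collect_stats(sounds: &[Sound]) -> BlockStats {
    let mut stats = BlockStats { sounds: sounds.len(), ..Default::default() };
    for sound in sounds {
        stats.channels += sound.channels.len();
        stats.target_speakers += sound.target_speakers.len();
    }
    stats
}
```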
Ok, got some more information: …
I think the next step is to time inside the loop. This is a little challenging because, if you imagine the loop "unrolled", it's like a very long piece of code, so it will produce a lot of data and the calls to `Instant` may start to be significant.
To get further confirmation that this nested loop area is the culprit, perhaps we can track the total number of iterations that occur in that nested loop and graph them alongside the spikes that you're seeing (this is basically the same as tracking the number of sounds, channels per sound and target speakers per sound as I mentioned in my previous comment). The most interesting way might be to add a …
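To make the iteration-count idea concrete, here is a hedged sketch in which the loop structure and types are simplified placeholders rather than the real mixing code:

```rust
use std::time::Instant;

// Counts the total number of inner iterations for one block and returns it
// alongside the block's elapsed time, so iteration spikes can be graphed
// against the timing spikes. `sounds` holds per-channel sample buffers and
// `speakers` is the output mix; both are simplified placeholders.
fn mix_block(sounds: &[Vec<f32>], speakers: &mut [f32]) -> (u64, u128) {
    let start = Instant::now();
    let mut iterations: u64 = 0;
    for channel in sounds {
        for &sample in channel {
            for out in speakers.iter_mut() {
                *out += sample; // stand-in for the real distance/gain maths
                iterations += 1;
            }
        }
    }
    (iterations, start.elapsed().as_nanos())
}

fn main() {
    let sounds = vec![vec![0.0f32; 64]; 2]; // dummy input: 2 channels, 64 samples
    let mut speakers = vec![0.0f32; 8];     // dummy output: 8 speakers
    let (iters, ns) = mix_block(&sounds, &mut speakers);
    println!("{} iterations in {} ns", iters, ns);
}
```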
Some good information on Rust and cache misses: …
Could it be that in this line: …