Is there a way to get the precise time that each period is played? #118
Getting reliable information about time is often quite hard. AFAIU, in the non-audio thread, you should use `frame_time`.

I'm not sure what exactly you need; can you describe that in more detail?
This sounds wrong. The "period" should be equal to the "blocksize"; it's the same thing. It seems like your options got mixed up: I guess you wanted the ALSA backend option `-p`/`--period`, but before `-dalsa`, `-p` is the server option `--port-max`. Using the long options surely would reduce potential confusion, but either way you have to put the ALSA options after `-dalsa`. I guess you need something like this (I removed a few options where I didn't know what they are supposed to do):
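(The command suggested at this point in the thread was lost in extraction. A plausible invocation matching the values discussed later — blocksize 1024, 3 periods — might look like the following; the device name `hw:sndrpihifiberry` is an assumption, check yours with `aplay -l`.)

```sh
jackd --realtime -dalsa --device hw:sndrpihifiberry --rate 44100 --period 1024 --nperiods 3
```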
BTW, did you try to use …? And using …?
Thanks @mgeier for this information! Re the command line options: I received these values from a colleague, and I now understand that I was misinterpreting them; specifically, I didn't understand that `-p` has a different meaning before and after `-dalsa`. In any case, I am using a blocksize/period of 1024 frames and `--nperiods` of 3. I think these values are fine for me. I don't actually need particularly low latency; what I need is precise information about when sounds were played (see below).
Sure! In this experiment, I am playing intermittent sounds (~10 ms white noise bursts, repeated a few times a second) and measuring brain responses with a technique called EEG. My goal is to synchronize the two streams to within about 0.5 ms. Specifically, for each sound that was played, I need to know which sample in the brain data was taken at the time that the sound started playing.

For playing sounds, I use a Raspberry Pi running jack with a HiFiBerry Amp2 audio output. So somehow, I need to know which sample in the EEG corresponds to the time that jack started playing sound.

Approach 1 (software): store the time on the Raspberry Pi system clock at which jack played each sound. To do this I need to convert between jack's frame times and the system clock.

Thus I am wondering what would be the best way to get the clock time at which each period is played, or alternatively to set a callback with jack that I could use to pulse a pin every time a period is played.
Agreed! Thanks for any suggestions about what to try first, and I will use the oscilloscope to verify it.
I'm not sure, but it might be hard to get to a reliable 0.5 ms accuracy with this approach.
Do you have a spare audio channel? You could try to generate a second audio signal, wire it to one of the inputs of the Teensy, and try to detect it there?
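To illustrate the spare-channel idea (a sketch, not from the thread — the function name and parameters are illustrative): write a full-scale rectangular pulse into a second output buffer at the same frame offset where the noise burst starts, so the recording side can threshold-detect the onset.

```python
import random

def make_stimulus_and_sync(blocksize, onset, burst_len, pulse_len=32):
    """Return (stimulus, sync) buffers of length `blocksize`.

    The stimulus channel carries a white-noise burst starting at frame
    `onset`; the sync channel carries a full-scale rectangular pulse
    aligned to the same frame, for threshold detection downstream.
    """
    stimulus = [0.0] * blocksize
    sync = [0.0] * blocksize
    for i in range(onset, min(onset + burst_len, blocksize)):
        stimulus[i] = random.uniform(-1.0, 1.0)
    for i in range(onset, min(onset + pulse_len, blocksize)):
        sync[i] = 1.0
    return stimulus, sync
```

Because both buffers are played by the same jack client in the same cycle, the pulse and the burst share one sample clock, which is exactly what sub-millisecond alignment needs.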
Thanks @mgeier for the suggestion! The HiFiBerry has two audio channels and I'm using both of them already. However, I wonder if I could connect a GPIO as another output from jack; not sure if that's possible, but I'll look into it. If I get quantitative results about the jitter I get with my first approach, I will come back and post them here for future reference, in case it is useful for others.
Hello, I have been using this module for an auditory neuroscience experiment in my research lab, and I'm trying to increase the temporal precision of my results. Basically, I'm running jackclient-python on a raspberry pi with a Hifiberry audio card (all part of the Autopilot project). We play sounds separated by silence, and record auditory responses from hearing regions in the brain to those sounds. This is similar to hearing tests at the audiologist.
To make this experiment work, I would like to know with sub-millisecond precision the exact time that each sound comes out of the speaker. Of course I can use an oscilloscope to measure this, but it is bulky, and it seems like there should be some way to get the information I need directly from jack.
I am starting jackd like this:
I think this means 3 periods of playback latency. While this call seems to set the length of the period to 16 frames, I think I am actually using periods of 1024 frames (because the `blocksize` parameter of my `jack.Client` is 1024); this must be set by the sound card.

Here is some pseudocode for what is going on in my process callback right now:
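(The pseudocode block itself was lost in extraction. Based on the three variables named below, the callback presumably did something like the following sketch; the helper name is an assumption, and it is written so it works against any object exposing the same attributes as `jack.Client`.)

```python
import time

def make_timing_logger(client, log):
    """Build a process callback that records timing info each cycle.

    `client` can be a jack.Client (or anything exposing the same
    attributes); `log` is a list collecting one dict per cycle.
    """
    def process(frames):
        log.append({
            "last_frame_time": client.last_frame_time,
            "frames_since_cycle_start": client.frames_since_cycle_start,
            "now": time.time(),
        })
    return process
```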
I log these three timing variables (`last_frame_time`, `frames_since_cycle_start`, and `now`) for every process call. Using these data, I think I can calculate offline the approximate relationship between frame times and the system clock. That way, I can calculate what time it was on the system clock at the beginning of each period (i.e., when the process callback was called). Finally, I think I can assume that sound comes out of the speaker 3 periods later.

I am looking for guidance: am I thinking about this correctly? If so, then my precision will be limited by the accuracy of `frames_since_cycle_start`, which I know is only approximate, and by the latency between getting that estimate and reading the system clock. Is there a better way to get the precise time that the sound in each period comes out of the speaker? Maybe there is a way to directly sample the audio clock on the Raspberry Pi, if I can figure out which pin it is on. Thanks for any tips!

edit: more info, this is what I see when I start `jackd`, which includes version information and parameter settings
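The offline mapping between frame times and the system clock described above can be sketched as an ordinary least-squares line through the logged (`last_frame_time`, `now`) pairs (an illustration with synthetic data; the function names are assumptions, not from the thread):

```python
def fit_clock_map(frame_times, wall_times):
    """Least-squares line wall ~= a * frame + b through logged pairs."""
    n = len(frame_times)
    mf = sum(frame_times) / n
    mw = sum(wall_times) / n
    num = sum((f - mf) * (w - mw) for f, w in zip(frame_times, wall_times))
    den = sum((f - mf) ** 2 for f in frame_times)
    a = num / den          # seconds per frame (~1/samplerate)
    b = mw - a * mf        # wall-clock time of frame 0
    return a, b

def frame_to_wall(frame, a, b):
    """System-clock estimate for an arbitrary frame number."""
    return a * frame + b
```

With a blocksize of 1024 and `--nperiods 3`, the moment a period actually reaches the DAC would then be roughly `frame_to_wall(last_frame_time + 3 * 1024, a, b)`, plus any converter latency the card adds; the residuals of the fit give a concrete estimate of the jitter this approach achieves.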