Why do we need timestamp precision?
We don't need it. The real question is when and how not having timestamp precision might lead to problems.
Let's ground the question with an example of the kind we're actually thinking of when talking about timestamp precision. A system is acquiring audio data at a rate of 44100 samples per second, reading chunks of 2048 samples off the sensor on a regular basis, and putting them into a buffer. In order to timestamp this data we could:
- coarse: couple each chunk with a timestamp corresponding to the system time, requested at read time
- precise: record the time once when we start a session and keep track of how many chunks have been written to the buffer
Obviously, these two options could take different forms and variations. Further, we're assuming a stable sample rate, fixed-size chunks, continuity (no samples lost without the system knowing it), etc.
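To make the two options concrete, here is a minimal Python sketch under the assumptions above (stable sample rate, fixed chunk size, nothing lost). The names here, including `read_chunk`, are hypothetical stand-ins for whatever the actual system uses.

```python
import time

SAMPLE_RATE = 44_100                       # samples per second (from the example above)
CHUNK_SIZE = 2_048                         # samples per chunk
CHUNK_DURATION = CHUNK_SIZE / SAMPLE_RATE  # seconds covered by one chunk


def coarse_timestamp():
    """Coarse: ask the system clock each time a chunk is read."""
    return time.time()


def precise_timestamp(session_start, chunk_index):
    """Precise: one clock reading at session start, then pure arithmetic."""
    return session_start + chunk_index * CHUNK_DURATION


def acquire(read_chunk, n_chunks, buffer):
    """Toy acquisition loop; read_chunk() stands in for whatever actually
    pulls 2048 samples off the sensor."""
    session_start = time.time()  # the single coarse reading of the session
    for i in range(n_chunks):
        chunk = read_chunk()
        ts_coarse = coarse_timestamp()                    # error varies per chunk
        ts_precise = precise_timestamp(session_start, i)  # error is constant
        buffer.append((ts_precise, ts_coarse, chunk))
```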
The coarse option is coarse because the actual time an audio sample "happened" and the time we get from the system clock are two different things -- more importantly, the difference between the two is not constant. The precise option is precise because we make a coarse time reading once (per session) and associate it with the first chunk, but all subsequent timestamps are computed from this reference along with the fixed chunk size and sample rate.
Now, there are many situations where coarse time-stamping is good enough. In fact, it seems to be the most common method of time-stamping, and it is certainly the simplest if and when we need an actual timestamp. One more consideration, to really nail down the upcoming argument: coarse time-stamping can even be made, on average, more precise! Indeed, one could compute an estimate based on the system clock that is centered on the real time of the event.
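As a sketch of that debiasing, assuming (this is not in the text above) that the clock is read right after the whole chunk has arrived and that the mean read latency can be measured:

```python
import time

CHUNK_DURATION = 2_048 / 44_100   # seconds covered by one chunk
ESTIMATED_READ_LATENCY = 0.003    # made-up figure: mean driver/OS delay, in seconds


def debiased_coarse_timestamp():
    """Estimate the true start time of the chunk that was just read.

    The clock is read after the whole chunk has arrived, so on average the
    chunk started about CHUNK_DURATION + ESTIMATED_READ_LATENCY earlier.
    The estimate is centered on the right value, but each individual
    reading still jitters around that center."""
    return time.time() - CHUNK_DURATION - ESTIMATED_READ_LATENCY
```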
So what's the problem?
The problem is that the relationship between coarse timestamps and actual event time is unstable.
The "precise" timestamp will be off by d
, constantly.
On the other hand, every single coarse timestamp will be off by a different amount.
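A toy numerical illustration of that difference (all the numbers here are made up for the sake of the example):

```python
import random

N_CHUNKS = 1_000
D = 0.050           # constant offset of the "precise" timestamps, in seconds
JITTER_STD = 0.020  # spread of the per-chunk clock error, in seconds

# Every precise timestamp is wrong by exactly the same amount...
precise_errors = [D] * N_CHUNKS
# ...while every coarse timestamp is wrong by a different amount.
coarse_errors = [random.gauss(0.0, JITTER_STD) for _ in range(N_CHUNKS)]
```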
So why is that a problem?
Again -- in many situations it may not be a problem. Maybe the simplicity of coarse time-stamping is worth it for your situation. Here are two considerations to keep in mind though, to gauge if the varying error actually matters.
So timestamp precision is not at the sample level and the error varies for every chunk. What does that mean in practice? You can't use the timestamps to decide whether chunks overlap or have gaps between them, among other things. It's easy for you, now, to consider the variability of the timestamp error, ignore it because it doesn't matter, or correct for it when it does -- but as code piles up, will others (which includes the you of the future) write code that assumes otherwise?
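To make that first consideration concrete, here is a hedged sketch of a gap/overlap check; the tolerance value is an arbitrary choice for illustration, and the check is only meaningful when the timestamp error is much smaller than a chunk duration.

```python
CHUNK_DURATION = 2_048 / 44_100  # seconds covered by one chunk


def find_gaps_and_overlaps(timestamps, tolerance=1e-6):
    """Flag consecutive chunks whose spacing differs from one chunk duration.

    When timestamps are derived from a sample or chunk count, any deviation
    really is a gap or an overlap. With coarse timestamps, per-chunk clock
    jitter typically dwarfs a one-sample gap (1/44100 s, about 23
    microseconds), so the same check produces noise instead of answers."""
    issues = []
    for i in range(1, len(timestamps)):
        deviation = (timestamps[i] - timestamps[i - 1]) - CHUNK_DURATION
        if abs(deviation) > tolerance:
            issues.append((i, deviation))
    return issues
```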
No matter how you do it, if you have two different sources of timestamped data, aligning them based on their timestamps will be erroneous. If you're dealing with events that span a significant amount of time, it doesn't really matter; but if you have some signal that tells you that a switch has been switched on, and you're looking for that "click" sound, you probably won't find it by looking where the switch timestamp tells you to.
Say you're off by 0.1 seconds (and that the click event lasts 0.2 seconds): You're missing half of your information.
That's no problem though -- you see the 0.1 shift, so all you have to do is add that 0.1 seconds when converting switch timestamps to audio ones.
That's possible with the precise time-stamping since everything is shifted by a constant.
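As a sketch of that correction (the 0.1 s shift and the names below are purely illustrative), converting a switch timestamp into an audio sample position only needs one constant once the audio timestamps are the precise kind:

```python
SAMPLE_RATE = 44_100  # samples per second


def switch_time_to_audio_sample(switch_ts, audio_session_start, offset=0.1):
    """Map a switch event timestamp onto the audio stream.

    This works because every precise audio timestamp is shifted from true
    time by the same constant, so one fixed offset aligns the two sources.
    With coarse audio timestamps there is no single offset to add: each
    chunk is off by a different, unknown amount."""
    return round((switch_ts + offset - audio_session_start) * SAMPLE_RATE)
```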
But how will you do that with your coarse timestamps?