
ability to distinguish between xruns caused by self or other clients #149

Open
andrewrk opened this issue Oct 7, 2015 · 1 comment

@andrewrk
Contributor

andrewrk commented Oct 7, 2015

I want to know whether it was my JACK client that caused an xrun to occur or a different node in the graph. If my client caused an xrun, then I'll increase the latency of my internal buffer to make xruns less likely to happen again. If other clients caused an xrun (which seems to happen if I simply alt+tab a bunch of times, but that's a separate issue), then my application should not take any action in response to the xrun.

@mseeber
Contributor

mseeber commented Oct 7, 2015

Xrun detection, according to http://lac.zkm.de/2005/slides/letz_et_al_slides.pdf (slide 28)
and the corresponding paper: http://lac.zkm.de/2005/papers/letz_et_al.pdf
This is specific to JACK2 and might be outdated...

It seems like the client will only know that an xrun occurred, not why, and it might not even be notified when the xrun happens after its execution.

If my client caused an xrun, then I'll increase the latency of my internal buffer to make xruns less likely to happen again.

This assumes that improving the performance of your client will make xruns significantly less likely, which I think is in general a risky assumption, because:
1. It assumes that reducing the amount of required processing time has a significant impact on the execution of the whole JACK graph, which the client cannot know just by detecting whether the xrun happened in its own process callback. For example, if your client's callback is called almost at the end of the process cycle because other clients have eaten up the budget, it can cause an xrun even if it only needs a short time to run. On the other hand, if you know that your client is the only client, or the heaviest client, in the graph, this may improve the situation, but that very much depends on the context.

2. It also (as I read it) assumes that avoiding synchronization overhead by choosing larger buffer sizes significantly improves performance in terms of sample throughput. Simply using a larger buffer will not significantly improve the ratio between used time and the available time budget: the time budget for one process cycle will grow, but so will the time needed to process the additional samples.

But to solve your xrun issues, maybe it helps if you describe what you are planning for your client. Also, the jack-devel and LAD mailing lists are full of people who can give you better hints than I can.
