Our perf thresholds have changed #178
Conversation
The retain/release counts in sync code have become very unstable, and I don't think they're providing us much value in comparison to the time I'm spending trying to dial them in. So I propose we just remove them.
Fair! If they are not stable, even in sync code, we need to disable them. It is surprising that this now affects sync code. @hassila are there any known issues?
"releaseCount" : 120271, | ||
"retainCount" : 109425, |
Can we remove the retain/release counts from the output as well please?
Done.
Nothing that I know of should make them unstable (possibly if you run any sampling metric like thread counts etc.? If you run with only ARC metrics enabled, is it still unstable?). Then of course it doesn't always add up, as mentioned in the related case (some objects are also created with an initial ref count, which is another factor in why we sometimes see more releases).
I don't think we had any sampling metrics enabled: we recorded syscalls, allocations, and retains/releases. And the issue wasn't an adding-up problem: it was that the values changed from run to run.
Yeah, that's very strange then; that they don't add up is known, but they should be stable.
(To be sure to isolate it, you might try running with only ARC metrics enabled; then I can't really see how any other parts of the benchmark infrastructure could affect it much.)
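
A minimal sketch of what that isolation run might look like, assuming this project uses ordo-one's package-benchmark (which the metric names suggest); `myWorkload()` is a hypothetical stand-in for the code under test:

```swift
import Benchmark

let benchmarks = {
    // Enable only the ARC metrics so that no sampling-based metric
    // (thread counts, memory snapshots, etc.) can interfere with the
    // retain/release measurements.
    Benchmark("ARC-only isolation",
              configuration: .init(metrics: [.retainCount, .releaseCount])) { benchmark in
        for _ in benchmark.scaledIterations {
            blackHole(myWorkload()) // hypothetical workload under test
        }
    }
}
```

If the retain/release counts still vary from run to run under this configuration, the instability is in the workload or the runtime rather than in the other metric collectors.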
@Lukasa have you investigated why the thresholds have changed? How have you updated them? I'm running
My current theory is that because we aren't scaling the iterations, we're extremely sensitive to minor variations. We should have things settle down if we scale the iteration count by kilo or mega.
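
As a sketch of that fix, again assuming package-benchmark: the configuration takes a scaling factor, and the benchmark body iterates over `scaledIterations` so each measurement averages over many inner iterations. The benchmark name and `myWorkload()` are hypothetical:

```swift
import Benchmark

let benchmarks = {
    // Scale each measurement by a factor of 1,000 (.kilo) so that
    // per-iteration noise is averaged out and the recorded counts
    // become less sensitive to minor run-to-run variation.
    Benchmark("Scaled workload",
              configuration: .init(metrics: [.syscalls, .mallocCountTotal],
                                   scalingFactor: .kilo)) { benchmark in
        for _ in benchmark.scaledIterations {
            blackHole(myWorkload()) // hypothetical workload under test
        }
    }
}
```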