Add a benchmark for typeshed-stats
#268
base: main
Conversation
The CI is failing for the Python 3.7-3.10 workflows for the same reason as #200 (comment). The CI is failing on the 3.12 workflow for the same reason that it is failing on […]
That seems like a lot of data files to me... GitHub won't even let me review the actual benchmark.
These are just copied verbatim from typeshed, I assume? If so, maybe we could just install a pinned version in our requirements (not sure if typeshed is pip-installable?) or maybe clone the typeshed repo into a temporary location in the benchmark setup (we would need to be careful that this only happens once per `pyperformance` run, not once per `pyperf` run/process... not sure how hard that is). Or maybe we could do something clever with git (like making typeshed a submodule, pinned to a specific revision)?
I'd just like to avoid adding ~230k new lines of code (and ~4k files) if at all possible.
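To make the clone-into-a-temporary-location idea concrete, here is a minimal sketch; it assumes the setup step runs once per `pyperformance` run, and the helper name and pinned SHA are hypothetical, not anything from this PR:

```python
import subprocess
import tempfile
from pathlib import Path

TYPESHED_URL = "https://github.com/python/typeshed"
PINNED_SHA = "0000000"  # hypothetical placeholder; pin to a known-good commit

def fetch_typeshed() -> Path:
    """Clone typeshed once into a shared temp dir, pinned to a fixed commit."""
    target = Path(tempfile.gettempdir()) / "typeshed-benchmark-data"
    if not target.exists():
        subprocess.run(["git", "clone", TYPESHED_URL, str(target)], check=True)
        subprocess.run(["git", "-C", str(target), "checkout", PINNED_SHA], check=True)
    return target
```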
Yup, very much aware — I tried to keep a clean git history here to help out with reviewing (added the data files in the first commit, the project files in the second commit, and the actual code for running the benchmark in the third commit).
Correct
Typeshed isn't pip-installable. Note that this is basically the same approach we took with the docutils benchmark (#216), where we just copied-and-pasted all of docutils' docs as data files into this repo.
This is an interesting idea. I'll look into it.
Ah, nice. Thanks.
Well, that one seems like a lot less data, haha. We have lots of benchmarks that vendor a few dozen data files in this way... Maybe, if we want to take this approach, we could just run against a small subset of typeshed? But we could wait for @mdboom to weigh in, or see if git submodules are an acceptable solution. (Side-note: why isn't typeshed pip-installable? Does each tool need to vendor its own huge copy? I imagine there's some good reason...)
You have to understand that my secret ulterior motive is to become the number-one contributor to this repo.
A bunch of reasons. The stubs for the stdlib are ~50k lines of code (so, just under a quarter of typeshed). These do need to be vendored by each tool, because updating typeshed's stdlib stubs often results in internal tests breaking for mypy/pyright/etc. Some tools also apply their own patches to typeshed's stdlib stubs (e.g. take a look at python/mypy#13987 -- quite a few tests failed at first, and there had to be 2 followup PRs to revert specific typeshed changes to mypy's vendored version of the stdlib stubs). The other ~three-quarters of typeshed consists of stubs for various third-party packages that don't have inline types at runtime (yet). These parts of typeshed are pip-installable, but they're packaged into separate stubs packages (the `types-*` packages on PyPI).
Some of our third-party stubs in typeshed are much larger (in terms of lines of code) than others. If we deleted just typeshed's stubs for the few largest packages, it would cut the data down considerably. But anyway, I'll look into git submodules.
If you just need to get a big bunch of files from a repo, a couple of methods we've used for Pillow are git cloning with […]
Sorry I missed this PR the first time around. I did say that usually we just include the data in the repo, but for something this large, maybe it's pushing it. If installing from PyPI works here, that seems like the best approach (since support for installing dependencies is already baked into pyperformance). If not on PyPI, would specifying a git dependency in requirements.txt work? Cloning directly from git may work, as long as it can be done at dependency-install time so it doesn't happen repeatedly. Ultimately, I don't care too much about the method as long as the exact version can be pinned (so the benchmark performance doesn't change as the type stubs are changed upstream).
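For reference, pip does support pinned git dependencies in a `requirements.txt` via direct references; the URL and pin below are purely illustrative, not taken from this PR:

```
typeshed-stats @ git+https://github.com/AlexWaygood/typeshed-stats@<pinned-tag-or-sha>
```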
I said to @brandtbucher I'd investigate git submodules, but never did, so as far as I'm concerned this PR is waiting on me at the moment; don't worry 🙂 I'll try to get back to it soon, though I'm quite busy at the moment...
For comparison, #200 added ~1.5x the lines of this PR, so there's precedent for just committing the data directly, if that would unblock this.
This PR adds a benchmark for `typeshed-stats`, a project of mine that provides a CLI tool to gather statistics and other information on typeshed. The benchmark takes around 5 seconds to complete. As discussed on the CPython core devs' Discord, it may make an interesting benchmark for several reasons.

Rather than list `typeshed-stats` as a dependency for this benchmark, I've instead forked my own project and committed a modified version into this PR. `typeshed-stats` as it exists on PyPI makes network requests using `aiohttp`, which would lead to indeterminacy in the benchmark; the version of `typeshed-stats` that I've included in this PR doesn't make network requests. I also made a few other changes to reduce the number of third-party dependencies, such as switching from `attrs` to `dataclasses` and removing the use of `rich`. Other than that, however, I've tried to keep the changes to a minimum, so that the benchmark accurately reflects the "real" version of `typeshed-stats`. That means there are a few unused functions in this PR that could potentially be removed, if we think that would be better.

Two more decisions I made when making this PR:

`typeshed-stats` as it exists on PyPI uses `tomli` on 3.10 and the stdlib `tomllib` on 3.11. In this PR, I just use `tomllib` unconditionally. That means this benchmark has 0 external dependencies, but it also means the benchmark only runs on 3.11+. If we wanted, I could add a dependency on `tomli` and change it so that we use `tomli` unconditionally, which would make it 3.10-compatible.
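For reference, the version-conditional import that the PyPI release presumably uses is a standard pattern; here is a minimal sketch (not code from this PR):

```python
import sys

# Prefer the stdlib tomllib on Python 3.11+; fall back to the
# third-party tomli package (same API) on older versions.
if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib
```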
I benchmark `typeshed-stats` as a CLI tool, so I've added the whole project to a subdirectory in the benchmark and then listed it as a local dependency in the `requirements.txt` file for the benchmark. That was the only way I could see of getting the `runner.bench_command()` function to work, but it doesn't look like any other benchmarks do this currently. Let me know what you think about this!
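For context, `pyperf`'s `runner.bench_command()` spawns and times a whole command line as a subprocess (interpreter startup included), which is why the project needs to be installed rather than merely importable. A rough sketch of the kind of runner script this implies (the exact module invocation is an assumption here, not taken from this PR):

```python
import sys
import pyperf

if __name__ == "__main__":
    runner = pyperf.Runner()
    # bench_command() times the full command, so interpreter startup,
    # imports, and the CLI's own work are all measured together.
    runner.bench_command(
        "typeshed_stats",
        [sys.executable, "-m", "typeshed_stats"],  # module name assumed
    )
```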