In preparation for the PyHEP.dev workshop, let's organize ourselves into groups to determine who we're going to get in touch with and what we're going to work on.
Step 1: Look at the list of topical groups and pick the ones you're most interested in. Add a comment to introduce yourself, your background, what you work on, and what you hope to achieve at the workshop.
Step 2: As conversations develop, it may seem natural to split a group into more manageable topics or add a new topical group that we didn't think of. Feel free to do that.
Step 3: These conversations should lead to kick-off talks. Everyone should submit an abstract and prepare a PDF presentation for one of the morning sessions. Include the topical group(s) the talk addresses and the estimated time (5, 10, or 15 minutes) in the abstract. (Everyone should give a talk, even if only a 5-minute talk to introduce yourself and what you're interested in.)
Step 4: At the workshop, mornings are dedicated to kick-off talks and the afternoons are free-form. The best way to use this workshop is to gather with the people you interacted with in the topical groups and get something done, whether that's detailed roadmap planning, designing new interfaces between libraries, coding sprints, brainstorming new ideas, getting integrated into a project/onboarding new members, transitioning a project into a new phase based on others' experience, writing documentation or tutorials, etc.
Remember, this is your workshop, your chance to be in the same room with developers of many Python packages in HEP. It will be as useful as you make it!
Do you want to connect with the maintainers of a particular library, but don't know who they are? The people below are listed with the packages they maintain. If this list is incorrect or incomplete, please help us fix it with a pull request. Thank you!
- scikit-build, improved build system generator for CPython C, C++, Cython and Fortran extensions: @henryiii
- cookie, Scientific Python library development guide and cookie-cutter: @henryiii
- SkyHookDM (website), tabular data management in object storage, now part of the Apache Arrow project: @JayjeetAtGithub
- ServiceX (website), a data delivery service pilot for IRIS-HEP DOMA: @talvandaalen, @masonproffitt, @gordonwatts
- hepfile, general file schema defined to handle heterogeneous data but implemented in Python and HDF5: @mattbellis
- Uproot, ROOT I/O in pure Python and NumPy: @ioanaif, @Moelf, @jpivarski
- pylhe, lightweight Python interface to read Les Houches Event (LHE) files: @lukasheinrich, @matthewfeickert
- ATLAS PHYSLITE, a new reduced common data format in ROOT files for ATLAS: @nikoladze, @gordonwatts
- Pythia (website), a program for generating high-energy physics collision events between electrons, protons, photons, and heavy nuclei: @philten
- fastjet, wrapper for FastJet in the Scikit-HEP ecosystem: @rkansal47
- GNN Tracking, charged particle tracking with graph neural networks: @klieret
- Coffea, basic tools and wrappers for enabling not-too-alien syntax when running columnar collider HEP analysis: @lgray, @nsmith-
- hist, histogramming for analysis powered by boost-histogram: @amangoel185, @henryiii
- uproot-browser, a TUI viewer for ROOT files: @amangoel185, @henryiii
- JetNet, for developing and reproducing ML + HEP projects: @rkansal47
- Awkward Array, manipulate JSON-like data with NumPy-like idioms: @agoose77, @ianna, @jpivarski
- Cling, a C++ interpreter based on LLVM: @vgvassilev
- cppyy, fully automatic, dynamic Python-C++ bindings by leveraging the Cling C++ interpreter and LLVM: @sudo-panda
- FCCAnalyses, common analysis framework for the Future Circular Collider (FCC): @kjvbrt
- func_adl, constructs hierarchical data queries using SQL-like concepts in Python: @masonproffitt, @gordonwatts
- Common Partial Wave Analysis, makes amplitude analysis accessible through transparent and interactive documentation, modern software development tools, and collaboration-independent frameworks: @redeboer
- Rio+, a C++ package for amplitude analysis: @JMolinaHN
- GooFit, a massively-parallel framework for maximum-likelihood fits, implemented in CUDA/OpenMP: @mdsokoloff
- zfit, model manipulation and fitting library based on TensorFlow and optimised for simple and direct manipulation of probability density functions, with a focus on scalability, parallelisation and user friendly experience: @jonas-eschle
- pyhf, pure-Python HistFactory implementation with tensors and autodiff: @matthewfeickert, @lukasheinrich
- cabinetry, for designing and steering profile likelihood fits: @alexander-held
- correctionlib, a generic correction library: @nsmith-
- hepstats, statistical tools and utilities: @jonas-eschle
- coffea-casa, prototype Analysis Facility: @oshadura
- Work Queue (website), a framework for building large distributed applications that span thousands of machines drawn from clusters, clouds, and grids: @btovar
- dask-awkward, native Dask collection for Awkward Arrays: @agoose77, @lgray
- hepdata_lib, a library for getting your data into HEPData: @clelange
- Analysis Grand Challenge, a project performing large-scale end-to-end data analysis for HEP use cases: @oshadura, @alexander-held
The organization of this workshop is modeled on the Scientific Python Developer Summit, but with the addition of kick-off talks.