Customized and easily actionable insights for data-assisted advising, at no cost
Data-assisted advising helps advisors use their limited time more efficiently to identify and reach out to the students most in need of help. Using the Student Success Tool to implement data-assisted advising through its CUSP program, John Jay College has reported a 32% increase in senior graduation rates over two years. Based on the success of this implementation, DataKind is supported by Google.org to develop this solution with additional postsecondary institutions, at no institutional cost. This repo is where the Google.org fellows team will collaborate with DataKind to develop and ultimately share the open-source components of the tool.
- Transparent: Our features and models will be openly shared with the institution, so you know exactly which variables are used to identify the students most at risk of not graduating. Our end-to-end tool code will be openly shared in this GitHub repo.
- Dedicated to bias reduction: We use bias-reducing techniques and regularly review our implementations for fairness and equity.
- Humans in the loop by design: Our interventions are designed to be additive to the student experience, and all algorithmic outputs are put into action by human actors (advisors).
Current PDP pipeline code, to be built into an installable Python package (see the sketch after this list for how the pieces fit together):
- Base schema: defines the standard data schema for PDP schools, with no customization
- Constants: defined for all schools
- Dataio: ingests the PDP data and restructures it for our workflow
- Features: a subpackage for each grouping of features, each with a function that takes school customization arguments and adds the features as new columns to the data you pass in.
- EDA: produces exploratory visualizations, summary statistics, and correlation analysis for features
- Targets: defines and filters the data based on the student population, modeling checkpoint, and outcome variable
- Dataops: other functions frequently used across the process
- Modeling: AutoML.py is the main entry point for running and evaluating models, configured with parameters read from config.yaml
- Tests: unit tests, to be built out into a full unit testing suite (the fellows may be able to help with this to get us set up for open source)
- Synthetic_data: code for creating fake data for testing purposes
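To make the flow concrete, here is a minimal sketch of how these pieces are meant to fit together in a school-specific pipeline. The column names, thresholds, and inline data below are hypothetical placeholders for illustration only, not the package's actual schema or API.

```python
# Hypothetical sketch only: column names, thresholds, and data are placeholders,
# not the actual student_success_tool schema or API.
import pandas as pd

# dataio: the real pipeline ingests and restructures the PDP files;
# a tiny synthetic frame stands in for that output here
df = pd.DataFrame(
    {
        "student_id": [1, 2, 3],
        "gpa_year_1": [3.2, 1.5, 2.8],
        "credits_earned_year_1": [28, 12, 30],
    }
)

# features: add feature columns, parameterized by school customization arguments
min_passing_gpa = 2.0  # example school-specific argument
df["above_passing_gpa"] = df["gpa_year_1"] >= min_passing_gpa

# targets: define the outcome variable at the chosen modeling checkpoint
df["at_risk"] = df["credits_earned_year_1"] < 24

# modeling: the labeled frame would then feed the AutoML step, driven by config.yaml
print(df)
```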
Please read the CONTRIBUTING guide to learn how to contribute to the tool's development.
- Install `uv` (instructions here).
- Install Python (instructions here). When running on Databricks, we're constrained to Python 3.11-3.12: `uv python install 3.11`
- Install this package: `uv pip install -e .`
- Connect notebook to a cluster running Databricks Runtime 15.4 LTS or 16.x.
- Run the `%pip` magic command, pointing it at one of three places:
  - a local workspace directory: `%pip install ../../../student-success-tool/`
  - a GitHub repo (for a specific branch): `%pip install git+https://github.com/datakind/student-success-tool.git@develop`
  - public PyPI: `%pip install student-success-tool==x.y.z`
- Restart Python: `dbutils.library.restartPython()` or `%restart_python`
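After the restart, a quick sanity check in a new cell confirms the package is importable; this uses only the standard library and assumes the distribution name matches the one used in the `%pip` commands above:

```python
# Confirm the package imports and report which version was installed
from importlib.metadata import version

import student_success_tool  # should succeed after the restart

print(version("student-success-tool"))
```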
- Run unit tests: `uv run python -m pytest [ARGS]` (docs)
- Run code linter: `uv tool run ruff check [ARGS]` (docs)
- Run code formatter: `uv tool run ruff format [ARGS]` (docs)
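The unit tests follow standard pytest conventions, so adding a new test is just a `test_*` function in a module under the tests directory; a trivial, hypothetical example:

```python
# tests/test_example.py -- hypothetical module illustrating the pytest conventions
# picked up by `uv run python -m pytest`
import pytest


@pytest.mark.parametrize("gpa, passing", [(3.0, True), (1.0, False)])
def test_passing_threshold(gpa, passing):
    # inline stand-in logic for illustration; real tests exercise package functions
    assert (gpa >= 2.0) == passing
```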
Package dependencies are declared in `pyproject.toml`, either in the `project.dependencies` array or in the `dependency_groups` mapping, where we also have dev-only dependencies; dependencies are managed using the `uv` tool.
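For orientation, those declarations can also be inspected programmatically; Python 3.11+ (the version range used here) ships `tomllib` in the standard library:

```python
# Print the package's declared runtime dependencies from pyproject.toml
import tomllib

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)

print(pyproject["project"]["dependencies"])
```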
- Manually add/remove/update dependencies by editing `pyproject.toml` directly, or leverage `uv`'s `add`/`remove` commands, as described here
- Ensure that entries are formatted according to the PyPA dependency specifiers standard
- Once all dependencies have been modified, resolve them into the `uv.lock` lockfile by running the `uv lock` command, as described here
- Optionally, sync your local environment with the new dependencies via `uv sync`
- If possible, submit a PR for the dependency changes only, rather than combining them with new features or other changes
Note: Since `student_success_tool` is a "library" (in Python packaging parlance), it's generally recommended to be permissive when setting dependencies' version constraints: better to set a safe minimum version and a loose maximum version, and leave tight version pinning to "application" packages.
- Ensure that all changes (features, bug fixes, etc.) to be included in the release have been merged into the `develop` branch.
- Create a new feature branch based off `develop` that includes two release-specific changes:
  - bump the `project.version` attribute in the package's `pyproject.toml` file to the desired version; follow SemVer conventions
  - add an entry in `CHANGELOG.md` for the specified version, with a manually curated summary of the changes included in the release, optionally including call-outs to specific PRs for reference
- Merge the above PR into `develop`, then open a new PR to merge all changes in `develop` into the `main` branch; merge it.
- Go to the GitHub repo's Releases page, then click the "draft a new release" button:
  - choose a tag; it should be formatted as "v[VERSION]", for example "v0.2.0"
  - choose `main` as the target branch
  - enter a release title; it could be as simple as "v[VERSION]"
  - copy-paste the changelog entry for this version into the "describe this release" text input
  - click the "publish release" button
- Check the repo's GitHub Actions to ensure that the `publish` workflow runs, and once it completes, check the package's PyPI page to ensure that the new version is live.
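Beyond eyeballing the PyPI page, the new version can also be confirmed via PyPI's public JSON API (the project name matches the pip install name used above):

```python
# Query PyPI's JSON API for the latest published version of the package
import json
import urllib.request

url = "https://pypi.org/pypi/student-success-tool/json"
with urllib.request.urlopen(url) as resp:
    metadata = json.load(resp)

print(metadata["info"]["version"])  # should match the version just released
```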
Et voilà, a new version has been released! 🎉