Reinforcement Learning from Accessibility Measures

This repository accompanies the paper Science Out of Its Ivory Tower: Improving Scholarship Accessibility with Reinforcement Learning.

Similar to Reinforcement Learning from Human Feedback (RLHF), we use reinforcement learning to enhance a language model's capabilities beyond the standard cross-entropy objective. However, unlike RLHF, which relies on a reward model to represent "human preference," we target document readability, which is easier to quantify with established measures (e.g., the Automated Readability Index and the Flesch-Kincaid Grade Level). Instead of building a reward model, we guide optimization with heuristics such as average sentence length and word accessibility (defined as the natural logarithm of a token's occurrences per one billion tokens of English Wikipedia). By carefully balancing these accessibility measures, our model achieves an additional improvement of three grade levels in readability without sacrificing faithfulness or language quality. The simplified abstracts it generates are now readable by adults without a college degree.
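
For intuition, here is a minimal Python sketch of the measures described above. The Automated Readability Index formula is the standard one; the frequency lookup and the final reward combination are illustrative assumptions, with beta_sl and beta_wa standing in for the $\beta_{SL}$ and $\beta_{WA}$ coefficients listed under Generations, not the paper's exact objective.

import math

# Standard Automated Readability Index; lower scores mean easier text.
def ari(characters: int, words: int, sentences: int) -> float:
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

# Word accessibility as defined above: natural log of a token's count per
# one billion tokens of English Wikipedia. The frequency table itself is
# assumed to be precomputed elsewhere.
def word_accessibility(freq_per_billion: float) -> float:
    return math.log(freq_per_billion)

# Illustrative reward balancing the two signals: favor common words
# (weighted by beta_wa) and penalize long sentences (weighted by beta_sl).
# This combination is an assumption, not the paper's exact objective.
def reward(avg_sentence_len: float, mean_wa: float,
           beta_sl: float = 0.05, beta_wa: float = 4.0) -> float:
    return beta_wa * mean_wa - beta_sl * avg_sentence_len

Under this definition, a word occurring 60 million times per billion tokens scores ln(6e7) ≈ 17.9, while one occurring 10 times per billion scores ln(10) ≈ 2.3, so rewarding mean word accessibility steers generations toward everyday vocabulary.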

We hope this work helps bridge the gap between scholarly research and a broader audience, fosters the development of better simplification systems, and ultimately contributes to a more informed and engaged society.

Training Logs

Training logs for the reported runs are hosted on Weights & Biases (WANDB).

Reproduction

To reproduce the results, follow these steps:

python3.10 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt

SFT and Word Accessibility Model

Refer to this repository for details on the SFT and Word Accessibility Model.

RLAM/RLARI

Refer to the 'runs' folder for the training and evaluation scripts used to launch jobs on Slurm.

Generations

For the generations reported in the Findings section, we used outputs from the checkpoints below; each filename encodes the checkpoint step and the ARI it achieved. We also annotated the first ten samples for language quality, faithfulness, and completeness.

Model  $\beta_{SL}$  $\beta_{WA}$  URL
RLARI  -             -             rl_ari_gemma-2b__42__1723771398%7Cstep_400_ari_12.18.csv
RLAM   0.05          4.0           rl_am_gemma-2b_sl_coef5e-2__42__1723772226%7Cstep_1550_ari_13.4.csv
RLAM   0.08          4.0           rl_am_gemma-2b_sl_coef8e-2__42__1723597090%7Cstep_1250_ari_13.64.csv
RLAM   0.10          4.0           rl_am_gemma-2b_sl_coef1e-1__42__1723692435%7Cstep_1350_ari_13.28.csv
RLAM   0.20          4.0           rl_am_gemma-2b_sl_coef2e-1__42__1724286966%7Cstep_1700_ari_12.17.csv
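
To inspect the annotated samples, a minimal sketch in Python (assuming one of the CSVs above has been downloaded locally; column names are whatever the file provides):

import pandas as pd

# Hypothetical local path; "%7C" in the listed names is a URL-encoded "|".
path = "rl_ari_gemma-2b__42__1723771398%7Cstep_400_ari_12.18.csv"
df = pd.read_csv(path)
print(df.head(10))  # the first ten rows carry the quality annotations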

Contact

[email protected]

License

0BSD

Reference

@article{wang2024science,
      title={Science Out of Its Ivory Tower: Improving Accessibility with Reinforcement Learning}, 
      author={Haining Wang and Jason Clark and Hannah McKelvey and Leila Sterman and Zheng Gao and Zuoyu Tian and Sandra Kübler and Xiaozhong Liu},
      year={2024},
      journal={arXiv preprint arXiv:2410.17088},
      url={https://arxiv.org/abs/2410.17088}, 
}
