diff --git a/README.md b/README.md
index 78d90c4..44927ca 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,5 @@
 # Reward-Model
-Reward Model training framework for LLM RLHF. The word nemesis originally meant the distributor of fortune, neither good nor bad, simply in due proportion to each according to what was deserved. This is exactly the function of a Reward Model in RLHF.
-
+Reward Model training framework for LLM RLHF. For in-depth understanding of Reward modeling, checkout our [blog](https://explodinggradients.com/)
 ### Quick Start
 * Inference
 ```python
@@ -19,4 +18,3 @@ python src/training.py --config-name
 
 ## Contributions
 * All contributions are welcome. Checkout #issues
-* For in-depth understanding of Reward modeling, checkout our [blog](https://explodinggradients.com/)
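
The Quick Start code itself is elided by the diff; only the opening ```python fence and the `python src/training.py --config-name` training command appear as context lines, and the `--config-name` value is not shown. As a rough, hypothetical sketch of what scoring a response with a reward model typically looks like, assuming a standard `transformers` sequence-classification checkpoint (the checkpoint name below is a placeholder example, not this repository's own model or API):

```python
# Illustrative sketch only: the repository's actual Quick Start code is not
# shown in the diff. Assumes a transformers-style sequence-classification
# reward model; the checkpoint name below is a placeholder example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

prompt = "Explain RLHF in one sentence."
response = "RLHF fine-tunes a language model against a reward signal learned from human preferences."

# Score the (prompt, response) pair; a higher logit means a more preferred response.
inputs = tokenizer(prompt, response, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0].item()
print(f"reward score: {reward:.4f}")
```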