From e1009909569e4a3e678b1a7ff41e8956df66ca59 Mon Sep 17 00:00:00 2001
From: Shahul ES
Date: Mon, 8 May 2023 16:51:26 +0530
Subject: [PATCH] Update README.md

---
 README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 78d90c4..44927ca 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,5 @@
 # Reward-Model
-Reward Model training framework for LLM RLHF. The word nemesis originally meant the distributor of fortune, neither good nor bad, simply in due proportion to each according to what was deserved. This is exactly the function of a Reward Model in RLHF.
-
+Reward Model training framework for LLM RLHF. For in-depth understanding of Reward modeling, checkout our [blog](https://explodinggradients.com/)
 ### Quick Start
 * Inference
 ```python
@@ -19,4 +18,3 @@ python src/training.py --config-name
 
 ## Contributions
 * All contributions are welcome. Checkout #issues
-* For in-depth understanding of Reward modeling, checkout our [blog](https://explodinggradients.com/)