# ToxicClassificationtoToxicSpan

We are building a model that detects the specific spans of toxicity within a text, rather than only classifying whole posts/comments as toxic.

Starting from a binary toxic-classification dataset, we want to train on that data and obtain a final model that detects toxic spans.

The Hugging Face model that predicts only non-toxic tokens is available here: https://huggingface.co/Ashokajou51/NonToxicCivilBert
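
As a rough illustration, the snippet below loads that checkpoint with the `transformers` library and prints a per-token label for an input comment. The exact label names and the assumption that the checkpoint ships a fast tokenizer (needed for offset mapping) are not confirmed by this repository, so treat it as a sketch rather than the project's actual pipeline.

```python
# Minimal inference sketch (assumes `transformers` and `torch` are installed,
# and that the checkpoint is a token-classification model with a fast tokenizer).
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

model_name = "Ashokajou51/NonToxicCivilBert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

text = "an example comment to check for toxic spans"
inputs = tokenizer(text, return_tensors="pt", return_offsets_mapping=True)
offsets = inputs.pop("offset_mapping")[0]  # character offsets per token

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1)[0]

# Tokens whose predicted label is not the "non-toxic" class would form the
# toxic spans; the id-to-label mapping comes from the model config and is an
# assumption here.
for (start, end), label_id in zip(offsets.tolist(), pred_ids.tolist()):
    if start == end:  # skip special tokens like [CLS]/[SEP]
        continue
    label = model.config.id2label[label_id]
    print(f"{text[start:end]!r} -> {label}")
```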