This repository contains the code and results for fine-tuning several large language models (LLMs) as well as smaller, non-LLM models. The goal of this project is to compare the performance of the fine-tuned models on a single task. The accuracy achieved by each model is shown below:
| Model | Accuracy (%) |
|---|---|
| distillbert_neural_network | 40.32 |
| bert_base_uncased | 66.67 |
| bart_large_mnli | 67.30 |
| llama3_8b | 74.60 |
| mistral_7b | 75.23 |
| gemma_7b | 77.11 |
The following preprocessing steps were applied to the dataset (a minimal pandas sketch follows the list):

- Dropped the `sq`, `sub_topic`, and `sub_sub_topic` columns
- Removed all links and emojis
- Replaced numbers with words
- Dropped NaN values
- Removed empty rows
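
As an illustration, here is a minimal pandas sketch of these cleaning steps. The column names (`sq`, `sub_topic`, `sub_sub_topic`) come from the list above; the file name `data.csv`, the text column name `text`, and the use of the `num2words` package are assumptions for the example, not details from the original pipeline.

```python
# Minimal sketch of the preprocessing steps, assuming a pandas DataFrame.
# The file name "data.csv" and the text column name "text" are hypothetical.
import re

import pandas as pd
from num2words import num2words  # third-party: pip install num2words

df = pd.read_csv("data.csv")

# Drop the sq, sub_topic, and sub_sub_topic columns.
df = df.drop(columns=["sq", "sub_topic", "sub_sub_topic"])

# Remove links and emojis from the (assumed) text column.
url_pattern = re.compile(r"https?://\S+|www\.\S+")
emoji_pattern = re.compile(
    "[\U0001F300-\U0001FAFF\U00002700-\U000027BF\U0001F1E6-\U0001F1FF]",
    flags=re.UNICODE,
)
df["text"] = df["text"].str.replace(url_pattern, "", regex=True)
df["text"] = df["text"].str.replace(emoji_pattern, "", regex=True)

# Replace digits with words, e.g. "3" -> "three". The isinstance guard
# skips rows that are still NaN at this point in the pipeline.
df["text"] = df["text"].apply(
    lambda s: re.sub(r"\d+", lambda m: num2words(int(m.group())), s)
    if isinstance(s, str)
    else s
)

# Drop NaN values, then remove empty rows.
df = df.dropna()
df = df[df["text"].str.strip() != ""]
```

The steps are ordered to mirror the list above, which is why the number-replacement step guards against NaN: those rows are only dropped afterwards.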