This repository fine-tunes FinBERT on a Korean financial sentence dataset from AIHub. For detailed information about FinBERT, see the original FinBERT repository.
You can also view the learning curves on wandb.
Finance Sentiment Corpus
This dataset is the 'Financial PhraseBank' translated from English to Korean.
Download
Finance Sentiment Corpus
AIHub
A Korean sentence dataset labeled from news, magazines, broadcast scripts, blogs, and books, covering various categories such as history, society, finance, and IT science. Each sentence is labeled by:
- type: conversation/fact/inference/predict
- certainty: certain/uncertain
- temporality: past/present/future
- sentiment: positive/negative/neutral
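For illustration, one labeled sentence under this scheme might look like the following. This is a hypothetical record: the key names are illustrative, and the actual AIHub files use Korean field names.

```python
# Hypothetical example of one labeled sentence under the AIHub scheme
# (key names are illustrative; the actual dataset uses Korean field names).
sample = {
    "sentence": "금리 인상으로 주가가 하락할 것으로 보인다.",
    "type": "predict",
    "certainty": "uncertain",
    "temporality": "future",
    "sentiment": "negative",
}

# Each label must come from its fixed set of values.
assert sample["type"] in {"conversation", "fact", "inference", "predict"}
assert sample["sentiment"] in {"positive", "negative", "neutral"}
```

Only the sentiment label is used for FinBERT fine-tuning; the other three labels are dropped during preprocessing.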
Download
Download the '문장 유형(추론, 예측 등) 판단 데이터' (sentence type judgment data: inference, prediction, etc.) dataset from AIHub. Since the dataset is distributed as a split archive, a program that supports split decompression (e.g., Bandizip or 7-Zip) is recommended.
Preprocess
The dataset should be preprocessed into {'ID': int, 'Text': str, 'Label': str} form and saved as a CSV with delimiter='\t'.
python ./scripts/data_utils.py --data_path "your_directory/finance_data.csv"
python ./scripts/data_utils.py --data_path "your_directory/TL_뉴스_금융.zip" # pass the extracted folder, not the zip file
python ./scripts/data_utils.py --data_path "your_directory/VL_뉴스_금융.zip"
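The target format produced by the commands above can be sketched as follows. This is a minimal illustration of the output format, not the actual data_utils.py implementation, and the row contents are made up.

```python
import csv

# Minimal sketch of the target format: {'ID': int, 'Text': str, 'Label': str}
# saved as a tab-delimited CSV, as expected by the trainer.
rows = [
    {"ID": 0, "Text": "주가가 급등했다.", "Label": "positive"},
    {"ID": 1, "Text": "환율이 하락했다.", "Label": "neutral"},
]

with open("finance_data_preprocessed.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["ID", "Text", "Label"], delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```

Note the explicit encoding='utf-8'; without it, reading the Korean text back can fail on some platforms (see the Encoding section below).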
Repository
Clone this repository
git clone https://github.com/gynchoi/finBERT.git
Environments
Create the Conda environment
conda env create -f environment.yml
conda activate finbert
Pre-trained model Checkpoints
Download the original FinBERT checkpoint from the ProsusAI/finbert repository on Hugging Face.
mkdir -p models/sentiment
cd ./models/sentiment/
git lfs install
git clone https://huggingface.co/ProsusAI/finbert
If you get the error git: 'lfs' is not a git command. See 'git --help', install git-lfs first:
sudo apt install git-lfs
Tokenizer
To handle the Korean dataset, the tokenizer is changed from 'bert-base-uncased' to 'monologg/kobert' or 'bert-base-multilingual-cased'. To use the KoBERT tokenizer:
- Copy tokenization_kobert.py into the ./finbert/ folder
- Install the sentencepiece package; otherwise you may get the following error:
UnboundLocalError: local variable 'spm' referenced before assignment
pip install sentencepiece
- Modify the './finbert/finbert.py' code to load the new tokenizer
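The modification to './finbert/finbert.py' is essentially a tokenizer swap. A rough patch sketch is shown below; the class name KoBertTokenizer follows the tokenization_kobert.py file from the monologg/KoBERT repository, and the exact lines in finbert.py may differ.

```python
# Patch sketch for ./finbert/finbert.py (assumed shape, not a verbatim diff).
# Original:
#   from transformers import BertTokenizer
#   tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# KoBERT (requires tokenization_kobert.py copied into ./finbert/ first):
from finbert.tokenization_kobert import KoBertTokenizer
tokenizer = KoBertTokenizer.from_pretrained('monologg/kobert')
```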
Encoding
When opening the Korean dataset, the encoding must be specified. Change './finbert/utils.py' as below:
# with open(input_file, "r") as f
with open(input_file, "r", encoding='utf-8') as f
Trainer
Since the original FinBERT trainer is provided as a Jupyter notebook, we rewrote './notebooks/finbert_training.ipynb' as a Python script.
- Join paths in 'finbert.py' with the os package:
# self.config.model_dir / ('temporary' + str(best_model))
import os
os.path.join(self.config.model_dir, ('temporary' + str(best_model)))
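The change above replaces pathlib's `/` operator with os.path.join; both build the same path string. A standalone sketch (the model_dir and best_model values here are made up):

```python
import os
from pathlib import Path

model_dir = "models/classifier"  # hypothetical value of self.config.model_dir
best_model = 3                   # hypothetical best-model index

# pathlib style (original notebook) and os.path style (this repo) are equivalent:
p1 = str(Path(model_dir) / ("temporary" + str(best_model)))
p2 = os.path.join(model_dir, "temporary" + str(best_model))
assert p1 == p2
```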
Test
For predict.py, nltk must be imported and its 'punkt' tokenizer data downloaded:
import nltk
nltk.download('punkt')
You can view expected errors and some solutions in ERRORS.md