Please stay tuned while we complete the internal process of publishing TUTA's models and code. Feel free to contact us for more technical details and discussion: [email protected], [email protected]
- Stay tuned!: Code and data of cell type classification.
- 2021-10-29: Code of TUTA.
- 2021-9-2: We released HiTab, a large dataset on question answering and data-to-text over complex hierarchical tables.
- 2021-8-17: We presented our work at KDD'21.
- 2020-10-21: We released our paper on arXiv.
We provide three variants of pre-trained TUTA models: TUTA (-implicit), TUTA-explicit, and TUTA-base. These pre-trained TUTA variants can be downloaded from:
To run the pre-training tasks, simply run:

```bash
python train.py \
    --dataset_paths="../dataset.pt" \
    --pretrained_model_path="${tuta_model_dir}/tuta.bin" \
    --output_model_path="${tuta_model_dir}/trained-tuta.bin"

# to enable a quick test, one can run
python train.py --batch_size 1 --chunk_size 10 --buffer_size 10 --report_steps 1 --total_steps 20

# to enable multi-gpu distributed training, additionally specify
--world_size 4 --gpu_ranks 0 1 2 3
```
Make sure that the number of input `dataset_paths` is no less than the `world_size` (i.e., the number of `gpu_ranks`).
One can find more adjustable arguments in the main procedure.
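For example, a 4-GPU run needs at least four dataset files. The invocation below is only a sketch: the flags come from the snippet above, while the comma-separated `--dataset_paths` format and the file names are assumptions.

```bash
python train.py \
    --dataset_paths="../dataset-0.pt,../dataset-1.pt,../dataset-2.pt,../dataset-3.pt" \
    --pretrained_model_path="${tuta_model_dir}/tuta.bin" \
    --output_model_path="${tuta_model_dir}/trained-tuta.bin" \
    --world_size 4 --gpu_ranks 0 1 2 3
```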
To perform the downstream task of cell type classification:
- for data processing, use `SheetReader` in reader.py and `CtcTokenizer` in tokenizer.py;
- for fine-tuning, use `CtcHead` and `TUTA(base)forCTC` in the ./model/ directory (see the sketch after this list).
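A hedged sketch of this flow: only the class names come from this README; every import path, constructor, and method below is an assumption for illustration, not the repository's actual API.

```python
# Sketch only: SheetReader, CtcTokenizer, and TUTA(base)forCTC are named in
# this README; the import paths, constructors, and methods are assumptions.
from reader import SheetReader
from tokenizer import CtcTokenizer
from model import TUTAforCTC                # assumed path; TUTAbaseforCTC for the base variant

args = ...                                  # placeholder: the argparse namespace used by train.py (assumption)

reader = SheetReader()
table = reader.read("sample-sheet.xlsx")    # hypothetical annotated spreadsheet

tokenizer = CtcTokenizer()
inputs = tokenizer.tokenize(table)          # token / format / tree-position ids per cell

model = TUTAforCTC(args)                    # TUTA backbone with a CtcHead on top
cell_logits = model(inputs)                 # one label distribution per cell
```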
To perform the downstream task of table type classification:
- for data processing, use `SheetReader` in reader.py and `TtcTokenizer` in tokenizer.py;
- for fine-tuning, use `TtcHead` and `TUTA(base)forTTC` in the ./model/ directory (the pipeline mirrors the CTC sketch; a short variant follows below).
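Under the same assumptions as the CTC sketch, only the README-named classes change:

```python
# Same assumptions as the CTC sketch above; `table` and `args` as defined there.
from tokenizer import TtcTokenizer
from model import TUTAforTTC                # assumed path; TUTAbaseforTTC for the base variant

inputs = TtcTokenizer().tokenize(table)
table_logits = TUTAforTTC(args)(inputs)     # TtcHead yields one label per table
```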
Given a sample raw table file as input, running

```bash
# for SpreadSheet
python prepare.py \
    --input_dir ../data/pretrain/spreadsheet \
    --source_type sheet \
    --output_path ../dataset.pt

# for WikiTable
python prepare.py \
    --input_path ../data/pretrain/wiki-table-samples.json \
    --source_type wiki \
    --output_path ../dataset.pt

# for WDCTable
python prepare.py \
    --input_dir ../data/pretrain/wdc \
    --source_type wdc \
    --output_path ../dataset.pt
```

will generate a semi-processed version of the pre-training inputs.
Pass this data file as an argument to the pre-training script; the data-loader will then dynamically process it for the three pre-training objectives, namely Masked Language Model (MLM), Cell-Level Cloze (CLC), and Table Context Retrieval (TCR).
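For intuition, the plain-Python sketch below illustrates how one semi-processed table could yield instances for the three objectives; it is not the repository's actual loader, and all names, masking rates, and shapes are assumptions.

```python
import random

def build_instances(cells, context):
    """Illustrative only. cells: non-empty list of per-cell token lists;
    context: text surrounding the table (e.g., title, caption)."""
    instances = []

    # MLM: mask a fraction of tokens inside cells; predict the originals.
    masked = [["[MASK]" if random.random() < 0.15 else tok for tok in cell]
              for cell in cells]
    instances.append(("MLM", masked))

    # CLC: blank out whole cells; the model matches each blanked position
    # back to its original cell from a candidate set.
    blank_ids = random.sample(range(len(cells)), k=max(1, len(cells) // 10))
    instances.append(("CLC", blank_ids))

    # TCR: pair the table with positive/negative context snippets and
    # classify whether each snippet truly belongs to the table.
    instances.append(("TCR", (context, True)))
    return instances
```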
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.