Releases: Coobiw/MPP-LLaVA
MPP-Qwen-Next_ckpt-and-data
- pretrain: model.pth, the linear projection weight from the MPP-Qwen-Next pretrain stage.
- llava_instruct: processing script and annotations (JSON file)
- llava_pretrain: annotations (JSON file)
- videochatgpt: processing script and annotations (JSON file)
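The pretrain checkpoint contains only the linear projection weight, so it is worth inspecting before loading it into a full model. A minimal, self-contained sketch (the key names, dimensions, and the fact that it is a plain PyTorch state dict are assumptions, not taken from the release):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical projection: vision-encoder hidden size -> LLM hidden size.
# The real dimensions depend on the visual encoder and the Qwen variant.
proj = nn.Linear(1408, 4096)

# Simulate the released artifact: a state dict saved as model.pth.
path = os.path.join(tempfile.mkdtemp(), "model.pth")
torch.save(
    {"llm_proj.weight": proj.weight.data, "llm_proj.bias": proj.bias.data},
    path,
)

# Inspect the checkpoint keys and tensor shapes before loading.
ckpt = torch.load(path, map_location="cpu")
for name, tensor in ckpt.items():
    print(name, tuple(tensor.shape))
```

Checking keys and shapes this way catches a mismatched projection dimension early, before it surfaces as a cryptic `load_state_dict` error.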
MPP-Qwen14B pretrain/SFT data and pretrain ckpt (linear projection weight)
pretrain:
- model.pth: the linear projection weight after the MPP-Qwen14B pretrain stage.
llava_instruction_100k:
- the complex_reasoning_77k and details_23k subsets of the LLaVA instruction-tuning data. I've converted them into MPP-Qwen14B format.
llava_pretrain_558k:
- the 558K pretrain data of LLaVA. I've converted it into MPP-Qwen14B format.
instruction data, checkpoint and logs (also 14B model)
Release the instruction data used to align MiniGPT4 with the Qwen-Chat LLM, along with my checkpoint (all 10 epochs, trained with lavis/projects/instruction_tuning/train.yaml).
Release the MiniGPT4Qwen14B model checkpoint and training logs (20 epochs, trained with lavis/projects/pp_qwen14b/train_pp.yaml). The files are compressed in pp_14b_ckpt-logs.zip.
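The 14B checkpoint and logs ship as a single zip archive. A small, self-contained sketch of unpacking such an archive with Python's stdlib (the archive name comes from the release; the member names inside are made up for the demo, since the real layout is not documented here):

```python
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "pp_14b_ckpt-logs.zip")

# Build a stand-in archive so the example runs end to end;
# with the real release asset you would skip this step.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("ckpt/model.pth", b"fake weights")    # hypothetical member
    zf.writestr("logs/train.log", "epoch 20 done\n")  # hypothetical member

# List the members, then extract everything to a target directory.
out_dir = os.path.join(workdir, "pp_14b")
with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())
    zf.extractall(out_dir)
```

Listing `namelist()` first is a cheap sanity check that the download is not truncated before extraction begins.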