Multimodal-CoT incorporates vision features in a decoupled training framework. The framework consists of two training stages: (i) rationale generation and (ii) answer inference. Both stages share the same model architecture but differ in their inputs and outputs.
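To make the decoupled design concrete, below is a minimal, text-only sketch of the two-stage data flow using the Hugging Face `transformers` library. It omits the vision-feature fusion that Multimodal-CoT adds on top of the base model, and the prompt strings are simplified assumptions rather than the repo's exact QCM-LE/QCMG-A templates, so it illustrates the pipeline shape rather than the trained model's behavior.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Base language model only; the actual Multimodal-CoT model also fuses
# DETR vision features into the encoder, which is omitted in this sketch.
tokenizer = T5Tokenizer.from_pretrained("allenai/unifiedqa-t5-base")
model = T5ForConditionalGeneration.from_pretrained("allenai/unifiedqa-t5-base")

question = "Which property do these objects have in common?"  # illustrative example
options = "(a) hard (b) soft"

# Stage (i): rationale generation (QCM -> LE). The model reads the
# question, context, and options and generates a lecture + explanation.
stage1_input = f"{question} \n {options}"
ids = tokenizer(stage1_input, return_tensors="pt").input_ids
rationale = tokenizer.decode(model.generate(ids, max_length=512)[0],
                             skip_special_tokens=True)

# Stage (ii): answer inference (QCMG -> A). The generated rationale (G)
# is appended to the original input before predicting the answer (A).
stage2_input = f"{question} \n {options} \n {rationale}"
ids = tokenizer(stage2_input, return_tensors="pt").input_ids
answer = tokenizer.decode(model.generate(ids, max_length=64)[0],
                          skip_special_tokens=True)
print(answer)
```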
Install all required Python dependencies:

```
pip install -r requirements.txt
```

Download the dataset from the following repository:

https://github.com/lupantech/ScienceQA/tree/main/data

Download the extracted vision features from vision_features and unzip the files under `vision_features`.
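As a quick sanity check that the features unzipped correctly, you can inspect them with NumPy. The file name `detr.npy` below is a hypothetical placeholder for illustration; substitute whatever files the archive actually contains.

```python
import numpy as np

# "detr.npy" is an assumed file name -- adjust to the real archive contents.
feats = np.load("vision_features/detr.npy")
print(feats.shape, feats.dtype)  # a float array of per-image vision features
```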
```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_eval.json \
    --test_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_test.json
```
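For clarity on how the two commands connect: `--eval_le` and `--test_le` point the answer-inference run at the rationale predictions saved by the first run, which then serve as the G in the QCMG-A prompt. A hedged sketch of that linkage follows; the `preds` key is an assumption about the JSON layout, not a documented schema.

```python
import json

# Stage-one rationale predictions, at the path passed to --eval_le above.
path = ("experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_"
        "lr5e-05_bs16_op512_ep20/predictions_ans_eval.json")
with open(path) as f:
    stage1_preds = json.load(f)

# "preds" is an assumed key; inspect the file to confirm. Each generated
# rationale is spliced into the QCMG-A input for answer inference.
for rationale in stage1_preds.get("preds", [])[:3]:
    print(rationale[:120])
```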
Our trained models are available at models. To use our trained models, please put them under the `models` folder and run:

```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Rationale

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le models/rationale/predictions_ans_eval.json \
    --test_le models/rationale/predictions_ans_test.json \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Answer
```

If you find this work useful, please cite our paper:

```
@article{zhang2023multicot,
  title={Multimodal Chain-of-Thought Reasoning in Language Models},
  author={Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Zhao, Hai and Karypis, George and Smola, Alex},
  journal={arXiv preprint arXiv:2302.00923},
  year={2023}
}
```

This project is licensed under the Apache-2.0 License.
Part of our code is adapted from ScienceQA and Transformers.
We thank Pan Lu for providing the parameter sizes of the ScienceQA baselines.
