C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation
```bash
conda create -n domain_ada
conda activate domain_ada
pip3 install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install hydra-core numpy omegaconf scikit-learn tqdm wandb
```
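As a quick sanity check (not part of the original setup), you can confirm that the GPU-enabled PyTorch build was installed correctly:

```bash
# Verify the installed PyTorch version and CUDA availability.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expected output on a machine with a compatible NVIDIA driver: 1.8.1+cu111 True
```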
Please download the VisDA-C dataset and put it under `${DATA_ROOT}` (a download sketch is given after the layout below). For your convenience, we also provide `.txt` files compiled from the image labels under `./data/VISDA-C/`. The prepared directory should look like:
```
${DATA_ROOT}
├── VISDA-C
│   ├── train
│   ├── validation
│   ├── train_list.txt
│   ├── validation_list.txt
```
`${DATA_ROOT}` is set to `./data/` by default, which can be modified in `configs/data/basic.yaml` or via the hydra command line interface: `data.data_root=${DATA_ROOT}`.
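If you do not already have the dataset, the sketch below shows one way to fetch and unpack it. The URLs are those of the official VisDA-2017 classification release at the time of writing; they are an assumption here, so verify them before use.

```bash
# Sketch: download and unpack VisDA-C (train + validation) into ${DATA_ROOT}/VISDA-C.
# URLs assume the official VisDA-2017 classification release; adjust if the hosting has moved.
DATA_ROOT=./data
mkdir -p ${DATA_ROOT}/VISDA-C
cd ${DATA_ROOT}/VISDA-C
wget http://csr.bu.edu/ftp/visda17/clf/train.tar
wget http://csr.bu.edu/ftp/visda17/clf/validation.tar
tar -xf train.tar
tar -xf validation.tar
cd -
# train_list.txt and validation_list.txt are already provided under ./data/VISDA-C/.
```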
We use hydra as the configuration system. By default, the working directory is `./output`, which can be changed directly in `configs/root.yaml` or via the hydra command line interface: `workdir=${WORK_DIR}` (see the example below).
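For illustration, both overrides can be appended to the hydra entry point's command line. The entry point name below (`main.py`) is only a placeholder; check the provided training scripts for the actual one.

```bash
# Sketch: override the data root and working directory from the command line.
# "main.py" is a placeholder for whichever hydra entry point the scripts invoke.
DATA_ROOT=/path/to/datasets
WORK_DIR=/path/to/outputs
python main.py data.data_root=${DATA_ROOT} workdir=${WORK_DIR}
```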
VisDA-C experiments are done for `train` → `validation` adaptation. Before adaptation, we need a source model, which you can train with the script `train_source_VisDA.sh` as shown below.
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash train_source_VisDA.sh
```
We also provide pre-trained source models from 3 seeds (2020, 2021, 2022), which can be downloaded from here. After obtaining the source models, put them under `${SRC_MODEL_DIR}` and run:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash train_target_VisDA.sh
```
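One way to organize the downloaded checkpoints is sketched below. The file names are hypothetical examples, and how `train_target_VisDA.sh` locates `${SRC_MODEL_DIR}` (environment variable, hard-coded path, or hydra override) should be checked in the script itself.

```bash
# Sketch: place the three seed checkpoints under ${SRC_MODEL_DIR}.
# File names are hypothetical, not the actual release names.
export SRC_MODEL_DIR=./checkpoints/source/VISDA-C
mkdir -p ${SRC_MODEL_DIR}
cp ~/Downloads/visda_source_seed2020.pth ${SRC_MODEL_DIR}/
cp ~/Downloads/visda_source_seed2021.pth ${SRC_MODEL_DIR}/
cp ~/Downloads/visda_source_seed2022.pth ${SRC_MODEL_DIR}/
```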
Please download the DomainNet dataset (cleaned version) and put it under `${DATA_ROOT}`. Note that we follow MME and use a subset containing 126 classes from 4 domains, so we also provide `.txt` files compiled from the image labels under `./data/domainnet-126/` (a sanity-check sketch for these lists is given below). The prepared directory should look like:
```
${DATA_ROOT}
├── domainnet-126
│   ├── real
│   ├── sketch
│   ├── clipart
│   ├── painting
│   ├── real_list.txt
│   ├── sketch_list.txt
│   ├── clipart_list.txt
│   ├── painting_list.txt
```
`${DATA_ROOT}` is set to `./data/` by default, which can be modified in `configs/data/basic.yaml` or via the hydra command line interface: `data.data_root=${DATA_ROOT}`.
We use hydra as the configuration system. By default, the working directory is `./output`, which can be changed directly in `configs/root.yaml` or via the hydra command line interface: `workdir=${WORK_DIR}`.
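Because only the 126-class subset is used, it can be worth verifying that every image referenced by the provided list files actually exists under `${DATA_ROOT}`. The sketch below assumes each list line is a relative image path followed by a label (the usual MME/AdaContrast list format); verify this against the provided files before relying on it.

```bash
# Sketch: count images referenced by the DomainNet-126 list files that are missing on disk.
# Assumes each list line is "<relative/image/path> <label>" with Unix line endings.
DATA_ROOT=./data
for f in ${DATA_ROOT}/domainnet-126/*_list.txt; do
  missing=0
  while read -r img _; do
    [ -f "${DATA_ROOT}/domainnet-126/${img}" ] || missing=$((missing+1))
  done < "$f"
  echo "$f: ${missing} missing files"
done
```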
DomainNet-126 experiments are done for 7 domain shifts constructed from combinations of `Real`, `Sketch`, `Clipart`, and `Painting`. Before adaptation, we need a source model, which you can train with the script `train_source_DomainNet.sh` as shown below.
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash train_source_DomainNet.sh <SOURCE_DOMAIN>
```
We also provide pre-trained source models from 3 seeds (2020, 2021, 2022), which can be downloaded from here. After obtaining the source models, put them under `${SRC_MODEL_DIR}` and run:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash train_target_DomainNet.sh <SOURCE_DOMAIN> <TARGET_DOMAIN> <SRC_MODEL_DIR>
```
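As a concrete example, training a source model on `real` and then adapting it to `sketch` could look like the following; the domain arguments are assumed to match the lowercase folder names shown in the directory layout above.

```bash
# Example: source training on "real", then adaptation real -> sketch.
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash train_source_DomainNet.sh real
bash train_target_DomainNet.sh real sketch ${SRC_MODEL_DIR}
```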
- We follow the SHOT paper and its GitHub repo for these two datasets.
- Check the `Office` folder:

  ```bash
  cd Office
  ```

- Please manually download the Office and Office-Home datasets.
- To train the source models:

  ```bash
  export CUDA_VISIBLE_DEVICES=0,1,2,3
  bash office-home_source.sh
  ```

- For adaptation:

  ```bash
  export CUDA_VISIBLE_DEVICES=0,1,2,3
  bash office_home_target.sh
  ```
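Putting the Office-Home steps together, an end-to-end run could look like the sketch below; it assumes the datasets have already been downloaded and placed where the scripts inside `./Office` expect them.

```bash
# Sketch: Office-Home source training followed by adaptation.
cd Office
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash office-home_source.sh
bash office_home_target.sh
cd ..
```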
- GPU usage: 2 NVIDIA RTX A40 GPUs.
- If a run fails without a detailed error message, set `HYDRA_FULL_ERROR=1` to get the complete stack trace, as shown below.
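For instance, to rerun the VisDA-C adaptation with the full hydra stack trace enabled:

```bash
# Re-run with the complete hydra error trace.
HYDRA_FULL_ERROR=1 bash train_target_VisDA.sh
```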
- This repo is built on the AdaContrast and SHOT GitHub repos.
If you find this work helpful, please consider citing us:
```bibtex
@inproceedings{karim2023c,
  title={{C-SFDA}: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation},
  author={Karim, Nazmul and Mithun, Niluthpol Chowdhury and Rajvanshi, Abhinav and Chiu, Han-pang and Samarasekera, Supun and Rahnavard, Nazanin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24120--24131},
  year={2023}
}
```