This research introduces a comprehensive assessment framework for Arbitrary Image Style Transfer (AST) methods. Recognizing the limitations of existing evaluation approaches, our framework combines subjective user studies with objective metrics to provide a multi-grained understanding of AST performance. We collect a fine-grained dataset from multiple sources, covering a range of image contexts such as different scenes, object complexities, and rich parsing information. Both objective and subjective studies are conducted on this dataset.
🗹 Release the user study website.
🗹 Release the standard dataset with annotation.
☐ Release the code for extracting the ADE20K dataset to obtain image segmentation information.
☐ Release the explanation and starter code for the standard dataset.
☐ Release the evaluation code for AST methods.
For our subjective study, we have created an interactive website where participants can take part. Welcome to our platform, designed to host a variety of user studies. We invite you to explore and participate at your convenience, and we hope you have a delightful experience! 🐶
The ADE20K dataset comprises fully annotated images containing a diverse range of objects spanning over 3,000 object categories. These images also include detailed annotations for object parts. Additionally, ADE20K provides the original annotated polygons and object instances.
Given its comprehensive annotations and suitability for semantic understanding, we have selected ADE20K as an integral part of our collected dataset for developing the AST assessment. To utilize ADE20K, you can refer to their official repository, including its download link and an introductory overview.
We provide starter code for exploring the ADE20K dataset. It unpacks the dataset and parses the annotations to obtain essential attributes for each image, such as scene context, object complexity, and salient regions; a sketch of this parsing step is shown below.
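As a rough illustration of what the starter code does, the following sketch reads the per-image JSON annotations shipped with ADE20K and reports the scene tags and object count of each image. The field names and the local path are assumptions based on the ADE20K 2021 release, so verify them against our starter code and the official repository.

```python
import json
from pathlib import Path


def summarize_annotation(json_path):
    """Summarize one per-image ADE20K annotation file.

    Sketch only: the field names ("annotation" -> "scene", "object", ...)
    follow the ADE20K 2021 per-image JSON layout and may differ in other releases.
    """
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)["annotation"]
    objects = ann.get("object", [])
    return {
        "filename": ann.get("filename"),
        "scene": ann.get("scene"),            # scene context tags
        "num_objects": len(objects),          # simple proxy for object complexity
        "object_names": sorted({o["name"] for o in objects}),
    }


if __name__ == "__main__":
    # Hypothetical local path to the unpacked dataset; adjust to your copy.
    root = Path("ADE20K_2021_17_01/images/ADE/training")
    for json_path in sorted(root.rglob("*.json"))[:5]:
        print(summarize_annotation(json_path))
```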
We incorporate images sourced from WikiArt as a component of our collected dataset. WikiArt offers a diverse array of authentic artistic images, spanning various styles and genres.
The Hugging Face dataset “wikiart” comprises paintings sourced from various artists and extracted from WikiArt. Each image in this dataset is accompanied by class labels. Leveraging this dataset, we can systematically evaluate how different artistic styles influence the stylization of images.
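For reference, loading such a dataset with the `datasets` library might look like the sketch below. The exact dataset identifier (`huggan/wikiart` is a commonly used mirror) and the label column names are assumptions to verify against the dataset card.

```python
from datasets import load_dataset

# Sketch only: the dataset identifier and column names are assumptions;
# check the card of the WikiArt dataset you use on the Hugging Face Hub.
ds = load_dataset("huggan/wikiart", split="train")

example = ds[0]
style_names = ds.features["style"].names   # human-readable style class labels
print(example["image"].size, style_names[example["style"]])
```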
For the objective study, we have constructed a standard dataset including the following components:
- Content Images: extracted from the ADE20K dataset.
- Style Images: sourced from WikiArt.
- Stylized Images: generated by applying 10 distinct AST methods to the content and style images.
You are welcome to use our collected dataset: OneDrive / iCloud Drive 🐳
We have collected 10 AST methods based on different model architectures.
The implementation of most of these methods follows their official repositories, while a few are sourced from community contributions. Where necessary, we adjusted the implementations so that style transfer runs seamlessly over two folders containing large numbers of content and style images. Note that every method is run on exactly the same content and style images.
You can find the modified implementations of these AST methods within the folder AST. Below, we list each method's execute file along with the repository it is based on. We would like to express our gratitude to all the contributors involved in developing these methods.
AST Method | Type | Based Repository | Our Implementation | Execute File |
---|---|---|---|---|
ArtFusion | Diffusion-based | official | AST/ArtFusion/ | style_transfer.ipynb |
StyTr2 | Transformer-based | official | AST/StyTr2/ | test.py |
ArtFlow | Flow model | official | AST/ArtFlow/ | test.py |
UCAST | CNN-based, contrastive learning | official | AST/UCAST/ | test.py |
MAST | CNN-based, manifold-based | official | AST/MAST/ | test_artistic.py |
SANet | CNN-based, attention-based | pytorch | AST/SANet/ | Eval1.py |
AdaIN | CNN-based | official pytorch | AST/AdaIN/ | test.py |
LST | CNN-based | pytorch | AST/LST/ | TestArtistic.py |
NST | CNN-based | pytorch | AST/NST/ | neural_style.py |
WCT | CNN-based | official pytorch | AST/WCT/ | WCT.py |
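To regenerate the stylized images, each method's execute file can be driven over the shared content and style folders. The sketch below is a minimal driver that assumes a script accepts `--content_dir`, `--style_dir`, and `--output_dir` flags; the actual argument names differ between methods, so take them from each execute file's argument parser.

```python
import subprocess
from pathlib import Path

# Hypothetical locations of the shared content/style folders and the output root.
CONTENT_DIR = Path("dataset/content")
STYLE_DIR = Path("dataset/style")
OUTPUT_ROOT = Path("dataset/stylized")

# Example commands only: the flag names passed below are assumptions and must be
# replaced with the arguments each execute file actually exposes.
METHODS = {
    "AdaIN": ["python", "AST/AdaIN/test.py"],
    "StyTr2": ["python", "AST/StyTr2/test.py"],
}

for name, base_cmd in METHODS.items():
    out_dir = OUTPUT_ROOT / name
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        base_cmd + [
            "--content_dir", str(CONTENT_DIR),
            "--style_dir", str(STYLE_DIR),
            "--output_dir", str(out_dir),
        ],
        check=True,  # stop early if a method fails
    )
```

Because every method sees the same content and style images, the stylized outputs remain directly comparable. For the objective evaluation, we compute the following metrics: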
Metric | Type | Based Repository | Our Implementation | Execute File |
---|---|---|---|---|
SSIM | pixel-based | pytorch | Metrics/ | ssim.py |
Content Loss | CNN-based | pytorch | AST/NST/ | neural_style.py |
Gram Loss | CNN-based | pytorch | AST/NST/ | neural_style.py |
LPIPS | CNN-based | official | Metrics/ | lpips_2dirs.py |
ArtFID | CNN-based | official | Metrics/ArtFID/art-fid | __main__.py |
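As a quick illustration of the pixel-based and perceptual metrics, the sketch below computes SSIM with scikit-image and LPIPS with the `lpips` package for a single content/stylized pair. Our evaluation scripts (`ssim.py`, `lpips_2dirs.py`) remain the reference implementations, and the file paths here are placeholders.

```python
import lpips
import numpy as np
import torch
from PIL import Image
from skimage.metrics import structural_similarity

# Placeholder paths: point these at one content image and its stylized result.
content = np.array(Image.open("content/ADE_val_00000001.jpg").convert("RGB"))
stylized = np.array(Image.open("stylized/AdaIN/ADE_val_00000001.jpg").convert("RGB"))

# SSIM on the raw RGB arrays (both images must share the same resolution).
ssim_score = structural_similarity(content, stylized, channel_axis=-1, data_range=255)

# LPIPS expects NCHW tensors scaled to [-1, 1].
to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1).float().unsqueeze(0) / 127.5 - 1.0
loss_fn = lpips.LPIPS(net="alex")
lpips_score = loss_fn(to_tensor(content), to_tensor(stylized)).item()

print(f"SSIM: {ssim_score:.4f}  LPIPS: {lpips_score:.4f}")
```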