🧑🏻🎓 I am currently a PhD student at the University of Science and Technology of China (USTC).
🔍 My research covers a wide range of applications built on generative models such as GANs, Transformers, and diffusion models. At present, I am mainly interested in the following tasks: image inpainting, text-to-image generation, and video generation.
📂 To keep track of up-to-date resources for the aforementioned research directions, I maintain several GitHub repos, including:
- 1. Text-to-Image Generation: A collection of resources for the Text-to-Image (T2I) Generation task.
- 2. Video Generation: A collection of resources for the Video Generation task.
- 3. Image Inpainting: A collection of resources for the Image Inpainting task.
- 4. Radiology Report Generation: A collection of resources for the Radiology Report Generation (RRG) task.
🧪 I have reproduced/re-implemented some works and open-sourced them for potential use by the community:
- 1. Shape-Guided ControlNet: A re-implementation of ControlNet trained with shape masks.
- 2. Shape-Guided ControlNeXt: A re-implementation of ControlNeXt trained with shape masks.
Note
These codebases are experimental, and their performance is not guaranteed. If you have any questions about them, please feel free to open an issue or a PR. A minimal usage sketch is shown below.
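As a reference for how such a shape-guided checkpoint might be plugged into the standard `diffusers` ControlNet pipeline, here is a minimal sketch; the model IDs, mask file, and prompt are placeholders I made up, not the actual repo paths of the re-implementations above.

```python
# Minimal sketch (assumption: model IDs and file names below are placeholders,
# not the actual Hugging Face repo paths of the shape-guided re-implementations).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a ControlNet checkpoint trained with shape masks (placeholder ID).
controlnet = ControlNetModel.from_pretrained(
    "your-username/shape-guided-controlnet", torch_dtype=torch.float16
)

# Plug it into the standard Stable Diffusion ControlNet pipeline.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image is a binary shape mask of the target object.
shape_mask = load_image("shape_mask.png")

image = pipe(
    "a photo of a cat sitting on a bench",
    image=shape_mask,
    num_inference_steps=30,
).images[0]
image.save("result.png")
```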
🤝 I am looking for long-term collaborations on ground-breaking computer vision projects. Please feel free to contact me if you are interested.
📜 You can find more information about me on the following websites:
- Google Scholar: https://scholar.google.com/citations?user=Y3NKd1wAAAAJ&hl=zh-CN&authuser=2
- Zhihu: https://www.zhihu.com/people/liu-chang-82-34-78 (@叫我Alonzo就好了)
- 🍠 Xiaohongshu (小红书): https://www.xiaohongshu.com/user/profile/632dbaa10000000023026ad9?xsec_token=&xsec_source=pc_search (@叫我Alonzo就好了)
🔥 Recent News:
- [Nov. 19th] We have released our latest paper, "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing", with the corresponding code, model weights, and a testing benchmark, DAVIS-Edit, open-sourced. Feel free to check them out from the links!
- [Sep. 27th] I have open-sourced a re-implementation of ControlNeXt trained with shape masks; you can find more details in the GitHub and Hugging Face repos.
- [Sep. 18th] I have open-sourced a re-implementation of ControlNet trained with shape masks; you can find more details in the GitHub and Hugging Face repos.
- [Jun. 13th] Code and pre-trained model weights (Hugging Face and ModelScope) of our paper "LaCon: Late-Constraint Diffusion for Steerable Guided Image Synthesis" have been updated!
- [May 17th] Our paper "Towards Interactive Image Inpainting via Robust Sketch Refinement" has been accepted by TMM 2024!