From 483b1ef4eed9df030aa574158aa24c333c35ed4c Mon Sep 17 00:00:00 2001
From: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Date: Wed, 29 Nov 2023 17:38:51 +0100
Subject: [PATCH] Add OpenVINO lcm support to documentation (#479)

---
 docs/source/inference.mdx | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/docs/source/inference.mdx b/docs/source/inference.mdx
index bfd15bde11..f526492a12 100644
--- a/docs/source/inference.mdx
+++ b/docs/source/inference.mdx
@@ -454,3 +454,25 @@ refiner = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, export=Tr
 image = base(prompt=prompt, output_type="latent").images[0]
 image = refiner(prompt=prompt, image=image[None, :]).images[0]
 ```
+
+
+## Latent Consistency Models
+
+
+| Task                                 | Auto Class                           |
+|--------------------------------------|--------------------------------------|
+| `text-to-image`                      | `OVLatentConsistencyModelPipeline`   |
+
+
+### Text-to-Image
+
+Here is an example of how you can load a Latent Consistency Model (LCM) from [SimianLuo/LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) and run inference using OpenVINO:
+
+```python
+from optimum.intel import OVLatentConsistencyModelPipeline
+
+model_id = "SimianLuo/LCM_Dreamshaper_v7"
+pipeline = OVLatentConsistencyModelPipeline.from_pretrained(model_id, export=True)
+prompt = "sailing ship in storm by Leonardo da Vinci"
+images = pipeline(prompt, num_inference_steps=4, guidance_scale=8.0).images
+```