Sayak Paul
sayakpaul
AI & ML interests
Diffusion models, representation learning
Posts
Custom pipelines and components in Diffusers 🎸
Wanted to use customized pipelines and other components (schedulers, UNets, text encoders, etc.) in Diffusers? Found it inflexible?

Since the first dawn on earth, we have supported loading custom pipelines via a custom_pipeline argument 🌄

These pipelines are inference-only, i.e., the assumption is that we're leveraging an existing checkpoint (e.g., runwayml/stable-diffusion-v1-5) and ONLY modifying the pipeline implementation. We have many cool pipelines implemented that way, and they all share the same benefits available to a DiffusionPipeline, no compromise there 🤗

Check them here:
https://github.com/huggingface/diffusers/tree/main/examples/community

Sometimes, though, we need everything customized, i.e., custom components along with a custom pipeline. Sure, that's all possible. All you have to do is keep the implementations of those custom components in the Hub repository you're loading your pipeline checkpoint from.

SDXL Japanese was implemented like this 🔥
stabilityai/japanese-stable-diffusion-xl

Full guide is available here ⬇️
https://huggingface.co/docs/diffusers/main/en/using-diffusers/custom_pipeline_overview

And, of course, these share all the benefits that come with DiffusionPipeline.
Post
We're introducing experimental support for device_map in Diffusers 🤗

If you have multiple GPUs you want to distribute the pipeline's models across, you can do so. This becomes especially useful when you have multiple low-VRAM GPUs.

Documentation:
https://huggingface.co/docs/diffusers/main/en/training/distributed_inference#device-placement

🚨 Currently, only the "balanced" device-mapping strategy is supported.
Collections (2)
- This collection contains a list of my favorite text-to-image diffusion models.
- Provides a list of papers focusing on optimizing T2I diffusion models, targeting fewer timesteps, architecture optimization, and more.
- Progressive Distillation for Fast Sampling of Diffusion Models (paper 2202.00512)
- On Distillation of Guided Diffusion Models (paper 2210.03142)
- InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation (paper 2309.06380)
- Consistency Models (paper 2303.01469)
spaces (19)
- 📈 Demo Docker Gradio
- 🧨 Diffusers Docs QA Chatbot: ask questions to the Diffusers documentation.
- 🧨 Convert Kerascv SD to Diffusers
- 🚀 Inpainting Tool
- 🐶 Generate Custom Pokemons with Stable Diffusion
- ⏰ Evaluate StableDiffusionPipeline with Different Schedulers
models (57)
- sayakpaul/sdxl-orpo-large-sft-lr-1e-7-ds
- sayakpaul/sdxl-orpo-large-beta_orpo-0.01-lr-1e-6
- sayakpaul/smol_unet2d
- sayakpaul/actual_bigger_transformer
- sayakpaul/sdxl-orpo-large-beta_orpo-0.05-beta_inner-250-lr-5e-6-lnoise-0.2
- sayakpaul/sdxl-orpo-large-beta_orpo-0.1-beta_inner-500-lr-5e-6
- sayakpaul/sdxl-orpo-large-beta_orpo-0.05-beta_inner-250-lr-5e-6
- sayakpaul/diffusion-sdxl-orpo-wds (text-to-image)
- sayakpaul/diffusion-sdxl-orpo (text-to-image)
- sayakpaul/kohya-format-sdxl-lora (text-to-image)
datasets (30)
- sayakpaul/sample-datasets
- sayakpaul/pickapic_v2_webdataset
- sayakpaul/generated-gemini-responses
- sayakpaul/no_robots_only_coding
- sayakpaul/temp-dataset-autotrain-553
- sayakpaul/diffusers-qa-chatbot-artifacts
- sayakpaul/mgie-results
- sayakpaul/coco-30-val-2014
- sayakpaul/drawbench-sdxl-refiner
- sayakpaul/drawbench-kandinsky-v22