Getting an error while using the deepspeed package to speed up inference

#54
by Kroy - opened

ValueError: model must be a torch.nn.Module, got <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'>

packages:
diffusers==0.13.0
deepspeed==0.7.3
accelerate==0.16.0

code snippet:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
import deepspeed

model_id = "stabilityai/stable-diffusion-2"

model = StableDiffusionPipeline.from_pretrained(
    model_id,
    revision="fp16",
    torch_dtype=torch.float16,
).to("cuda")

deepspeed.init_inference(
    model=model,                       # the diffusers pipeline (not a torch.nn.Module)
    mp_size=1,                         # number of GPUs
    dtype=torch.float16,               # dtype of the weights (fp16)
    replace_method="auto",             # let DeepSpeed automatically identify the layers to replace
    replace_with_kernel_inject=False,  # replace the model with the kernel injector
)

# prompt_list and generator are defined elsewhere in the script
images = model(prompt_list, generator=generator).images
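
For context on the error: deepspeed.init_inference (at least in 0.7.3) type-checks its model argument against torch.nn.Module, and a StableDiffusionPipeline is a container of several modules rather than a module itself, hence the ValueError. Below is a minimal, untested sketch of one possible workaround, assuming the goal is to accelerate only the UNet (which is a torch.nn.Module). The choice of replace_with_kernel_inject=True and assigning the returned engine's .module back onto the pipeline are my assumptions, not a documented recipe.

import torch
from diffusers import StableDiffusionPipeline
import deepspeed

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    revision="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# Wrap only the UNet, which is a torch.nn.Module, so the type check passes.
# NOTE: untested sketch; whether kernel injection actually covers the UNet's
# layers depends on the DeepSpeed version.
engine = deepspeed.init_inference(
    model=pipe.unet,
    mp_size=1,
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)

# The engine exposes the wrapped module via .module; put it back so the
# pipeline keeps calling a plain nn.Module.
pipe.unet = engine.module

generator = torch.Generator(device="cuda").manual_seed(0)
images = pipe(["a photo of an astronaut riding a horse"], generator=generator).images

I believe newer DeepSpeed releases also added handling for diffusers pipelines in init_inference, so upgrading deepspeed beyond 0.7.3 may be worth trying as well.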
