How To Use Stable Diffusion 2.1

Salad's Stable Diffusion Test Recipe

Stable Diffusion API open for public testing: Coming soon!

  1. You cannot use a GET request to access the API; you'll need to use a POST request instead.

Here's what it should look like:

{
  "modelInputs": {
    "prompt": "YOUR PROMPT GOES HERE!",
    "num_inference_steps": 50,
    "guidance_scale": 20,
    "width": 512,
    "height": 512,
    "seed": 3239022079
  },
  "callInputs": {
    "PIPELINE": "StableDiffusionPipeline",
    "SCHEDULER": "LMSDiscreteScheduler",
    "safety_checker": "true"
  }
}

In return, you'll receive a response containing a Base64-encoded string that you can decode into the image.
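
For reference, here's a minimal Python sketch of that round trip. The endpoint URL and the response field name ("image") are assumptions for illustration only; swap in the actual values we publish when the test API opens.

import base64
import requests

# Hypothetical endpoint -- replace with the URL provided when the test API opens.
API_URL = "https://example.salad.com/stable-diffusion"

payload = {
    "modelInputs": {
        "prompt": "YOUR PROMPT GOES HERE!",
        "num_inference_steps": 50,
        "guidance_scale": 20,
        "width": 512,
        "height": 512,
        "seed": 3239022079,
    },
    "callInputs": {
        "PIPELINE": "StableDiffusionPipeline",
        "SCHEDULER": "LMSDiscreteScheduler",
        "safety_checker": "true",
    },
}

# Send the request as a POST (GET requests are not accepted).
response = requests.post(API_URL, json=payload)
response.raise_for_status()

# Assumed response shape: the image arrives as a Base64 string under "image";
# check the actual field name in the real response.
image_b64 = response.json()["image"]
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))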

  1. This API (and several that will follow) is for trial purposes only. If you want to run in production, we're happy to generate a custom deployment for you.

  2. Since this is a shared testing resource, responses from the API will be slower than they would be in an actual production deployment on the Salad network.

An additional modelInputs parameter is available:

"num_images_per_prompt": 5

You can use this to control how many images are generated for a single request. A filled-in example request would look like this:

{
  "modelInputs": {
    "prompt": "A short-haired lion sleeping on a beach",
    "num_inference_steps": 50,
    "guidance_scale": 7.5,
    "width": 512,
    "height": 512,
    "seed": 3239022079,
    "num_images_per_prompt": 2 
  },
  "callInputs": {
    "PIPELINE": "StableDiffusionPipeline",
    "SCHEDULER": "LMSDiscreteScheduler",
    "safety_checker": "true"
  }
}

You'll receive multiple Base64-encoded images to decode in the POST response.
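
As a rough sketch (again assuming a placeholder endpoint, and that the images come back as a list of Base64 strings under a hypothetical "images" key), you could decode each image like this:

import base64
import requests

# Hypothetical endpoint -- replace with the URL provided when the test API opens.
API_URL = "https://example.salad.com/stable-diffusion"

payload = {
    "modelInputs": {
        "prompt": "A short-haired lion sleeping on a beach",
        "num_inference_steps": 50,
        "guidance_scale": 7.5,
        "width": 512,
        "height": 512,
        "seed": 3239022079,
        "num_images_per_prompt": 2,
    },
    "callInputs": {
        "PIPELINE": "StableDiffusionPipeline",
        "SCHEDULER": "LMSDiscreteScheduler",
        "safety_checker": "true",
    },
}

response = requests.post(API_URL, json=payload)
response.raise_for_status()

# Assumed response shape: a list of Base64 strings under an "images" key.
for i, image_b64 in enumerate(response.json()["images"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))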

NOTE
If you have any questions, feel free to hit up anyone from the Salad team and we'll do our best to help out. Feedback can be sent directly to a team member and/or submitted in feature-requests + feedback.

Don't hold anything back!
We're just now ramping up our Recipes, and we're open to any and all suggestions.
Thanks everyone, and we will let you know when we turn the next page in our Recipe Book.