How To Use Stable Diffusion 2.1

Salad Inference Endpoints - Stable Diffusion

SIE for Stable Diffusion 2.1 uses an asynchronous worker queue to process requests as they are sent to the API. Using the worker queue involves two actions: first, creating a new task in the queue; then, checking the task for its status and results.


Getting Ready

Before you can start using SIE with Stable Diffusion, you will need to create a SaladCloud account on the portal in order to get your API key. Once you have created your Organization, you will need to add billing information. After you have done this, your account will automatically be approved to start using the SIE endpoint!

Creating a new task

  1. You will POST your request with the modelInputs that will be used to generate your response.

Just like all Salad API requests, you will need to include your unique API key in the Salad-Api-Key header. You can get your API key from your Account Settings.

Example request:

    {
      "modelInputs": {
        "prompt": "An oil painting of a salad",
        "num_inference_steps": 50,
        "guidance_scale": 20,
        "width": 512,
        "height": 512
      },
      "callInputs": {
        "PIPELINE": "StableDiffusionPipeline",
        "SCHEDULER": "LMSDiscreteScheduler",
        "safety_checker": "true"
      }
    }

You’ll get a response with the unique id for this task, as well as the URL you can use to get the results of this task. If you get a 202 Accepted, your job has been successfully added to the worker queue and will be processed as soon as possible.

Example Response

    {
      "requestId": "00000000-0000-0000-0000-000000000000",
      "resultUrl": "{baseURL}/00000000-0000-0000-0000-000000000000"
    }
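The request flow above can be sketched in Python using only the standard library. The base URL and API key below are placeholders, not real values, and the helper name build_create_task_request is our own; substitute the actual SIE endpoint and your key from Account Settings.

```python
import json
import urllib.request

def build_create_task_request(base_url, api_key):
    """Build the POST request that creates a new task.

    base_url is a placeholder for the SIE endpoint; the payload mirrors
    the example request in the documentation above.
    """
    body = {
        "modelInputs": {
            "prompt": "An oil painting of a salad",
            "num_inference_steps": 50,
            "guidance_scale": 20,
            "width": 512,
            "height": 512,
        },
        "callInputs": {
            "PIPELINE": "StableDiffusionPipeline",
            "SCHEDULER": "LMSDiscreteScheduler",
            "safety_checker": "true",
        },
    }
    return urllib.request.Request(
        base_url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Salad-Api-Key": api_key,  # required on every Salad API request
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (needs a live endpoint and a real key):
# with urllib.request.urlopen(build_create_task_request(base_url, key)) as resp:
#     task = json.load(resp)  # contains requestId and resultUrl
```

Building the request separately from sending it makes the payload easy to inspect before you spend queue time on it.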

Checking the status

You can immediately start checking the resultUrl for the results of your request with a GET. There are three possible statuses for your result: pending, success, or failed.

If the status is pending, the request is still being processed by SIE; it could be queued or waiting on the result from the hosted model.

Example Pending Response

    {
      "requestId": "00000000-0000-0000-0000-000000000000",
      "status": "pending"
    }
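A simple polling loop handles the pending case. This is a sketch, not an official client: fetch_status stands in for any callable that GETs the resultUrl and returns the parsed JSON, which also keeps the loop testable without a live endpoint.

```python
import time

def wait_for_result(fetch_status, poll_interval=1.0, max_attempts=60):
    """Poll until the task leaves the 'pending' status.

    fetch_status: callable returning the parsed JSON body from a GET
    on the task's resultUrl (injected so this sketch needs no network).
    Returns the final response, whose status is 'success' or 'failed'.
    """
    for _ in range(max_attempts):
        result = fetch_status()
        if result.get("status") != "pending":
            return result  # success (with modelOutputs) or failed
        time.sleep(poll_interval)
    raise TimeoutError("task still pending after max_attempts polls")
```

In practice you would pass a closure that performs the GET with your Salad-Api-Key header and returns the decoded JSON.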

Once the model has run successfully, the results are added to the response and the status changes to success.

Example Success Response

    {
      "requestId": "00000000-0000-0000-0000-000000000000",
      "status": "success",
      "modelOutputs": ...
    }

The modelOutputs field contains the exact response from the Stable Diffusion model. All images are base64-encoded as part of the response. If you requested multiple images for the prompt via the num_images_per_prompt option, each of those images will be base64-encoded in the images array.
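Decoding the images array back into image bytes is a one-liner per entry. The exact nesting of modelOutputs is not shown above, so this sketch takes the images list itself rather than assuming a particular wrapper structure.

```python
import base64

def decode_images(images):
    """Decode each base64 string from the response's images array
    into raw image bytes, ready to be written to disk."""
    return [base64.b64decode(encoded) for encoded in images]

# Writing the decoded images out, e.g. as PNG files:
# for i, data in enumerate(decode_images(images)):
#     with open(f"salad-{i}.png", "wb") as f:
#         f.write(data)
```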