Overview

To work with a job queue, you’ll need to customize the container that you deploy on Salad to include some way to receive jobs from the Job Queue, pass them to your application, then receive the results from your application and return them to the Queue. For this purpose, we provide a Job Queue Worker as a precompiled Go binary which can be included and run inside your container. You can download it from GitHub. Note that the binary will work for any Linux-based image.
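For illustration, a download step might look like the following; the asset name here is an assumption, so check the GitHub releases page for the exact filename and its published SHA256 checksum:

# illustrative asset name -- confirm the real one on the GitHub releases page
curl -fsSL -o salad-http-job-queue-worker \
  https://github.com/SaladTechnologies/salad-cloud-job-queue-worker/releases/latest/download/salad-http-job-queue-worker
chmod +x salad-http-job-queue-worker
# compare the output against the checksum published with the release
sha256sum salad-http-job-queue-worker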

📘 Prerequisites

The examples that follow can be found on GitHub. If you want to follow along with the sample images, download them to your PC with:

git clone https://github.com/SaladTechnologies/salad-cloud-job-queue-worker.git

Managing Multiple Processes

To configure your container to receive Jobs from a Job Queue, you’ll be running two processes in the same container - your application, as well as the Job Queue Worker. There are many ways to manage multiple processes in a single container, and we outline two such methods for you below.

Build a wrapper image with s6-overlay

In this example, we are going to build a new image which uses s6-overlay as a process supervisor for both your primary application and the Job Queue Worker process. Here’s how to do it.

  1. You’ll need to include the Job Queue Worker in your container. In this example, we first download the binary to our local machine and build it into our container image by adding a COPY command to the Dockerfile.

    1. Alternatively, you may choose to download and unpack the Worker when your container image is built by including ADD and RUN commands in your Dockerfile, or expand your entrypoint to include the download/unpack process.
    2. You may compare the SHA256 checksum of your downloaded file with the one found on the GitHub releases page.
  2. Now we will build your original image. As an example, see samples/mandelbrot/base/ - this is a simple container that receives requests and returns images of the Mandelbrot set. It uses uvicorn to listen on port 80, with requests on the generate path handled by the Mandelbrot generation code.

    dockerfile
    FROM python:3.11-slim
    WORKDIR /app
    COPY ./requirements.txt *.py ./
    RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
    CMD ["uvicorn", "main:app", "--host", "*", "--port", "80"]
    
    1. Navigate to the samples/mandelbrot/base/ directory and run docker build -t foobar .
    2. Notice that the name of the output container image in this case is foobar; you may substitute any name you like. The image with that name will not be pushed to any registry and will remain solely on your local disk.
  3. Because the resulting container will run two processes, we will use the s6-overlay process supervisor to manage them. It must be configured with the command that starts your application, which is typically found in the CMD or ENTRYPOINT line of the original Dockerfile. As an example, see samples/mandelbrot/with-s6-overlay/

    dockerfile
    FROM foobar
    
    WORKDIR /app
    
    # install the s6-overlay
    ARG S6_OVERLAY_VERSION=3.1.6.2
    ENV S6_KEEP_ENV=1
    
    RUN apt-get update && \
        apt-get -y install xz-utils curl && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
    ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
    RUN \
      tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz && \
      rm -rf /tmp/s6-overlay-noarch.tar.xz
    ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz /tmp
    RUN \
      tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz && \
      rm -rf /tmp/s6-overlay-x86_64.tar.xz
    
    # copy the s6-overlay service configuration
    COPY --chmod=755 s6 /etc/
    
    # copy the previously downloaded Job Queue Worker binary
    COPY --chmod=755 ./salad-http-job-queue-worker /usr/local/bin/
    
    ENTRYPOINT ["/init"]
    CMD ["/usr/local/bin/salad-http-job-queue-worker"]
    
    1. The s6 configuration files are stored in samples/mandelbrot/with-s6-overlay/s6/s6-overlay/s6-rc.d/. You’ll find the configuration for the sample service already there. The configuration must have two files, type and run. You can copy the type file from the provided sample, but run **must** contain the command which starts your application: it is equivalent to the CMD or ENTRYPOINT in your original Dockerfile.
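    A minimal sketch of those two files, assuming the sample’s uvicorn start command (the service directory name is a placeholder; s6-overlay run scripts are commonly written with execline, but any executable script works):

      # s6/s6-overlay/s6-rc.d/<service>/type
      longrun
      
      # s6/s6-overlay/s6-rc.d/<service>/run
      #!/command/execlineb -P
      cd /app
      uvicorn main:app --host 0.0.0.0 --port 80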
  4. Next, build the wrapper image. Locate the line FROM foobar, replace foobar with whatever name you used in the previous step, and save the file. Then build:

    1. docker build -t mycompany/myimage -f docker/Dockerfile .
    2. Note that the name of the image here is important. That is what you are going to deploy to SaladCloud.
    3. Replace mycompany/myimage with whatever name you want the resulting image to have.
  5. You will need to note the port and path of your application within the container for use in the next step, when you’re configuring your container group to include a connection to a job queue. This port and path are where the Job Queue Worker will send the jobs. In our example, we have the following values:
    1. Port: 80
    2. Path: generate
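Before deploying, you can sanity-check this port and path by running the base image locally and sending a request yourself. A minimal sketch, assuming the sample accepts a JSON POST (the empty request body is a placeholder; check the sample code for the actual schema):

# run the base image locally, mapping container port 80 to local port 8080
docker run -d --rm -p 8080:80 --name mandelbrot foobar
# send a test request to the same port/path the Job Queue Worker will use
curl -X POST http://localhost:8080/generate \
  -H 'Content-Type: application/json' \
  -d '{}' --output out.png
docker stop mandelbrot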

Build a wrapper image with a shell script

In this example, we are going to build a new image which uses a shell script to run both your application and the Job Queue Worker.
Here’s how to do it.

  1. First, you’ll need to include the Job Queue Worker in your container. In this example, we first download the binary to our local machine and build it into our container image by adding a COPY command to the Dockerfile. Alternatively, you may choose to download and unpack the Worker when your container image is built by including ADD and RUN commands in your Dockerfile.

    1. The binary will work for any Linux-based base image.
  2. Now we will build your original image. As an example, see samples/mandelbrot/base/ - this is a simple container that receives JSON requests on port 80 at the generate path and returns images of the Mandelbrot set. It uses uvicorn to listen on port 80 for requests, which are handled by the Mandelbrot generation code.

    dockerfile
    FROM python:3.11-slim
    WORKDIR /app
    COPY ./requirements.txt *.py ./
    RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
    CMD ["uvicorn", "main:app", "--host", "*", "--port", "80"]
    
    1. Navigate to the samples/mandelbrot/base/ directory and run docker build -t foobar .
    2. Notice that the name of the output container image in this case is foobar; you may substitute any name you like. The image with that name will not be pushed to any registry and will remain solely on your local disk.
  3. Because the resulting container will run two processes, we will use a shell script, run.sh, to manage them. The relevant sample is in samples/mandelbrot/with-shell-script/:

    dockerfile
    FROM foobar
    
    WORKDIR /app
    
    # copy previously downloaded salad-http-job-queue-worker
    COPY ./salad-http-job-queue-worker /usr/local/bin/
    # copy the shell script
    COPY run.sh ./
    
    CMD ["/app/run.sh"]
    
    1. Locate the line FROM foobar and replace foobar with whatever name you used in the previous step.

    2. Save the file.

    3. Open the shell script run.sh:

      #!/bin/bash
      
      # Start the salad-http-job-queue-worker
      /usr/local/bin/salad-http-job-queue-worker &
      
      # Start the rest service
      uvicorn main:app --host 0.0.0.0 --port 80 &
      
      # Wait for any process to exit
      wait -n
      
      # Exit with status of process that exited first
      exit $?
      
    4. Locate the line uvicorn main:app --host 0.0.0.0 --port 80 & and replace the start command with whatever is required to start your application; it is indicated in either the ENTRYPOINT or CMD directive in the original Dockerfile. In our example, we can leave it as-is.

  4. Now, build the wrapper image with docker build -t mycompany/myimage -f docker/Dockerfile .
    Note that the name of the image here is important. That is what you are going to deploy to SaladCloud. Replace mycompany/myimage with whatever name you want the resulting image to have.

  5. You will need to note the port and path of your application within the container for use in the next step, when you’re configuring your container group to include a connection to a job queue. This port and path are where the Job Queue Worker will send the jobs. In our example, we have the following values:

    1. Port: 80
    2. Path: generate

Pushing the image to a Registry

Regardless of which path you chose, you should now have an image built with your application, the Job Queue Worker, and some way of managing both processes. It’s time to push it to the container registry of your choice. Learn more about the registries supported by Salad at Container Registries.
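For example, pushing to Docker Hub looks like this (substitute your own registry, account, and image name):

# authenticate, then push the wrapper image built in the previous step
docker login
docker push mycompany/myimage:latest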

Additional notes about the Salad Job Queue Worker:

  • The Worker needs to connect to a local HTTP server running inside SEL, our special version of Linux that runs on the host machine behind your container replica. This means you won’t be able to test the Job Queue Worker locally.
  • By default, the Worker logging level is set to 'error'. Logs will be sent to the Salad Container Logs or your external logging tools. You can adjust the logging level by adding a 'SALAD_QUEUE_WORKER_LOG_LEVEL' environment variable. The value can be changed to 'warn', 'info', or 'debug'. If the logging level is set to 'debug', the Worker will output a heartbeat log every few seconds as it checks for a new job. That log will look something like this: time="2024-04-26T14:30:42Z" level=info msg=heartbeat file="worker.go:162".
  • If you choose not to use any Startup and/or Readiness probes, the Job Queue Worker will immediately start looking for jobs, even if the application inside your container is not yet ready to receive them. If this happens, the job will fail and you’ll need to submit it again. Read more about health probes here.