300 Operational Patterns

title: "Containerization Operational Patterns"
tags: ["kb"]

Containerization Operational Patterns

This section describes the common operational instructions and patterns used to manage containers under this project's containerization strategy.

1. Image Building and Reproducibility

  • Command: Images are built with docker build against the respective Dockerfile.
  • Version Control: Application code repositories are cloned with git clone and then explicitly pinned to specific commit hashes via git reset --hard directly within the Dockerfiles, which ensures build reproducibility (see the sketch after this list).
  • Syntax Directive: Dockerfiles include the # syntax=docker/dockerfile:1 directive.
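
A minimal Dockerfile sketch of this pinning pattern is shown below; the base image, repository URL, and commit hash are illustrative placeholders, not values taken from this project.

```Dockerfile
# syntax=docker/dockerfile:1

# Illustrative base image; the project's Dockerfiles may use a different one.
FROM python:3.10-slim

WORKDIR /app

# Clone the application, then pin the working tree to an exact commit so that
# every build of this image starts from the same source state.
RUN apt-get update && apt-get install -y --no-install-recommends git \
 && git clone https://github.com/example/app.git . \
 && git reset --hard 0123456789abcdef0123456789abcdef01234567 \
 && rm -rf /var/lib/apt/lists/*
```

The image would then be built with, e.g., docker build -t example-app . from the directory containing that Dockerfile.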

2. Running Containers

Containers are designed to be run with published ports (the -p flag) so that their web user interfaces or APIs are reachable from the host; representative run commands are sketched after the list below.

  • hlky/stable-diffusion:
    • Exposes port: 7860
    • Entrypoint: python3 -u scripts/webui.py
  • naifu:
    • Exposes port: 6969
    • Entrypoint: ./run.sh
  • AUTOMATIC1111/stable-diffusion-webui:
    • Exposes port: 7860
    • Entrypoint: python3 -u ../../webui.py
    • Requires mounting config.json and potentially other helper scripts.
  • linuxserver/jellyfin:
    • Exposes ports: 8096 (HTTP), optionally 8920 (HTTPS), 7359/udp, 1900/udp.
    • Entrypoint: /init (due to s6-overlay initialization).
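
The sketch below shows representative docker run invocations for two of these containers; image tags, container names, and detached-mode usage are assumptions rather than project-mandated values. The entrypoints listed above are baked into the images, so no command override is needed.

```sh
# hlky/stable-diffusion: publish the web UI on port 7860
docker run -d --name sd-webui -p 7860:7860 hlky/stable-diffusion:latest

# linuxserver/jellyfin: HTTP UI, optional HTTPS, plus discovery ports
docker run -d --name jellyfin \
  -p 8096:8096 -p 8920:8920 \
  -p 7359:7359/udp -p 1900:1900/udp \
  linuxserver/jellyfin:latest
```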

3. Configuration

  • Environment Variables: ENV and ARG instructions within Dockerfiles are widely used for configuration. Common variables include:
    • PATH
    • CLI_ARGS
    • TOKEN
    • PUID (Process User ID)
    • PGID (Process Group ID)
    • TZ (Timezone)
    • Model paths
  • Configuration Files: For AUTOMATIC1111/stable-diffusion-webui, config.json is copied into the image to define output directories, image generation parameters, and post-processing settings (see the sketch after this list).
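
A brief Dockerfile sketch of the ENV/ARG and config-file pattern follows; the base image, variable values, and the /app destination path are placeholders, not the project's actual settings.

```Dockerfile
# syntax=docker/dockerfile:1

# Illustrative base image.
FROM python:3.10-slim

# Build-time argument, overridable with --build-arg, re-exported as an
# environment variable so the entrypoint can read it at runtime.
ARG CLI_ARGS=""
ENV CLI_ARGS=${CLI_ARGS}

# Common runtime configuration variables (values shown are placeholders).
ENV PUID=1000 \
    PGID=1000 \
    TZ=Etc/UTC
ENV PATH="/opt/venv/bin:${PATH}"

# Bake the UI configuration into the image (output directories,
# generation parameters, post-processing settings).
COPY config.json /app/config.json
```

Runtime values can still be overridden when starting the container, e.g. with -e TZ=America/New_York on the docker run command line.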

4. GPU Acceleration

  • AI Workloads: GPU acceleration is essential for high-performance AI image generation.
  • Runtime Requirement: Requires the nvidia-docker runtime on the host system.
  • Environment Variable: The NVIDIA_VISIBLE_DEVICES=all environment variable is set so that all host GPUs are accessible from within the container (see the sketch after this list).
  • Broader Support: Jellyfin also documents hardware acceleration support for Intel, Nvidia, and Raspberry Pi in its README.md.
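
A sketch of a GPU-enabled run command under these requirements follows; the image name and tag are placeholders, and the --gpus variant is an alternative available on newer Docker versions rather than something taken from this project.

```sh
# Legacy nvidia-docker runtime style, matching the requirements above.
docker run -d --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -p 7860:7860 \
  hlky/stable-diffusion:latest

# Roughly equivalent form on Docker 19.03+ with the NVIDIA Container Toolkit.
docker run -d --gpus all -p 7860:7860 hlky/stable-diffusion:latest
```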

5. Volume Mounting

  • /config: A common volume for persistent application data, notably used by Jellyfin.
  • /output: Used by Stable Diffusion UIs for storing generated images.
  • /models: For AI applications, used for storing and accessing pre-downloaded AI models.
  • Jellyfin Media Libraries: Media content is expected at /data/tvshows and /data/movies (see the mount sketch below).
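
The sketch below combines these mounts into run commands; all host-side paths (left of each colon) are placeholders.

```sh
# Stable Diffusion UI: pre-downloaded models plus a persistent output directory.
docker run -d -p 7860:7860 \
  -v /srv/sd/models:/models \
  -v /srv/sd/output:/output \
  hlky/stable-diffusion:latest

# Jellyfin: persistent application data and the expected media library paths.
docker run -d -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media/tvshows:/data/tvshows \
  -v /srv/media/movies:/data/movies \
  linuxserver/jellyfin:latest
```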