
Slackware Cloud Server Series, Episode 12: Local AI

The world is on fire, thanks to the orange clown who wages war for personal gain. Or is it because data centers are super-heated running all these AI models 24/7 ?
In any case, the AI boom wreaked havoc with my plans to purchase a new computer in order to replace my ageing build server here at home. RAM sticks are 4 to 5 times as expensive now compared to half a year ago, and hard drives are pretty hard to come by.
I decided to postpone a full replacement of the server hardware for a while. Instead I bought one item which has seen only a moderate price increase so far: a GeForce RTX 5060 Ti graphics card with 16 GB of VRAM. I installed that as the second GPU card in the server and did not connect any screen to it.

Instead I decided to do a local experiment with Artificial Intelligence. The result of that experiment is a new episode in my Slackware Cloud Server series. I am going to show you how to make the unused Nvidia GPU available to Slackware, how to install and configure a tool that manages (downloads, runs) local Large Language Models (aka LLMs, aka AI models) and then expose the AI models via a web page that looks a lot like a Claude Chat or a ChatGPT instance.


Check out the list below which shows past, present and future episodes in my Slackware Cloud Server series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.


Why the hell would I want to run a local AI at home?

I think that the advantages are pretty obvious, but let me spell them out for you.

  • Privacy & data security
    It’s my main reason for doing this. The whole Cloud Server series is about owning, controlling and managing your own data without giving it away to the big tech corporations. Literally all of the processing happens on our Slackware server. None of your sensitive files, private documents or even the software code that you are developing gets sent to someone else’s infrastructure.
  • Offline accessibility
    The local LLM does not require an active internet connection. It’ll be your conscious decision to give the AI model access to Internet search engines.
  • Cost efficiency (at least long-term)
    I had to make an upfront investment in the required hardware of course. LLMs need to run inside the VRAM of a high-end GPU in order to respond with a decent speed. If you have a spare GPU in your gaming rig, then by all means re-use that card! But really, running LLMs locally will remove the need for monthly subscription fees and per-token API costs. I know people who pay 100 euros per month to be able to consume the API tokens that they need for business development.
    If you are a high-volume LLM user, running the model locally can lead to substantial savings over time. Your local AI may not be one of the fancy new commercial models and the speed of answering may be a bit slower, but there are always going to be trade-offs.
  • Reduced Latency
    Often overlooked actually, but all your ChatGPT, Gemini or Claude queries involve a “network round trip” to a remote server. If you want to create an interactive AI service like a voice assistant, a local model may be able to offer snappier responses.
  • Customization & Control
    Obviously, you have full ownership of the model you downloaded and you control its environment. This allows you to:

    • Fine-tune the model on your own niche datasets.
    • Choose specific open-source models (like Llama 3, Gemma 4 or Mistral) and quantization levels that fit your hardware.
    • Avoid content restrictions or “guardrails” defined by commercial providers.
  • Reliability & Independence
    You are not at the mercy of Big Tech! Any downtime is your own problem to solve; you never run into rate limits; you will never be hit with sudden policy changes that deprecate the model you rely on overnight.

Here is an architectural overview of what the stack looks like. We install Ollama on bare-metal and will be running Open WebUI in a Docker container.


Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

Web Hosts

For the sake of this instruction, I will use the URL “https://ai.darkstar.lan” as your landing page for your private AI chatbot.

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • ai.darkstar.lan

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.
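
If you have never set up such a VirtualHost before, a bare-bones skeleton could look like the block below. Treat it as a starting point only: the certificate and log file paths are just examples and need to match your own Let’s Encrypt setup and httpd layout.

<VirtualHost *:443>
    ServerName ai.darkstar.lan
    SSLEngine on
    # Example certificate paths - substitute the ones from your own setup:
    SSLCertificateFile /etc/letsencrypt/live/ai.darkstar.lan/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/ai.darkstar.lan/privkey.pem
    ErrorLog /var/log/httpd/ai.darkstar.lan-error_log
    CustomLog /var/log/httpd/ai.darkstar.lan-access_log combined
    # The reverse proxy configuration from later in this article will be added here.
</VirtualHost>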

Port numbers

  • Ollama uses TCP port 11434 by default and we are not going to change that.
  • The Open WebUI docker container will listen at the loopback TCP port 3456 which is where the Apache Reverse Proxy will direct incoming traffic.

Secret keys

  • For persistent logins:
    WEBUI_SECRET_KEY=eePAAjEgEnZdgAQcVKb/DA993rwU+xbBb1scG0Zz1sQ=
  • Connecting Open Terminal to Open WebUI:
    OPEN_TERMINAL_API_KEY=qIShpFT2IUZaglqLTX5UCw6oQSyuCuKgpgF/xViqUWA=

Random strings like these can be generated using a convoluted series of commands like:

$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1 | openssl dgst -binary -sha256 | openssl base64

… which outputs a 45-character string ending on ‘=’.
Or generate 32 random characters with a truncated version of that commandline:

$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1
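
By the way, if you don’t care about restricting the character set first, a single openssl command gives you a comparable Base64-encoded random string (32 random bytes, ending in ‘=’):

$ openssl rand -base64 32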

Docker network

  • We assign a Docker network segment to our Open WebUI container: 172.24.1.0/24 and call it “localai.lan”

File Locations

  • The Docker configuration goes into: /usr/local/docker-localai/
  • The vector database for maintaining chat history and other data downloaded by the Ollama server or generated by Open Terminal go into: /opt/dockerfiles/localai/
  • Our Nextcloud server is installed into /var/www/htdocs/nextcloud/

Server configuration steps

I will break down the story into its main parts:

  1. Physically install a GPU with sufficient Video RAM (VRAM)
  2. Install the Nvidia binary driver and kernel module
  3. Install CUDA toolkit
  4. Install and configure Ollama – which will make use of the Nvidia driver and the CUDA toolkit to load LLMs into the GPU’s VRAM
  5. Create the Docker network and the local directory structure
  6. Install Open WebUI – which gives us a nice web page where we can manage and query our AI models
  7. Configure Apache reverse proxy to expose the Open WebUI to the network

Install the GPU card

Before you buy any new GPU hardware, you need to make sure that your motherboard and PSU support the card. In my case, the GeForce RTX 5060 Ti card needs an 8-pin power connector and requires a 650W PSU. Even my 10-year-old server meets those requirements.  Caveat: the GeForce RTX 5060 Ti supports PCIe 5.0 but my old ASUS Prime B350-plus motherboard only supports PCIe 3.0. PCIe is a backward-compatible protocol, so this rather recent graphics card still works in my server, but it will not be able to reach its full performance and speed. Eventually, I will have to upgrade the rest of my server hardware also.
However, the point here is to run a local AI model entirely in Video RAM (VRAM), and in that case PCIe speed is not an important consideration.

I kept my fanless GeForce GT 1030 card in the server as well. It drives a monitor using the regular kernel driver. That way, the new card can be fully utilized for AI inference and I still have local access to the server console.

Install the NVidia binary driver and kernel module

The GeForce RTX 5060 (which is based on the Blackwell architecture) requires the Nvidia open GPU kernel modules for proper functionality on Linux. The standard proprietary kernel module downright refuses to support this rather new card.
On the other hand, my old GT1030 card is not even detected by the open GPU kernel module, which made it really easy for me to keep both cards in the server – the old card using the Linux kernel driver to allow local access to the console, and the new card using the Nvidia open driver which enables the use of local AI.

Typically I would now point you to packages in my local repository to install the software you need. But for the Nvidia driver I do not have packages. They are too much of a moving target, with the multiple versions each supporting ranges of GPU models, and each having a kernel module that should match a Slackware kernel.
Instead I would like to point you to the SlackBuilds.org script repository, where you can download the required SlackBuild scripts and supporting files to compile these packages yourself.
You will need:

  • nvidia-driver (I used the 580.105.08 release but the currently available version is already at 595)
  • nvidia-kernel (edit the SlackBuild script and enable the “OPEN” build by setting the variable OPEN to “yes”)

Build those two packages and install them, then reboot your computer. Use the ‘nvidia-smi’ program which is part of the nvidia-driver package to verify that your GPU is recognized and ready for use:

# nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 5060 Ti (UUID: GPU-065f08c5-cd2f-48bc-ac7c-2454613fd064)
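
The ‘nvidia-smi’ output only lists the card handled by the Nvidia driver. If you want to confirm which kernel driver each of the two cards ended up with, ‘lspci’ can show you (the exact output of course depends on your hardware); look for the “Kernel driver in use” line below each GPU:

# lspci -k | grep -EA3 'VGA|3D'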

We’re ready for the next step.

The CUDA toolkit

CUDA what?

A brief explanation first. I assume that you have given some thought to why there’s all this hype around GPUs in relation to AI. There is a fundamental overlap between the technology that was developed to speed up the rendering of three-dimensional graphics in computer games (more specifically the Graphical Processing Unit or GPU) and the capabilities needed to train AI programs and make them respond in real-time. Both require hardware that can perform thousands of simple, identical calculations simultaneously, at scale:

  • To render a 3D scene, a GPU must calculate the color and position of millions of pixels at once. This is done using linear algebra (matrix and vector multiplication).
  • Training a neural network (the basic building block of any AI program) also relies on massive matrix multiplications to adjust millions of “weights” or parameters.

The math involved here is nearly identical! The hardware designed to “paint” a video game frame was accidentally perfect for “training” an AI model.

Now, CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA. With CUDA, you can use Nvidia GPUs for general-purpose processing, not just graphics. It enables you to harness the power of GPU parallelism to accelerate stuff like scientific simulations and deep learning. CUDA has “democratized” the graphics hardware: it allows AI researchers to write code for GPUs using standard languages like C++ and Python. The speedup compared to CPU-only sequential implementations is gigantic.

The CUDA toolkit is what’s going to drive our local AI management, see also the architecture diagram I included previously.

By the way, here is an interesting read for you: “The Origins of GPU Computing“.

Install the CUDA toolkit

Similar to the NVIDIA driver and kernel module, we install the CUDA toolkit using the SlackBuilds.org scripts.
I needed to use cudatoolkit_13, which was at version 13.2.0 when I compiled my package. The compilation of an Ollama package in a later step kept failing because of CUDA issues until I upgraded the toolkit from 10.x (the default at SBo) to 13.x.

After installing the CUDA toolkit, log out and log in again so that your shell can ‘source’ the installed profile script ‘/etc/profile.d/cuda-13.2.sh‘.
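
If you don’t want to log out right away, you can source the script manually and check that the toolkit is found; ‘nvcc’ is the CUDA compiler that ships with the toolkit and should now be in your PATH:

$ source /etc/profile.d/cuda-13.2.sh
$ nvcc --version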

Install Ollama

For this article, I tried building Ollama from source. That went well eventually, but the resulting binary would never contain a working set of GPU library stubs (CUDA). What happens if you use an Ollama binary without support for CUDA is that the AI models you use will be running on your CPU instead of your GPU, killing the real-time experience.
I will leave the build instructions for Ollama in this section, but what I actually did was to download the official binary and install it as ‘/usr/local/bin/ollama’. If any of you can explain to me what I potentially did wrong, and can show how to compile Ollama with CUDA support, let me know in the comments below!

Compile from source

Before we can compile Ollama, some packages need to be installed first:

  • google-go-lang
    This is part of Slackware -current but if you are still on Slackware 15.0 you can download this Go compiler from my own repository.
  • go-md2man
    Needed to generate man pages. You can download this package from my own repository.
  • jq
    A commandline JSON processor which is part of Slackware -current and which you can download from my own repository if you are running Slackware 15.0

Note that after installing google-go-lang you need to logout and login again to allow your shell to ‘source’ the profile script ‘/etc/profile.d/go.sh‘.

Then compile Ollama using yet another script you can download from SlackBuilds.org, and install the resulting package.

Or… download official binaries

Instead of compiling Ollama from source, you can also download and install the official binaries from Ollama’s server. Simply do this:

# wget https://ollama.com/download/ollama-linux-amd64.tar.zst
# tar -C /usr/local -xf ollama-linux-amd64.tar.zst
# /usr/local/bin/ollama --version

Since we have an NVIDIA card in our server, and the NVIDIA proprietary driver as well as the CUDA toolkit have already been installed, the Ollama binary auto-detects these capabilities and no extra steps are needed on Slackware as long as libcuda.so is in the dynamic linker path.
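
A quick sanity check: the CUDA driver library must be visible to the dynamic linker, otherwise Ollama will silently fall back to CPU-only inference. If the following command prints nothing, revisit the driver installation:

# ldconfig -p | grep libcuda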

Create dedicated system account and directories

The ‘ollama’ account will be created as system user and group:

# groupadd -g 393 -r ollama
# useradd -r -u 393 -g ollama -d /var/lib/ollama -s /sbin/nologin -c "Ollama service account" ollama

The directory to store the AI models:

# mkdir -p /var/lib/ollama/models
# chown -R ollama:ollama /var/lib/ollama
# chmod 750 /var/lib/ollama

Pre-create the log directory and file (the ‘rc’ script below writes its log to ‘/var/log/ollama/ollama.log‘):

# mkdir -p /var/log/ollama
# touch /var/log/ollama/ollama.log
# chown -R ollama:ollama /var/log/ollama

Start Ollama

We’ll make sure that  Ollama starts when the server boots using a ‘rc’ script. It is also possible to run Ollama in a container, which may be a future extension to the article.

Create the following ‘rc’ script called ‘/etc/rc.d/rc.ollama‘ to start Ollama when your computer boots:

#!/bin/bash
# /etc/rc.d/rc.ollama - Ollama service for Slackware
# Created by Jerry B Nettrouer II  https://www.inpito.org/projects.php
# Load configuration (if file exists)
[ -f /etc/default/ollama ] && . /etc/default/ollama
# Load CUDA toolkit locations if those exist:
[ -f /etc/profile.d/cuda-13.2.sh ] && . /etc/profile.d/cuda-13.2.sh
# Set the Process ID file
PIDDIR="/run/ollama/"
PIDFILE="/run/ollama/ollama.pid"
# Set the log file
LOGFILE="/var/log/ollama/ollama.log"

case "$1" in
  start)
    if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE") 2>/dev/null; then
      echo "Ollama already running."
      exit 0
    fi
    echo "Starting Ollama... (models: $OLLAMA_MODELS, host: $OLLAMA_HOST)"
    # Create the run directory
    mkdir -p $PIDDIR
    chown -R ollama:ollama $PIDDIR
    # Use nohup + setsid for clean daemon behavior
    su -s /bin/sh -c "setsid nohup ollama serve >> $LOGFILE 2>&1 & echo \$! > $PIDFILE" ollama
    echo "Started with PID $(cat "$PIDFILE")"
    ;;
  stop)
    echo "Stopping Ollama..."
    if [ -f "$PIDFILE" ]; then
      kill $(cat "$PIDFILE") 2>/dev/null
      rm -f "$PIDFILE"
    else
      pkill -f "ollama serve" 2>/dev/null
    fi
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  status)
    if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE") 2>/dev/null; then
      echo "Ollama is running (PID $(cat "$PIDFILE"))."
    elif pgrep -f "ollama serve" >/dev/null; then
      echo "Ollama is running (but no PID file)."
    else
      echo "Ollama is not running."
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac
exit 0
# ---

This ‘rc’ script relies on a configuration file ‘/etc/default/ollama‘ which needs the following content (you will probably change a few parameters):

# ---
OLLAMA_MODELS=${OLLAMA_MODELS:-"/var/lib/ollama/models"}
OLLAMA_HOST=${OLLAMA_HOST:-"0.0.0.0:11434"}
OLLAMA_ORIGINS=*
# You can add more variables if needed, e.g.:
#OLLAMA_KEEP_ALIVE="-1" # Never unload automatically
#OLLAMA_DEBUG="1"
#OLLAMA_GPU_MEMORY_FRACTION="0.85" # Constrain VRAM usage:

# Need to export these, otherwise the ollama rc script will not pick them up:
export OLLAMA_MODELS OLLAMA_HOST OLLAMA_ORIGINS
# ---

Note in this configuration file that we instruct Ollama to listen on all interfaces (0.0.0.0), not just the loopback address (127.0.0.1). The safest option for a local Ollama which we are going to expose via an Apache Reverse Proxy would indeed be to listen only at the loopback address, but Ollama does not only talk to you (the user); it also needs to be reachable by other software. The web page via which you are going to access your local AI is provided by Open WebUI, and that runs inside a Docker container. The Open WebUI server inside Docker cannot access the host’s loopback interface. That is why we tell Ollama to listen on all network interfaces.
And then we mitigate this risk by adding a firewall rule which blocks access to Ollama from anything else but the loopback address and our AI Docker network.

To bring it all together, invoke this ‘rc’ script in your ‘/etc/rc.d/rc.local’ by adding the following text block to it. There we ensure that the Ollama server port is firewalled from the outside world:

if [ -x /etc/rc.d/rc.ollama ]; then
  # Protect from outside abuse via firewall:
  # Allow connections arriving on the loopback interface
  /usr/sbin/iptables -A INPUT -i lo -p tcp --dport 11434 -j ACCEPT
  # Allow Docker Ollama bridge network
  # (adjust the subnet to match 'docker network inspect localai.lan')
  /usr/sbin/iptables -A INPUT -s 172.24.1.0/24 -p tcp --dport 11434 -j ACCEPT
  # Drop everything else hitting this port
  /usr/sbin/iptables -A INPUT -p tcp --dport 11434 -j DROP
  # Start Ollama LLM offline server:
  echo "Starting Ollama LLM offline:  /etc/rc.d/rc.ollama start"
  /etc/rc.d/rc.ollama start
fi

Run the start script manually first to start the Ollama server without rebooting.

# /etc/rc.d/rc.ollama start

Note that Ollama does not offer any form of authentication mechanism. Any user or process that can access the TCP port can use it.

Test Ollama

Test from your non-root user account whether Ollama is ready for action:

$ ollama list

… or else:

$ curl http://127.0.0.1:11434/api/tags

LLM quickstart: pull and use Mistral

Let’s try to pull the ‘Mistral’ Large Language Model (this downloads ~4GB for mistral:7b):

$ ollama pull mistral

If you want to experience an interactive chat session:

$ ollama run mistral

You can also use a non-interactive single prompt which would be useful for scripting:

$ ollama run mistral "Explain lithography in one paragraph"

… or use the REST API directly:

$ curl http://127.0.0.1:11434/api/generate -d '{"model":"mistral","prompt":"Hello, Mistral!","stream":false}' | python3 -m json.tool

Query Ollama about the AI models it has loaded. This also shows how much of the model runs in the GPU VRAM versus on the CPU:

$ ollama ps
NAME             ID            SIZE   PROCESSOR  CONTEXT  UNTIL
ministral-3:14b  4760c35aeb9d  10 GB  100% GPU   4096     Forever

Create docker network and local directories

# docker network create --driver=bridge --subnet=172.24.1.0/24 --gateway=172.24.1.1 localai.lan
# mkdir -p /usr/local/docker-localai/
# mkdir -p /opt/dockerfiles/localai/{data,open-terminal-data}/

Install Open WebUI

Open WebUI is the current best-maintained self-hosted frontend for Ollama.  This is its ‘docker-compose.yml‘ file which you should create in directory ‘/usr/local/docker-localai/‘:

# ---
name: open-webui
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    networks:
      - localai.lan
    # host-gateway resolves to the host machine's IP on the Docker bridge,
    # allowing the container to reach host-resident services.
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      # Bind ONLY to localhost. Apache will be the public-facing entry point.
      - "127.0.0.1:3456:8080"
    environment:
      # Point Open WebUI at host-resident Ollama via the bridge gateway (127.0.0.1 does NOT work here).
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      # Must match what Apache sends as the public URL. This is critical for
      # cookies, redirects, and CSRF protection once behind a reverse proxy.
      - WEBUI_URL=https://ai.darkstar.lan
      # Best practice: put this in a .env file next to docker-compose.yml
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      # Harden cookies when served over HTTPS via the proxy
      - WEBUI_SESSION_COOKIE_SECURE=true
      - WEBUI_SESSION_COOKIE_SAMESITE=lax
      # Explicitly enable WebSocket support
      - ENABLE_WEBSOCKET_SUPPORT=true
      # Socket.IO ping interval and timeout (milliseconds).
      # Ping every 20s; the client must respond within 30s.
      # This keeps the WebSocket alive through NAT devices with
      # short idle timers.
      - WEBSOCKET_PING_INTERVAL=20000
      - WEBSOCKET_PING_TIMEOUT=30000
    volumes:
      - /opt/dockerfiles/localai/data:/app/backend/data
    depends_on:
      - open-terminal
  open-terminal:
    # Use 'slim' (200 MB) instead of 'latest' (2 GB) unless you specifically
    # need Node.js, ffmpeg, or data science libraries available to the AI agent.
    image: ghcr.io/open-webui/open-terminal:${OPEN_TERMINAL_VARIANT}
    container_name: open-terminal
    restart: unless-stopped
    networks:
      - localai.lan
    # No 'ports:' section - intentionally not exposed to the host.
    # Open WebUI backend proxies to it via the Docker network.
    environment:
      - OPEN_TERMINAL_API_KEY=${OPEN_TERMINAL_API_KEY}
    volumes:
      # Persistent home directory for the terminal user.
      # Files the AI creates here survive container restarts.
      - /opt/dockerfiles/localai/open-terminal-data:/home/user
      # Only add this if you specifically want the AI to read host files
      #- /host/path/to/AI/data:/data:ro   # :ro = read-only

networks:
  localai.lan:
    external: true
    name: localai.lan
# ---

The accompanying ‘.env‘ file which should be created in the same location contains the following:

# ---
# Persistent login:
WEBUI_SECRET_KEY=eePAAjEgEnZdgAQcVKb/DA993rwU+xbBb1scG0Zz1sQ=

# Connecting Open Terminal to Open WebUI:
OPEN_TERMINAL_API_KEY=qIShpFT2IUZaglqLTX5UCw6oQSyuCuKgpgF/xViqUWA=
OPEN_TERMINAL_VARIANT=latest
# ---

Don’t forget to create and use the variable “WEBUI_SECRET_KEY”!
Without a persistent “WEBUI_SECRET_KEY”, you’ll be logged out every time the container is recreated.

Why does the “127.0.0.1” address not work from inside a container?
When Open WebUI runs in a Docker container on a bridge network (which is what any custom docker network uses), the address “127.0.0.1” inside that container refers to the container’s own loopback interface… not that of the host!
Setting OLLAMA_BASE_URL to “http://127.0.0.1:11434” would have the container talking to itself on a port where nothing is listening. You would get an immediate connection refused. The “extra_hosts” entry in the Compose file: “host.docker.internal:host-gateway” is a specific syntax meant to instruct Docker to inject a hosts-file entry into the container that maps the name “host.docker.internal” to the host’s IP address on the Docker bridge (typically something like 172.18.0.1, but you never need to hard-code that). This is Docker’s own supported mechanism for containers to reach host-resident services.
Even with “host.docker.internal” resolving correctly, there is still a firewall/bind problem. If Ollama’s OLLAMA_HOST is set to “127.0.0.1:11434”, the kernel will only accept connections arriving on the loopback interface. Traffic coming in from the Docker bridge (e.g., 172.18.0.x) arrives on a different interface and gets refused at the TCP socket level. Not by a firewall rule!
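
Once the containers are running (see the next section), you can verify both sides of this plumbing yourself. On the host, check that Ollama is really bound to 0.0.0.0 and not just to loopback, and from inside the container check that the Ollama API answers via the bridge gateway name (these are just sanity checks, not required steps):

# ss -tlnp | grep 11434
# docker exec open-webui curl -s http://host.docker.internal:11434/api/version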

Note that I am already including the Docker configuration for Open Terminal, so that you have everything in one place from the start. I will explain about Open Terminal further down in another section of the article.

Start and configure Open WebUI

We perform the initial configuration of the Open WebUI container before it is opened up to the LAN, just to be safe, since the very first user account that is created automatically gets full admin rights. In the next step we will configure a reverse proxy to expose Open WebUI to the network.

# cd /usr/local/docker-localai
# docker compose up -d

Docker downloads (pulls) the image and then the container will start. Watch the logs during first start (DB migrations run on first boot):

# docker logs -f open-webui

Look for the line like “INFO Application startup complete” which indicates that the server is ready. Let’s login!

On the host, navigate to http://127.0.0.1:3456

The admin account

Register your first account. This user automatically becomes the admin account. Naturally you need to define a strong password…  Open WebUI’s admin user is able to control access to the AI models, user creation, and system-level settings.

  • Go to ‘Settings > Connections‘ and confirm that the Ollama URL is shown as connected (a green indicator).
  • Go to ‘Settings > Models‘. Your previously pulled models (e.g., “mistral:latest“) should appear.
  • Start a new chat, select “mistral“, and away you go!

Internet access

To give your local AI model internet access via Open WebUI, you need to enable the built-in Web Search feature in the Admin Panel.
Recent AI models are highly capable of “tool use,” and this setup allows the model to search the web, read the top results, and summarize them for you.

  • Enable Web Search in Admin Settings
    • Open the Open WebUI interface https://ai.darkstar.lan/ in your browser.
    • Click your Profile Icon (bottom-left) and select ‘Admin Panel‘.
    • Navigate to the ‘Settings‘ tab and click on ‘Web Search‘.
    • Toggle ‘Enable Web Search‘ to “ON”.
  • Choose and Configure a Search Engine
    You must select a provider to fetch the actual search results. Here are the most common options:

    • DuckDuckGo (Easiest): Works out of the box without an API key. Select “DDGS” as the search engine and “DuckDuckGo” as its backend from the dropdowns.
    • Tavily (Recommended for AI): Specifically built for LLMs to get clean, searchable data. You will need a free Tavily API Key. Paste it into the Tavily API Key field in ‘Settings‘.
    • Google PSE: Best for comprehensive results but requires creating a Google Programmable Search Engine to get a Search Engine ID and API Key.
    • SearXNG (Private/Local): If you want to stay 100% local, you can run a SearXNG instance in a separate Docker container and point Open WebUI to its local URL (e.g., http://localhost:8080).
  • Using Search in Chat
    Once configured, you can use the web search in two ways:

    • Manual Toggle: In a new chat, look for the “+” icon or the Web Search toggle (globe icon) near the message box to activate it for that session.
    • Keyword Trigger: You can often trigger a search by typing # or using a specific prefix if you have set up a “Search” tool/action in the Workspace settings.
    • To make sure the AI actually uses the retrieved data effectively:
      • Go to ‘Workspace > Models‘.
      • Click the ‘Edit‘ (pencil) icon for your AI model.
      • In the ‘Tools or Capabilities‘ section, ensure that “Web Search” is checked so the model knows it is allowed to use this external tool.

Apache reverse proxy configuration

Ensure that the following modules are loaded in httpd.conf or in a separate included configuration file below /etc/httpd/ :

# ---
LoadModule proxy_module lib64/httpd/modules/mod_proxy.so
LoadModule proxy_http_module lib64/httpd/modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module lib64/httpd/modules/mod_proxy_wstunnel.so
LoadModule ssl_module lib64/httpd/modules/mod_ssl.so
LoadModule rewrite_module lib64/httpd/modules/mod_rewrite.so
LoadModule headers_module lib64/httpd/modules/mod_headers.so
# ---

A note about ‘mod_proxy_wstunnel’: people often forget to account for WebSockets. Open WebUI streams LLM responses over a WebSocket connection. Without this module, you get a UI that connects, shows the model list, but then silently fails to stream any generated text.

Therefore, these are the essential bits you need to add to your Apache HTTPD server configuration:

# --- Proxy core ---
ProxyPreserveHost On
ProxyRequests Off

# Tell Open WebUI the original protocol and hostname of the request
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Host "ai.darkstar.lan"
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"

# Increase timeouts for long LLM inference (large models can think slowly)
ProxyTimeout 300
Timeout 300

# Disable response buffering - critical for streaming LLM output.
# Without this, Apache may buffer the entire response before forwarding.
SetEnv proxy-sendchunked 1
SetEnv proxy-initial-not-buffered 1

# WebSocket upgrade support (essential for LLM streaming)
RewriteEngine On
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:3456/$1" [P,L]

# Open WebUI reverse proxy, connects to an Ollama backend:
ProxyPass / http://127.0.0.1:3456/ keepalive=On
ProxyPassReverse / http://127.0.0.1:3456/

# Optionally (you can remove this if you don't care)
# Ensure that only the people you know can access the Web Interface:
<Location />
    <RequireAny>
        Require host yourowndomain.com
        Require ip 192.168
        Require ip 10.10
    </RequireAny>
</Location>

After adding this configuration block to the “VirtualHost” definition of ai.darkstar.lan, run a configuration check and then restart the Apache webserver:

# apachectl configtest
# apachectl -k graceful

Check the result

Your Open WebUI page should now be accessible at https://ai.darkstar.lan/
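
A quick check from the command line (add ‘-k’ if your certificate is not trusted locally, e.g. for a self-signed .lan domain); an “HTTP/1.1 200 OK” status means Apache is successfully forwarding to the Open WebUI container:

# curl -kI https://ai.darkstar.lan/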


Add Open Terminal

Open Terminal is a capability we can add to the Docker stack that gives the AI model a real computer to work on.
It connects a containerized computing environment to Open WebUI. The AI model you are using can use that sandboxed shell environment to write code, execute it, read the output, fix errors, and iterate, all without leaving the chat. It handles files, installs packages, runs servers, and returns results directly to you. Because we will run it in a Docker container it offers complete isolation from the host processes. We will give it persistent storage so that you can grab the resulting artifacts straight from a local directory.

This setup mirrors a capability that the Big Tech companies also provide with their commercial LLM’s: formulate an idea and let your AI generate working software. Ask it a question and get a functional script. Describe a website and watch it being rendered live.
When you upload a spreadsheet, CSV file or a database, you can instruct your AI to read the data, run analysis scripts and generate charts or reports.
So much for the PR text taken from the web site :-)

An important architectural consideration: Open WebUI proxies AI requests to Open Terminal. Open Terminal will never connect the other way round.  This means that the Open Terminal container never needs to be reachable from the internet or even from the browser. It only needs to be reachable from within the Docker network. This keeps it nicely isolated.

To get Open Terminal up and running, nothing more is required: we already added all of its configuration to the ‘docker-compose.yml’ and ‘.env’ files. When the stack is running you can validate that Open Terminal is ready by examining its logs:

# docker logs -f open-terminal

Verify that Open WebUI can talk to Open Terminal via a command you execute inside the Open WebUI container (the docker command uses the ‘open-webui’ container name as defined in the ‘docker-compose.yml’ file):

# docker exec open-webui curl -s \
    -H "Authorization: Bearer $(grep OPEN_TERMINAL_API_KEY /usr/local/docker-localai/.env | cut -d= -f2)" \
    http://open-terminal:8000/health

A healthy response looks like:

{"status": "ok"}

If that returns successfully, the two containers can see each other on the network and the API key is accepted.

Enable Open Terminal in Open WebUI

This needs to be done through the Open WebUI admin interface. It can not be done via configuration files.

  • Navigate to https://ai.darkstar.lan/ and log in as an admin user.
  • Go to ‘Admin Settings > Integrations > Open Terminal‘ and fill in the fields:
    • URL: “http://open-terminal:8000”
    • API Key: qIShpFT2IUZaglqLTX5UCw6oQSyuCuKgpgF/xViqUWA= (which is the value of OPEN_TERMINAL_API_KEY in your .env file)
    • Click ‘Save‘, then toggle the connection ‘Enabled‘.
      Open WebUI will immediately test the connection, and a green indicator confirms success.

The URL http://open-terminal:8000 works because Docker’s internal DNS resolves the service name ‘open-terminal’ to the container’s IP on localai.lan.
This is why the container needs no exposed port. It is only ever spoken to by Open WebUI’s backend, never by your browser directly.
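
If you want to see this for yourself, inspect the Docker network from the host; both container names should be listed with an address in the 172.24.1.0/24 range (purely a verification, not a required step):

# docker network inspect localai.lan | grep -E '"Name"|"IPv4Address"'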

You have a choice to make regarding the Docker Image variant of Open Terminal. Using the ‘slim‘ tag in the Compose file above would be a deliberate choice. I prefer ‘latest‘ instead. Here is what each variant gives an AI agent to work with:

  • alpine:
    ~100 MB image. This gives: a basic shell, curl, jq, git. It’s minimal but functional.
  • slim:
    ~200 MB image. Content is identical to the ‘alpine’ image but this one is Debian-based. This guarantees better package compatibility.
  • latest:
    ~2 GB image. You will get a full Python environment, Node.js, the Docker CLI, ffmpeg and data science libraries.

For a personal server, ‘slim‘ may be the pragmatic choice. The AI can run shell commands, use git, curl APIs, and manage files, which covers the vast majority of useful agent tasks without pulling a 2 GB image. But I may also need the AI to run a Python data processing task or Node.js scripts. Therefore I configured ‘latest‘ myself.


Ollama integration in Nextcloud

Official documentation for the integration of local AI into your Nextcloud server can be found here: https://docs.nextcloud.com/server/stable/admin_manual/ai/overview.html
In short, these are the steps you need to take to integrate your Ollama AI server into Nextcloud.

  • Install the Nextcloud Assistant app using the administrator account of your Nextcloud instance
  • Similarly, install OpenAI integration app
  • Click on the administrator avatar in the top right, and go to ‘Administration Settings > Administration > Artificial Intelligence‘
    • In ‘OpenAI and LocalAI configuration‘, set ‘http://127.0.0.1:11434/v1‘ as the OpenAI-compatible ‘Service URL‘.
  • Add one line to Nextcloud’s ‘config/config.php‘ file (manually, or via the ‘occ‘ command sketched below; there is no GUI for this):
    'allow_local_remote_servers' => true,
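
If you prefer not to edit ‘config/config.php‘ with an editor, the same setting can also be applied with the ‘occ‘ tool, using the same Nextcloud path and webserver user as elsewhere in this article:

# cd /var/www/htdocs/nextcloud
# sudo -u apache php ./occ config:system:set allow_local_remote_servers --value=true --type=boolean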

If you want your AI to feel responsive in Nextcloud, it is also imperative to run a number (minimum 4) of local ‘AI workers’ that pick up AI requests from the queue and process them immediately in the background. Otherwise request processing only happens every 5 minutes via Nextcloud’s internal cron. My advice is to run them inside screen (or tmux) with this command added to ‘/etc/rc.d/rc.local‘:

/usr/bin/screen -S NEXTCLOUD -t AI_1 \
  -Adm /usr/local/sbin/nextcloud_occ_backgroundworker.sh 1 && \
  sleep 1 && \
  /usr/bin/screen -S NEXTCLOUD -X screen -t AI_2 \
  -Adm /usr/local/sbin/nextcloud_occ_backgroundworker.sh 2 && \
  /usr/bin/screen -S NEXTCLOUD -X screen -t AI_3 \
  -Adm /usr/local/sbin/nextcloud_occ_backgroundworker.sh 3 && \
  /usr/bin/screen -S NEXTCLOUD -X screen -t AI_4 \
  -Adm /usr/local/sbin/nextcloud_occ_backgroundworker.sh 4

Where the executable shell script ‘/usr/local/sbin/nextcloud_occ_backgroundworker.sh‘ is something you need to create yourself with the following content:

#!/bin/bash
if [ -n "$1" ]; then
  echo "Starting Nextcloud AI Worker $1"
else
  echo "Starting Nextcloud AI Worker"
fi
cd /var/www/htdocs/nextcloud/
set -e
while true; do
  sudo -u apache php -d memory_limit=512M ./occ background-job:worker \
    -v -t 60 "OC\TaskProcessing\SynchronousBackgroundJob"
done
# ---

If you need to access these AI workers at any time, you can do so from root’s commandline via:

# screen -x NEXTCLOUD

… and cycle through the four worker screens using [Ctrl]-a-n

Tasks are run as part of the background job system in Nextcloud, which only runs jobs every 5 minutes by default. To pick up scheduled jobs faster you can set up background job workers inside the Nextcloud main server/container that process (AI and other) tasks as soon as they are scheduled. If the PHP code or the Nextcloud settings values are changed while a worker is running, those changes won’t be effective inside the runner. For that reason, the worker needs to be restarted regularly. This is done with a timeout of N seconds, which means any changes to the settings or the code will be picked up after N seconds (worst case scenario). This timeout does not, in any way, affect the processing or the timeout of AI tasks.

The result of this configuration is the appearance of a new “AI” button in the Nextcloud task bar which you can click to access the Assistant, giving you access to chat, translation, image and audio analysis, and more:


Single Sign-On (SSO)

Open WebUI supports OpenID Connect (OIDC) out of the box. See https://docs.openwebui.com/reference/env-configuration/#openid-oidc for the variables that enable Single Sign-On and https://docs.openwebui.com/troubleshooting/sso/ for additional troubleshooting information. You should be able to connect Open WebUI to your Cloud Server’s Keycloak Identity Provider without issues.
Unfortunately my local server that I equipped with a NVIDIA GPU is not running Keycloak or any other OIDC provider, and I could not validate this SSO capability myself.
Let me know if you were able to add SSO to your own setup!


Attribution

Many thanks to INPITO (the Indiana Non-Profit Information Technology Organization) who use Slackware as their OS and wrote the article that formed the inspiration for my own journey into local AI: https://www.inpito.org/ollama.php. I copied their Slackware boot script for Ollama.


Final thoughts

I hope this article will remove some of the resistance that many people still show towards the use of AI chatbots. The fact that you can run a Large Language Model on your gaming rig, use it to experiment with new technologies and be certain that none of that data will ever be shared externally, is great!

Leave your comments, suggestions and opinions below.
Thanks for reading. Eric

Slackware Cloud Server Series, Episode 11: Jukebox Audio Streaming

I am an avid music lover. My tastes are eclectic; I enjoy electronic, industrial, punk, new wave, reggae and dub, but also baroque and classical music. I used to tape my own music cassettes when I was young, sharing my mixes with friends. I have hundreds of vinyl albums and many more compact discs. But technology kept evolving and I switched to MP3 files that I could store on my computer and play using VideoLAN VLC, for instance. But I also want to be able to just listen to my music in the living room without operating a laptop, and for that I set up a streaming server that acts as a jukebox, continuously picking random songs from my collection and playing them from a queue that never empties. In the living room I have a Denon AVR-X2300W which can pick up the network stream.
I have been running this audio streaming server for decades. First using OTTO, then Calliope, and then coming back to OTTO after it had re-invented itself. But Calliope and OTTO are no longer maintained and were quite tricky to set up in the first place. I am not looking forward to migrating this unsupported setup to Slackware 15.1 when that gets released and I move my server to it.

I went on a search for a modern, maintained and open source alternative for my OTTO server.
I actually set up Mopidy with the Pibox extension to get the jukebox functionality. Recompiling Slackware’s gst-plugins-good package against libshout enables the libgstshout2.so library, which gives us ‘shout2send‘, which then streams audio from Mopidy to my Icecast server. Setting it all up was not trivial and I did not like how the Pibox extension handled the queue autofill. I went on with my search for a good OTTO alternative and I hope I found it.

In this episode of Slackware Cloud Server I will show you how to stream your personal MP3 collection via Icecast using the open source platform AzuraCast. A worthy addition to your Slackware Cloud Server as a service to yourself, friends, family or even your local community.


Check out the list below which shows past, present and future episodes in my Slackware Cloud Server series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.


Introduction


What is AzuraCast?

AzuraCast is a self-hosted, all-in-one web radio management suite consisting of multiple independent but co-operating components:

  • Liquidsoap
    This is the automation engine that fills your play queue, handles scheduling and song rotation, and feeds the stream source, re-encoding if needed.
  • Icecast-KH
    A maintained fork of Icecast that handles the actual audio streaming to listeners.
  • A PHP/Vue web application
    The management interface where you control everything: upload music, browse your library, configure playlists, handle listener requests, check analytics.
  • MariaDB server
    Stores the song metadata, play history and playlists.
  • Redis
    Runtime memory store for the session cache and queue state.

We will be running the whole stack in Docker, making it self-contained regardless of your Slackware version. The local directory tree containing your music library will be bind-mounted inside the Docker container.
We will set up the Apache HTTP server as a reverse proxy so that we can access the Management UI securely via HTTPS. We will also proxy the Icecast stream on a standard port so that listeners do not have to connect to your Icecast mount point via e.g. ‘http://yourserver:8000/radio.mp3’ but rather via a regular URL like ‘https://yourserver/yourchannel’.
Any player that speaks Icecast (VLC, mpv, foobar2000, mpc, every web browser, and surely many more) will be able to play back your music.

What makes AzuraCast the right solution?

When searching for a replacement I had several requirements in mind that a new program should meet. AzuraCast ticks more boxes than any of the other solutions I encountered and/or tested:

  • Should be able to handle tens of thousands of songs effortlessly
    Azuracast indexes my library in a MariaDB database. 50 000+ tracks is a documented use-case.
    I can tell you from experience that it takes a day or two to get 50K songs indexed however.
  • Continuously fills the play queue (it must never be empty)
    The Liquidsoap AutoDJ has configurable rotation and fills the queue automatically.
  • Manual song requests should be added to the head of queue
    AzuraCast has a ‘Listener Requests’ feature. It’s available via its web UI but also as a REST API.
    It should be possible to configure the AutoDJ in such a way that user requests are immediately placed at the head of the queue; unfortunately I have not yet found out how. Because the AutoDJ pushes the tracks in its queue to the Icecast server immediately, any user requests will be scheduled after that queue, not before it. I need to keep the queue length (which is configurable) at a minimum value of 2 to make the experience acceptable.
  • Web-based management interface
    AzuraCast comes with a full-featured, mobile-friendly web UI with lots of analytics, logging and debugging tools.
  • Auto-detect or manually re-scan for new music files
    It can do both: it runs an internal background task (with configurable interval) to scan for new music regularly, but there’s also a command-line option to re-scan your entire music collection.
  • Stream output via Icecast protocol
    Icecast-KH is the native output; every mount point is a standard Icecast stream.

Architecture overview


Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

Web Hosts

For the sake of this instruction, I will use the URL “https://radio.darkstar.lan” as your landing page for AzuraCast.
The URL for the Icecast stream will be “https://radio.darkstar.lan/lowlands“.

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • radio.darkstar.lan

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Docker network

  • We assign a Docker network segment to our AzuraCast container: 172.24.0.0/24 and call it “azuracast.lan”

File Locations

  • The Docker configuration goes into: /usr/local/docker-azuracast/
  • The data generated by the AzuraCast server goes into: /opt/dockerfiles/azuracast/

Port numbers

Since everything runs in a Docker container, all services listen at the localhost address 127.0.0.1.

  • AzuraCast Web UI in Docker: listens at TCP port 81
  • Icecast mount points: we will open two (for two audio streams) that listen at TCP ports 8000 and 8001

Station name

  • Our Icecast Radio Station will be called: Alien Pastures Radio
  • Inside Azuracast (primarily used to create the directory to mount the media) this name is trivially translated to: alien_pastures_radio

Installation

AzuraCast’s recommended installation method uses a helper script that downloads the Docker Compose configuration, fires up the container, and allows for post-installation maintenance and management. We are not going to use that.
Still, in order for you to be able to switch effortlessly to the officially suggested AzuraCast Docker setup, I will follow their recommendations and put all of the customization into a separate ‘override‘ file for Docker Compose.

Docker network

Create the network using the following command:

docker network create \
  --driver=bridge \
  --subnet=172.24.0.0/24 --gateway=172.24.0.1 \
  azuracast.lan

Docker’s gateway address in any network segment will always have the “1” number at the end.
Select a yet unused network range for this subnet. You can find out about the subnets which are already defined for Docker by running this command:

# ip route |grep -E '(docker|br-)'

The ‘azuracast.lan‘ network you created will be represented in the AzuraCast ‘docker-compose.yml‘ file with the following code block:

networks:
  azuracast.lan:
    external: true

Directories

Create a directory structure for AzuraCast as a Docker container. We’ll change ownership of two of those directories: backups and stations. The UID/GID numbers I use (1000:1000 in the example below) must correspond to the values you define for AZURACAST_PUID and AZURACAST_PGID in the ‘.env‘ file you will create in the next step. If you omit that ‘chown‘ step, AzuraCast will not be able to save Station information nor will it be able to create backups of its SQL server.

# mkdir -p /opt/dockerfiles/azuracast/{backups,db_data,stations,storage}
# chown 1000:1000 /opt/dockerfiles/azuracast/{backups,stations}
# mkdir -p /usr/local/docker-azuracast
# cd /usr/local/docker-azuracast

Configuration files

Download an example Docker environment file and store it under the name ‘.env‘ from https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/sample.env

Note that I download all files from the ‘stable’ branch of the AzuraCast git repository. You could also try the ‘main’ branch if you like to live on the edge.
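
For instance, to fetch the sample and store it under the right name in the directory we created earlier:

# cd /usr/local/docker-azuracast
# wget -O .env https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/sample.env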

The container ships its own internal Nginx proxy that also takes care of the SSL certificates, but I want to use my own host server’s Apache HTTP daemon to take care of the reverse-proxying. All you need to do to bypass its HTTPS handling is to change the “AZURACAST_HTTPS_PORT” value from the default “443” to something else… which we have to do anyway, because port 443 is already in use on our server.
Likewise, we need to change “AZURACAST_HTTP_PORT” from the default value of “80” because that’s where our own Apache server is listening next to port 443.
After making the necessary changes we end up with an ‘.env‘ file containing (substitute your own values for the example values where you like):

# Make it easier to manage the project in Docker Compose:
COMPOSE_PROJECT_NAME=azuracast
# Define network ports:
AZURACAST_HTTP_PORT=81
AZURACAST_HTTPS_PORT=8412
AZURACAST_SFTP_PORT=2022
# We stick to the 'stable' channel instead of 'latest'
AZURACAST_VERSION="stable"
# If you start docker as your own user instead of root, change these to your own UID/GID
AZURACAST_PUID=1000
AZURACAST_PGID=1000

The ‘.env‘ file above is a configuration file which is read and used by the Docker daemon to setup the container. We are now going to create a second configuration file called ‘azuracast.env‘, containing the data that AzuraCast itself needs in order to function. Get the example file from https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/azuracast.sample.env and tailor it to your needs.
In the end ‘azuracast.env‘ should look like this (lots of comments and default values removed):

APPLICATION_ENV=production
COMPOSER_PLUGIN_MODE=false
AUTO_ASSIGN_PORT_MIN=8000
AUTO_ASSIGN_PORT_MAX=8001
SHOW_DETAILED_ERRORS=false
MYSQL_PASSWORD=azur4c457
MYSQL_RANDOM_ROOT_PASSWORD=yes

The AUTO_ASSIGN_PORT_MIN and AUTO_ASSIGN_PORT_MAX values for the lower and upper end of the Icecast port range correspond with the ports I defined earlier in the ‘Preamble‘ section. The range of two ports means that this setup supports two independent Icecast streams.

The docker-compose.yml file

Get the ‘docker-compose.yml‘ file from https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/docker-compose.sample.yml
After removing the sections I don’t need (they enable the official “docker.sh” script to perform maintenance and upgrades) the file will look like this:

# If you need to customize this file, you can create a new file named:
# docker-compose.override.yml
# with any changes you need to make.
#
name: azuracast

services:
  web:
    container_name: azuracast
    image: "ghcr.io/azuracast/azuracast:${AZURACAST_VERSION:-latest}"
    # Want to customize the HTTP/S ports? Follow the instructions here:
    # https://www.azuracast.com/docs/administration/docker/#using-non-standard-ports
    ports:
      - '${AZURACAST_HTTP_PORT:-80}:${AZURACAST_HTTP_PORT:-80}'
      - '${AZURACAST_HTTPS_PORT:-443}:${AZURACAST_HTTPS_PORT:-443}'
      - '${AZURACAST_SFTP_PORT:-2022}:${AZURACAST_SFTP_PORT:-2022}'
      - '8000-8001:8000-8001'
    env_file:
      - azuracast.env
      - .env
    volumes:
      - station_data:/var/azuracast/stations
      - backups:/var/azuracast/backups
      - db_data:/var/lib/mysql
      - www_uploads:/var/azuracast/storage/uploads
      - shoutcast2_install:/var/azuracast/storage/shoutcast2
      - stereo_tool_install:/var/azuracast/storage/stereo_tool
      - rsas_install:/var/azuracast/storage/rsas
      - geolite_install:/var/azuracast/storage/geoip
      - sftpgo_data:/var/azuracast/storage/sftpgo
      - acme:/var/azuracast/storage/acme
    networks:
      - azuracast.lan
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    logging:
      options:
        max-size: "1m"
        max-file: "5"

volumes:
  db_data: { }
  acme: { }
  shoutcast2_install: { }
  stereo_tool_install: { }
  rsas_install: { }
  geolite_install: { }
  sftpgo_data: { }
  station_data: { }
  www_uploads: { }
  backups: { }

networks:
  azuracast.lan:
    external: true

This docker-compose.yml file defines a number of Docker Volumes. Don’t mind those; AzuraCast needs them but we don’t. We will mount a local directory containing our music library into the AzuraCast container later, and make sure that the Station data is all written to a host directory as well.

Note:
If you want, you can use the downloaded original of ‘docker-compose.yml’ instead of my truncated version above. There’s almost no difference in execution or functionality (except for the custom network and the TCP port range it opens for the hundreds of potential streaming channels), but the original file is much too large to copy into this article.

You need to be aware that the downloaded official Docker Compose configuration file opens a bunch of TCP ports in the range of 8000 to 8500. That’s one TCP port per music stream you want to create, so the default configuration allows for a total of 500 different streams or ‘stations‘. I only run a few streams myself, and I used that in the above modification which opens only 2 ports. You may want to increase the number of possible streams on one AzuraCast instance. You can do that by editing the ‘docker-compose.yml‘ file, but since we’re going to need it anyway further down, I want to draw your attention to the option of creating a new file named ‘docker-compose.override.yml‘ in the same directory next to your docker-compose.yml and .env files.

I will show you how you can increase the Icecast listen ports from 2 to a total of 100 ports aka audio streams. You can modify the port range in this file to meet your needs, such as expanding the range to port 8500 instead of 8099.
Let’s also add an override  for the Station data storage. We want to use a local directory instead of a number of Docker volumes. To the ‘docker-compose.override.yml‘ file, add a ‘web‘ service just like in the actual Docker Compose file above, and then add a ‘ports‘ and a  ‘volumes‘ section so that the file looks like this:

services:
  web:
    ## OPTIONALLY: Add more ports, each port supports one radio station:
    #ports:
    #  - "8002-8099:8002-8099"
    # Store all Station data on the host:
    volumes:
      - /opt/dockerfiles/azuracast/stations:/var/azuracast/stations
      - /opt/dockerfiles/azuracast/backups:/var/azuracast/backups
      - /opt/dockerfiles/azuracast/db_data:/var/lib/mysql
      - /opt/dockerfiles/azuracast/storage:/var/azuracast/storage

You will probably have noticed that the host directories (the paths to the left of the ‘:’ colon) are the same directories that we manually created in an earlier step.

Ready for lift-off!

We have not yet added any audio library to the Docker configuration. That is because we don’t have all the required information yet, and the missing piece needs to be arranged from within AzuraCast. So let’s start it!
When you start the container for the first time, it will take a few minutes because Docker will be downloading several hundred megabytes of container layers. Subsequent updates will be much faster:

# cd /usr/local/docker-azuracast
# docker-compose up -d
# docker-compose logs -f

AzuraCast is implemented as a single Docker image which contains all the functionality (streaming server, web UI, database etc). Historical releases used separate Docker containers for the various components of the streaming platform. This single-container implementation means, for instance, that the internal MariaDB database is not exposed to the host at all, so the password for the “azuracast” database user does not need to be changed from the default in the configuration file.
Still, AzuraCast automatically generates a random password for the MariaDB ‘root’ user upon the first database spin-up. This password will only be visible in the container’s logs upon that first startup so be sure to look there and write it down (look for the string “GENERATED ROOT PASSWORD“). Alternatively you can set “MYSQL_RANDOM_ROOT_PASSWORD=no” in the file ‘azuracast.env‘ and then add an extra line defining “MYSQL_ROOT_PASSWORD=your_secret_dbroot_password“.

Note:
Possibly the MYSQL_ROOT_PASSWORD variable needs to be called MARIADB_ROOT_PASSWORD
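
To recover that generated root password later (as long as the container logs have not been rotated away), you can simply grep the logs of the ‘web‘ service. A minimal sketch, assuming the service name ‘web‘ from the Compose file above:

# cd /usr/local/docker-azuracast
# docker compose logs web | grep -i "GENERATED ROOT PASSWORD"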

Initial web user-interface setup

We need to take a few steps in the web interface before we can complete our Docker configuration by adding a media library.

Upon the very first startup, AzuraCast will present you its Management Interface, where we will set a Station Name for the streaming server. The Station Name is what AzuraCast uses as its internal directory. For example, if the station is “Alien Pastures Radio“, the directory used inside the container will typically be “alien_pastures_radio“.

Open a browser on your host computer (the Cloud Server) and navigate to http://localhost:81/. Since we have not yet configured a reverse proxy, there’s no other way than to perform these initialization steps on the host.
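
If your Cloud Server is headless, you can still reach that localhost-only address from your workstation by tunneling TCP port 81 over SSH. A minimal sketch (replace the account and hostname with your own):

$ ssh -L 8081:127.0.0.1:81 your_account@darkstar.lan

Then point your workstation’s browser at http://localhost:8081/ instead.
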
You’ll be presented with the AzuraCast setup wizard which allows you to:

  1. Enter your email address to create your administrator account
    Choose a strong password; this is the master key to your station.
  2. Create your first Station
    Give it a memorable name (for instance “Alien Pastures Radio“), choose your streaming format (MP3 at 192 kbit/s is a reasonable starting point), and note the station’s “short name” that AzuraCast generates (which would be “alien_pastures_radio” in this example). This is the string you need for the ‘docker-compose.override.yml‘ later on.
  3. Set your time zone
    This is relevant for scheduled playlists and analytics.

After completing the steps in the wizard you end up on the main station management page (the screenshot was taken after I mounted my local music library, added that to a new playlist and connected the playlist to the Station).

Mount your existing music library

This is the most important configuration step. AzuraCast lives inside Docker, but your big MP3 collection lives on the host. You need to tell Docker to make that directory visible inside the container as a “bind mount”.

At this point we will use the station’s internal directory name which you wrote down when creating your station through the web UI in the previous section. Using the station name, create an override file named ‘docker-compose.override.yml‘ (or rather, merge it into the file which you created in an earlier step):

services:
  web:
    volumes:
      - /your/actual/path/to/mp3s:/var/azuracast/stations/alien_pastures_radio/media/remote/mp3:ro

Replace `/your/actual/path/to/mp3s` with the real path on your host (e.g. `/data/music`) and `alien_pastures_radio` with your actual station directory name. You’ll notice the “:ro” at the end of the internal directory. This means that your media library is going to be mounted read-only.

Apply the change:

# cd /usr/local/docker-azuracast ; docker compose down ; docker compose up -d

Note #1:
If your music is spread across multiple directories, you can add multiple volume entries, each mounted under a different subdirectory of the station’s media path. AzuraCast will index all of them.

Note #2:
If you have symlinks inside of your media directory, AzuraCast will choke on them because its filesystem abstraction library (Flysystem) does not support them. What you can do instead is remove the symlink and create a bind-mount in its place.

Note #3:
If you do not want AzuraCast to edit your media files, the recommended way is to mount your files one directory deeper than the media directory and bind-mount that instead.
As in the example I show above, you could mount the container’s internal directory ‘/var/azuracast/stations/alien_pastures_radio/media/remote/mp3′ as a read-only volume. That way AzuraCast can still use the media storage location to store cached metadata about the files, but your host filesystem can remain a read-only mount.
As for whether AzuraCast will actually write to media files: only when a user edits the metadata via the web UI. Those changes are written back to the file to ensure they persist, since most users expect that to be the case when editing tracks in the media editor. If you mount the filesystem read-only though, that write will quietly fail, but the metadata changes are still saved to the database.
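
If you want to verify that the “:ro” bind-mount really is read-only from the container’s point of view, you can try to create a file at that location inside the container. This should fail with a ‘Read-only file system‘ error; a minimal sketch, using the example station directory from above:

# cd /usr/local/docker-azuracast
# docker compose exec web touch /var/azuracast/stations/alien_pastures_radio/media/remote/mp3/.rotest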

Configure the AutoDJ (auto-queue)

The AutoDJ is Liquidsoap, and its most important configuration is the playlist.
A playlist in AzuraCast is not a fixed list of songs — it is a source from which Liquidsoap draws tracks to fill the queue. The simplest configuration is a single playlist pointed at your entire library, set to shuffle ad infinitum.

Step 1: create a playlist for your Station

Navigate to ‘System Administration > Stations > [Your Station] > Manage > Playlists > Add Playlist‘:

  • Name: “My Library” (or whatever name you prefer)
  • Type: ‘Standard Playlist
  • Source: ‘Song-based
  • Playlist type: ‘General rotation
  • Song Playback Order: ‘Random
  • Weight: ‘1
  • Click ‘Save Changes

The “Song-based” source is key. It automatically includes every song in your media directory tree, with no manual maintenance required. You can keep adding music to your directory on the host, and once AzuraCast’s internal media scanner task executes, the new tracks become available to the AutoDJ.
We do of course still need to add media to this empty playlist. That’s what the next step will take care of.

Step 2: Connect your media to the Playlist you just created

Go to ‘System Administration > Stations > [Your Station] > Manage > Media > Music Files

  • Select the media directory/directories which you bind-mounted into your Docker container.
  • Click on ‘Playlists‘ and select ‘My Library‘ (or whatever name you gave it)
  • Click ‘Save‘.
Step 3: Enable the AutoDJ

Go to  ‘Stations > [Your Station] > Edit > AutoDJ‘:

  • AutoDJ Service: ‘Use LiquidSoap on this server’ is checked
  • Crossfade Method: ‘Smart Mode
  • Click ‘Save Changes’

Within a few seconds Liquidsoap will start filling the queue and the stream will begin playing.
You can come back here later and experiment with the Audio Processing section to improve the listener’s experience.

Enable listener requests

The listener request feature is how you can manually perform queue management in those cases where you want to override the AutoDJ.
When a request is submitted, AzuraCast inserts that track as the next-to-play item, ideally at the head of the randomized play queue which is maintained by the AutoDJ (but how to place the request at the actual head is the final puzzle piece I have not yet figured out).

To enable it, go to ‘Stations > [Your Station] > Edit

  • Under the ‘Profile‘ section,  select ‘Enable Public Pages
  • Under the ‘Song Requests‘ section, check ‘Allow Song Requests
  • Optionally set ‘Minimum Time Between Requests‘ to prevent the queue from being flooded
  • Click ‘Save Changes

Requests can be submitted in two ways:

  1. Via the public web page:
    AzuraCast generates a public-facing player page at ‘http://radio.darkstar.lan/public/alien_pastures_radio‘. This includes a search box that lets anyone (or just you) find a song and click “Request“.
  2. Via the ‘REST API’ (useful for automation or in case you want to write your own front-end):
    Use this curl commandline to search for a song (replace STATION_ID with your Station’s numerical ID. Your first Station has an ID of “1“). You can also find your station’s numeric ID in the URL when you’re on the station’s dashboard page.
    Note that this will return a long list of all your audio files in JSON format!
$ curl -s "http://radio.darkstar.lan/api/station/STATION_ID/requests" | python3 -m json.tool

Submit a specific request (replace SONG_ID with the numeric ID from the search results):

$ curl -X POST "http://radio.darkstar.lan/api/station/1/request/SONG_ID"

The full API documentation is available at ‘http://radio.darkstar.lan/api‘.

The Icecast output URL

AzuraCast automatically creates an Icecast mount point when you create a station. By default it will be accessible at: http://127.0.0.1:8001/radio.mp3

To change the default mount point name “radio.mp3” into something else, go to ‘System Administration > Stations > [Your Station] > Manage > Broadcasting (in the left sidebar) > Mount Points‘ and change the name there. As an example, we change it to “lowlands“. The “.mp3” extension is not needed at all.
To verify, go to ‘System Administration > Stations > [Your Station] > Manage > Overview (in the left sidebar)‘. In the ‘Streams‘ section of the overview you’ll see the mount points that Liquidsoap is publishing to Icecast, along with the listener count and the currently playing track.

Put it to the test

To listen to your new streaming server from the command line, use any program that supports the Icecast protocol: mpv, mplayer, vlc, mpc (if you want to feed the Icecast stream back into an MPD instance) etc:

$ mpv http://127.0.0.1:8001/lowlands

It works! Time to make this stream available outside of your host server and let family and friends enjoy your shiny new music station.

Apache reverse proxy (https)

Especially if your server is headless, you definitely want to manage AzuraCast over HTTPS using a normal URL instead of the localhost address. You may also want to expose the audio stream on port 443 instead of 8001 so that it will pass any company firewall with ease. To achieve this we turn again to our trustworthy Apache HTTP server and set up a reverse proxy.

The flow is as follows: the user connects to the reverse proxy using HTTPS (encrypted connection) and the reverse proxy connects to the AzuraCast Docker container on the client’s behalf. Traffic between the reverse proxy (Apache httpd in our case) and the AzuraCast Docker container is un-encrypted and happens on the loopback address.
A reverse proxy is capable of handling many simultaneous connections and can be configured to offer SSL-encrypted connections to the remote users even when the backend can only communicate over clear-text un-encrypted connections.

Add the following reverse proxy lines to your VirtualHost definition of the “https://radio.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

# No caching
Header set Cache-Control "max-age=1, no-control"

# Proxy configuration
<Proxy *>
    Allow from all
    Require all granted
</Proxy>

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On
ProxyTimeout 900

# SSL configuration
<IfModule mod_ssl.c>
    SSLProxyEngine on
    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Forwarded-Port "443"
</IfModule>

# Allow access to everyone
<Location />
    Allow from all
    Require all granted
</Location>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line will tell apache not to use the proxy for this folder:
ProxyPass /.well-known !

# Reverse proxy for the Web UI at http(s)://radio.darkstar.lan/
ProxyPass / http://127.0.0.1:81/
ProxyPassReverse / http://127.0.0.1:81/

# And the reverse proxy for the Icecast stream playing at http(s)://radio.darkstar.lan/lowlands
<Location /lowlands>
    ProxyPass http://127.0.0.1:8001/lowlands
    ProxyPassReverse http://127.0.0.1:8001/lowlands
</Location>

# AzuraCast requires a WebSocket proxy
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:81/$1" [P,L]

# ---

If you want to make your non-encrypted web address http://radio.darkstar.lan redirect automatically to the encrypted ‘https://‘ variant, be sure to add this block to its VirtualHost definition to ensure that Letsencrypt can still access your server’s challenge file via an un-encrypted connection:

<If "%{REQUEST_URI} !~ m#/\.well-known/acme-challenge/#">
    Redirect permanent / https://radio.darkstar.lan/
</If>

The hostname and TCP port numbers shown above are defined elsewhere in this article; keep them in sync if you decide to use a different hostname or port numbers.

Test and reload the Apache webserver configuration as follows:

# apachectl configtest
# apachectl graceful

MariaDB backups

The Web User-Interface of AzuraCast allows you to schedule regular backups of the SQL database which stores meta information about your Radio Station and media:
Go to ‘Administration > System Maintenance > Backups > Automatic Backups > Configure‘:

  • Check “Run Automatic Nightly Backups
  • Check “Exclude Media from Backup
  • Configure a time of the day, the archive format and the storage location (the default path in the dropdown is OK).
  • Click ‘Save Changes

Backup archives will be stored in the /opt/dockerfiles/azuracast/backups directory.
You can also manually back up the AzuraCast database and configuration with the following command:

# cd /usr/local/docker-azuracast
# docker compose exec web azuracast_cli backup --exclude-media /var/azuracast/backups/my-azuracast-backup.zip

This will create the backup ZIP file in your host’s local directory /opt/dockerfiles/azuracast/backups/.
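
Since these archives live on the host filesystem, it is trivial to copy them off-site as part of your regular host backups. A minimal sketch using rsync (the destination host and path are just examples, adjust to your own situation):

# rsync -av /opt/dockerfiles/azuracast/backups/ backupuser@backuphost.darkstar.lan:/srv/backups/azuracast/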

Troubleshooting

The stream is silent / Liquidsoap is not playing

Check that your playlist has the AutoDJ enabled and that the mounted directory actually contains indexed files.

  • Change to the docker directory:
    # cd /usr/local/docker-azuracast
  • Check Liquidsoap logs:
    # docker compose logs | grep -i liquidsoap | tail -50
  • Verify the media directory is visible inside the container:
    # docker compose exec web ls -la /var/azuracast/stations/alien_pastures_radio/media/remote/mp3/

AzuraCast cannot read my MP3 files

File ownership matters. The AzuraCast Docker container runs as UID 1000 by default. If your host music files are owned by a different UID, make sure that the user account inside the container can read them.
Fix this by either:

  • Make files world-readable
    # chmod -R o+r /your/actual/path/to/mp3s
  • Or: change the container’s UID to match your host user.  In the ‘.env’ file that lives next to ‘docker-compose.yml’ you will find the below two lines. Change the UID number from 1000 to that of your own user on the host:
    AZURACAST_PUID=1000
    AZURACAST_PGID=1000

Then restart the Docker stack.
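
To see which numeric UID and GID actually own your music files on the host (so that you can compare them against AZURACAST_PUID and AZURACAST_PGID), a quick check is:

# ls -ln /your/actual/path/to/mp3s | head

The third and fourth columns of that output show the numeric owner and group IDs.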

The web UI shows “0 files” after mounting

If you mounted the directory after the initial station creation but before a rescan, trigger a manual rescan:

# cd /usr/local/docker-azuracast
# docker compose exec web azuracast_cli azuracast:media:reprocess

The above command triggers a rescan for all Stations. If you want to trigger the rescan only for your own Station, run this instead:

# docker compose exec web azuracast_cli azuracast:media:reprocess alien_pastures_radio

You can also start the rescan process from the User Interface.
Go to ‘System Administration > Stations > [Your Station] > Manage > Media > Music Files‘:

  • Select “remote” or whatever root directory your media library shows
  • Click ‘More > Reprocess

Port 8001 is not reachable from outside

AzuraCast binds Icecast to `0.0.0.0:8001‘ on the host. If you have a host firewall, open the port to the outside world.
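
Before touching the firewall, first verify that the port is actually listening on all interfaces of the host (with the default Docker port mapping this is typically a docker-proxy process):

# ss -tlnp | grep 8001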

Here is an iptables example:
# iptables -A INPUT -p tcp --dport 8001 -j ACCEPT

But if you implemented the Apache reverse proxy as I outlined above, you would not have to expose this port at all. Instead you can rely on Apache httpd to relay user connections to the Icecast listen port on the host. The iptables firewall rule is then not needed of course.

Final thoughts

I found that AzuraCast does most of what my (t)rusty old OTTO did, and it is capable of considerably more that I am not even touching (but you might, if you are interested in running an actual live radio show with contributors).
The AutoDJ implements my primary need, which is a maintenance-free jukebox: it handles the continuous queue filling without any intervention. The listener request system gives me on-demand control over what plays next. My only gripe is that the AutoDJ pushes its own queue out to the player, and any user request will only play after that already pushed-out queue. Which means that I need to keep the AutoDJ queue length limited to 1 or 2 songs so that I don’t have to wait too long for my own requested song to play.
The scheduled library scanning handles my ever-growing MP3 collection. And my music players just need to tune into a different Icecast URL.

If anyone is interested, I can describe in a future article how I deployed YTuner locally to revive the network audio streaming capability of my Denon AVR-X2300W receiver, after Denon killed its free online vTuner service by making it subscription-based. Because that Denon receiver is what’s playing my Icecast stream right now, while I am typing this.

I hope you enjoyed the article. Leave your thoughts in the comments section below.

Cheers, Eric

Slackware Cloud Server Series, Episode 10: Workflow Management

For my Slackware Cloud Server series of articles, we are going to have a look at a system for workflow management, personal note taking and all kinds of other collaborative features.

When the COVID pandemic hit the world, my wife and I began a routine of regular walks in the open fields and forests near us, simply to escape the confines of the house and have a mental break from all the tension. We really enjoyed the quiet of those days; we rarely encountered other wanderers – but that’s an aside.
My wife started documenting our walks, our bicycle trips and eventually also our holidays in OneNote. It was a convenient note-taking tool which combines structured text with images and hyperlinks. Even though OneNote is a MS Windows program, it saves its data in the Cloud, allowing me to access the collection of our walking notes in a Slackware web browser.
Some of you may be using Miro at work, for your Agile workflows, for brainstorming and in general, as an online replacement for physical whiteboards. Online collaborative tools like Miro became immensely popular because of the COVID pandemic when coming to the office every day was no longer feasible.
The selling point of the above tools is that they are cloud-centered. Your data is stored with the tool provider and you can work – individually or in groups – on your projects online.

As always, there’s a catch. Miro is commercial and comes with a paid subscription. OneNote is free, Windows-only but with browser-based access to your data, yet has a 5 GB storage cap.
For individual usage, several free alternatives have risen in popularity: note-taking apps that provide an alternative to OneNote such as EverNote and Joplin, but also evolutions of the note-taking concept like Notion, Obsidian, LogSeq and more. EverNote and Notion are not Open Source but offer a free plan for online storage. LogSeq (open source) and Obsidian (free but closed source) are off-line, single-user tools, but you can store their local database in a cloud storage like Dropbox, OneDrive, Google One or Nextcloud if you want. Joplin is open source, multi-user but not collaborative, and stores its data on a backend server – either its own Joplin Cloud or else a WebDAV server like Nextcloud. I plan on writing an article about Joplin, too. It’s on my TODO.

This list is far from complete – there are many more alternatives and they all will try to cater to your specific needs.

I am not going to discuss the pros and cons of these tools, I have not tested them all. I like to make calculated choices based on available information and then stick to my choices. You can only become good at something if you really invest time. It’s also why I am not a distro-hopper and stuck with Slackware from day one.

My choice of personal workflow management tool is AFFiNE. The main reasons for finally picking it over the alternatives are that the AFFiNE server backend is open source software, it has desktop and mobile apps as well as a browser client, and it allows you to work offline or sync your project data to a cloud server. Its collaborative features allow a team to work jointly and simultaneously on a project.
Most importantly, it offers a self-hosting option using Docker Compose and the self-host version integrates with OpenID Connect (OIDC) Identity Providers (IDP) like Keycloak. Aka Single Sign-On (SSO).

A caveat upfront: This software is in active development. The developers are friendly and responsive to the questions from their community. But some of the features that you would like to see in the self-hosted version are not yet built, or not trivial to implement, or take ages to implement, or simply badly documented. That is exactly why I decided to write this article: to provide complete documentation to the online community about how to setup and configure your own AFFiNE server with SSO provided by Keycloak (or any other IDP than Keycloak which implements OIDC).
This article will evolve in parallel with AFFiNE’s development, and as features get added or bugs resolved, I will update the text here as well.
Actually, that is exactly what happened because while writing this text, AFFiNE devs pushed a Christmas upgrade and I had to adapt my descriptions in some places. Also, some of the screenshots I made will probably look slightly different on new releases.

Check out the list below which shows past, present and future episodes in my Slackware Cloud Server series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.


Introduction

Let’s dive a bit deeper into AFFiNE.

AFFiNE is a privacy-first, open source “Write, Plan, Draw, All At Once” all-in-one workspace which is currently in a ‘Beta‘ stage (version 0.19.1), with a new release every 6 weeks. With each update, the changes and new features are impressive.

Everything in AFFiNE is made up of blocks. Blocks can be organized in Docs and these can be represented in different ways. Here is some terminology to help you get acquainted with the tool quickly:

  • Blocks: These are the atomic elements that comprise your Docs. They can contain text, images, embedded web pages etc.

  • Docs: Your main canvas. It has ‘doc info’ associated for indexing and referencing. Your Doc has two views: Edgeless and Page Mode.

  • Page Mode: Presents a set of blocks in the form of a linear document which always fits on your page.

  • Edgeless mode: Presents all content of your Doc on an edgeless (infinite) canvas.

  • Doc info: Also called ‘Info’, refers to all the attributes and fields contained within a Doc.

  • Blocks with Databases: A structured container to index, group, update or oversee blocks in different views.

  • Collections: A smart folder where you can manually add pages or automatically add pages through rules.

  • Workspaces: Your virtual space to capture, create and plan as just one person or together as a team.

  • Members: Collaborators of a Workspace. Members can have different roles.

  • Settings: Your personal appearance and usage preferences allow you to tailor your Workspace to your needs.

When working in AFFiNE you get an edgeless (aka infinite) canvas where you engage in a variety of activities like documenting, creating mood-boards, brainstorming, project planning, creative drawing, mind-mapping, all using an intuitive block editor, and then connect all of your ideas via the relations you apply to the various blocks, simply dragging arrows across your canvas.
You can toggle the two main viewports: either you work in the block editor in an infinite canvas, or you fit the structured textual content into your browser page.

Functionally, AFFiNE offers a blend of how Miro and Notion work. The edgeless whiteboard canvas with many templates to choose from, is definitely inspired by Miro. The block editor is something which Notion and other alternatives are well-known for.
Concepts used in AFFiNE to create structure in your workflow are Frame, Group and Database.

Your data will be stored on your local disk or in your browser’s cache by default, but you have the option to login to a cloud server and sync your data to the server.
The company that develops AFFiNE (ToEverything aka Theory Of Everything) offers a free plan with 10 GB of online project storage and a maximum of three collaborators to invite to your projects. But we are more interested in the self-hosted version where we are in control of that data. There, we decide how much you and your friends can store and how big your team can become when you engage in collaborative work. The self-hosted version of AFFiNE Cloud can eliminate all those limitations of the free plan.

If you decide to switch from your current knowledge management solution, it’s good to know that AFFiNE can import content from other tools: specifically it supports Notion export files, but it will also import HTML and Markdown files.

Please note that ToEverything, the company behind AFFiNE, funds the software’s development from donations it receives and from the Pro subscriptions to their own AFFiNE Cloud offering. If you set up a self-hosted AFFiNE and really like it, and also use it as a collaboration platform with a group of friends/colleagues, you might want to consider setting up a donation to support sustained development.


Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

Web Hosts

For the sake of this instruction, I will use the hostname “https://affine.darkstar.lan” as your landing page for AFFiNE.

Furthermore, “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 to read how we setup Keycloak as our identity provider).

In Keycloak, we have configured a realm called ‘foundation‘ which contains our user accounts and application client configurations.

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • affine.darkstar.lan

Using a Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Docker network

  • We assign a Docker network segment to our AFFiNE containers: 172.22.0.0/16
  • We assign a specific IPv4 address to the AFFiNE server itself (so that it is able to send emails): 172.22.0.5

File Locations

  • The Docker configuration goes into: /usr/local/docker-affine/
  • The data generated by the AFFiNE server goes into: /opt/dockerfiles/affine/

Secrets

The Docker stack we create for our AFFiNE server uses several secrets (credentials).

In this article, we will use example values for these secrets – be sure to generate and use your own strings here!

# Credentials for the Postgres database account:
AFFINE_DB_USERNAME=affine
AFFINE_DB_PASSWORD=0Igiu3PyijI4xbyJ87kTZuPQi4P9z4pd

# Credentials for the account that authenticates to the SMTP server for sending emails:
AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=

# Credentials for the OIDC client are shared between Keycloak and AFFiNE:
AFFINE_OIDC_CLIENT_ID=affine
AFFINE_OIDC_CLIENT_SECRET=TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY

Note that AFFiNE’s internal implementation chokes on a Postgres password containing special characters (at least up to version 0.19.1).


Apache reverse proxy configuration

We are going to run AFFiNE in a Docker container stack. The configuration will be such that the server will only listen for clients at one TCP port at the localhost address (127.0.0.1).

To make our AFFiNE storage and database backend available to the users at the address https://affine.darkstar.lan/ we are using a reverse-proxy setup. The flow is as follows: the user connects to the reverse proxy using HTTPS (encrypted connection) and the reverse proxy connects to the AFFiNE backend on the client’s behalf. Traffic between the reverse proxy (Apache httpd in our case) and the AFFiNE server’s Docker container is un-encrypted. That is not a problem: we give the AFFiNE server its own private network segment inside Docker.
A reverse proxy is capable of handling many simultaneous connections and can be configured to offer SSL-encrypted connections to the remote users even when the backend can only communicate over clear-text un-encrypted connections.

Add the following reverse proxy lines to your VirtualHost definition of the “https://affine.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

# No caching:
Header set Cache-Control "max-age=1, no-control"
ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Options FollowSymLinks MultiViews
  Require all granted
</Proxy>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line will tell apache not to use the proxy for this folder:
ProxyPass "/.well-known/" "!"

<IfModule mod_ssl.c>
  SSLProxyEngine on
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"
</IfModule>

# AFFiNE is hosted on https://affine.darkstar.lan/
<Location />
  ProxyPass "http://127.0.0.1:3010/"
  ProxyPassReverse "http://127.0.0.1:3010/"
</Location>

# WebSocket proxy:
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:3010/$1" [P,L]
# ---

If you want to make your non-encrypted web address http://affine.darkstar.lan redirect automatically to the encrypted ‘https://‘ variant, be sure to add this block to its VirtualHost definition to ensure that Letsencrypt can still access your server’s challenge file via an un-encrypted connection:

<If "%{REQUEST_URI} !~ m#/\.well-known/acme-challenge/#">
    Redirect permanent / https://affine.darkstar.lan/
</If>

The hostname and TCP port numbers shown above are defined elsewhere in this article; keep them in sync if you decide to use a different hostname or port numbers.


AFFiNE Server preparations

We will give the AFFiNE server its own internal Docker network. That way, the inter-container communication stays behind its gateway, which prevents snooping on the network traffic.

Docker network

Create the network using the following command:

docker network create \
  --driver=bridge \
  --subnet=172.22.0.0/16 --ip-range=172.22.0.0/25 --gateway=172.22.0.1 \
  affine.lan

Docker’s gateway address in any network segment will always have the “1” number at the end.
Select a yet unused network range for this subnet. You can find out about the subnets which are already defined for Docker by running this command:

# ip route |grep -E '(docker|br-)'

The ‘affine.lan‘ network you created will be represented in the AFFiNE docker-compose.yml file with the following code block:

networks:
  affine.lan:
    external: true

Create directories

Create the directory for the docker-compose.yml and other startup files:

# mkdir -p /usr/local/docker-affine

Create the directories to store data:

# mkdir -p /opt/dockerfiles/affine/{config,postgres,storage}

Download the docker-compose and a sample .env file:

# cd /usr/local/docker-affine
# wget -O docker-compose.yml https://raw.githubusercontent.com/toeverything/AFFiNE/refs/heads/canary/.github/deployment/self-host/compose.yaml
# wget https://raw.githubusercontent.com/toeverything/AFFiNE/refs/heads/canary/.github/deployment/self-host/.env.example
# cp .env.example .env

It looks like with the release of 0.19 the developers are also posting versions of the docker-compose.yml and default.env.example files in the Assets section of the Releases page.

Considerations for the .env file

Docker Compose is able to read environment variables from an external file. By default, this file is called ‘.env‘ and must be located in the same directory as the ‘docker-compose.yml‘ file. In fact, ‘.env‘ is searched for in the current working directory, but I always execute ‘docker-compose‘ in the directory containing its YAML file anyway, and to make it really fool-proof the YAML file will define the ‘.env‘ file location explicitly.

In this environment file we are going to specify things like accounts, passwords, TCP ports and the like, so that they do not have to be referenced in the ‘docker-compose.yml‘ file or even in the process environment space. You can shield ‘.env‘ from prying eyes, thus making your setup more secure.
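
Restricting the file’s permissions so that only root can read it is a simple way to achieve that:

# chown root:root /usr/local/docker-affine/.env
# chmod 600 /usr/local/docker-affine/.env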

This is eventually the content of the ‘/usr/local/docker-affine/.env‘ file, excluding the OIDC configuration:

# ---
# Select a revision to deploy, available values: stable, beta, canary
AFFINE_REVISION=stable

# Our name:
AFFINE_SERVER_NAME=Alien's AFFiNE

# Set the port for the server container it will expose the server on
PORT=3010

# Set the host for the server for outgoing links
AFFINE_SERVER_HTTPS=true
AFFINE_SERVER_HOST=affine.darkstar.lan
AFFINE_SERVER_EXTERNAL_URL=https://affine.darkstar.lan

# Position of the database data to persist
DB_DATA_LOCATION=/opt/dockerfiles/affine/postgres
# Position of the upload data (images, files, etc.) to persist
UPLOAD_LOCATION=/opt/dockerfiles/affine/storage
# Position of the configuration files to persist
CONFIG_LOCATION=/opt/dockerfiles/affine/config

# Database credentials
AFFINE_DB_USERNAME=affine
AFFINE_DB_PASSWORD=0Igiu3PyijI4xbyJ87kTZuPQi4P9z4pd
AFFINE_DB_DATABASE=affinedb

# Mailer service for sending collaboration invites:
AFFINE_MAILER_HOST=affine.darkstar.lan
AFFINE_MAILER_PORT=587
AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=
AFFINE_MAILER_SENDER=affinemailer@darkstar.lan
AFFINE_MAILER_SECURE=false

# We hard-code the IP address for the server so that we can make it send emails:
AFFINE_IPV4_ADDRESS=172.22.0.5

# Here you will add OIDC credentials later
# ---

Note that I kept having issues with some environment variables not getting filled with values inside the containers. I found out that a variable in the ‘.env‘ file with a dash ‘-‘ in its name would not be recognized inside a container; that is why I now only use capital letters and the underscore.

The Docker Compose configuration

The ‘docker-compose.yml‘ file we downloaded to /usr/local/docker-affine/ in one of the previous chapters will create multiple containers: one for AFFiNE itself, one for the Postgres database and one for the Redis memory cache. I made a few tweaks to the original, so eventually it looks like this (excluding the OIDC configuration):

# ---
name: affine
services:
  affine:
    image: ghcr.io/toeverything/affine-graphql:${AFFINE_REVISION:-stable}
    container_name: affine_server
    ports:
      - '127.0.0.1:${PORT:-3010}:3010'
    depends_on:
      redis:
        condition: service_healthy
      postgres:
        condition: service_healthy
      affine_migration:
        condition: service_completed_successfully
    volumes:
      # custom configurations
      - ${UPLOAD_LOCATION}:/root/.affine/storage
      - ${CONFIG_LOCATION}:/root/.affine/config
      # Here you will add a workaround for an OIDC bug later
    env_file:
      - path: ".env"
    environment:
      - ENABLE_TELEMETRY=false
      - REDIS_SERVER_HOST=redis
      - DATABASE_URL=postgresql://${AFFINE_DB_USERNAME}:${AFFINE_DB_PASSWORD}@postgres:5432/${AFFINE_DB_DATABASE:-affine}
      - MAILER_HOST=${AFFINE_MAILER_HOST}
      - MAILER_PORT=${AFFINE_MAILER_PORT}
      - MAILER_USER=${AFFINE_MAILER_USER}
      - MAILER_PASSWORD=${AFFINE_MAILER_PASSWORD}
      - MAILER_SENDER=${AFFINE_MAILER_SENDER}
      - MAILER_SECURE=${AFFINE_MAILER_SECURE}
      # Here you will add OIDC environment variables later
    networks:
      affine.lan:
        ipv4_address: ${AFFINE_IPV4_ADDRESS}
        aliases:
          - affine.affine.lan
    restart: unless-stopped

  affine_migration:
    image: ghcr.io/toeverything/affine-graphql:${AFFINE_REVISION:-stable}
    container_name: affine_migration_job
    volumes:
      # custom configurations
      - ${UPLOAD_LOCATION}:/root/.affine/storage
      - ${CONFIG_LOCATION}:/root/.affine/config
    command: ['sh', '-c', 'node ./scripts/self-host-predeploy.js']
    env_file:
      - path: ".env"
    environment:
       - REDIS_SERVER_HOST=redis
       - DATABASE_URL=postgresql://${AFFINE_DB_USERNAME}:${AFFINE_DB_PASSWORD}@postgres:5432/${AFFINE_DB_DATABASE:-affine}
    depends_on:
      redis:
       condition: service_healthy 
      postgres:
        condition: service_healthy
    networks:
      - affine.lan

  redis:
    image: redis
    container_name: affine_redis
    healthcheck:
      test: ['CMD', 'redis-cli', '--raw', 'incr', 'ping']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - affine.lan
    restart: unless-stopped

  postgres:
    image: postgres:16
    container_name: affine_postgres
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    env_file:
      - path: ".env"
    environment:
      POSTGRES_USER: ${AFFINE_DB_USERNAME}
      POSTGRES_PASSWORD: ${AFFINE_DB_PASSWORD}
      POSTGRES_DB: ${AFFINE_DB_DATABASE:-affine}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    healthcheck:
      test:
        ['CMD', 'pg_isready', '-U', "${AFFINE_DB_USERNAME}", '-d', "${AFFINE_DB_DATABASE:-affine}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - affine.lan
    restart: unless-stopped

networks:
  affine.lan:
    external: true
# ---

Initializing the Docker stack

The docker-compose.yml file in /usr/local/docker-affine defines the container stack, the .env file in that same directory contains credentials and other variables. If you haven’t created the Docker network yet, do it now! See the “Docker network” section higher up.
Start the Docker container stack. There will be three containers eventually and a temporary ‘migration’ container performing the administrative tasks prior to starting the server:

# cd /usr/local/docker-affine
# docker-compose up -d && docker-compose logs -f

And monitor the logs if you think the startup is troublesome. The above command-line will show the detailed log of the startup after the containers have been instantiated and you can quit that log-tail using ‘Ctrl-C‘ without fear of killing your containers.
If you want to check the logs for the AFFiNE server using the name we gave its container (affine_server):

# docker logs affine_server

Or check the logs of the full Docker Compose stack using the ‘affine’ service name (the first line in the docker-compose.yml file):

# docker-compose logs affine

The first time you start the Docker stack, the Postgres database will be initialized; this takes a few extra seconds. When the server is up and running, use a web browser to access your AFFiNE workspace at https://affine.darkstar.lan/. The next section “Setting up the server admin” has instructions on the steps you need to take to set up an admin account.

The server backend shows version information when you point curl (or a web browser) at the URL https://affine.darkstar.lan/info – it will return a JSON string which looks like this:

{
  "compatibility": "0.19.1",
  "message": "AFFiNE 0.19.1 Server",
  "type": "selfhosted",
  "flavor": "allinone"
}
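
For reference, fetching and pretty-printing that information from a shell on the host could look like this (python3 is only used here for the formatting):

$ curl -s https://affine.darkstar.lan/info | python3 -m json.tool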

Setting up the server admin

The first time you connect to the self-hosted AFFiNE server, you are asked to create an admin account. For that reason, you may want to connect directly to the container via http://localhost:3010 while logged in to your Docker host.
If you are not the paranoid type, you can also connect to the external URL https://affine.darkstar.lan/ of course 🙂 Just make sure that you do that before some interested 3rd-party comes visiting.

In a few steps, you will be taken through a setup procedure where you enter your name, email address and a password.

And voila! You are the admin of your new AFFiNE server.
When you remove the “/admin/...” path from the resulting URL in your browser, that will take you to the default AFFiNE Workspace for your account, which will always be populated with a demo page:

You immediately see the red banner, informing you that your work will be kept in your browser cache and may be lost when the browser crashes, and that you should really enable a Cloud sync. Of course, the word “Cloud” in this context means nothing more than your own self-hosted server.

Note that the URL for administering your server is https://affine.darkstar.lan/admin/!


Administering the server

Creating and managing users

The first decision you need to make is whether you are going to open up your AFFiNE server to anyone interested. By default, new users can register themselves via email. AFFiNE will create an account for them and a “magic link” with a login token will be sent to them. If you go to https://affine.darkstar.lan/admin/settings you see that there is a slider which allows you to disable the self-registration feature:

If you decide to disable the self-registration, you’ll have to create accounts for your users manually in https://affine.darkstar.lan/admin/accounts via the “+ Add User” button:

 

One big caveat for this way of creating user accounts is that you need to have configured the mail transport. AFFiNE needs to send emails to your users.
That requires a bit of configuration in the Docker stack (but that has already been taken care of in the above docker-compose.yml and .env files) and also on the Docker host. You will find the detailed instructions in the section further down named “Configuring the mail transport (Docker container & host)“.

If you implement Single Sign-On (SSO) via Keycloak then AFFiNE only needs to send emails if a user wants to invite another user in order to collaborate on a workspace.

Customizing the users’ abilities

The self-hosted AFFiNE server adds every user to the “Free Plan” just like when you would create an account on the company’s server https://app.affine.pro/ . However, the reason for self-hosting is to take control over our data as well as our own capabilities. The “Free Plan” comes with a maximum of 10 GB server storage, a 10 MB filesize upload limit, 7 days of file version history and a maximum of 3 members in your workspaces. The developers are apparently still considering what kind of capabilities are relevant for users of a self-hosted instance.

We are not going to wait. I’ll show you how to stretch those limits so that they are no longer relevant.
A bit of familiarity with Postgres will help with that, since it involves directly modifying AFFiNE database records.

First, open a Postgres prompt to our affinedb database on the affine_postgres container:

# docker exec -it affine_postgres psql -U affine affinedb

The “affinedb=#” in the rest of this section depicts the Postgres command prompt. This is where you are going to type the SQL commands that show information from the database and will change some of the data in there. We will be examining the ‘users’, ‘features’ and ‘user_features’ tables and make our changes in the ‘user_features’ table when we assign a different Plan to your users’ accounts.

Execute some actual SQL
  • Let’s see who the registered users are on our server. I limit the output of the command to just my own user who logged in via Single Sign-On:

affinedb=# select * from users;

                  id                  |      name       |           email           |                                             password                                              |         created_at         |       email_verified       |                                               avatar_url                                               | registered 
--------------------------------------+-----------------+---------------------------+---------------------------------------------------------------------------------------------------+----------------------------+----------------------------+--------------------------------------------------------------------------------------------------------+------------
 01ba65de-6d3b-4eb2-9cd4-be98264e4370 | Eric Hameleers  | alien@slackware.com       |                                                                                                   | 2024-12-22 14:01:37.834+00 | 2024-12-22 14:01:37.829+00 | https://affine.darkstar.lan/api/avatars/01ba65de-6d3b-4eb2-9cd4-be98264e4370-avatar-1734879819544 | t
  • The ‘id‘ column in that output is my user_id, which is basically a UUID string. We will be using that user_id in the next SQL commands. To get a list of the Plans (features) that are available for AFFiNE users, we can run an SQL query as follows:

affinedb=# select id, feature, configs from features;

  • But I am going to leave that as an exercise for the reader, because I will show a more tailored version of that command soon. First, let’s look at what Plan (the feature_id in the user_features table) my user was assigned to:

affinedb=# select * from user_features where user_id = '01ba65de-6d3b-4eb2-9cd4-be98264e4370';

 id |               user_id                | feature_id |  reason  |         created_at         | expired_at | activated 
----+--------------------------------------+------------+----------+----------------------------+------------+-----------
 11 | 01ba65de-6d3b-4eb2-9cd4-be98264e4370 |         13 | sign up  | 2024-12-22 14:44:20.203+00 |            | t
  • Apparently I am on a Plan with a feature_id of “13”. Let’s get more details about that Plan, and let’s already add “16” to that query (you will soon see why I want that):

affinedb=# select id, feature, configs from features where id = 13 or id = 16;

 id |   feature    |                                                                             configs                                                                             
----+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 13 | free_plan_v1 | {"name":"Free","blobLimit":10485760,"businessBlobLimit":104857600,"storageQuota":10737418240,"historyPeriod":604800000,"memberLimit":3,"copilotActionLimit":10}
 16 | lifetime_pro_plan_v1 | {"name":"Lifetime Pro","blobLimit":104857600,"storageQuota":1099511627776,"historyPeriod":2592000000,"memberLimit":10,"copilotActionLimit":10}
  • By looking in more detail at the feature definition for id “13” aka the “Free” plan (the ‘configs‘ field) we see that there is a 10 MB upload limit (the blobLimit, in bytes), a 10 GB storage limit (the storageQuota, in bytes), a 7-day historical version retention (the historyPeriod, in milliseconds), and a limit of 3 members who can collaborate on your workspace (the memberLimit). You also notice that the “Lifetime Pro” plan with an id of “16” has considerably higher limits (100 MB file upload limit, 1 TB of storage, 30 days history retention, 10 members to collaborate with).
    These queries show that by default, when you sign up with the self-hosted version, you get assigned to the “Free Plan” which corresponds to a feature_id of “13”. We are going to change that for our user and set it to “16”, which is the “Lifetime Pro” plan:

affinedb=# update user_features set feature_id = 16, reason = 'selfhost' where user_id = '01ba65de-6d3b-4eb2-9cd4-be98264e4370' and feature_id = 13;

  • Now, when you look at the user_features table, my account shows the “selfhost” string as the reason for change:

affinedb=# select * from user_features where user_id = '01ba65de-6d3b-4eb2-9cd4-be98264e4370';

 id |               user_id                | feature_id |  reason  |         created_at         | expired_at | activated 
----+--------------------------------------+------------+----------+----------------------------+------------+-----------
 11 | 01ba65de-6d3b-4eb2-9cd4-be98264e4370 |         16 | selfhost | 2024-12-28 13:23:17.826+00 |            | t
  • If we look at our account settings in AFFiNE now, we see that the Plan has changed:

 

You need to repeat this for every user who registers at your AFFiNE instance.
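
If you have more than a handful of users, you can apply the same change to every account that is still on the Free plan in one go, directly from the host shell. This is just a sketch based on the SQL statement above; try it on your own instance first and adapt it if the schema changes in a newer release:

# docker exec -it affine_postgres psql -U affine affinedb -c "update user_features set feature_id = 16, reason = 'selfhost' where feature_id = 13;"

Note that users who register later will again start on the Free plan, so this remains a recurring task.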


User passwords

There’s a difference between users who are logging in via OIDC, and the rest of them. When your users login via OIDC using an Identity Provider (IDP) such as Keycloak, the password is not stored in AFFiNE, and the user can always logout and login again.

But when you (the admin) create an account, or you have self-registration enabled (which is the default) and the user submits their email to the server, then AFFiNE will send the user a “magic link” via email every time they want to login to your server. That is a bit cumbersome, but the user can do something about that.

In the “Account settings” dialog which you reach by clicking on the user avatar, there’s a “Password” section which tells you “Set a password to sign in to your account“. When the user has set a password, subsequent login attempts will no longer trigger a “magic link”; a password entry field will be displayed instead.


Connecting to the workspace

The admin user has been created and you can keep using that to create content in your AFFiNE workspaces of course. But you can also create a separate user account; see the previous section on how to allow more users access to your server.
I’ll come to the login later; let’s first have a look at what happens when you connect to your AFFiNE server again, after having logged out of your admin user account.

There’s a difference, caused by cookies that are set by the AFFiNE server, in what you see when you connect without being logged in.

  1. You have not yet converted your local data to a Cloud sync workspace.
    If you access https://affine.darkstar.lan/ you will land into the “Demo Workspace“, containing the single Doc “Write, Draw, Plan all at Once“:
    By default, anything you create will stay in the browser cache.
    If you want to start syncing to your AFFiNE server, click the avatar icon in the top left of the window:
    A login dialog opens, the same actually that you will get in “option 2” below. Enter your email address and click “Continue with email” to login. After completing your login, the Cloud-sync of your workspaces commences.
  2. You were already syncing your workspace to your AFFiNE Cloud, then logged off, and now you want to login again.
    You will now be greeted with these options instead of the “Demo Workspace“:
    … and if you click on either the “Sign up / Sign in” or the “Create cloud workspace” you will be taken to the actual login screen:
    Here you type your account’s email address and press “ENTER” or click the “Continue with email“. Depending on whether you have already defined a password for your account, the next screen will either show a password entry field or else a message informing you that a “Magic link” has been sent to your email address. The “Magic link” URL contains a login token allowing you to login without a password. The token expires after 30 minutes. You’ll keep getting “Magic links” until you configure a password for your user account.
    The workspaces that you had already created will be presented and you can select which one you want to continue working on, or else create a new one right away:

You’re all set, enjoy AFFiNE!


Adding Single Sign-On (SSO)

Any Slackware Cloud Server user will have their account already setup in your Keycloak database. The first time they login to your AFFiNE server using SSO, the account will be activated automatically.

We need to define a new Client ID in Keycloak, which we are going to use with AFFiNE. Essentially, Keycloak and AFFiNE need a shared credential.

Add an AFFiNE Client ID in Keycloak

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a ‘confidential’ openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episodes of this article series):

  • Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
    • Client ID‘ = “affine“
    • Client Type‘ = “OpenID Connect” (the default)
      Note that in Keycloak < 20.x this field was called ‘Client Protocol‘ and its value “openid-connect”.
    • Toggle ‘Client authentication‘ to “On”. This will set the client access type to “confidential”
      Note that in Keycloak < 20.x this was equivalent to setting ‘Access type‘ to “confidential”.
    • Check that ‘Standard Flow‘ is enabled.
    • Save.
  • Also in ‘Settings‘, configure how AFFiNE server connects to Keycloak.
    Our AFFiNE container is running on https://affine.darkstar.lan . We add

    • Root URL‘ = https://affine.darkstar.lan/
    • Home URL‘ = https://affine.darkstar.lan/auth/callback/
    • Valid Redirect URIs‘ = https://affine.darkstar.lan/*
    • Web Origins‘ = https://affine.darkstar.lan/
    • Save.

To obtain the secret for the “affine” Client ID:

  • Go to “Credentials > Client authenticator > Client ID and Secret
    • Copy the Secret (TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY). This secret is an example string of course, yours will be different. I will be re-using this value below. You will use your own generated value.

Add an OIDC definition to AFFiNE

We have all the information we need to enhance our Docker stack.

First, add the credentials that AFFiNE shares with Keycloak to the ‘.env’ file of your Docker Compose definition (I left a hint comment for this higher up in this article):

# OIDC (OpenID Connect):
AFFINE_OIDC_ISSUER=https://sso.darkstar.lan/auth/realms/foundation
AFFINE_OIDC_CLIENT_ID=affine
AFFINE_OIDC_CLIENT_SECRET=TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY

Then, add these variables in our ‘docker-compose.yml’ file at the location marked with the hint comment:

  - OAUTH_OIDC_ISSUER=${AFFINE_OIDC_ISSUER}
  - OAUTH_OIDC_CLIENT_ID=${AFFINE_OIDC_CLIENT_ID}
  - OAUTH_OIDC_CLIENT_SECRET=${AFFINE_OIDC_CLIENT_SECRET}
  - OAUTH_OIDC_SCOPE=openid email profile offline_access
  - OAUTH_OIDC_CLAIM_MAP_USERNAME=preferred_username
  - OAUTH_OIDC_CLAIM_MAP_EMAIL=email
  - OAUTH_OIDC_CLAIM_MAP_NAME=preferred_username

Lastly, the OIDC plugin needs to be enabled. You do that by adding the following text block to the end of the file ‘/opt/dockerfiles/affine/config/affine.js’:

/* OAuth Plugin */
AFFiNE.use('oauth', {
  providers: {
    oidc: {
      // OpenID Connect
      issuer: 'https://sso.darkstar.lan/auth/realms/foundation',
      clientId: 'affine',
      clientSecret: 'TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY',
      args: {
        scope: 'openid email profile offline_access',
        claim_id: 'preferred_username',
        claim_email: 'email',
        claim_name: 'preferred_username',
      },
    },
  },
});

This OIDC definition block is already part of that file (apart from your own specific values), but commented-out, and it also includes examples for Google and Github authentication. Adding the complete block is cleaner than un-commenting a batch of lines.

Note: this duplicates the configuration for the OIDC client (at least the relevant values are also configured in the ‘.env’ file), but I did not find a way around that. The complete configuration must be present inside ‘affine.js’ otherwise you will not get an option to use OIDC as a login provider.

Bugs to resolve first

Two things had been bugging me for days, until I found hints online; by combining their fixes I was finally able to make my self-hosted AFFiNE server (version 0.18) work with Single Sign-On.
Right after Christmas 2024, a new version 0.19.1 was released which solved one of those two bugs: clicking the “Continue with OIDC” button would take you to the ‘https://app.affine.pro/oauth‘ page instead of your own ‘https://affine.darkstar.lan/oauth‘ page, because a URL was hard-coded inside the container.
The other bug will hopefully be resolved in a future release; it has already been reported in the project tracker:

  • The OIDC plugin would not be enabled even after properly configuring ‘/opt/dockerfiles/affine/config/affine.js’. It turns out that AFFiNE is not reading that file and the internal defaults take precedence.
    In order to work around this bug, I simply mounted my local ‘affine.js’ file into the container, overwriting the internal version. To do this, you need to add this line to your ‘docker-compose.yml‘ file at the magenta highlighted location under “volumes“:
    - ${CONFIG_LOCATION}/affine.js:/app/dist/config/affine.js:ro
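
For context, the relevant part of the AFFiNE service could then look roughly like this. The storage and config mounts are placeholders borrowed from AFFiNE’s stock Compose file; keep whatever mounts your own stack already defines and just append the last line:

    volumes:
      # existing mounts of your AFFiNE service (placeholders):
      - ${UPLOAD_LOCATION}:/root/.affine/storage
      - ${CONFIG_LOCATION}:/root/.affine/config
      # our own affine.js overrides the version baked into the image (read-only):
      - ${CONFIG_LOCATION}/affine.js:/app/dist/config/affine.js:ro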

All set!

This completes the Single Sign-On configuration. Now when you access your AFFiNE server and want to login, the screen will show the following:

Clicking on that “Continue with OIDC” button will take you to the Keycloak login dialog. After you log on using your SSO credentials, you will be asked to select an existing workspace or create a new one.
You will then be returned to the AFFiNE landing page https://affine.darkstar.lan … but here you run into another bug, or perhaps a configuration oversight on my part: the page appears exactly as it did before you logged in.
You need to do a page refresh (Ctrl-R in your browser) and then you’ll see that you are indeed logged in and syncing to the AFFiNE Cloud (aka your own server).


Configuring mail transport (Docker container & host)

Note that a large chunk of this section was copied from a previous article in the series. I do not know whether you actually read all of them in order, so I think it is prudent to share a complete set of instructions in each article.

AFFiNE needs to be able to send emails in the following circumstances:

  1. You create the user accounts manually, or else you want to give users an option to sign up via e-mail. In both cases, AFFiNE sends a “magic link” to that email address when the user attempts to login.
    The “magic link” allows the user to create an account without an initial password: the URL contains a login token. The user should then set a password in AFFiNE to be able to login later without needing another email with a “magic link”.
  2. A user wants to invite collaborators to their workspace. The invites are sent via email.

In the ‘/usr/local/docker-affine/.env‘ file which contains the configuration for Docker Compose, the hostname or IP address and the TCP port of your own SMTP server need to be provided. You can configure TLS encrypted connections, but that is not mandatory.
User credentials for sending the emails and a return address need to be added as well. The complete set looks like this:

# Mailer service for sending collaboration invites:
AFFINE_MAILER_HOST=affine.darkstar.lan
AFFINE_MAILER_PORT=587
AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=
AFFINE_MAILER_SENDER=affinemailer@darkstar.lan
AFFINE_MAILER_SECURE=false

Note that when I tried “AFFINE_MAILER_SECURE=true“, I was not able to make AFFiNE send emails: the encrypted SMTP connection would fail due to OpenSSL compatibility issues. My Docker host is running a hardened Slackware-current; perhaps I disabled an older cipher or protocol that the container needs? This is something I would like to see resolved.

Make the host accept mail from the AFFiNE container

When AFFiNE starts sending emails from its Docker container, we want Sendmail or Postfix to accept and process these. What commonly happens when an SMTP server receives email from an unknown IP address is that it rejects the message with “Relaying denied: ip name lookup failed“. We don’t want that to happen.

In Docker, you already performed these steps:

  • Create an IP network for AFFiNE and assign a name to it
  • Assign a fixed IP address to the AFFiNE container

On the Docker host, these are the steps to complete:

  • Announce the Docker IP/hostname mapping
  • Setup a local DNS server
  • Configure SASL authentication mechanisms to be used by the MTA (mail transport agent, eg. Postfix)
  • Create a system user account to be used by AFFiNE when authenticating to the MTA
  • Add SASL AUTH and also TLS encryption capabilities to the MTA

Assign IP address to the Docker container

The ‘affine.lan‘ network definition is in the section “Docker network” higher-up.

The ‘docker-compose.yml‘ file contains the lines hard-coding the IP address:

networks:
  affine.lan:
    ipv4_address: ${AFFINE_IPV4_ADDRESS}
    aliases:
      - affine.affine.lan

With the value for that variable ${AFFINE_IPV4_ADDRESS} being defined in the ‘.env‘ file:

# We hard-code the IP address for the server so that we can make it send emails:
AFFINE_IPV4_ADDRESS=172.22.0.5

Add IP / name mapping to the Docker host

In ‘/etc/hosts‘ you need to add the following:

172.22.0.5    affine affine.affine.lan

And to ‘/etc/networks‘ add this line:

affine.lan   172.22

DNS serving local IPs on the Docker host

Under the assumption that your Cloud Server does not act as a LAN’s DNS server, we will use dnsmasq as the local nameserver. Dnsmasq is able to use the content of /etc/hosts and /etc/networks when responding to DNS queries. We can use the default, unchanged ‘/etc/dnsmasq.conf‘ configuration file.

But first, add this single line at the top of the host server’s ‘/etc/resolv.conf‘ (it may already be there as a result of setting up Keycloak), so that all local DNS queries will be handled by our local dnsmasq service:

nameserver 127.0.0.1

If you have not yet done so, (as root) make the startup script ‘/etc/rc.d/rc.dnsmasq‘ executable and start dnsmasq manually (Slackware will take care of starting it on every subsequent reboot):

# chmod +x /etc/rc.d/rc.dnsmasq
# /etc/rc.d/rc.dnsmasq start

If dnsmasq is already running (eg. when you have Keycloak running and sending emails) then send SIGHUP to the program as follows:

# killall -HUP dnsmasq

That tells dnsmasq to reload its configuration. Check that it’s working and continue to the next step:

# nslookup affine.affine.lan
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: affine.affine.lan
Address: 172.22.0.5

Configuring SASL on the Docker host

The mailserver aka MTA (Sendmail or Postfix) requires that remote clients authenticate themselves. The Simple Authentication and Security Layer (SASL) protocol is used for that, but typically these MTAs do not implement SASL themselves. There are two usable SASL implementations available on Slackware: Cyrus SASL and Dovecot; I picked Cyrus SASL simply because I know it better.
We need to configure the method of SASL authentication for the SMTP daemon, which is via the saslauthd daemon. That one is not started by default on Slackware.

If the file ‘/etc/sasl2/smtpd.conf‘ does not yet exist, create it and add the following content:

pwcheck_method: saslauthd
mech_list: PLAIN LOGIN

Don’t add any mechanisms to that list other than PLAIN and LOGIN. The resulting transfer of cleartext credentials is the reason why we also wrap the communication between mail client and server in a TLS encryption layer.

If the startup script ‘/etc/rc.d/rc.saslauthd‘ is not yet executable, make it so and start it manually this time (Slackware will take care of starting it on every subsequent reboot):

# chmod +x /etc/rc.d/rc.saslauthd
# /etc/rc.d/rc.saslauthd start

Create the mail user

We need a system account to allow AFFiNE to authenticate to the SMTP server. Let’s go with userid ‘affinemailer‘.
The following two commands will create the user and set a password:

# /usr/sbin/useradd -c "AFFiNE Mailer" -m -g daemon -s /bin/false affinemailer
# passwd affinemailer

Write down the password you assigned to the user ‘affinemailer‘. Both this userid and its password are used in the ‘.env‘ file of Docker Compose; in fact this is what I already posted higher-up:

AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=

After the account creation, you can test whether you configured SASL authentication correctly by running:

# testsaslauthd -u affinemailer -p E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=

… which should reply with:

0: OK "Success."

Configuring Sendmail on the Docker host

Since Postfix replaced Sendmail as the default MTA in Slackware a couple of years ago already, I am going to be concise here:
Make Sendmail aware that the AFFiNE container is a known local host by adding the following line to “/etc/mail/local-host-names” and restarting the sendmail daemon:

affine.affine.lan

The Sendmail package for Slackware provides a ‘.mc’ file to help you configure SASL-AUTH-TLS in case you had not yet implemented that: ‘/usr/share/sendmail/cf/cf/sendmail-slackware-tls-sasl.mc‘.
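
A rough outline of how that ‘.mc’ file could be turned into an active configuration (a sketch only; review and adapt the .mc file, for example the certificate paths, and back up your existing sendmail.cf before doing this):

# cd /usr/share/sendmail/cf/cf
# cp sendmail-slackware-tls-sasl.mc config.mc
# m4 ../m4/cf.m4 config.mc > /etc/mail/sendmail.cf
# /etc/rc.d/rc.sendmail restart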

Configuring Postfix on the Docker host

If you use Postfix instead of Sendmail, this is what you have to change in the default configuration:

In ‘/etc/postfix/master.cf‘, uncomment this line to make the Postfix server listen on port 587 as well as 25 (port 25 is often firewalled or otherwise blocked):

submission inet n - n - - smtpd

In ‘/etc/postfix/main.cf‘, add these lines at the bottom:

# ---
# Allow Docker containers to send mail through the host:
mynetworks_style = class
# ---

Assuming you have not configured SASL AUTH before you also need to add:

# ---
# The assumption is that you have created your server's SSL certificates
# using Let's Encrypt and 'dehydrated':
smtpd_tls_cert_file = /etc/dehydrated/certs/darkstar.lan/fullchain.pem
smtpd_tls_key_file = /etc/dehydrated/certs/darkstar.lan/privkey.pem
smtpd_tls_security_level = encrypt

# Enable SASL AUTH:
smtpd_sasl_auth_enable = yes
syslog_name = postfix/submission
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
# ---

After making modifications to the Postfix configuration, always run a check for correctness of the syntax, and do a reload if you don’t see issues:

# postfix check
# postfix reload

More details about SASL AUTH to be found in ‘/usr/doc/postfix/readme/SASL_README‘ on your own host machine.

Note: if you provide Postfix with SSL certificates through Let’s Encrypt (using the dehydrated tool) be sure to reload the Postfix configuration every time ‘dehydrated’ refreshes its certificates.

  1. In ‘/etc/dehydrated/hook.sh’ look for the ‘deploy_cert()‘ function and add these lines at the end of that function (perhaps the ‘apachectl‘ call is already there):

    # After successfully renewing our Apache certs, the non-root user 'dehydrated_user'
    # uses 'sudo' to reload the Apache configuration:
    sudo /usr/sbin/apachectl -k graceful
    # ... and uses 'sudo' to reload the Postfix configuration:
    sudo /usr/sbin/postfix reload
  2. Assuming you are not running dehydrated as root but instead as ‘dehydrated_user‘, you need to add a file in ‘/etc/sudoers.d/‘ – let’s name it ‘postfix_reload‘ – and copy this line into the file:

    dehydrated_user ALL=NOPASSWD: /usr/sbin/postfix reload

Success or failure

You can query your mail server to see if you were successful in adding SASL AUTH and TLS capabilities:

$ telnet smtp.darkstar.lan 587
Trying XXX.XXX.XXX.XXX...
Connected to smtp.darkstar.lan.
Escape character is '^]'.
220 smtp.darkstar.lan ESMTP Postfix
EHLO foo.org
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250-SMTPUTF8
250 CHUNKING
AUTH LOGIN
530 5.7.0 Must issue a STARTTLS command first
QUIT
221 2.0.0 Bye
Connection closed by foreign host.
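
The ‘530 Must issue a STARTTLS command first‘ response is actually good news: AUTH is only offered inside the encrypted session. To double-check that, you can let openssl set up the STARTTLS session for you and repeat the EHLO; look for a ‘250-AUTH PLAIN LOGIN‘ line in the response (type QUIT to leave):

$ openssl s_client -starttls smtp -crlf -connect smtp.darkstar.lan:587 -quiet
EHLO foo.org
QUIT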


Conclusion

I hope to have given a complete overview of how to get your own self-hosted AFFiNE server up and running; much of the information available online is quite scattered.
I’d love to hear from you if you were successful after following these instructions. Let me know your feedback below.

Have fun!

 

Slackware Cloud Server Series, Episode 9: Cloudsync for 2FA Authenticator

As promised in an earlier blog article, I am going to talk about setting up a ‘cloud sync’ backend server for the Ente Authenticator app.
This specific article deviates slightly from one of the goals of the series. So far, I have shown you how to run a variety of services on your own private cloud server for family, friends and your local community, using Single Sign-On (SSO) so that your users only have one account and password to worry about.
The difference here is that the Ente Auth backend server does not offer SSO (yet… although there’s a feature request to add OIDC functionality).

You will learn how to set up the Ente Auth backend server in a Docker Compose stack. Administration of the server is done via ‘ente-cli‘, which is contained in that Docker stack.

Check out the list below which shows past, present and future episodes in the series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.

    • Episode 1: Managing your Docker Infrastructure
    • Episode 2: Identity and Access management (IAM)
    • Episode 3 : Video Conferencing
    • Episode 4: Productivity Platform
    • Episode 5: Collaborative document editing
    • Episode 6: Etherpad with Whiteboard
    • Episode 7: Decentralized Social Media
    • Episode 8: Media Streaming Platform
    • Episode 9 (this article): Cloudsync for 2FA Authenticator
      Setting up an Ente backend server as a cloud sync location for the Ente Auth 2FA application (Android, iOS, web).
      Stop worrying that you’ll lose access to secure web sites when you lose your smartphone and with it, the two-factor authentication codes that it supplies. You’ll be up and running with a new 2FA authenticator in no time when all your tokens are stored securely and end-to-end encrypted on a backend server that is fully under your own control.

      • Introduction
      • Preamble
        • Web Hosts
        • Secrets
      • Apache reverse proxy configuration
      • Ente server setup
        • Preparations
        • Cloning the git repository and preparing the web app image
        • Considerations for the .env file
        • Creating the Docker Compose files
      • Running the server
      • Connecting a client
      • Administering the server
        • User registration
        • Ente CLI
        • Logging into and using the CLI
        • Enabling email (optional)
      • Account self-registration or not?
        • (1) pre-defining the OTT (one-time token) for your users
        • (2) configuring mail transport in Ente
        • Make the host accept mails from Ente container
      • Considerations
        • Using Ente server to store your photos
        • CORS
      • Conclusion
  • Episode 10: Workflow Management
  • Episode 11: Jukebox Audio Streaming
  • Episode 12: Local AI
  • Episode X: Docker Registry

Introduction

In the blog post that I recently wrote about Ente Auth, the open source 2FA authenticator, I mentioned that this application can save its token secrets to the cloud. This is an optional feature and requires that you create an account at ente.io. This cloud backend service is where Ente (the company) runs a commercial Photo hosting service with end-to-end encryption for which the authenticator is a bolt-on.
Even when you decline to sign up for an account, the authenticator app offers its full functionality. It is also able to import tokens from multiple other 2FA tools, and to export your 2FA seeds to an encrypted file. In other words, there is no vendor lock-in. You do not really need cloud sync, as long as you remember to make that manual backup of your 2FA secrets from time to time. These backups can later be restored even if you lose your phone and need to re-install the Ente authenticator app on a new phone.
The cloud sync is no more than a convenience, and it is completely secure, but I understand why some people would rather avoid creating an account and handing over their (encrypted) data to a third party. That’s the premise behind writing this very article.

Ente has open-sourced all its products: the authenticator and photo apps, but also the back-end server which stores not just your 2FA secrets but also your photo collections.
This article focuses on getting the Ente ‘Museum‘ server running as a self-hosted backend for the 2FA authenticator app – a server fully under your control. Your secret data will never leave your house and you can rest assured that your 2FA seeds are always synced to your server and stored with solid encryption. You can even create local Ente accounts for your friends and family.

This article concludes with some considerations about expanding the server’s capabilities and turning it into a full-blown picture library à la Google Photos.


Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

Web Hosts

For the sake of this instruction, I will use the hostname “https://ente.darkstar.lan” as the URL where the authenticator will connect to the Ente API endpoint, and “https://enteauth.darkstar.lan” as the URL where users will point their browser when they want to access the web app.

Setting up your domain (which will hopefully be something other than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalents of the following hosts have a web server running. They don’t have to serve any content yet, but we will add some blocks of configuration to their VirtualHost definitions during the steps outlined in the remainder of this article:

  • ente.darkstar.lan
  • enteauth.darkstar.lan

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.
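
For reference, a bare-bones HTTPS VirtualHost for one of these names could look like the block below. This is only a sketch; the certificate paths follow the Let’s Encrypt / dehydrated layout used elsewhere in this series, and the reverse proxy directives from a later section get added inside it:

<VirtualHost *:443>
  ServerName ente.darkstar.lan
  SSLEngine on
  SSLCertificateFile /etc/dehydrated/certs/darkstar.lan/fullchain.pem
  SSLCertificateKeyFile /etc/dehydrated/certs/darkstar.lan/privkey.pem
  # ... reverse proxy configuration from the "Apache reverse proxy configuration" section goes here ...
</VirtualHost>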

Secrets

All data is encrypted before Ente server stores it in its database or storage. The Docker stack uses several secrets to provide that encryption, next to your own password that only you know.

In this article, we will use example values for these secrets – be sure to generate and use your own strings here!

# Key used for encrypting customer emails before storing them in DB:
ENTE_KEY_ENCRYPTION=yvmG/RnzKrbCb9L3mgsmoxXr9H7i2Z4qlbT0mL3ln4w=
ENTE_KEY_HASH=JDYkOVlqSi9rLzBEaDNGRTFqRyQuQTJjNGF0cXoyajA3MklWTjc1cDBSaWxaTmh0VUViemlGRURFUFZqOUlNCg==
ENTE_JWT_SECRET=wRUs9H5IXCKda9gwcLkQ99g73NVUT9pkE719ZW/eMZw=

# Credentials for the Postgres database account:
ENTE_DB_USER=entedbuser
ENTE_DB_PASSWORD=9E8z2nG3wURGC0R51V5LN7/pVapwsSvJJ2fASUUPqG8=

# Credentials for the three hard-coded S3 buckets provided by MinIO:
ENTE_MINIO_KEY=AeGnCRWTrEZVk8VjroxVqp7IoHQ/fNn4wPL7PilZ+Hg=
ENTE_MINIO_SECRET=wdZPtr0rEJbFxr4GsGMB6mYojKahA3nNejIryeawloc=

Here are some nice convoluted ways to generate Base64-encoded strings you can use as secrets:

A 45-character string ending on ‘=’:
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1 | openssl dgst -binary -sha256 | openssl base64

An 89-character string ending on ‘==’:
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 256 | head -n 1 | openssl passwd -stdin -6 | fold -w 48 | head -n 1 | openssl base64 | tr -d '\n'

The ente.io documentation recommends this to generate the key and JWT hashes:
$ cd ente-git/server/
$ go run tools/gen-random-keys/main.go

 


Apache reverse proxy configuration

We are going to run Ente in a Docker container stack. The configuration will be such that the server will only listen for clients at a number of TCP ports at the localhost address (127.0.0.1).

To make our Ente API backend available to the authenticator at the address https://ente.darkstar.lan/ we are using a reverse-proxy setup. The flow is as follows: an authenticator (the client) connects to the reverse proxy and the reverse proxy connects to the Ente backend on the  client’s behalf. The reverse proxy (Apache httpd in our case) knows how to handle many simultaneous connections and can be configured to offer SSL-encrypted connections even when the backend can only communicate over clear-text un-encrypted connections.

Add the following reverse proxy lines to your VirtualHost definition of the “ente.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Require all granted
</Proxy>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line will tell apache to not to use the proxy for this folder:
ProxyPass "/.well-known/" "!"

<IfModule mod_ssl.c>
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"
</IfModule>

# Ente API endpoint is hosted on https://ente.darkstar.lan/
<Location />
  ProxyPass "http://127.0.0.1:8180/"
  ProxyPassReverse "http://127.0.0.1:8180/"
</Location>
# ---

To make our Ente Web App frontend available for the users who need read-only access to their 2FA codes, at the address https://enteauth.darkstar.lan/ , we are using a slightly different reverse-proxy setup.
Add the following reverse proxy lines to your VirtualHost definition of the “enteauth.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Require all granted
</Proxy>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line will tell apache to not to use the proxy for this folder:
ProxyPass "/.well-known/" "!"

<IfModule mod_ssl.c>
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"
</IfModule>

# Ente Auth Web App is hosted on https://enteauth.darkstar.lan/
<Location />
  ProxyPass "http://127.0.0.1:3322/"
  ProxyPassReverse "http://127.0.0.1:3322/"
</Location>
# ---

The hostname and TCP port numbers shown in bold green are defined elsewhere in this article; keep them in sync if you decide to use a different hostname or different port numbers.


Ente Server setup

Preparations

We will give the Ente server its own internal Docker network. That way, the inter-container communication stays behind its gateway and prevents snooping the network traffic.

Docker network

Create the network using the following command:

docker network create \
  --driver=bridge \
  --subnet=172.21.0.0/16 --ip-range=172.21.0.0/25 --gateway=172.21.0.1 \
  ente.lan

Select an as-yet unused network range for this subnet. You can find out which subnets are already defined for Docker by running this command:

ip route |grep -E '(docker|br-)'

The ‘ente.lan‘ network you created will be represented in the Ente docker-compose.yml file with the following code block:

networks:
  ente.lan:
    external: true

Create directories

Create the directory for the docker-compose.yml and other startup files:

# mkdir -p /usr/local/docker-ente

Create the directories to store data:

# mkdir -p /opt/dockerfiles/ente/{cli-data,cli-export,museum-data,museum-logs,postgres-data,minio-data,webauth-data}

Cloning the git repository and preparing the web app image

Ente offers a Docker image of their Museum server (the core of their product) via the public Docker image repositories, but there is no official Docker image for the Web Apps (Auth and Photos). Using the Auth Web App you will have read-only access to your 2FA tokens in a web browser, in case you don’t have your phone with you. That makes it a must-have in our container stack. The Auth Web App also allows you to create an account on the server if you don’t want to use the Mobile or Desktop App for that.

We need to build this Docker image locally. Building a Docker image involves cloning Ente’s git repository, writing a Dockerfile and then building the image via Docker Compose. Let’s go through these steps one by one.

Clone the repository in order to build the Auth Web App:

# cd /usr/local/docker-ente
# git clone https://github.com/ente-io/ente ente-git
# cd ente-git
# git submodule update --init --recursive
# cd -

Create the file ‘Dockerfile.auth‘ in the directory ‘/usr/local/docker-ente‘ and copy the following content into it:

FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
ARG NEXT_PUBLIC_ENTE_ENDPOINT=https://ente.darkstar.lan
ENV NEXT_PUBLIC_ENTE_ENDPOINT=${NEXT_PUBLIC_ENTE_ENDPOINT}
RUN yarn install && yarn build:auth
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/apps/auth/out .
RUN npm install -g serve
ENV PORT=3322
EXPOSE ${PORT}
CMD serve -s . -l tcp://0.0.0.0:${PORT}

NOTE: I chose to let the web app use a different port than the default ‘3000’, because on the Slackware Cloud Server that port is already occupied by the jitsi-keycloak adapter; I picked 3322 instead.
I also anticipate that the Auth Web App should be accessible outside of my home, so I point the App to the public API endpoint ente.darkstar.lan instead of a localhost address.
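
If you want to build (or later rebuild) this image separately, you can do so with Docker Compose once the ‘docker-compose.yml‘ shown further below is in place; a plain ‘docker-compose up -d‘ will otherwise build it automatically on first start:

# cd /usr/local/docker-ente
# docker-compose build ente-auth-web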

Considerations for the .env file

Docker Compose is able to read environment variables from an external file. By default, this file is called ‘.env‘ and must be located in the same directory as the ‘docker-compose.yml‘ file (in fact ‘.env‘ will be searched in the current working directory, but I always execute ‘docker-compose‘ in the directory containing its YAML file anyway).

In this environment file we are going to specify things like accounts, passwords, TCP ports and the like, so that they do not have to be referenced in the ‘docker-compose.yml‘ file or even in the process environment space. You can shield ‘.env‘ from prying eyes, thus making the setup more secure. Remember, we are going to store two-factor authentication secrets in the Ente database. You want those to be safe.

Apart from feeding Docker a set of values, we need to make several of these values find their way into the containers. The usual way for Ente server (not using Docker) is to write the configuration values into one or more configuration files. These parameters are all documented in the ‘local.yaml‘ file in the git repository of the project.
Basically we should leave the defaults in ‘local.yaml‘ alone and specify our own custom configuration in ‘museum.yaml‘ and/or ‘credentials.yaml‘.
These two files will be parsed in that order, and parameters defined later will override the values for those parameters that were defined before. Both files will be looked for in the current working directory, again that should be the directory where the ‘docker-compose.yml‘ file is located.

Now the good thing is that all of Ente’s configuration parameters can be overridden by environment variables. You can derive the relevant variable name from its corresponding config parameter, by simple string substitution. Taken verbatim from that file:

The environment variable should have the prefix “ENTE_“, and any nesting should be replaced by underscores.
For example, the nested string “db.user” in the config file can alternatively be specified (or be overridden) by setting an  environment variable named “ENTE_DB_USER“.

We are going to make good use of this.
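
To illustrate, here are a few of the parameters we will actually override, with the YAML key on the left and the equivalent environment variable on the right (the names are taken from the docker-compose.yml further below):

db.user                        ->  ENTE_DB_USER
db.password                    ->  ENTE_DB_PASSWORD
jwt.secret                     ->  ENTE_JWT_SECRET
s3.b2-eu-cen.key               ->  ENTE_S3_B2-EU-CEN_KEY
internal.hardcoded-ott.emails  ->  ENTE_INTERNAL_HARDCODED-OTT_EMAILS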

This is the content of the ‘/usr/local/docker-ente/.env‘ file:

# ---
# Provide an initial OTT if you decide not to setup an email server:.
# Change 'me@home.lan' to the email you will use to register your initial account.
# The string '123456' can be any 6-number string:
ENTE_HARDCODED_OTT_EMAIL="me@home.lan,123456"

# Various secret hashes used for encrypting data before storing to disk: 
ENTE_KEY_ENCRYPTION=yvmG/RnzKrbCb9L3mgsmoxXr9H7i2Z4qlbT0mL3ln4w=
ENTE_KEY_HASH=JDYkOVlqSi9rLzBEaDNGRTFqRyQuQTJjNGF0cXoyajA3MklWTjc1cDBSaWxaTmh0VUViemlGRURFUFZqOUlNCg==
ENTE_JWT_SECRET=wRUs9H5IXCKda9gwcLkQ99g73NVUT9pkE719ZW/eMZw=

# The Postgres database:
ENTE_DB_PORT=5432
ENTE_DB_NAME=ente_db
ENTE_DB_USER=entedbuser
ENTE_DB_PASSWORD=9E8z2nG3wURGC0R51V5LN7/pVapwsSvJJ2fASUUPqG8=

# MinIO credentials:
ENTE_MINIO_KEY=AeGnCRWTrEZVk8VjroxVqp7IoHQ/fNn4wPL7PilZ+Hg=
ENTE_MINIO_SECRET=wdZPtr0rEJbFxr4GsGMB6mYojKahA3nNejIryeawloc=

# We hard-code the IP address for Museum server,
# so that we can make it send emails:
MUSEUM_IPV4_ADDRESS=172.21.0.5
# ---

Note that the bold purple information in the ‘.env‘ file defines the initial user account, which you need to create immediately after first startup of the Ente Docker stack. You will be asked to assign a (strong) password. It’s this account which we are going to make the admin for the service.

Note that I kept having issues with some environment variables not getting filled with values inside the containers. I found out that a variable in the ‘.env‘ file with a dash ‘-‘ in its name would not be recognized inside a container; that is why I now only use capital letters and the underscore.

Creating the Docker Compose files

The  ‘Dockerfile.auth‘ and ‘.env‘ files from the previous chapters are already some of the Docker Compose files that we need.
In addition, we create a script ‘minio-provision.sh‘; two configuration files ‘museum.yaml‘ and ‘cli-config.yaml‘; and finally the ‘docker-compose.yml‘ file which defines the container stack.
You will find the contents of these files in this very section. All of them go into directory /usr/local/docker-ente/ .

museum.yaml

Create a file ‘museum.yaml‘ in the directory /usr/local/docker-ente/ . We won’t have a need for a ‘credentials.yaml‘ file.
Copy the following content into the file, which then provides a basic configuration to the Ente Museum server.
Note that some of the values remain empty – they are provided via the ‘.env‘ file:

# ---
# This defines the admin user(s):
internal:
  admins:
    - # Account ID (a string of digits - see 'Administering the server')

# This defines the Postgres database:
db:
  host: ente-postgres
  port:
  name:
  user:
  password:

# Definition of the MinIO object storage.
# Even though this will all run locally, the three S3 'datacenter' names below
# must remain the hard-coded values which Ente uses in their commercial offering;
# the endpoint/region/bucket values behind those names are not fixed.
s3:
  are_local_buckets: true
  b2-eu-cen:
    key:
    secret:
    endpoint: ente-minio:3200
    region: eu-central-2
    bucket: b2-eu-cen
  wasabi-eu-central-2-v3:
    key:
    secret:
    endpoint: ente-minio:3200
    region: eu-central-2
    bucket: wasabi-eu-central-2-v3
    compliance: false
  scw-eu-fr-v3:
    key:
    secret:
    endpoint: ente-minio:3200
    region: eu-central-2
    bucket: scw-eu-fr-v3

# Key used for encrypting customer emails before storing them in DB
# We will give them a value in the '.env' file of Docker Compose
# Note: to override a value that is specified in local.yaml in a
# subsequently loaded config file, you should specify the key
# as an empty string (`key: ""`) instead of leaving it unset.
key:
    encryption: ""
    hash: ""

# JWT secrets
jwt:
    secret: ""
# ---

cli-config.yaml

The ente CLI (command-line interface tool) to administer our Ente server needs to know where to connect to the Museum server API endpoint.
Add the following block into a new file called ‘/usr/local/docker-ente/cli-config.yaml‘. We will map this file into the CLI container using the docker-compose file.

# ---
# You can put this configuration file in the following locations:
# - $HOME/.ente/config.yaml
# - config.yaml in the current working directory
# - $ENTE_CLI_CONFIG_PATH/config.yaml

endpoint:
  # Since we run everything in a Docker stack,
  # the hostname equals the service name.
  # And since the CLI inside one container is going to connect
  # to the Museum container via the internal Docker network,
  # we need to access the internal port (8080),
  # instead of the published port (8180):
  api: "http://ente-museum:8080"

log:
  http: false # log status code & time taken by requests
# ---

minio-provision.sh

The script which we create as ‘/usr/local/docker-ente/minio-provision.sh‘ is used to provision the S3 buckets when this container stack first initializes. It is slightly modified from the original script found in the ente repository, because I do not want to have credentials inside the script. This is what it should look like when you create it:

#!/bin/sh
# Script used to prepare the minio instance that runs as part of the Docker compose cluster.
# Wait for the server to be up & running (script arguments 1 and 2 are the user/password):
while ! mc config host add h0 http://ente-minio:3200 $1 $2
do
  echo "waiting for minio..."
  sleep 0.5
done
cd /data
# Create the S3 buckets:
mc mb -p b2-eu-cen
mc mb -p wasabi-eu-central-2-v3
mc mb -p scw-eu-fr-v3
# ---

docker-compose.yml

Create a file ‘/usr/local/docker-ente/docker-compose.yml‘ with these contents:

# ---
name: ente
services:
  ente-museum:
    container_name: ente-museum
    image: ghcr.io/ente-io/server
    ports:
      - 127.0.0.1:8180:8080 # API endpoint
    depends_on:
      ente-postgres:
        condition: service_healthy
    # Wait for museum to ping pong before starting the webapp.
    healthcheck:
      test: [ "CMD", "echo", "1" ]
    environment:
      # All of the credentials needed to create the first user,
      # connect to the DB and MinIO and to encrypt data:
      ENTE_INTERNAL_HARDCODED-OTT_EMAILS: ${ENTE_HARDCODED_OTT_EMAIL}
      ENTE_KEY_ENCRYPTION: ${ENTE_KEY_ENCRYPTION}
      ENTE_KEY_HASH: ${ENTE_KEY_HASH}
      ENTE_JWT_SECRET: ${ENTE_JWT_SECRET}
      ENTE_DB_PORT: ${ENTE_DB_PORT}
      ENTE_DB_NAME: ${ENTE_DB_NAME}
      ENTE_DB_USER: ${ENTE_DB_USER}
      ENTE_DB_PASSWORD: ${ENTE_DB_PASSWORD}
      ENTE_S3_B2-EU-CEN_KEY: ${ENTE_MINIO_KEY}
      ENTE_S3_B2-EU-CEN_SECRET: ${ENTE_MINIO_SECRET}
      ENTE_S3_WASABI-EU-CENTRAL-2-V3_KEY: ${ENTE_MINIO_KEY}
      ENTE_S3_WASABI-EU-CENTRAL-2-V3_SECRET: ${ENTE_MINIO_SECRET}
      ENTE_S3_SCW-EU-FR-V3_KEY: ${ENTE_MINIO_KEY}
      ENTE_S3_SCW-EU-FR-V3_SECRET: ${ENTE_MINIO_SECRET}
    volumes:
      - ./museum.yaml:/museum.yaml:ro 
      - /opt/dockerfiles/ente/museum-logs:/var/logs
      - /opt/dockerfiles/ente/museum-data:/data
    networks:
      ente.lan:
        ipv4_address: ${MUSEUM_IPV4_ADDRESS}
        aliases:
          - ente-museum.ente.lan
    restart: always

  # Resolve "localhost:3200" in the museum container to the minio container.
  ente-socat:
    container_name: ente-socat
    image: alpine/socat
    network_mode: service:ente-museum
    depends_on:
      - ente-museum
    command: "TCP-LISTEN:3200,fork,reuseaddr TCP:ente-minio:3200"

  ente-postgres:
    container_name: ente-postgres
    image: postgres:12
    ports:
      - 127.0.0.1:${ENTE_DB_PORT}:${ENTE_DB_PORT}
    environment:
      POSTGRES_DB: ${ENTE_DB_NAME} 
      POSTGRES_USER: ${ENTE_DB_USER}
      POSTGRES_PASSWORD: ${ENTE_DB_PASSWORD}
    # Wait for postgres to be accept connections before starting museum.
    healthcheck:
      test:
        [
          "CMD",
          "pg_isready",
          "-q",
          "-d",
          "${ENTE_DB_NAME}",
          "-U",
          "${ENTE_DB_USER}"
        ]
      start_period: 40s
      start_interval: 1s
    volumes:
      - /opt/dockerfiles/ente/postgres-data:/var/lib/postgresql/data
    networks:
      - ente.lan
    restart: always

  ente-minio:
    container_name: ente-minio
    image: minio/minio
    # Use different ports than the minio defaults to avoid conflicting
    # with the ports used by Prometheus.
    ports:
      - 127.0.0.1:3200:3200 # API
      - 127.0.0.1:3201:3201 # Console
    environment:
      MINIO_ROOT_USER: ${ENTE_MINIO_KEY}
      MINIO_ROOT_PASSWORD: ${ENTE_MINIO_SECRET}
    command: server /data --address ":3200" --console-address ":3201"
    volumes:
      - /opt/dockerfiles/ente/minio-data:/data
    networks:
      - ente.lan
    restart: always

  ente-minio-provision:
    container_name: ente-minio-provision
    image: minio/mc
    depends_on:
      - ente-minio
    volumes:
      - ./minio-provision.sh:/provision.sh:ro
      - /opt/dockerfiles/ente/minio-data:/data
    networks:
      - ente.lan
    entrypoint: ["sh", "/provision.sh", "${ENTE_MINIO_KEY}", "${ENTE_MINIO_SECRET}"]
    restart: always

  ente-auth-web:
    container_name: ente-auth-web
    image: ente-auth-web:latest
    build:
      context: ./ente-git/web
      # The path for dockerfile is relative to the context directory:
      dockerfile: ../../Dockerfile.auth
      tags:
        - "ente-auth-web:latest"
    environment:
      NEXT_PUBLIC_ENTE_ENDPOINT: "https://ente.darkstar.lan"
    volumes:
      - /opt/dockerfiles/ente/webauth-data:/data
    ports:
      - 127.0.0.1:3322:3322
    depends_on:
      ente-museum:
        condition: service_healthy
    restart: always

  ente-cli:
    container_name: ente-cli
    image: ente-cli:latest
    build:
      context: ./ente-git/cli
      tags:
        - "ente-cli:latest"
    command: /bin/sh
    environment:
      ENTE_CLI_CONFIG_PATH: /config.yaml
    volumes:
      - ./cli-config.yaml:/config.yaml:ro
      # This is mandatory to mount the local directory to the container at /cli-data
      # CLI will use this directory to store the data required for syncing export
      - /opt/dockerfiles/ente/cli-data:/cli-data:rw
      # You can add additional volumes to mount the export directory to the container
      # While adding account for export, you can use /data as the export directory.
      - /opt/dockerfiles/ente/cli-export:/data:rw
    stdin_open: true
    tty: true
    restart: always

networks:
  ente.lan:
    external: true
# ---

In the service definitions for ‘ente-auth-web‘ and ‘ente-cli‘ we build the local images if they do not yet exist (i.e. in case we had not built them before) and we also tag them (for instance as ‘ente-auth-web:latest‘) so that they are more easily found in a ‘docker image ls‘ command.


Running the server

Now that we have created the docker-compose.yml (defining the container stack), the .env file (containing credentials and secrets, including the initial user and its one-time token), museum.yaml (outlining the basic configuration of Ente), cli-config.yaml (telling the CLI where to find the server), a Dockerfile.auth to build the ente-auth-web app and a minio-provision.sh script with which we initialize the MinIO S3-compatible storage… we are ready to fire up the barbeque.

If you hadn’t created the Docker network yet, do it now! See the “Docker network” section higher up.
Start the Docker container stack (seven containers in total) as follows:

# cd /usr/local/docker-ente
# docker-compose up -d

And monitor the logs if you think the startup is troublesome. For instance, check the logs for the Museum service (Ente’s backend) using the name we gave its container (ente-museum):

# docker-compose logs ente-museum

When the server is up and running, use a web browser to access the Ente Auth web app at https://enteauth.darkstar.lan/

You can test the server backend (the API endpoint) by ‘pinging‘ it. Just point curl (or a web browser) at the URL https://ente.darkstar.lan/ping – it should return ‘pong‘ in the form of a JSON string which looks a lot like this:

{"id":"qxpxd7j59pvejjzpz7lco0zroi2cq8l13gq9c2yy8pbh6b6lg","message":"pong"}


Connecting a client

Download and install the Ente Auth mobile or desktop app, then start it.
When the start screen appears, you would normally click “New to Ente” and then get connected to Ente’s own server at api.ente.io. We want to connect to our self-hosted instance instead but how to enter the server URL?
Ente developers implemented this as follows, and it works for the mobile as well as the desktop clients:

Tap the opening screen seven (7) times to access “Developer settings” (on the desktop you’d perform 7 mouse clicks):

When you click “Yes“,  a new prompt for “Server Endpoint” will appear.
Here you enter “https://ente.darkstar.lan” which is our API endpoint:

The client will then proceed to connect to your server and take you back to the opening screen.

Click “New to Ente“. You can now sign in using the initial email address which we configured via the hard-coded OTT entry in the ‘.env‘ file. Be sure to define a strong password! The security of your 2FA secrets is as strong as that password.
The app will then ask you for a One-Time Token (OTT). This is the 6-digit string which we configured along with the email of the initial user account, again in the ‘.env‘ file. Enter this code and the registration process is complete.

Now logoff and login again to the authenticator and then open the Museum log file:

# docker-compose logs ente-museum |less

Search for ‘user_id‘ to find your user’s 16-digit ID code. We will need that ID later on when we promote your account to admin.
The relevant line will look like this:

ente-museum | INFO[1071]request_logger.go:100 func3 outgoing
client_ip=<your_ip> client_pkg=io.ente.auth.web client_version= h_latency=2.59929ms latency_time=2.59929ms query= req_body=null req_id=35f11344-c7ab-48a6-b2c4-25cd82df3a23 req_method=POST req_uri=/users/logout status_code=200 ua=<your_browser_useragent> user_id=5551234567654321

NOTE: That ‘user_id‘ will initially reflect with a value of ‘0’ but it will show the correct value after you logged off and logged back in again to the authenticator app.
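
A quick way to list all user IDs that have appeared in the log so far is this one-liner (just a convenience, not something from the Ente documentation):

# docker-compose logs ente-museum | grep -o 'user_id=[0-9]*' | sort -u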


Administering the server

User registration

On your self-hosted server you may want to disable self-registration of user accounts. See the section “Account self-registration or not?” for considerations regarding this choice. If you decide to keep email capabilities disabled, you’ll have to assist your family and friends when they register their account: you need to fetch their One-Time Token (OTT) from the Museum server log.
You access the Museum server logs via the following command (‘ente-museum‘ being the name of the container):

# docker-compose logs ente-museum |less

And then grep the output for the keyword “SendEmailOTT“.
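
Or do it in one go:

# docker-compose logs ente-museum | grep SendEmailOTT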

To give an example of that, when registering the initial user account for whom we hard-coded the email address and OTT code, you would find the following lines in your server log:

ente-museum | WARN[0082]userauth.go:102 SendEmailOTT returning hardcoded ott for me@home.lan
ente-museum | INFO[0082]userauth.go:131 SendEmailOTT Added hard coded ott for me@home.lan : 123456

When subsequent new users go through the registration process, the log entry will be different, but “SendEmailOTT” will be present in the line that contains the OTT code (519032) whereas the next line will mention the new user’s registration email (friend@foo.bar):

ente-museum | INFO[2626]userauth.go:125 SendEmailOTT Added ott for navQ5xarR9du1zw3QmmnwW4F48dAm7f52okwSkWFFS4=: 519032
ente-museum | INFO[2626]email.go:109 sendViaTransmail Skipping sending email to friend@foo.bar: ente Verification Code

That way, you can share the OTT with your user during the account registration so that they can complete the process and start using the authenticator.

Don’t worry if you can not share the OTT code with a new user immediately. The user can close their browser after creating the account and before entering the OTT.
When they re-visit https://enteauth.darkstar.lan/ in future and login again, the Web App will simply keep asking for the OTT code, until the user enters it, thereby completing the registration process.

Ente CLI

You can administer the accounts on your Museum server via the CLI (Ente’s Command Line Interface) tool, which is part of the Docker stack and runs in its own container. Therefore we execute the ‘ente-cli‘ tool via Docker Compose, like in this example (the container is named ‘ente-cli‘ and the program inside the container is also called ‘ente-cli‘):

# docker-compose exec ente-cli /bin/sh -c "./ente-cli admin list-users"

NOTE: to use the CLI for any administrative tasks, you need to have whitelisted the server’s admin user(s) beforehand.

So let’s copy our own account’s ID 5551234567654321 to the server configuration in order to make it a server administrator. Ensure that the ‘internal‘ section in ‘museum.yaml‘ in the root directory for our Docker configuration (/usr/local/docker-ente/) looks like this:

# ---
internal:
  admins:
    - 5551234567654321
# ---

That 16-digit number is your own initial account ID which you retrieved by peeking into the server logs – see the previous section ‘Connecting a client‘.

After adding your account ID to the ‘museum.yaml‘ file, you need to bring your Docker stack down and back up again:

# cd /usr/local/docker-ente
# docker-compose down
# docker-compose up -d

Now your account has been promoted to admin status.

Logging in to and using the CLI

Before you can administer any user account, you have to login to the CLI. Otherwise the CLI admin actions will show a message like this:

# docker-compose exec ente-cli /bin/sh -c "./ente-cli admin list-users"
Assuming me@home.lan as the Admin 
------------
Error: old admin token, please re-authenticate using `ente account add`

All CLI commands and options are documented in the repository.

# docker-compose exec ente-cli /bin/sh -c "./ente-cli account add"
Enter app type (default: photos): auth
Enter export directory: /data
Enter email address: me@home.lan
Enter password: 
Please wait authenticating...
Account added successfully
run `ente export` to initiate export of your account data

The ‘app type‘ must be ‘auth‘. The ‘/data‘ directory is internal to the container; we mapped it to ‘/opt/dockerfiles/ente/cli-export‘ in the ‘docker-compose.yml‘ file. You don’t have to run the proposed “ente export” command, since for now there is nothing to export.

The email address and password that are prompted for, are your own admin email ‘me@home.lan‘ and the password you configured for that account.

Now that we have ‘added’ our account to the CLI (aka we logged into the tool with our admin credentials), let’s see what it can tell us about ourselves:

# docker-compose exec ente-cli /bin/sh -c "./ente-cli account list"
Configured accounts: 1
====================================
Email: me@home.lan
ID: 5551234567654321
App: auth
ExportDir: /data
====================================

As a follow-up, let’s use our new admin powers to generate a listing of existing user accounts:

# docker-compose exec ente-cli /bin/sh -c "./ente-cli admin list-users"
Assuming me@home.lan as the Admin 
------------
Email: me@home.lan, ID: 5551234567654321, Created: 2024-07-15
Email: friend@foo.bar, ID: 5555820399142322, Created: 2024-07-15

Account self-registration or not?

Since Ente’s self-hosted instance is not yet able to use Single Sign-On, the question becomes: should you disable self-registration on your Ente Auth server?

Out of the box, the Ente Auth server accepts everyone with a valid email address and creates an account for them. The account won’t however be activated until after the new user enters a One-Time Token (OTT) which the Ente server sends to the user’s email address.

You could simply not configure mail transport on the Ente server, so that nobody will receive their one-time password which they need to be able to login. If you do want to allow people to register accounts themselves without your involvement, then you need to setup email transport capability. Here follow those two scenarios – apply only one of them.

(1) pre-defining the OTT (one-time token) for your users

Let’s look at the case where you don’t configure email transport. For your users to create an account and activate that, they still have to enter a six-digit One-Time Token (OTT) after registering their email address and setting a password.

Earlier in this article, I pointed you to the server logs where you’ll find the OTT codes for your new users.

But as the server owner you can also choose to pre-define as many hard-coded OTT’s as needed – as long as you know in advance which email address your user is going to use for the registration. You then share their pre-defined 6-digit code with that person so that they can complete the registration process and can start using the app.

To pre-define one or more OTT, you add the following to the file ‘museum.yaml‘:

internal:
  hardcoded-ott:
    emails:
      - "example@example.org,123456"
      - "friend@foo.org,987654"

After changing ‘museum.yaml’ you need to restart/rebuild the container stack (‘docker-compose down‘ followed by ‘docker-compose up -d‘) because this file must be included inside the container.

I myself did not want to pre-fill the server with a plethora of hard-coded OTT strings. I want to be in control over who registers, and when. So whenever someone asks me for an account on the Ente server, I ask them which email address they will be using and then scan the Ente Museum log for the OTT which will be generated but not sent out. I will communicate the OTT to my new user – via text message or by just telling them. It’s up to you to pick what works best for you.

(2) configuring mail transport in Ente

Next is the case where you don’t want to be involved in getting new accounts activated. In that case, the Ente server needs to be able to send One-Time Tokens to the email address of a new user during the registration process.
Configure SMTP capability by adding the following to the file ‘museum.yaml‘:

smtp:
  host: <your_smtp_server_address>
  port: <usually 587>
  username: <required>
  password: <required>
  # The email address from which to send the email.
  # Set this to an email address
  # whose credentials you're providing. If left empty,
  # the reply-to address will be verification@ente.io
  email: <your_own_replyto_address>

Provide the hostname or IP  address and the TCP port for your own SMTP server. Ente Museum requires a TLS encrypted connection as well as SMTP AUTH, see further below on how to configure your mailserver accordingly.
The ’email:’ field should be set to a return email address which allows people to reply to you in case of issues during the verification stage. For instance, Ente themselves use ‘verification@ente.io‘ here but that’s not very useful for us.
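
Put together with the mail user we will create further below (‘verification‘), a filled-in block could look like this. The host and email address are examples; use whatever name the Museum container can actually use to reach your MTA:

smtp:
  host: darkstar.lan
  port: 587
  username: verification
  password: thepassword
  email: verification@darkstar.lan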

Make the host accept mail from Ente container

When the Museum server starts sending emails from its Docker container, we want Sendmail or Postfix to accept and process these. What commonly happens when an SMTP server receives email from an unknown IP address is that it rejects the message with “Relaying denied: ip name lookup failed“. We don’t want that to happen, so I had already added some glue in the Docker configuration to create a level of trust and understanding.

On the host side we have some remaining steps to complete:

  • Announce the Docker IP/hostname mapping
  • Setup a local DNS server
  • Configure SASL authentication mechanisms to be used by the MTA (mail transport agent, eg. Postfix)
  • Create a system user account to be used by Ente Museum when authenticating to the MTA
  • Add SASL AUTH and also TLS encryption capabilities to the MTA

Assign IP address to the Museum server

This section contains the part which is already done. The next sections document what still needs to be done.

To the ‘.env‘ file I had already added some lines to assign the value of an IP address to variable ‘$MUSEUM_IPV4_ADDRESS‘ in the ‘ente.lan‘ network range:

# We hard-code the IP address for Museum server,
# so that we can allow it to send emails:
MUSEUM_IPV4_ADDRESS=172.21.0.5

In ‘docker-compose.yml‘ I had already added the following lines, using the above variable to assign the hard-coded IP to the ‘ente-museum‘ container:

networks:
  ente.lan:
    ipv4_address: ${MUSEUM_IPV4_ADDRESS}
    aliases:
      - ente-museum.ente.lan

Add IP / name mapping to the host

In ‘/etc/hosts‘ you still need to add the following:

172.21.0.5    ente-museum ente-museum.ente.lan

And to ‘/etc/networks‘ add this line:

ente.lan   172.21

DNS serving local IPs

Under the assumption that your Cloud Server does not act as a LAN’s DNS server, we will use dnsmasq as the local nameserver. Dnsmasq is able to use the content of /etc/hosts and /etc/networks when responding to DNS queries. We can use the default, unchanged ‘/etc/dnsmasq.conf‘ configuration file.

But first, add this single line at the top of the host server’s ‘/etc/resolv.conf‘ (it may already be there as a result of setting up Keycloak), so that all local DNS queries will be handled by our local dnsmasq service:

nameserver 127.0.0.1

If you have not yet done so, (as root) make the startup script ‘/etc/rc.d/rc.dnsmasq‘ executable and start dnsmasq manually (Slackware will take care of starting it on every subsequent reboot):

# chmod +x /etc/rc.d/rc.dnsmasq
# /etc/rc.d/rc.dnsmasq start

If dnsmasq is already running (eg. when you have Keycloak running and sending emails) then send SIGHUP to the program as follows:

# killall -HUP dnsmasq

Done! Check that it’s working and continue to the next step:

# nslookup ente-museum.ente.lan
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: ente-museum.ente.lan
Address: 172.21.0.5

Configuring SASL

The mailserver aka MTA (Sendmail or Postfix) requires that remote clients authenticate themselves. The Simple Authentication and Security Layer (SASL) protocol is used for that, but typically these MTAs do not implement SASL themselves. There are two usable SASL implementations available on Slackware: Cyrus SASL and Dovecot; I picked Cyrus SASL simply because I know it better.
We need to configure the method of SASL authentication for the SMTP daemon, which is via the saslauthd daemon. That one is not started by default on Slackware.

If the file ‘/etc/sasl2/smtpd.conf‘ does not yet exist, create it and add the following content:

pwcheck_method: saslauthd
mech_list: PLAIN LOGIN

Don’t add any mechanisms to that list other than PLAIN and LOGIN. The resulting transfer of cleartext credentials is the reason why we also wrap the communication between mail client and server in a TLS encryption layer.

If the startup script ‘/etc/rc.d/rc.saslauthd‘ is not yet executable, make it so and start it manually this time (Slackware will take care of starting it on every subsequent reboot):

# chmod +x /etc/rc.d/rc.saslauthd
# /etc/rc.d/rc.saslauthd start

Create the mail user

We need a system account to allow Ente to authenticate to the SMTP server. Let’s go with userid ‘verification‘.
The following two commands will create the user and set a password:

# /usr/sbin/useradd -c "Ente Verification" -m -g users -s /bin/false verification
# passwd verification

Write down the password you assigned to the user ‘verification‘. Both this userid and its password need to be added to the ‘username:‘ and ‘password:‘ definitions in the ‘smtp:‘ section of our ‘museum.yaml‘ file.

When that’s done, you can test whether you configured SASL authentication correctly by running:

# testsaslauthd -u verification -p thepassword

… which should reply with:

0: OK "Success."

Configuring Sendmail

Since Postfix replaced Sendmail as the default MTA in Slackware a couple of years ago already, I am going to be concise here:
Make Sendmail aware that the Ente Museum container is a known local host by adding the following line to “/etc/mail/local-host-names” and restarting the sendmail daemon:

ente-museum.ente.lan

The Sendmail package for Slackware provides a ‘.mc’ file to help you configure SASL-AUTH-TLS in case you had not yet implemented that: ‘/usr/share/sendmail/cf/cf/sendmail-slackware-tls-sasl.mc‘.

Configuring Postfix

If you use Postfix instead of Sendmail, this is what you have to change in the default configuration:

In ‘/etc/postfix/master.cf‘, uncomment this line to make the Postfix server listen on port 587 as well as 25 (port 25 is often firewalled or otherwise blocked):

submission inet n - n - - smtpd

In ‘/etc/postfix/main.cf‘, add these lines at the bottom:

# ---
# Allow Docker containers to send mail through the host:
mynetworks_style = class
# ---

Assuming you have not configured SASL AUTH before you also need to add:

# ---
# The assumption is that you have created your server's SSL certificates
# using Let's Encrypt and 'dehydrated':
smtpd_tls_cert_file = /etc/dehydrated/certs/darkstar.lan/fullchain.pem
smtpd_tls_key_file = /etc/dehydrated/certs/darkstar.lan/privkey.pem
smtpd_tls_security_level = encrypt

# Enable SASL AUTH:
smtpd_sasl_auth_enable = yes
syslog_name = postfix/submission
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
# ---

After making modifications to the Postfix configuration, always run a check for correctness of the syntax, and do a reload if you don’t see issues:

# postfix check
# postfix reload

More details about SASL AUTH can be found in ‘/usr/doc/postfix/readme/SASL_README‘ on your own host machine.

Note: if you provide Postfix with SSL certificates through Let’s Encrypt (using the dehydrated tool) be sure to reload the Postfix configuration every time ‘dehydrated’ refreshes its certificates.

  1. In ‘/etc/dehydrated/hook.sh’ look for the ‘deploy_cert()‘ function and add these lines at the end of that function (perhaps the ‘apachectl‘ call is already there):

    # After successfully renewing our Apache certs, the non-root user 'dehydrated_user'
    # uses 'sudo' to reload the Apache configuration:
    sudo /usr/sbin/apachectl -k graceful
    # ... and uses 'sudo' to reload the Postfix configuration:
    sudo /usr/sbin/postfix reload
  2. Assuming you are not running dehydrated as root but instead as ‘dehydrated_user‘, you need to add a file in ‘/etc/sudoers.d/‘ – let’s name it ‘postfix_reload‘ – and copy this line into the file:

    dehydrated_user ALL=NOPASSWD: /usr/sbin/postfix reload
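
    You can verify the syntax of that new sudoers snippet before relying on it:

    # visudo -c -f /etc/sudoers.d/postfix_reload
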
Success or failure

You can query your mail server to see if you were successful in adding SASL AUTH and TLS capabilities:

$ telnet smtp.darkstar.lan 587
Trying XXX.XXX.XXX.XXX...
Connected to smtp.darkstar.lan.
Escape character is '^]'.
220 smtp.darkstar.lan ESMTP Postfix
EHLO foo.org
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250-SMTPUTF8
250 CHUNKING
AUTH LOGIN
530 5.7.0 Must issue a STARTTLS command first
QUIT
221 2.0.0 Bye
Connection closed by foreign host.
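
That ‘530’ response is actually good news: it shows that the server refuses authentication over a cleartext connection, exactly as intended. If you want to test the complete STARTTLS plus SASL AUTH path, you can let openssl take care of the TLS handshake, something along these lines:

$ openssl s_client -connect smtp.darkstar.lan:587 -starttls smtp -crlf -quiet

Then type ‘EHLO foo.org’ yourself; the server response should now include an ‘AUTH’ capability line listing the PLAIN and LOGIN mechanisms, which means your mail client (and the Ente Museum container) will be able to authenticate.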

If you set up your mailserver successfully, you will find lines like these in the ente-museum log whenever a new user registers their Ente account:

ente-museum | INFO[0083]request_logger.go:83 func3 incoming client_ip=<your_ip> client_pkg=io.ente.auth.web client_version= query= req_body={"email":"friend@foo.org","client":"totp"} req_id=dda6b689-8e96-4e66-bade-68b53fe2d910 req_method=POST req_uri=/users/ott ua=<your_browser_useragent>
ente-museum | INFO[0083]userauth.go:125 SendEmailOTT Added ott for JLhW32XlNU8GchWneMPajdKW+UEDpYzh8+6Yjf6VhK4=: 654321
ente-museum | INFO[0084]request_logger.go:100 func3 outgoing client_ip=<your_ip> client_pkg=io.ente.auth.web client_version= h_latency=130.00083ms latency_time=130.00083ms query= req_body={"email":"friend@foo.org","client":"totp"} req_id=dda6b689-8e96-4e66-bade-68b53fe2d910 req_method=POST req_uri=/users/ott status_code=200 ua=<your-browser_useragent> user_id=0

Considerations

Using Ente server to store your photos

Ente uses an S3 (Amazon’s Simple Storage Service) compatible object storage. The Docker Compose stack contains a MinIO container which effectively hides your local hard drive behind its S3-compatible API. The company Ente itself uses commercial S3 storage facilities in multiple datacenters to provide redundancy and reliability in storing your data. That may be too much (financially) for your self-hosted server though.
In the scope of this article, all you’ll store anyway is your encrypted 2FA token collection. However, Ente’s main business model is to act as an alternative to Google Photos or Apple iCloud, i.e. all your photos can also be stored in an S3 bucket. You just need a different app; your server is already waiting for your picture uploads. Your Ente Auth user account is the same one you will use for Ente Photos.

The self-hosted Ente server is just as feature-rich as the commercial offering! You can instruct the mobile Photos client to automatically upload the pictures you take with your phone, for instance. Documenting how to set up the Photos Web app is currently beyond the scope of this article, because similar functionality is already provided by Nextcloud Sync and documented in this same Cloud Server series.

If you want to have a public Photos Web application (similar to how we have built an Auth Web app) that’s relatively straightforward: just look at how the Auth Web app is built in the ‘docker-compose.yml’ and copy that block to add an extra container definition (substituting ‘auth’ with ‘photos’). More info can be found in this Ente Discussion topic.

Now, the issue with acting as a picture library is that you need to grant Ente Photo clients direct access to your S3-compatible storage. The Ente Photos server only acts as the broker between your client app and the remote storage. The photo upload happens through direct connection to the S3 bucket.
When you host your own Ente server, that direct access means you’ll have to open a separate hole in your NAT router (if your server is behind NAT) and/or in your firewall. Your Ente (mobile) client wants to connect to TCP port 3200 of your server to reach the MinIO container. Perhaps this can be solved by configuring another reverse proxy, but I have not actually tried and tested this.

Note that you should not open the MinIO console port (3201) to the general public!

Another thing to be aware of is that users of the Photo app, when they create an account on your server, will effectively be given a “trial period” of one year in which they can store up to 1 GB of photos. The first thing to do after a new account gets created on your server is to remove those limitations by running:

docker-compose exec ente-cli /bin/sh -c "./ente-cli account admin update-subscription --no-limit true"

which increases the storage lifetime to 100 years from now, and the maximum storage capacity to 100 TB. If you need custom values for lifetime and storage capacity, then you’ll have to edit the database directly using a SQL query.


CORS

When testing the Ente Web Auth app against my new Ente backend server, I ran into a “Network Error” when submitting my new email address and password, which prevented my account registration from completing.

I debugged the connection issue in the Chromium browser by opening the Developer Tools sidebar (Shift-Ctrl-I) and keeping the ‘Network‘ tab open while making the connection. It turned out that the “Network Error” was caused by CORS (Cross-Origin Resource Sharing) headers which I had configured my Apache server to send along with every response to client requests. By inspecting the logs of the ‘ente-museum’ container I was able to verify that my browser input never reached that container.

What I had configured on my server were the following lines, which I had taken from this discussion on Stack Overflow – I think I wanted to address a similar CORS issue in the past – but now they were really not helping me with Ente:

Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Headers "Authorization"
Header always set Access-Control-Allow-Methods "GET"
Header always set Access-Control-Expose-Headers "Content-Security-Policy, Location"
Header always set Access-Control-Max-Age "600"

I fixed this by simply removing this section from my Apache httpd configuration, but I suppose sometimes you cannot just remove a bothersome piece of configuration. In that case, you could prevent Apache from sending these headers, but only for the Ente virtual hosts, by adding these lines inside the ‘VirtualHost‘ declaration for ente.darkstar.lan:

Header unset Access-Control-Allow-Origin
Header unset Access-Control-Allow-Headers
Header unset Access-Control-Allow-Methods
Header unset Access-Control-Expose-Headers

After removing these CORS Header definitions, the Web Auth app could successfully register my new account on the Ente server.
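
If you want to confirm from the command line that the offending headers are really gone after restarting Apache, a quick check with curl will do (substitute your own Ente hostname):

$ curl -sI https://ente.darkstar.lan/ | grep -i '^access-control'

An empty result means Apache is no longer sending any CORS headers for that virtual host.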

Conclusion

When I sort-of promised to write up the documentation on how to self-host an Ente backend server, I did not know what I was getting into. There’s a lot of scattered documentation about how to setup a Dockerized Ente server. That documentation also seems to have been written with developers in mind, rather than to aid a security-minded spirit who wants complete control over their secret data.
I managed to get all of this working flawlessly and I hope that this Cloud Server episode will benefit more than just my fellow Slackware users!

Leave your constructive comments below where I can hopefully answer them to your satisfaction.

Have fun!

 

Slackware Cloud Server Series, Episode 8: Media Streaming Platform

Here is a new installment in the series which teaches how you can run a variety of services on your own private cloud server for family, friends and your local community, remaining independent of any of the commercial providers out there.

Today we will look into setting up a media streaming platform. You probably have a subscription – or multiple! – for Netflix, Prime, Disney+, AppleTV, HBO Max, Hulu, Peacock or any of the other streaming media providers. But if you are already in possession of your own local media files (movies, pictures, e-books or music), you will be excited to hear that you can make those media available in a very similar fashion to those big platforms. I.e. you can stream – and enable others to stream! – these media files from just about anywhere on the globe.
Once we have this streaming server up and running I will show you how to set up our Identity Provider (Keycloak) just like we did for the other services I wrote about in the scope of this article series. The accounts that you have already created for the people that matter to you will then also have access to your streaming content via Single Sign-On (SSO).

Check out the list below which shows past, present and future episodes in the series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.

  • Episode X: Docker Registry

Introduction

Before we had on-demand video streaming services, linear television was basically the only option to consume movies, documentaries and shows in your home. The broadcasting company decides on the daily programming and you do not have any choice in what you would like to view at any time of any day. Your viewing will be interrupted by advertisements that you cannot skip. Of course, if there’s nothing of interest on television, you could rent a video-tape or DVD to watch a movie in your own time, instead of going to the theater.
Actually, Netflix started as an innovative DVD rental company, sending their customers DVD’s by regular postal mail. They switched that DVD rental service to a subscription model but eventually realized the potential of subscription-based on-demand video streaming. The Netflix we know today was born.

Nowadays we cannot imagine a world without the ability to fully personalize the way you consume movies and tv-shows. But that creates a dependency on a commercial provider. In this article I want to show you how to set up your own private streaming platform which you fully control. The engine of that platform will be Jellyfin. This is a fully open source program, descended from the final open source version of Emby before that became a closed-source product. Jellyfin has a client-server model where the server is under your control. You will learn how to set it up and run it as a Docker container. Jellyfin offers a variety of clients which can connect to this server and stream its content: there’s a client program for Android phones and Android TV, WebOS and iOS, and there’s always the web client which is offered to browsers that connect to the server’s address.
The Jellyfin interface for clients is clean and informative, on par with commercial alternatives. The server collects information about your local content from online sources – as scheduled tasks or whenever you add a new movie, piece of music or e-book.

The good and bad of subscription-based streaming services

The Netflix business model has proven so successful that many content providers followed its lead, and these days we are spoiled with an abundance of viewing options. If there’s anything you would like to watch, chances are high that the video is available for streaming already. The same is true for music – a Spotify subscription opens up a huge catalog of popular music, negating the need to buy physical audio CD media.
The flipside of the coin is of course the fact that we are confronted with a fundamentally fragmented landscape of video streaming offerings. There’s a lot of choice but is that good for the consumer?

Streaming video platforms strategically focus on exclusive content to entice consumers into subscribing to their service offering. Exclusive content can be the pre-existing movie catalog of a film studio (see MGM+, Paramount+, Disney+ and more) or else entirely new content – movies/series that are commissioned by a streaming platform. Netflix, Apple TV and Amazon Prime are pouring billions of euros into the creation of new content since they do not have a library of existing content that they own. Social media are used as the battlefield where these content providers try to win you over and make you subscribe. News outlets review the content which premieres on all these platforms, and you read about that and want to take part in the excitement.

The result is that you, the consumer, are very much aware of all those terrific new movies and series that are released on streaming platforms, but the only way to view them all is to pay for them all. The various platforms will not usually license their own cool stuff to other providers. So what happens? You subscribe to multiple platforms and ultimately you are paying mostly for content you’ll never watch.
Worse, there seems to be a trend where these subscription fees are increasing faster than your salary is growing, and on top of that, the cheaper subscriptions not only reduce the viewing quality but also force you to watch advertisements. At that point you are basically back at the thing you were trying to get away from: linear television riddled with ads from which you cannot escape.
The big companies make big bucks and have created an over-priced product which sucks. The consumer loses.

Bottom line: any subscription-based service model gives you access to content only for as long as you pay. And sometimes you even need to pay extra – simply to have a comfortable viewing experience. You do not and will not own any of that content. When you cancel your subscription you lose access to the content permanently.

In order to gain control over what you want to view, where and when, there is quite the choice when it comes to setting up the required infrastructure. Look at Kodi, Plex, Emby or Jellyfin for instance. All of those programs implement private streaming servers, with slightly different goals. The catch is that they can only stream the content which you already own and store on local disks. Did you already back up your DVD’s and music CD’s to hard-disk? Then you are in luck.

Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

For the sake of this instruction, I will use “https://jellyfin.darkstar.lan” as the URL where users will connect to the Jellyfin server.
Furthermore, “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 to read how we setup Keycloak as our identity provider).

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • jellyfin.darkstar.lan

I expect that your Keycloak application is already running at your own real-life equivalent of https://sso.darkstar.lan/auth .

Using a Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Apache reverse proxy configuration

We are going to run Jellyfin in a Docker container. The configuration will be such that the server will only listen for clients at a single TCP port at the localhost address (127.0.0.1).
To make our Jellyfin available for everyone at the address https://jellyfin.darkstar.lan/ we are using a reverse-proxy setup. This step can be done after the container is up and running, but I prefer to configure Apache in advance of the Jellyfin server start. It is a matter of preference.
Add the following reverse proxy lines to your VirtualHost definition of the “jellyfin.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Require all granted
</Proxy>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line tells Apache not to use the proxy for this folder:
ProxyPass "/.well-known/" "!"

<IfModule mod_ssl.c>
  # Tell Jellyfin that the forwarded requests originally came in over TLS connections:
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"
</IfModule>

# To work on WebOS TV, which runs the Jellyfin client in an I-Frame,
# you need to mitigate the SAMEORIGIN setting for X-Frame-Options
# if you configured this in your Apache httpd,
# or else you will just see a black screen after login:
Header always unset X-Frame-Options env=HTTPS

# Jellyfin hosted on https://jellyfin.darkstar.lan/
<Location /socket>
  ProxyPreserveHost On
  ProxyPass "ws://127.0.0.1:8096/socket"
  ProxyPassReverse "ws://127.0.0.1:8096/socket"
</Location>

<Location />
  ProxyPass "http://127.0.0.1:8096/"
  ProxyPassReverse "http://127.0.0.1:8096/"
</Location>
# ---
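
The modules listed at the top of that snippet need to be loaded of course, and for the plain HTTP proxying you will also want mod_proxy_http. On a stock Slackware installation this boils down to uncommenting (or adding) the relevant LoadModule lines in ‘/etc/httpd/httpd.conf’; the module paths below are what a 64-bit Slackware uses, so adjust them to your own setup:

LoadModule proxy_module lib64/httpd/modules/mod_proxy.so
LoadModule proxy_http_module lib64/httpd/modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module lib64/httpd/modules/mod_proxy_wstunnel.so
LoadModule ssl_module lib64/httpd/modules/mod_ssl.so
LoadModule http2_module lib64/httpd/modules/mod_http2.so
LoadModule headers_module lib64/httpd/modules/mod_headers.so
LoadModule remoteip_module lib64/httpd/modules/mod_remoteip.so

Afterwards, restart Apache httpd (for instance with ‘apachectl -k graceful‘) so that the new proxy configuration becomes active.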

Jellyfin server setup

Prepare the Docker side

The Jellyfin Docker container runs with a specific internal user account. In order to recognize it on the host and to apply proper access control to the data which Jellyfin will generate on your host, we start by creating that user account on the host:

# /usr/sbin/groupadd -g 990 jellyfin
# /usr/sbin/useradd -c "Jellyfin" -d /opt/dockerfiles/jellyfin -M -g jellyfin -s /bin/false -u 990 jellyfin

Create the directories where our Jellyfin server will save its configuration and media caches, and let the user jellyfin own these directories:

# mkdir -p /opt/dockerfiles/jellyfin/{cache,config}
# chown -R jellyfin:jellyfin /opt/dockerfiles/jellyfin

If you want to enable GPU hardware-assisted video transcoding in the container, you have to add the jellyfin user as a member of the video group:

# gpasswd -a jellyfin video

Additionally you’ll need a dedicated Nvidia graphics card in your host computer, the Nvidia driver installed on the host, and the Nvidia Container Toolkit configured in Docker. This is an advanced setup which is outside of the scope of this article.

With the preliminaries taken care of, we now create the ‘docker-compose.yml‘ file for the streaming server. Store this one in its own directory:

# mkdir /usr/local/docker-jellyfin
# vi /usr/local/docker-jellyfin/docker-compose.yml

… and copy this content into the file:

version: '3.5'
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    user: '990:990'
    network_mode: 'host'
    ports:
    - 8096:8096
    volumes:
    - /etc/localtime:/etc/localtime:ro
    - /opt/dockerfiles/jellyfin/config:/config
    - /opt/dockerfiles/jellyfin/cache:/cache
    - /data/mp3:/music:ro     # Use the location of your actual mp3 collection here
    - /data/video:/video:ro   # Use the location of your actual video collection here
    - /data/books/:/ebooks:ro # Use the location of your actual e-book collection here
    restart: 'unless-stopped'
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1024M
    # Optional - alternative address used for autodiscovery:
    environment:
      - JELLYFIN_PublishedServerUrl="https://jellyfin.darkstar.lan"
    # Optional - may be necessary for docker healthcheck to pass,
    # if running in host network mode
    extra_hosts:
      - "host.docker.internal:host-gateway"

Some remarks about this docker-compose file.

  • I have highlighted the user ID number (990), the exposed TCP port (8096) and the URL by which you want to access the Jellyfin server once it is up and running. You will find these being referenced in other sections of this article.
  • I show a few examples of how you can bind your own media library storage into the container so that Jellyfin can be configured to serve them. You would of course replace my example locations with your own local paths to the media you want to make available. Following my example, these media directories would be available inside the container as “/music“, “/video” and “/ebooks“. When configuring the media libraries on your Jellyfin server, you are going to point it to these directories.
  • From experience I can inform you that in its default configuration, the Jellyfin server would often get starved of memory and the OOM-killer would kick in. Therefore I give the server 2 CPU cores and 1 GB of RAM to operate reliably. Tune these numbers to your own specific needs.
  • The ‘host‘ network mode of Docker is required only if you want to make your Jellyfin streaming server discoverable on your local network using DLNA. If you do not care about DLNA auto-discovery then you can comment out the network_mode: 'host' line or simply delete it.
    FYI: DLNA will send a broadcast signal from Jellyfin. This broadcast is limited to Jellyfin’s current subnet. When using Docker, the network should use ‘Host Mode’, otherwise the broadcast signal will only be sent to the bridged network inside of docker.
    Note: in the case of ‘Host Mode’, the Docker published port (8096) will not be used.
  • You may have noticed that there’s no database configuration. Jellyfin uses SQLite for its databases.
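
Before bringing the stack up you can let Docker Compose validate the file; it prints the fully resolved configuration, or an error message if you made a syntax mistake:

# cd /usr/local/docker-jellyfin/
# docker-compose config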

Start your new server

Starting the server is as simple as:

# cd /usr/local/docker-jellyfin/
# docker-compose up -d

For now, we limit the availability of Jellyfin to localhost connections only (unless you have already set up the Apache reverse proxy configuration). That’s because we have not configured an admin account yet and do not want some random person to hijack the server. The Apache httpd reverse proxy makes the server accessible more universally.

Note that the Jellyfin logfiles can be found in /opt/dockerfiles/jellyfin/config/log/. Check these logs for clues if the server misbehaves or won’t even start.
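
You can also follow the container’s own output in real time, which is usually the fastest way to spot startup problems:

# cd /usr/local/docker-jellyfin/
# docker-compose logs --follow jellyfin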

Initial runtime configuration

Once our Jellyfin container is up and running, you can access it via http://127.0.0.1:8096/

The first step is to connect a browser to this URL and create an admin user account. Jellyfin will provide an initial setup wizard to configure the interface language and create that first user account, which will have admin rights over the server. You can add more users later via the Dashboard, and if you are going to configure Jellyfin to use Single Sign-On (SSO, see below) then you do not need to create any further users at all.

When the admin user has been created, you can start adding your media libraries:

Depending on the content type which you select for your libraries, Jellyfin will handle these libraries differently when presenting them to users. Movies will be presented along with metadata about the movie, its actors, director etc while E-books will show a synopsis of the story, its author and will offer the option to open and read them in the browser. Picture libraries can be played as a slide-show. And so on.

The next question will be to allow remote access and optionally an automatic port mapping via UPnP:

Leave the first checkbox enabled, since we want people to be able to access the streaming server remotely. Leave the UPnP option unchecked as it is not needed and may affect your internet router’s functioning.

This concludes the initial setup. Jellyfin will immediately start indexing the media libraries you have added during the setup. You can always add more libraries later on, by visiting the Admin Dashboard.

Jellyfin Single Sign On using Keycloak

Jellyfin does not support OpenID Connect by itself. However, a plugin exists which adds OIDC support. This will allow our server to offer Single Sign On (SSO) to users via our Keycloak identity provider.
Only the admin user will have their own local account. Any Slackware Cloud Server user will have their account already set up in your Keycloak database. The first time they log in to your Jellyfin server using SSO, the account will be activated automatically.

We will now define a new Client in Keycloak that we can use with Jellyfin, add the OIDC plugin to Jellyfin, configure that plugin using the newly created Keycloak Client ID details, add a trigger in the login page that calls Keycloak for Single Sign-On, and then finally enable the plugin.

Adding Jellyfin Client ID in Keycloak

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a ‘confidential’ openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episodes of this article series):

  • Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
    • Client ID‘ = “jellyfin
    • Client Type‘ = “OpenID Connect” (the default)
      Note that in Keycloak < 20.x this field was called ‘Client Protocol‘ and its value “openid-connect”.
    • Toggle ‘Client authentication‘ to “On”. This will set the client access type to “confidential”
      Note that in Keycloak < 20.x this was equivalent to setting ‘Access type‘ to “confidential”.
    • Check that ‘Standard Flow‘ is enabled.
    • Save.
  • Also in ‘Settings‘, we tell Keycloak which URLs belong to this client.
    Our Jellyfin container is running on https://jellyfin.darkstar.lan , so we add:

    • Valid Redirect URIs‘ = https://jellyfin.darkstar.lan/sso/OID/redirect/keycloak/*
    • Root URL‘ = https://jellyfin.darkstar.lan/
    • Web Origins‘ = https://jellyfin.darkstar.lan/+
    • Admin URL‘ = https://jellyfin.darkstar.lan
    • Save.

To obtain the secret for the “jellyfin” Client ID:

  • Go to “Credentials > Client authenticator > Client ID and Secret
    • Copy the Secret (MazskzUw7ZTanZUf9ljYsEts4ky7Uo0N)

This secret is an example string of course, yours will be different. I will be re-using this value below. You will use your own generated value.

Finally, configure the protocol mapping. Protocol mappers map items (such as a group name or an email address) to a specific claim in the ‘identity and access token‘ – i.e. the information which is going to be passed between the Keycloak Identity Provider and the Jellyfin server.
This mapping will allow Jellyfin to determine whether a user is allowed in, and/or whether the user will have administrator access. See the example token sketch after the list below.

  • For Keycloak versions < 20.x:
    • Open the ‘Mappers‘ tab to add a protocol mapper.
    • Click ‘Add Builtin
    • Select either “Groups”, “Realm Roles”, or “Client Roles”, depending on the role system you are planning on using.
      In our case, the choice is “Realm Roles”.
  • For Keycloak versions >= 20.x:
    • Click ‘Clients‘ in the left sidebar of the realm
    • Click on our “jellyfin” client and switch to the ‘Client Scopes‘ tab
    • In ‘Assigned client scope‘ click on “jellyfin-dedicated” scope
    • In the ‘Mappers‘ tab, click on ‘Add Predefined Mapper
    • You can select either “Groups”, “Realm Roles”, or “Client Roles”, depending on the role system you are planning on using.
      In our case, use “Realm Roles” and click ‘Create‘. The mapping will be created.
  • Once the mapper is added, click on the mapper to edit it
    • Note down the ‘Token Claim Name‘.
      In our case, that name is “realm_access.roles“.
    • Enable all four toggles: “Multivalued”, “Add to ID token”, “Add to access token”, and “Add to userinfo”.
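
To give you an idea of what Jellyfin will be looking at: once this mapper is in place, the decoded access token which Keycloak issues for a user contains (among many other claims) roughly the following structure. The username ‘alien’ is just an example, and the jellyfin-* roles are the ones we create in the next section:

{
  "preferred_username": "alien",
  "realm_access": {
    "roles": [ "jellyfin-users", "jellyfin-admins" ]
  }
}

The ‘realm_access.roles‘ claim name is exactly what we will enter as ‘Role Claim‘ in the SSO plugin configuration later on.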

Creating roles and groups in Keycloak

Jellyfin supports more than one admin user. Our initial local user account is an admin user by default. You may want to allow another user to act as an administrator as well. Since all other users will be defined in the Keycloak identity provider, we need to be able to differentiate between regular and admin users in Jellyfin. To achieve this, we use Keycloak groups, and we will use role-mapping to map OIDC roles to these groups.

Our Jellyfin administrators group will be : “jellyfin-admins”. Members of this group will be able to administer the Jellyfin server. The Jellyfin users group will be called: “jellyfin-users”. Only those user accounts who are members of this group will be able to access and use your Jellyfin server.
The Keycloak roles we create will have the same name. Once they have been created, you can forget about them. You will only have to manage the groups to add/remove users.
Let’s create those roles and groups in the Keycloak admin interface:

  • Select the ‘foundation‘ realm; click on ‘Roles‘ and then click ‘Create role‘ button.
    • ‘Role name‘ = “jellyfin-users
    • Click ‘Save‘.
    • Click ‘Create role‘ again: ‘Role name‘ = “jellyfin-admins
    • Click ‘Save‘.
  • Select the ‘foundation‘ realm; click on ‘Groups‘ and then click ‘Create group‘ button.
    • Group name‘ = “jellyfin-users
    • Click ‘Create
    • In the ‘Members‘ tab, add the users you want to become part of this group.
    • Go to the ‘Role mapping‘ tab, click ‘Assign role‘. Select “jellyfin-users” and click ‘Assign
    • Click ‘Save‘.
    • Click ‘Create group‘ again, ‘Group name‘ = “jellyfin-admins
    • Click ‘Create
    • In the ‘Members‘ tab, add the users you want to be the server administrators.
    • Go to the ‘Role mapping‘ tab, click ‘Assign role‘. Select “jellyfin-admins” and click ‘Assign
    • Click ‘Save‘.

Add OIDC plugin to Jellyfin

Install the 9p4/jellyfin-plugin-sso github repository into Jellyfin:

  • Go to your Jellyfin Administrator’s Dashboard:
    • Click your profile icon in top-right and click ‘Dashboard
  • Click ‘Plugins‘ in the left sidebar to open that section.
  • Click ‘Repositories‘:
    • Click ‘+‘ to add the following repository details:
      Repository Name: “Jellyfin SSO
      Repository URL:
      https://raw.githubusercontent.com/9p4/jellyfin-plugin-sso/manifest-release/manifest.json
  • Click ‘Save‘.
  • Click ‘Ok‘ to acknowledge that you know what you are doing – this completes the repository installation.
  • Now, click ‘Catalog‘ in the left sidebar.
    • Select ‘SSO Authentication‘ from the ‘Authentication‘ section.
    • Click ‘Install‘ to install the most recent version (pre-selected).
    • Click ‘Ok‘ to acknowledge that you know what you are doing – this completes the plugin installation.

After installing this plugin but before configuring it, restart the Jellyfin container, for instance via the commands:

# cd /usr/local/docker-jellyfin/
# docker-compose restart

Configure the SSO plugin

  • Go to your Jellyfin Administrator’s Dashboard:
    • Click your profile icon in top-right and click ‘Dashboard
  • Click ‘Plugins‘ in the left sidebar to open that section.
  • Click the ‘SSO-Auth‘ plugin.
  • Add a provider with the following settings:
    • Name of the OIDC Provider: keycloak
    • OID Endpoint: https://sso.darkstar.lan/auth/realms/foundation
    • OpenID Client ID: jellyfin
    • OID Secret: MazskzUw7ZTanZUf9ljYsEts4ky7Uo0N
    • Enabled: Checked
    • Enable Authorization by Plugin: Checked
    • Enable All Folders: Checked
    • Roles: jellyfin-users
    • Admin Roles: jellyfin-admins
    • Role Claim: realm_access.roles
    • Set default username claim: preferred_username
  • All other options may remain unchecked or un-configured.
  • Click ‘Save‘.
  • Enable the plugin.

Note that for Keycloak the default role claim is ‘realm_access.roles’. I tried to use Groups instead of Realm Roles, but ‘groups’ is not part of the default scope. My attempt to configure ‘Request Additional Scopes’ with ‘groups’ resulted in an ‘illegal scope’ error.
By default the scope is limited in Jellyfin SSO to “openid profile”.
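
Before testing the login, it is worth checking that the ‘OID Endpoint‘ value is correct. Keycloak publishes an OpenID Connect discovery document underneath every realm URL, and you can fetch it with curl (piping it through python for readability):

$ curl -s https://sso.darkstar.lan/auth/realms/foundation/.well-known/openid-configuration | python3 -m json.tool

If this returns a JSON document containing ‘authorization_endpoint‘ and ‘token_endpoint‘ URLs, the SSO plugin will be able to discover everything it needs from that endpoint.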

Add a SSO button to the login page

Finally, we need to create the trigger which makes Jellyfin actually connect to the Keycloak identity provider. For this, we make smart use of Jellyfin’s ‘branding’ capability, which allows you to customize the login page.

  • Go to your Jellyfin Administrator’s Dashboard:
    • Click your profile icon in top-right and click ‘Dashboard
  • Click ‘General‘ in the left sidebar
  • Under ‘Quick Connect‘, make sure that ‘Enable Quick Connect on this server‘ is checked
  • Under ‘Branding‘, add these lines in the ‘Login disclaimer‘ field:
    <form action="https://jellyfin.darkstar.lan/sso/OID/start/keycloak">
    <button class="raised block emby-button button-submit">
    Single Sign-On
    </button>
    </form>
  • Also under ‘Branding‘, add these lines to ‘Custom CSS Code‘:
    a.raised.emby-button {
    padding: 0.9em 1em;
    color: inherit !important;
    }
    .disclaimerContainer {
    display: block !important;
    width: auto !important;
    height: auto !important;
    }

Start Jellyfin with SSO

The Jellyfin server needs to be restarted after configuring and enabling the SSO plugin. Once that is done, we have an additional button on our login page, allowing you to log in with “Single Sign-On“.

Only the local admin user will still use the User/Password fields; all other users will click the “Single Sign-On” button to be taken to the Keycloak login page, and they return to the Jellyfin content once they are properly authenticated.

Jellyfin usage

Initial media libraries for first-time users

When you have your server running and are preparing for your first users to get onboarded, you need to consider what level of initial access you want to give to a user who logs in for the first time.

In the SSO Plugin Configuration section, the default access was set to “All folders”, meaning all your libraries will be immediately visible. If you do not want that, you can alternatively enable only the folder(s) that you want your first-time users to see (which may be ‘None‘). Then, once a user logs into Jellyfin for the first time and the server adds the user, you can go to that user’s profile and manually enable additional folders aka media libraries for them.

Note that after re-configuring any plugin, you need to restart Jellyfin.

Scheduled tasks

In the Admin Dashboard you’ll find a section ‘Scheduled tasks‘. One of these tasks is scanning for new media that get added to your libraries. The frequency with which this task is triggered may be too low if you add new media regularly. This is definitely not as fancy as how Plex discovers new media as soon as it is added to a library, but hey! You get what you pay for 🙂

You can always trigger a scan manually if you do not want to wait for the scheduled task to run.

Further considerations

Running Jellyfin at a URL with subfolder

Suppose you want to run Jellyfin at https://darkstar.lan/jellyfin/ – i.e. in a subfolder of your host’s domainname.
To use a subfolder you will have to make some trivial tweaks to the reverse proxy configuration:

# Jellyfin hosted on https://darkstar.lan/jellyfin
<Location /jellyfin/socket>
  ProxyPreserveHost On
  ProxyPass "ws://127.0.0.1:8096/jellyfin/socket"
  ProxyPassReverse "ws://127.0.0.1:8096/jellyfin/socket"
</Location>
<Location /jellyfin>
  ProxyPass "http://127.0.0.1:8096/jellyfin"
  ProxyPassReverse "http://127.0.0.1:8096/jellyfin"
</Location>

More importantly, you also need to set the “Base URL” field in the Jellyfin server. This can be done by navigating to the “Admin Dashboard -> Networking -> Base URL” in the web client. Fill this field with the subfolder name “/jellyfin” and click Save.
The Jellyfin container will need to be restarted before this change takes effect and you may have to force-refresh your browser view.

Custom background for the login page

You can also customize the backdrop for your login screen. To achieve that, add these lines to ‘Custom CSS Code’ and supply the correct path to your own background image:
/*turn background container transparent*/
.backgroundContainer{
background-color: transparent;
}
/*add image to loginPage only*/
#loginPage{
background-image: url("/graphics/mybg.jpg");
background-size: cover;
/*background-size: cover; scales image to fit bg*/
/*background-size: contain; repeat to fit bg*/
}

Note that the location "/graphics/mybg.jpg" translates to https://jellyfin.darkstar.lan/graphics/mybg.jpg for any web client, so that is where you will have to make it available via Apache on your host.
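
Remember that the reverse proxy we configured earlier forwards everything under ‘/’ to the Jellyfin container, so you need to exclude the ‘/graphics/’ path from proxying and serve it straight from disk. Here is an untested sketch of what you could add to the VirtualHost, after the existing Location blocks; the ‘/srv/httpd/htdocs/graphics’ directory is just an example location, use whatever suits your setup:

# Do not proxy the /graphics path; serve it from the local filesystem instead:
<Location /graphics>
  ProxyPass "!"
</Location>
Alias "/graphics" "/srv/httpd/htdocs/graphics"
<Directory "/srv/httpd/htdocs/graphics">
  Require all granted
</Directory>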

Conclusion

This concludes the instructions for setting up your private streaming server. I hope I was clear enough, but if I have omitted steps or made mistakes, please let me know in the comments section.
I hope you like this article and when you do implement a Jellyfin server, may it bring you lots of fun.

Cheers, Eric
