Self-hosting GoToSocial with Podman and systemd
I've been looking to participate in the Fediverse for a while, but as a notorious self-hoster I wasn't comfortable joining someone else's server. And the self-hosting alternatives, last I looked at them, seemed overly complex and resource-intensive. Recently, however, I discovered GoToSocial, which looked very promising.
They offer a container image and excellent documentation. There's support for PostgreSQL and SQLite, plus a host of other options, and an example docker-compose.yml is provided with sane defaults.
As I prefer Podman over Docker these days, I took some time to convert the docker-compose file to a Podman kube file.
This configuration assumes you can put a reverse proxy in front of the container, forwarding traffic to the host port you select. I use nginx myself.
Podman kube file
# Example: gotosocial-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gotosocial-pod
  labels:
    app: gotosocial
spec:
  containers:
    - name: gotosocial
      image: superseriousbusiness/gotosocial:latest
      securityContext:
        # After trying a bit I couldn't make the volume mounts work with actual user UIDs here.
        # With 0, the files are correctly owned by the non-root user on the host,
        # and the container runs without root privileges.
        runAsUser: 0
        runAsGroup: 0
      env:
        - name: GTS_HOST
          value: "example.org"
        - name: GTS_DB_TYPE
          value: "sqlite"
        - name: GTS_DB_ADDRESS
          value: "/gotosocial/storage/sqlite.db"
        # TLS is handled in my reverse proxy
        - name: GTS_LETSENCRYPT_ENABLED
          value: "false"
        - name: GTS_LETSENCRYPT_EMAIL_ADDRESS
          value: ""
        - name: GTS_WAZERO_COMPILATION_CACHE
          value: "/gotosocial/.cache"
        # Set this value to the network address of your podman "kube network".
        # You can find it with the command "podman network inspect podman-default-kube-network".
        - name: GTS_TRUSTED_PROXIES
          value: "10.89.0.0/24"
        - name: TZ
          value: "Europe/Oslo"
      # Host ports mapped to container ports.
      ports:
        # GoToSocial runs on port 8080 in the container; pick a host port that works for you.
        - containerPort: 8080
          hostPort: 9191
      volumeMounts:
        # This is your main "storage" directory (sqlite.db and media files).
        - name: gotosocial-data
          mountPath: /gotosocial/storage
        # Optional Wazero compilation cache volume.
        - name: gotosocial-cache
          mountPath: /gotosocial/.cache
  # Volumes section for the hostPath volumes that store data on the Podman host.
  volumes:
    - name: gotosocial-data
      hostPath:
        # Adjust the path to wherever you want to keep your storage.
        path: /home/conman/gotosocial/data
        type: DirectoryOrCreate
    - name: gotosocial-cache
      hostPath:
        path: /home/conman/gotosocial/.cache
        type: DirectoryOrCreate
  restartPolicy: Always
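Before wiring this into systemd, it can be worth playing the file once by hand to check that the pod comes up. A sketch, assuming the file is saved as ~/gotosocial-pod.yaml (on podman versions before 4.4 the subcommand is podman play kube instead of podman kube play):

```shell
# Start the pod directly from the kube file (adjust the path to
# wherever you saved it).
podman kube play ~/gotosocial-pod.yaml

# Confirm the container is up; the label comes from the kube file's
# metadata.labels section.
podman ps --filter label=app=gotosocial

# Tear it down again before handing control over to systemd.
podman kube down ~/gotosocial-pod.yaml
```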
Although I have a PostgreSQL instance running, I went with SQLite; per the documentation it should be more than enough for my single-user instance, and it's also easy to keep backed up.
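Since everything lives in the one storage directory, backups can stay simple. A hedged sketch, assuming the hostPath from the kube file above, the sqlite3 CLI installed on the host, and an example backup destination:

```shell
# Online backup of the database; sqlite3's .backup takes a consistent
# snapshot even if GoToSocial is writing at the same time.
sqlite3 /home/conman/gotosocial/data/sqlite.db \
  ".backup /home/conman/backups/gotosocial-$(date +%F).db"

# The media files can be copied alongside it, e.g. with rsync.
rsync -a /home/conman/gotosocial/data/ /home/conman/backups/gotosocial-data/
```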
Now, the container can be started and installed as a systemd service with the following command:
systemctl --user enable --now podman-kube@$(systemd-escape ~/gotosocial-pod.yaml).service
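The podman-kube@ template unit takes the escaped path to the kube file as its instance name, and systemd-escape does that escaping for you. For illustration (the path is an example):

```shell
# systemd-escape strips the leading "/", turns the remaining "/"
# separators into "-", and escapes literal "-" as "\x2d":
systemd-escape /home/conman/gotosocial-pod.yaml
# -> home-conman-gotosocial\x2dpod.yaml

# The same expansion can be used to check on the service later:
systemctl --user status "podman-kube@$(systemd-escape ~/gotosocial-pod.yaml).service"

# If the pod should keep running while you're logged out, also enable
# lingering for your user once:
loginctl enable-linger
```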
You now have to create a user as per the documentation. To access the CLI, you can enter the container with:
podman exec -it gotosocial-pod-gotosocial /bin/sh
(Double-check the container name with podman ps first, though.)
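Inside the container, the gotosocial binary sits in the working directory /gotosocial. A sketch of the account-creation step from there — username, email, and password are placeholders, and it's worth checking the admin CLI docs for the current flags:

```shell
# Create the account...
./gotosocial admin account create \
    --username conman \
    --email you@example.org \
    --password 'a-long-random-password'

# ...and promote it to admin so it can access the settings panel.
./gotosocial admin account promote --username conman
```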
Reverse proxy with nginx
I use nginx as a reverse proxy, and here's a simple configuration for it:
server {
    server_name example.org;

    listen 80;
    listen 443 ssl;

    ssl_certificate     /path/to/your/cert.pem;
    ssl_certificate_key /path/to/your/cert.key;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    ssl_prefer_server_ciphers on;

    if ($scheme = http) {
        rewrite ^ https://$server_name$request_uri? last;
    }

    location / {
        # Adjust to where your container is running.
        proxy_pass http://cloudpods:9191/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
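Once nginx has reloaded, a quick way to verify the whole chain is to hit one of the well-known endpoints GoToSocial serves (substitute your own domain):

```shell
# A 200 here means nginx is terminating TLS and proxying through to the
# container successfully.
curl -sS -o /dev/null -w '%{http_code}\n' https://example.org/.well-known/nodeinfo
```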