Proxmox LXC with Docker GPU Passthru Setup Guide


Getting a functional, stable Proxmox installation with working GPU passthrough to an LXC container that hosts Docker is a critical step in getting the most out of your home server. These steps provide you with just that.

On the Proxmox host node, open a shell and run the following commands.


apt update && apt upgrade -y && apt install pve-headers-$(uname -r) build-essential software-properties-common make nvtop htop -y
update-initramfs -u

Download the Debian template for LXC to your machine.

Create the new LXC container, enter its shell, and update the container.
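The creation step above can be sketched from the host shell like this. The storage names, resource sizes, and template filename are assumptions; adjust them to match your environment. The container ID 105 matches the config file edited later in this guide, and a privileged container with nesting enabled is assumed since we bind-mount device nodes and run Docker inside.

```shell
# Hypothetical example values throughout -- substitute your own.
pct create 105 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname docker-lxc \
  --cores 4 --memory 8192 \
  --rootfs local-lvm:32 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1 \
  --unprivileged 0

pct start 105
pct enter 105              # opens a shell inside the container

# Inside the container:
apt update && apt upgrade -y
```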

Find the latest driver for your cards on NVIDIA's site, then use wget to download the .run installer:



wget http://URLHERE
chmod +x xxxxx.run
./xxxxx.run --dkms

Click Yes through the installer prompts to complete the installation.

Once it's done, grab the cgroup device numbers by running ls -al /dev/nvidia*.

It will output something similar to the following for a single GPU (nvidia0), along with the shared device nodes nvidiactl, nvidia-uvm, nvidia-uvm-tools, nvidia-cap1, and nvidia-cap2. Note the major numbers 195, 509, and 234 listed here. Those IDs WILL be different for you; write yours down, as we will use them in the next step.

root@prox1:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195, 0 Nov 13 01:23 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Nov 13 01:23 /dev/nvidiactl
crw-rw-rw- 1 root root 509, 0 Nov 13 01:23 /dev/nvidia-uvm
crw-rw-rw- 1 root root 509, 1 Nov 13 01:23 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
total 0
drwxr-xr-x 2 root root 80 Nov 13 01:23 .
drwxr-xr-x 22 root root 6360 Nov 13 01:23 ..
cr-------- 1 root root 234, 1 Nov 13 01:23 nvidia-cap1
cr--r--r-- 1 root root 234, 2 Nov 13 01:23 nvidia-cap2
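If you have a lot of device nodes, a small helper can pull out just the unique major numbers (the number before the comma) from that listing. This is a convenience sketch, not part of the original steps; the function name is made up here:

```shell
# extract_majors: read an `ls -al` character-device listing on stdin
# and print the unique major numbers (field 5, trailing comma stripped).
extract_majors() {
  awk '$1 ~ /^c/ {gsub(",", "", $5); print $5}' | sort -un
}

# On the host you would feed it the real listing:
#   ls -al /dev/nvidia* /dev/nvidia-caps/* | extract_majors
```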

Now we will add these device groups to the LXC container config for passthru of the hardware.

nano /etc/pve/lxc/105.conf

Append these values, with the IDs you noted earlier, to your file as shown below. Note the placement of the 195, 234, and 509. This is for a SINGLE GPU; if you have multiple, add additional mount entries with incrementing device names (nvidia1, nvidia2, and so on), for example:

lxc.mount.entry: /dev/nvidia1 dev/nvidia1 none bind,optional,create=file

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file

Push the .run installer into the container

pct push 105 NVIDIA-Linux-x86_64-550.107.02.run /root/NVIDIA-Linux-x86_64-550.107.02.run

Then open the console/terminal for the LXC container, make the installer executable, and run it with the --no-kernel-modules flag (the kernel module already lives on the host):

chmod +x NVIDIA-Linux-x86_64-550.107.02.run
./NVIDIA-Linux-x86_64-550.107.02.run --no-kernel-modules

Install the NVIDIA Container Toolkit

apt install gpg curl

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

apt update

apt install nvidia-container-toolkit

Edit config.toml, uncomment the no-cgroups option, and change it from false to true.

nano /etc/nvidia-container-runtime/config.toml

#no-cgroups = false
to
no-cgroups = true

Save, then reboot the container. This should get your GPU passed into the LXC. Next we will set up Docker with Dockge and GPU passthru.
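After the reboot, a quick sanity check from inside the container confirms the passthrough worked; the driver userland should see the GPU even though the kernel module runs on the host:

```shell
nvidia-smi            # should list your GPU(s) and driver version
ls -al /dev/nvidia*   # the bind-mounted device nodes should appear
```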

Install Docker for Debian Bookworm

apt update

apt install ca-certificates curl

install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg

chmod a+r /etc/apt/trusted.gpg.d/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker.gpg] https://download.docker.com/linux/debian bookworm stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

apt update

apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Now we need to configure Docker to use the NVIDIA Container Toolkit runtime.

nvidia-ctk runtime configure --runtime=docker
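nvidia-ctk writes the runtime entry into /etc/docker/daemon.json, so Docker needs a restart before it takes effect. A throwaway CUDA container then makes a good smoke test; the image tag here is an assumption, and any recent nvidia/cuda base tag should work:

```shell
systemctl restart docker
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```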

Install the Dockge Docker container manager. Your data directory is /opt/dockge in this instance.

mkdir -p /opt/stacks /opt/dockge

cd /opt/dockge

curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml

docker compose up -d

Now you can go to your Docker manager to complete the rest of the steps. If you do not know the IP address of the LXC, check the container's network settings, then append :5001. For instance, my address is 192.168.1.70:5001. You will need to set a username and password on the first visit; please write these down. Now you are ready to start creating your Docker containers for OpenWebUI, Ollama, SearXNG, and Tika.

Ollama compose (copy this into your create new view), hit save, then update, then start.



version: "3.3"
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    volumes:
      - ollama:/root/.ollama
    ports:
      - 11434:11434
    container_name: ollama
    image: ollama/ollama
    restart: always
volumes:
  ollama: {}
networks:
  dockge_default:
    external: true
  openwebui_default:
    external: true
  searxng_searxng:
    external: true
  tika_default:
    external: true
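Once the Ollama stack is up, you can confirm from the LXC shell that the container sees the GPU and that the API answers:

```shell
docker exec -it ollama nvidia-smi        # GPU visible inside the container
curl http://localhost:11434/api/version  # Ollama API responding
```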

OpenwebUI compose (copy this into your create new view), hit save, then update, then start.



version: "3.3"
services:
  open-webui:
    ports:
      - 7000:8080
    volumes:
      - open-webui:/app/backend/data
    container_name: open-webui
    restart: always
    image: ghcr.io/open-webui/open-webui:latest
volumes:
  open-webui: {}
networks:
  dockge_default:
    external: true
  ollama_default:
    external: true
  searxng_searxng:
    external: true
  tika_default:
    external: true

SearXNG and REDIS compose (copy this into your create new view), hit save, then update, then start.



version: "3.7"
services:
  redis:
    container_name: redis
    image: docker.io/valkey/valkey:7-alpine
    command: valkey-server --save 30 1 --loglevel warning
    restart: unless-stopped
    networks:
      - searxng_searxng
    volumes:
      - valkey-data2:/data
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
      - DAC_OVERRIDE
    logging:
      driver: json-file
      options:
        max-size: 4m
        max-file: "1"
  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - searxng_searxng
    ports:
      - 7777:8080
    volumes:
      - ./searxng/img:/usr/local/searxng/searx/static/themes/simple/img:rw
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=http://${SEARXNG_HOSTNAME:-localhost}/
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    logging:
      driver: json-file
      options:
        max-size: 4m
        max-file: "1"
networks:
  searxng_searxng:
    external: true
  #searxng: null
  dockge_default:
    external: true
  openwebui_default:
    external: true
  tika_default:
    external: true
  ollama_default:
    external: true
volumes:
  valkey-data2: null

Tika compose (copy this into your create new view), hit save, then update, then start.



version: "3.3"
services:
  tika:
    ports:
      - 9998:9998
    image: apache/tika:latest-full
    restart: always
networks:
  dockge_default:
    external: true
  openwebui_default:
    external: true
  ollama_default:
    external: true
  searxng_searxng:
    external: true
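With all four stacks running, a few quick reachability checks from the LXC shell confirm everything is listening (substitute the container's IP if you are checking remotely):

```shell
curl http://localhost:9998/tika   # Tika server greeting
curl -I http://localhost:7777/    # SearXNG responds over HTTP
curl -I http://localhost:7000/    # OpenWebUI responds over HTTP
```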

Congrats! You now have a VERY nice setup for your AI homelab! OpenWebUI is located at http://ipaddress:7000 and your search engine (SearXNG) is located at http://ipaddress:7777