


A functional, stable Proxmox installation with working GPU passthrough into an LXC container hosting Docker is the foundation for getting the most out of your home server. These steps give you exactly that.
On the host node, open the shell and run the following commands.
Download the Debian LXC template to your machine.
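A sketch of the template download on the Proxmox host; the exact template version string in your catalog will likely differ, so check `pveam available` first:

```shell
# Refresh the template catalog, then list and download a Debian template.
pveam update
pveam available --section system | grep debian
# The version string below is an example -- use the one your catalog shows.
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
```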
Create the new LXC container. Enter the container shell. Update the container.
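Something like the following covers all three steps; the CTID (101), hostname, storage, and resource sizes here are example values, not requirements:

```shell
# Create a privileged container from the downloaded template
# (privileged + nesting makes the device passthrough below simpler).
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker-lxc --cores 4 --memory 8192 \
  --rootfs local-lvm:32 --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1 --unprivileged 0

pct start 101
pct enter 101                 # drops you into the container shell
apt update && apt -y full-upgrade
exit                          # back to the Proxmox host
```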
Find the latest driver for your card(s) on NVIDIA's site.
Use wget to download the .run file.
Run the installer and accept the prompts all the way through.
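On the Proxmox host, that looks roughly like this. The `<VERSION>` placeholder stands in for whatever the current driver release is; grab the real URL from NVIDIA's driver download page:

```shell
# Build tools and kernel headers are needed for the driver's kernel module.
apt install -y pve-headers build-essential

# Download and run the installer (substitute the real version/URL).
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/<VERSION>/NVIDIA-Linux-x86_64-<VERSION>.run
chmod +x NVIDIA-Linux-x86_64-<VERSION>.run
./NVIDIA-Linux-x86_64-<VERSION>.run
```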
Once it's done, we will grab the device cgroup IDs by running:
For a single GPU it will output something similar to the listing below: one device node (nvidia0) plus the universal nodes nvidiactl, nvidia-uvm, nvidia-uvm-tools, nvidia-cap1 and nvidia-cap2. Note the major numbers 195, 509 and 234 listed here. Those IDs WILL be different for you; write yours down, because we will use them in the next step.
root@prox1:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195, 0 Nov 13 01:23 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Nov 13 01:23 /dev/nvidiactl
crw-rw-rw- 1 root root 509, 0 Nov 13 01:23 /dev/nvidia-uvm
crw-rw-rw- 1 root root 509, 1 Nov 13 01:23 /dev/nvidia-uvm-tools
/dev/nvidia-caps:
total 0
drwxr-xr-x 2 root root 80 Nov 13 01:23 .
drwxr-xr-x 22 root root 6360 Nov 13 01:23 ..
cr-------- 1 root root 234, 1 Nov 13 01:23 nvidia-cap1
cr--r--r-- 1 root root 234, 2 Nov 13 01:23 nvidia-cap2
Now we will add these cgroups to the LXC container's config for passthrough of the hardware. On the host, edit /etc/pve/lxc/<CTID>.conf (e.g. /etc/pve/lxc/101.conf).
Append these values to your file, using the IDs you noted earlier, like so. Note the placement of the 195, 234 and 509. This is for a SINGLE GPU; if you have multiple, add additional `lxc.mount.entry: /dev/nvidia1 dev/nvidia1 none bind,optional,create=file` lines, incrementing the device number (nvidia1, nvidia2, nvidia3, etc.).
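With the example IDs from the listing above (195, 509, 234), the appended block would look like this; substitute your own major numbers:

```
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
```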
Push the .run installer into the container
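From the Proxmox host, `pct push` copies a file into a container. CTID 101 and the `<VERSION>` filename are the same example values used earlier:

```shell
# Copy the driver installer from the host into the container's /root.
pct push 101 ./NVIDIA-Linux-x86_64-<VERSION>.run /root/NVIDIA-Linux-x86_64-<VERSION>.run
```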
Then enter the console/terminal for the LXC container and run the pushed .run installer with the `--no-kernel-modules` flag (the kernel module already lives on the host). After that, install the NVIDIA Container Toolkit.
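Inside the container, that amounts to the following. The repository setup lines follow NVIDIA's published install instructions for the Container Toolkit at the time of writing; check their docs if the URLs have moved:

```shell
# Install only the user-space driver components inside the container.
chmod +x /root/NVIDIA-Linux-x86_64-<VERSION>.run
/root/NVIDIA-Linux-x86_64-<VERSION>.run --no-kernel-modules

# Add NVIDIA's apt repository and install the container toolkit.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt update && apt install -y nvidia-container-toolkit
```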
Edit the config.toml and enable no-cgroups by changing it from false to true.
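You can make that edit by hand in an editor, or with a one-liner like this (assuming the line ships commented out as `#no-cgroups = false`, which is its default state):

```shell
# Flip no-cgroups to true so the runtime works inside LXC,
# where the host already manages the device cgroups.
sed -i 's/^#no-cgroups = false/no-cgroups = true/' /etc/nvidia-container-runtime/config.toml
```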
Save and reboot the container now. This should get your GPU passed into the LXC. Next we will get Docker set up with Dockge and GPU passthrough.
Install Docker for Debian Bookworm
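Docker's convenience script is one of the supported install paths on Debian 12 "bookworm"; a sketch:

```shell
# Fetch and run Docker's official install script.
curl -fsSL https://get.docker.com -o get-docker.sh
sh ./get-docker.sh
docker --version    # sanity check
```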
Now we need to enable the nvidia container toolkit to work with Docker.
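The toolkit ships a helper for exactly this. The CUDA image tag in the final test is an example; any recent `nvidia/cuda` base image works:

```shell
# Register the NVIDIA runtime with Docker, then restart the daemon.
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker

# Quick test: this should print nvidia-smi output from inside a container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```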
Install Dockge Docker Container Manager. Your data dir is inside /opt/dockge in this instance.
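Dockge's quick-start looks like this, using /opt/dockge as the data dir and /opt/stacks for your compose stacks (paths per Dockge's documented defaults):

```shell
# Create the Dockge data and stacks directories, fetch its compose
# file, and bring it up.
mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
curl "https://dockge.kuma.pet/compose.yaml" --output compose.yaml
docker compose up -d
```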
Now you can go to your Docker manager to complete the rest of the steps. If you do not know the IP address of the LXC, check the container's networking settings, then append :5001.
For instance, my address is 192.168.1.70:5001. You will need to set a username and password on the first visit; please write these down. Now you are ready to start creating your Docker containers for OpenWebUI, Ollama, SearXNG, and Tika.
Ollama compose (copy this into your create new view), hit save, then update, then start.
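A compose file along these lines works here; the image tag and volume name are up to you, and the `deploy` block is what hands the GPU to the container:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama:
```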
OpenwebUI compose (copy this into your create new view), hit save, then update, then start.
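A sketch for OpenWebUI, mapping host port 7000 to match the URL used at the end of this guide; `<LXC-IP>` is a placeholder for your container's address:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "7000:8080"   # host port 7000, matching the URL used later
    environment:
      - OLLAMA_BASE_URL=http://<LXC-IP>:11434
    volumes:
      - open-webui:/app/backend/data
volumes:
  open-webui:
```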
SearXNG and REDIS compose (copy this into your create new view), hit save, then update, then start.
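Something like the following, mapping host port 7777 to match the URL used at the end; `<LXC-IP>` is again a placeholder, and the Redis options are conventional choices rather than requirements:

```yaml
services:
  searxng:
    image: searxng/searxng:latest
    container_name: searxng
    restart: unless-stopped
    ports:
      - "7777:8080"   # host port 7777, matching the URL used later
    volumes:
      - ./searxng:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=http://<LXC-IP>:7777/
    depends_on:
      - redis
  redis:
    image: redis:alpine
    container_name: searxng-redis
    restart: unless-stopped
    command: redis-server --save 30 1 --loglevel warning
    volumes:
      - redis-data:/data
volumes:
  redis-data:
```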
Tika compose (copy this into your create new view), hit save, then update, then start.
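Tika needs very little; 9998 is its default listening port:

```yaml
services:
  tika:
    image: apache/tika:latest
    container_name: tika
    restart: unless-stopped
    ports:
      - "9998:9998"   # Tika's default port
```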
Congrats! You now have a VERY nice setup for your AI homelab! OpenWebUI is located at http://ipaddress:7000 and your search engine (SearXNG) is located at http://ipaddress:7777.