I was having a rough time managing Docker and Docker‑Compose directly on my servers. Things got messy, containers weren’t behaving, and troubleshooting felt like whack‑a‑mole. So I did what any tinkerer would do… I tore it all down and started fresh. I wiped an old PC, installed Proxmox, and spun up a VM just for Portainer. Now I manage containers/stacks from Portainer across other VMs — all inside Proxmox.
Open https://<proxmox-ip>:8006 and accept the cert. Then disable the enterprise repo and add the no-subscription repo:
sudo sed -i 's/^deb/#deb/g' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" | sudo tee /etc/apt/sources.list.d/pve-no-subscription.list
sudo apt update && sudo apt -y full-upgrade
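Optionally, confirm the node is on the expected release afterwards (a quick sanity check, not part of the original steps):
pveversion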
Create a dedicated VM for Portainer and give it a static IP (10.20.0.251 in my lab). Use the convenience script for Docker, then run the Portainer Server:
curl -fsSL https://get.docker.com | sudo bash
sudo usermod -aG docker $USER
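# note: log out and back in (or run 'newgrp docker') for the group change to take effect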
sudo docker volume create portainer_data
sudo docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
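A quick check that the Server actually came up (my own habit, not strictly required):
sudo docker ps --filter name=portainer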
Open https://<portainer-vm-ip>:9443 and create the admin user.
On each additional VM/host you want Portainer to manage, run the agent:
sudo docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes -v /:/host portainer/agent:2.33.0
Then in Portainer → Environments → Add environment → Agent, enter the VM’s IP and port 9001.
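If the environment refuses to connect, a basic reachability test from the Portainer VM helps narrow things down (assumes netcat is installed; substitute your agent VM’s IP):
nc -zv <agent-vm-ip> 9001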
When you outgrow one box, add another node and join it to the cluster.
# On the first (master) node: create the cluster if one doesn't exist yet, then check it
pvecm create homelab   # cluster name is just an example
pvecm status
# On the new node, join the cluster
pvecm add <master-node-ip>
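Once the join completes, both nodes should appear (run on either node):
pvecm nodes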
Troubleshooting tips:
Agent environment unreachable: make sure 9001/tcp is open on the agent host and verify routes/DNS.
Proxmox web UI misbehaving: systemctl restart pveproxy pvedaemon pvestatd
Storage errors: check /etc/pve/storage.cfg for correct storage types/syntax.

Next, expose your Proxmox and Portainer GUIs safely to the internet using Cloudflare Tunnel: no open ports on your router, and everything sits behind Cloudflare Access (SSO + MFA). We’ll install cloudflared on a small Linux VM (or your Portainer VM) and proxy traffic to internal services.
Install cloudflared:
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared $(. /etc/os-release && echo $VERSION_CODENAME) main" | sudo tee /etc/apt/sources.list.d/cloudflared.list
sudo apt update && sudo apt install -y cloudflared
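Confirm the binary installed (quick check, not part of the original steps):
cloudflared --version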
cloudflared tunnel login
# Complete auth in browser; selects your Cloudflare account/zone
cloudflared tunnel create homelab-tunnel
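tunnel create prints the tunnel’s UUID and drops a credentials file at ~/.cloudflared/<UUID>.json (copied into /etc/cloudflared below). You can list tunnels at any time to find the UUID again:
cloudflared tunnel list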
Create CNAMEs in Cloudflare that point your hostnames at the tunnel’s <UUID>.cfargotunnel.com address (cloudflared tunnel route dns can add these for you). Example hostnames:
proxmox.weitzman.info → internal https://proxmox.lan:8006
portainer.weitzman.info → internal https://portainer.lan:9443
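If you prefer the CLI, the route dns subcommand creates those records against the tunnel you just made:
cloudflared tunnel route dns homelab-tunnel proxmox.weitzman.info
cloudflared tunnel route dns homelab-tunnel portainer.weitzman.info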
Create /etc/cloudflared/config.yml with the upstreams. Proxmox uses HTTPS with a self-signed cert, so add noTLSVerify: true for that origin only.
tunnel: homelab-tunnel
credentials-file: /etc/cloudflared/UUID.json  # replace with your tunnel UUID filename
ingress:
  - hostname: proxmox.weitzman.info
    service: https://10.20.0.201:8006
    originRequest:
      noTLSVerify: true  # Proxmox self-signed cert
  - hostname: portainer.weitzman.info
    service: https://10.20.0.251:9443
    originRequest:
      noTLSVerify: true  # if Portainer uses a self-signed cert
  - service: http_status:404
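Before wiring up the service, you can check that the config parses and that a hostname hits the rule you expect (pass --config /etc/cloudflared/config.yml if cloudflared doesn’t pick the file up from its default location):
cloudflared tunnel ingress validate
cloudflared tunnel ingress rule https://proxmox.weitzman.info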
sudo mkdir -p /etc/cloudflared
sudo cp ~/.cloudflared/*.json /etc/cloudflared/ # copy the tunnel UUID creds
sudo chown -R root:root /etc/cloudflared
sudo cloudflared service install
sudo systemctl enable --now cloudflared
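Watch the service come up and confirm it registers connections to Cloudflare’s edge:
journalctl -u cloudflared -f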
In the Cloudflare dashboard → Zero Trust → Access → Applications:
Add proxmox.weitzman.info (type Self-hosted), then add portainer.weitzman.info the same way, and attach an Access policy to each; that is what enforces SSO + MFA in front of the GUIs.
To verify everything:
systemctl status cloudflared
cloudflared tunnel list
cloudflared tunnel ingress validate
curl -I https://proxmox.weitzman.info
curl -I https://portainer.weitzman.info
This isn’t just about making things work — it’s about making them repeatable, recoverable, and scalable.