Create lightweight, isolated development environments with a single command.
dockvirt is a CLI tool that automates the process of creating virtual machines (VMs) using libvirt/KVM. It allows you to instantly run applications in Docker containers, with a pre-configured Caddy reverse proxy, fully isolated from your host operating system.
The idea for dockvirt was born from the daily problems of developers working on their workstations. The main challenges it solves are:
# A typical developer situation
docker run -p 3000:3000 frontend-app # Port 3000 is busy
docker run -p 8080:8080 backend-app # Port 8080 is busy
docker run -p 5432:5432 postgres # Port 5432 is busy
# Local services on your system also use ports!
# With dockvirt, each application gets its own VM
dockvirt up --name frontend --domain frontend.local --image nginx:latest --port 80
dockvirt up --name backend --domain backend.local --image httpd:latest --port 80
dockvirt up --name db --domain db.local --image postgres:latest --port 5432
# Each VM has its own port space - zero conflicts!
dockvirt down --name frontend
rm -rf ~/.dockvirt/frontend
dockvirt up --name frontend --domain frontend.local --image nginx:latest --port 80
| Tool | Key Advantages | Key Disadvantages |
|---|---|---|
| dockvirt | Full isolation (VM), simplicity, automation | Requires KVM (Linux only) |
| Docker Compose | Speed, simplicity, high popularity | No full isolation from the host system |
| Vagrant | Support for multiple providers, flexibility | Slower start, more complex configuration |
| Multipass | Very simple to use, good integration with Ubuntu | Limited control, strong ties to Canonical |
Your user must belong to the `libvirt` and `kvm` groups (the commands below take care of this).

Debian/Ubuntu:

# Install dependencies
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils \
cloud-image-utils virtinst docker.io wget
# Add user to required groups
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)
# Start and enable services
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
Fedora:

# Install dependencies
sudo dnf install -y qemu-kvm libvirt libvirt-client virt-install \
cloud-utils docker wget
# Add user to required groups
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)
# Start and enable services
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
Arch Linux:

# Install dependencies
sudo pacman -S qemu-full libvirt virt-install bridge-utils \
cloud-image-utils docker wget
# Add user to required groups
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)
# Start and enable services
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
After installing the required packages, log out and log back in for group changes to take effect. Then verify the installation:
# Verify KVM is available (kvm-ok comes from the cpu-checker package on Ubuntu/Debian)
kvm-ok
# Verify libvirt is running
virsh list --all
We provide an installation script that handles all dependencies:
# Download and run the installer
curl -sSL https://raw.githubusercontent.com/dynapsys/dockvirt/main/scripts/install.sh | bash
# Or clone and install manually
git clone https://github.com/dynapsys/dockvirt.git
cd dockvirt
sudo ./scripts/install.sh
Install system dependencies (see Requirements section above)
pip install dockvirt
git clone https://github.com/dynapsys/dockvirt.git
cd dockvirt
pip install -e .
dockvirt check # Check system dependencies
dockvirt --help # Show available commands
dockvirt works perfectly on WSL2, solving port conflict issues between Windows and your development applications:
# In PowerShell as Administrator
wsl --install -d Ubuntu-22.04
# Update the system
sudo apt update && sudo apt upgrade -y
# Install all required dependencies
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils \
cloud-image-utils virt-install docker.io wget
# Configure services
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo usermod -aG libvirt $USER
sudo usermod -aG docker $USER
# Add your user to the required groups
sudo usermod -a -G libvirt,kvm $USER
newgrp libvirt
# Install dockvirt
pip install dockvirt
sudo systemctl enable --now libvirtd
sudo systemctl start libvirtd
Linux/WSL2: install `cloud-image-utils` (it provides `cloud-localds`).

| System Requirement | Value |
|---|---|
| OS | Linux with KVM support |
| RAM | 8GB+ (16GB recommended) |
| Storage | 20GB+ free (SSD preferred) |
| Python | 3.8+ |
| User Groups | libvirt, kvm, docker |
Checking for virtualization support:
# Check if KVM is available
lsmod | grep kvm
egrep -c '(vmx|svm)' /proc/cpuinfo # Should be > 0
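The same flag check can be sketched in Python, which makes it clear what the `egrep` count means: one `vmx` (Intel VT-x) or `svm` (AMD-V) hit per logical CPU line. This is an illustration only, not part of dockvirt:

```python
import re

def virt_flag_count(cpuinfo_text: str) -> int:
    """Count vmx/svm flag occurrences, mirroring the egrep check above."""
    return len(re.findall(r"\b(vmx|svm)\b", cpuinfo_text))

# On a real system, read /proc/cpuinfo:
# with open("/proc/cpuinfo") as f:
#     count = virt_flag_count(f.read())

sample = "flags : fpu vme de pse tsc msr pae mce vmx smx est tm2\n" * 4
print(virt_flag_count(sample))  # 4 (one match per logical CPU line)
```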
```mermaid
graph TD
    A[dockvirt up] --> B{config.yaml exists?}
    B -->|No| C[Create default config.yaml]
    B -->|Yes| D[Load configuration]
    C --> D
    D --> E{OS image exists locally?}
    E -->|No| F[Download image from URL]
    E -->|Yes| G[Use local image]
    F --> G
    G --> H[Render cloud-init templates]
    H --> I[Create cloud-init ISO]
    I --> J[Create VM disk with backing file]
    J --> K[Run virt-install]
    K --> L[VM ready with Docker + Caddy]
```
```mermaid
graph TD
    A[User] -->|dockvirt up| B[Create VM]
    B --> C[Install Docker]
    B --> D[Configure Network]
    C --> E[Pull Container Image]
    D --> F[Setup Port Forwarding]
    E --> G[Start Container]
    F --> G
    G --> H[Access Application]
```
### System Architecture
```
+------------------------------------------------------------------+
|                           HOST SYSTEM                            |
+------------------------------------------------------------------+
| dockvirt CLI                                                     |
|   +-- config.py        (configuration management)                |
|   +-- image_manager.py (OS image downloading)                    |
|   +-- vm_manager.py    (VM creation/destruction)                 |
|   +-- cli.py           (user interface)                          |
+------------------------------------------------------------------+
| ~/.dockvirt/                                                     |
|   +-- config.yaml (default configuration)                        |
|   +-- images/     (OS image cache)                               |
|   +-- vm_name/    (cloud-init files for each VM)                 |
+------------------------------------------------------------------+
| libvirt/KVM                                                      |
|   +-- virt-install (VM creation)                                 |
|   +-- virsh        (VM management)                               |
|   +-- qemu-kvm     (virtualization)                              |
+------------------------------------------------------------------+
                                |
                                v
+------------------------------------------------------------------+
|                         VIRTUAL MACHINE                          |
+------------------------------------------------------------------+
| Ubuntu/Fedora OS + cloud-init                                    |
|   +-- Docker Engine  (automatically installed)                   |
|   +-- docker-compose (runs containers)                           |
|   +-- Caddy          (reverse proxy on ports 80/443)             |
|   +-- App Container  (your application)                          |
+------------------------------------------------------------------+
```
## Configuration
### Configuration Hierarchy
`dockvirt` uses a layered configuration system:
1. **Global config** (`~/.dockvirt/config.yaml`) - System-wide defaults
2. **Project config** (`.dockvirt` file) - Project-specific defaults
3. **CLI parameters** - Override any defaults
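The precedence rules can be pictured with a small merge sketch. This is illustrative only (dockvirt's actual logic lives in `config.py`); the function and argument names here are hypothetical:

```python
def resolve_config(global_cfg, project_cfg, cli_args):
    """Later layers win: global config < project .dockvirt < CLI flags."""
    merged = dict(global_cfg)
    merged.update(project_cfg)
    # CLI parameters override everything, but only when actually given
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

cfg = resolve_config(
    {"os": "ubuntu22.04", "mem": 2048},   # ~/.dockvirt/config.yaml
    {"mem": 4096, "port": 80},            # project .dockvirt file
    {"port": 8080, "os": None},           # CLI: --port 8080, no --os given
)
print(cfg)  # {'os': 'ubuntu22.04', 'mem': 4096, 'port': 8080}
```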
### Global Configuration
`dockvirt` automatically creates a configuration file at `~/.dockvirt/config.yaml` on its first run:
```yaml
default_os: ubuntu22.04
images:
  ubuntu22.04:
    name: ubuntu22.04
    url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
    variant: ubuntu22.04
  fedora38:
    name: fedora38
    url: https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
    variant: fedora38
```

Note: The key `os_images` is also accepted for backward compatibility. The CLI merges both `images` and `os_images` automatically.
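That backward-compatibility merge can be sketched like this (an illustration, not dockvirt's actual code; whether `images` wins on key clashes is an assumption):

```python
def merged_images(cfg: dict) -> dict:
    """Combine the `images` and legacy `os_images` mappings into one dict.
    Entries under `images` take precedence on duplicate keys (assumed)."""
    images = dict(cfg.get("os_images", {}))
    images.update(cfg.get("images", {}))
    return images

cfg = {
    "images": {"ubuntu22.04": {"variant": "ubuntu22.04"}},
    "os_images": {"fedora38": {"variant": "fedora38"}},
}
print(sorted(merged_images(cfg)))  # ['fedora38', 'ubuntu22.04']
```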
Create a .dockvirt file in your project directory:
# VM Configuration
name=my-project
domain=my-project.local
image=nginx:latest
port=80
os=ubuntu22.04
# Resource Allocation (optional)
mem=4096 # RAM in MB
disk=20 # Disk in GB
cpus=2 # Number of vCPUs
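Since the file is plain `key=value` lines with `#` comments, a reader for it fits in a few lines of Python. This is a sketch of the format, not dockvirt's actual parser:

```python
def parse_dockvirt(text: str) -> dict:
    """Parse a .dockvirt file: key=value pairs; '#' starts a comment."""
    cfg = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)      # only the first '=' splits
        cfg[key.strip()] = value.strip()
    return cfg

sample = """\
name=my-project
domain=my-project.local
mem=4096  # RAM in MB
"""
print(parse_dockvirt(sample))
# {'name': 'my-project', 'domain': 'my-project.local', 'mem': '4096'}
```

Splitting on the first `=` only means values like `net=bridge=br0` keep their embedded `=` intact.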
From now on, Docker images are built automatically inside the VM! You no longer need to build images on the host.
# Old way (no longer necessary):
# docker build -t my-app:latest .
# dockvirt up --image my-app:latest
# New way - just run:
cd my-project/ # directory with a Dockerfile
dockvirt up --name my-app --domain my-app.local --image my-app:latest --port 80
# The Dockerfile and your app files are automatically copied to the VM and built there!
The easiest way is to create a .dockvirt file in your project directory (like an .env file):
# Create the .dockvirt file
cat > .dockvirt << EOF
name=my-app
domain=my-app.local
image=my-app:latest
port=80
os=ubuntu22.04
mem=4096
disk=20
cpus=2
EOF
# Now, just run (in the directory with the Dockerfile):
dockvirt up
# Use the default OS (ubuntu22.04)
dockvirt up \
--name my-app \
--domain my-app.local \
--image nginx:latest \
--port 80
# Or choose a specific OS
dockvirt up \
--name fedora-app \
--domain fedora-app.local \
--image httpd:latest \
--port 80 \
--os fedora38
After creating the VM, dockvirt will display its IP address. Add it to your /etc/hosts file:
<ip_address> my-app.local
Tip: If you use a reverse proxy (Caddy) inside the VM, IP-based checks may require a Host header. You can verify with:
curl -H 'Host: my-app.local' http://<ip_address>/
Use the built-in ip subcommand:
dockvirt ip --name <vm_name>
Note: The VM image installs and enables qemu-guest-agent, so IP detection works with both NAT and bridged networking.
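If you need the address in a script, the extraction amounts to pulling the first IPv4 address out of `virsh domifaddr` output. A minimal Python sketch (the sample output format below follows libvirt's usual table, but is an assumption):

```python
import re

def first_ipv4(domifaddr_output: str):
    """Return the first IPv4 address found in `virsh domifaddr <vm>` output."""
    m = re.search(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b", domifaddr_output)
    return m.group(1) if m else None

sample = (
    " Name       MAC address          Protocol     Address\n"
    " vnet0      52:54:00:aa:bb:cc    ipv4         192.168.122.50/24\n"
)
print(first_ipv4(sample))  # 192.168.122.50
```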
By default, VMs use libvirt NAT (network=default). To expose a VM directly in your LAN, use a Linux bridge (e.g., br0) and run with --net bridge=br0.
sudo nmcli con add type bridge ifname br0 con-name br0
sudo nmcli con add type bridge-slave ifname enp3s0 master br0
sudo nmcli con modify br0 ipv4.method auto ipv6.method auto
sudo nmcli con up br0
dockvirt up --net bridge=br0
# or in the .dockvirt file:
net=bridge=br0
With bridge networking, the VM receives a LAN IP and is visible to other machines on your network.
The .dockvirt file has priority over the default parameters, but CLI parameters override everything.
Scenario: Each SaaS customer gets a completely isolated application instance in a separate VM.
# First build your application image
docker build -t myapp:v2.1 .
# Customer A
dockvirt up --name client-a --domain client-a.myaas.com --image myapp:v2.1 --os ubuntu22.04
# Customer B (using different image version)
dockvirt up --name client-b --domain client-b.myaas.com --image nginx:latest --os fedora38
# Customer C (beta tester)
dockvirt up --name client-c --domain beta.myaas.com --image myapp:v2.1 --os ubuntu22.04
Result:
Scenario: The entire development team gets identical environments with a single command.
# .dockvirt-stack (multi-app)
stack:
  frontend:
    image: myapp-frontend:latest
    domain: app.dev.local
    os: ubuntu22.04
  backend:
    image: myapp-api:latest
    domain: api.dev.local
    os: ubuntu22.04
  database:
    image: postgres:15
    domain: db.dev.local
    os: fedora38
# Note: Stack deployment is a planned feature
# For now, create individual VMs:
# Developer One
dockvirt up --name dev-john-frontend --domain app.dev-john.local --image myapp:latest --port 3000
dockvirt up --name dev-john-api --domain api.dev-john.local --image myapp:latest --port 8080
# Developer Two
dockvirt up --name dev-jane-frontend --domain app.dev-jane.local --image myapp:latest --port 3000
dockvirt up --name dev-jane-api --domain api.dev-jane.local --image myapp:latest --port 8080
We have prepared several practical examples to showcase the possibilities of the simplified API:
Each example uses the simplified API: you no longer need to provide image paths or OS variants!
DockerVirt includes comprehensive diagnostic tools for validating HTTPS domains and troubleshooting connection issues:
Test all aspects of HTTPS connectivity including DNS, port accessibility, SSL certificates, and content:
# Test HTTPS connection comprehensively
python3 scripts/https_connection_tester.py https://your-domain.dockvirt.dev:8443
# Example output includes:
# ✅ DNS Resolution: domain -> IP
# ✅ Port Connectivity: Port accessible
# ✅ SSL Certificate: Details and verification status
# ✅ HTTP Content: Response and headers
# Headless Browser Test: Automated browser testing
When encountering HSTS policies or certificate trust issues:
# Generate multiple bypass solutions
python3 scripts/hsts_certificate_bypass.py your-domain.dockvirt.dev 8443
# Solutions provided:
# 1. Alternative domain without HSTS (recommended)
# 2. Firefox developer profile with disabled certificate checks
# 3. Chromium with certificate bypass flags
# 4. Manual HSTS cache clearing instructions
# 5. Locally trusted certificate generation
Problem: "SEC_ERROR_UNKNOWN_ISSUER" + HSTS Policy
# Quick fix - Use alternative domain:
# 1. Run bypass script to create https-demo.local
python3 scripts/hsts_certificate_bypass.py
# 2. Access via new domain (no HSTS):
# https://https-demo.local:8443/
# 3. Or use Firefox developer profile:
scripts/firefox-dev-https.sh https://your-domain.dockvirt.dev:8443/
Problem: "Unable to Connect" - VM Not Responding
# Diagnose connection issues:
python3 scripts/https_connection_tester.py https://domain:port
# Common fixes:
# - Wait for VM to fully boot (60s+)
# - Check if VM service is running inside
# - Verify port is accessible: nc -zv IP PORT
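The `nc -zv IP PORT` check can equally be done with Python's standard library. A minimal sketch, handy when scripting VM health checks:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (like `nc -zv`)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: wait for a VM's HTTP port before curling it
# if port_open("192.168.122.50", 80): ...
```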
For production-like local HTTPS without browser warnings:
# Method 1: Clear HSTS cache manually
# Firefox: about:networking#hsts -> Delete domain
# Method 2: Use alternative domain
# Generated automatically by bypass script
# Method 3: Install local CA (advanced)
# Creates trusted certificates in /tmp/https-certs/
# Install the missing package
sudo apt install cloud-image-utils
# Or on RPM-based systems
sudo dnf install cloud-utils
# Add your user to the libvirt group
sudo usermod -a -G libvirt $USER
newgrp libvirt
# Restart the service
sudo systemctl restart libvirtd
When using the system libvirt (qemu:///system), VMs run as the qemu user and must be able to traverse your home and read VM files. On Fedora/SELinux you may also need proper labels.
Fix (safe to apply):
# Allow qemu to traverse your home
sudo setfacl -m u:qemu:x "$HOME"
# Give qemu read access on dockvirt files
sudo setfacl -R -m u:qemu:rx "$HOME/.dockvirt"
sudo find "$HOME/.dockvirt" -type f -name '*.qcow2' -exec setfacl -m u:qemu:rw {} +
sudo find "$HOME/.dockvirt" -type f -name '*.iso' -exec setfacl -m u:qemu:r {} +
# SELinux labels (Fedora/SELinux)
# IMPORTANT: Label only image files, not the entire directory
# If you previously labeled the whole tree, remove that rule first:
# sudo semanage fcontext -d -t svirt_image_t "$HOME/.dockvirt(/.*)?"
sudo semanage fcontext -a -t svirt_image_t "$HOME/.dockvirt(/.*)?\\.qcow2"
sudo semanage fcontext -a -t svirt_image_t "$HOME/.dockvirt(/.*)?\\.iso"
sudo restorecon -Rv "$HOME/.dockvirt"
Tips:
- Use `virsh --connect qemu:///system <subcommand>` to manage system-session VMs.
- Consider a system location (e.g., `/var/lib/libvirt/images/dockvirt`) and store VM files there to avoid ACLs on `$HOME`.

# Check if virtualization is enabled in your BIOS
egrep -c '(vmx|svm)' /proc/cpuinfo
# On WSL2, make sure Hyper-V is enabled
# In PowerShell as Administrator:
# Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
# Check which ports Windows is using
netstat -an | findstr LISTENING
# In WSL2, all VMs have isolated ports
dockvirt up --name app1 --domain app1.local --image nginx --port 80
dockvirt up --name app2 --domain app2.local --image apache --port 80
# Both run without conflicts!
Use Dockvirt Doctor to diagnose and optionally fix environment issues. This is especially helpful on Homebrew Python (PEP 668 externally-managed environments).
# Quick diagnostics
make doctor
python3 scripts/doctor.py --summary
# Detailed logging + file log
python3 scripts/doctor.py --verbose --log-file ~/.dockvirt/doctor.log
# Auto-fix common issues (may require sudo and re-login for groups)
make doctor-fix
Avoid running make ... with sudo. Doctor already detects SUDO_USER and acts on your real home (/home/<user>), but invoking make with sudo may lead to confusing paths and permissions.
# Use your preferred Python (example shows 3.13)
PY=$(command -v python3.13 || command -v python3)
# Create a venv that can access system packages (e.g. python3-libvirt)
$PY -m venv --system-site-packages .venv-3.13
source .venv-3.13/bin/activate
# Install the project without forcing libvirt-python from pip
pip install -U pip setuptools wheel
pip install -e . --no-deps
pip install jinja2 click pyyaml
# Verify and diagnose
python -m dockvirt.cli --help
python3 scripts/doctor.py --summary
sudo systemctl enable --now libvirtd
sudo virsh net-define /usr/share/libvirt/networks/default.xml || true
sudo virsh net-start default || true
sudo virsh net-autostart default
Tip: The CLI runs a preflight network check and will print hints if the default network is missing or inactive.
Use the built-in Automation Agent to validate your setup end-to-end, including domain reachability. It can optionally apply safe fixes (ACL/SELinux for qemu:///system and /etc/hosts entries) using sudo.
# Summary mode (no sudo changes): tests examples, prints report
make agent
# Auto-fix mode (sudo): doctor --fix, default network, ACL/SELinux, /etc/hosts
make agent-fix
# Filter examples, skip host Docker build (image built inside VM)
PY=.venv-3.13/bin/python
$PY scripts/agent.py run --example 1-static-nginx-website --skip-host-build
# Select OS variants to test
$PY scripts/agent.py run --os ubuntu22.04 --os fedora38 --skip-host-build
# Report location
cat agent_report.md
Optional local LLM remediation: you can enable a small local model (via Ollama) to propose and apply safe fixes automatically.
export DOCKVIRT_USE_LLM=1
# Optional model selection (default: llama3.2:3b)
export DOCKVIRT_LLM_MODEL=llama3.2:3b
make agent-fix
Note: Do not run sudo make agent-fix. The agent will request sudo where needed and stream commands in the console.
What it does:
- Runs `dockvirt up` per example and OS variant.
- Reads the project config (`.dockvirt`) and checks HTTP via the domain.
- Adds entries to `/etc/hosts` (auto mode) when a domain doesn't resolve.
- Tears everything down with `dockvirt down`.

Note: Image generation is a planned feature, not yet implemented
# These commands are planned for future releases:
# dockvirt generate-image --type deb-package --output my-app.deb
# dockvirt generate-image --type rpm-package --output my-app.rpm
# For now, use standard VM deployment:
dockvirt up --name production-app --domain app.local --image nginx:latest --port 80
Note: Raspberry Pi support is planned for future releases
# This feature is planned for future releases:
# dockvirt generate-image --type raspberry-pi --output rpi-dockvirt.img
# For now, use standard x86_64 deployment
Note: ISO generation is planned for future releases
# This feature is planned for future releases:
# dockvirt generate-image --type pc-iso --output production-server.iso
Example production-stack.yaml:
apps:
  frontend:
    image: mycompany/frontend:v2.1
    domain: app.company.com
    port: 3000
  api:
    image: mycompany/api:v2.1
    domain: api.company.com
    port: 8080
  monitoring:
    image: grafana/grafana:latest
    domain: monitoring.company.com
    port: 3000

config:
  auto_start: true
  ssl_enabled: true
  backup_enabled: true
# Use Podman instead of Docker
export DOCKVIRT_RUNTIME=podman
dockvirt up --name my-app --image nginx:latest
# Or in the .dockvirt file
runtime=podman
name=my-app
image=nginx:latest
The repository contains a Makefile to facilitate the development process. See the CONTRIBUTING.md file to learn how to contribute to the project's development.
If you're developing locally inside this repository, prefer using the project virtualenv to avoid conflicts with system or Homebrew installations:
make install
.venv-3.13/bin/dockvirt --help
# If your PATH resolves to another dockvirt (e.g., Homebrew), use the venv binary explicitly:
which dockvirt
./.venv-3.13/bin/dockvirt up
Tom Sapletta - An experienced programmer and open-source enthusiast. Passionate about automation and creating tools that make developers' lives easier.
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
To validate README/Makefile commands in a reproducible container with libvirt/KVM, use these Makefile targets (they call scripts in `scripts/docker-test/`):
make docker-test-build # build the test image
make docker-test-quick # doctor + command tests + build (no VM)
make docker-test-full # full flow: doctor-fix, e2e, examples, agent (VMs)
make docker-test-shell # interactive shell for debugging
Scripts:
```
+----------------------+             +---------------------------------+
| Host (Linux)         |             | Docker Container                |
|----------------------|             |---------------------------------|
| Docker, /dev/kvm?    |   mount     | libvirtd + qemu-kvm + virsh     |
| Network: host mode   | <---------> | /workspace (repo bind-mounted)  |
|                      |             | Python venv (.venv-3.13)        |
+----------------------+             | Makefile targets (tests)        |
                                     | Default libvirt network (NAT)   |
                                     +---------------------------------+
```
```mermaid
flowchart TD
    A[make agent / agent-fix] --> B[Doctor check / fix]
    B --> C[Ensure libvirt default network]
    C --> D[Iterate examples]
    D --> E[dockvirt up]
    E --> F[Wait for IP]
    F --> G[HTTP by IP]
    G --> H[DNS resolve domain]
    H --> I[HTTP by domain]
    I --> J[dockvirt down]
    J --> K[Report: agent_report.md]
```
Explore different use cases with our examples:
Run a multi-container application with a single command:
# Navigate to the example
cd examples/5-docker-compose
# Start the VM with dockvirt
dockvirt up
# SSH into the VM
ssh -p 2222 ubuntu@$(virsh domifaddr compose-demo | grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b' | head -1)
# Inside the VM, start the Docker Compose services
cd /var/lib/dockvirt/compose-demo
docker-compose up -d
This brings up the services defined in the example's docker-compose.yml.

Access the application at: http://compose.dockvirt.dev