
Proxmox Virtual Machines

Request and manage virtual machines through ARROW’s integrated VM provisioning system powered by Proxmox Virtual Environment.

ARROW’s VM request system provides a streamlined workflow for provisioning virtual machines:

  • Self-Service Requests: Submit VM requests through the ARROW console
  • Automated Provisioning: Requests are automatically processed and deployed to Proxmox infrastructure
  • Real-Time Monitoring: Track build progress with live WebSocket logs and status updates
  • VPN Integration: Automatic NetBird VPN setup for secure VM access
  • Lifecycle Management: Manage VM usage periods and completion with automated cleanup

To request a VM, navigate to Device Requests in the ARROW console and select + New Request with device type Virtual Machine.

Required Information:

| Field | Description |
| --- | --- |
| VM Name | Unique identifier for the virtual machine |
| Client | Associated client organization |
| Request Period | Start and end dates for VM usage |
| VM Type | Target format (QCOW2 for Proxmox deployment) |
| Images | Base image(s) to include in the VM |
| Consultants | Team members who will access the VM |
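The required fields can be assembled into a request payload for submission. The sketch below shows one way to do this in Python; the exact field names and payload shape are assumptions for illustration, not the documented API schema.

```python
from datetime import date

def build_vm_request(name, client, start, end, images, consultants):
    """Assemble a VM device-request payload with the required fields.

    Field names here are hypothetical; consult the ARROW API for the
    actual schema expected by POST /api/device_requests.
    """
    payload = {
        "device_type": "Virtual Machine",
        "vm_name": name,
        "client": client,
        "request_period": {"start": start.isoformat(), "end": end.isoformat()},
        "vm_type": "qcow2",          # target format for Proxmox deployment
        "images": images,            # base image identifiers
        "consultants": consultants,  # team members who will access the VM
    }
    missing = [k for k, v in payload.items() if v in (None, "", [])]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return payload
```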

The following sequence diagram illustrates the complete workflow from request submission to VM deployment:

```mermaid
sequenceDiagram
    participant User
    participant Console
    participant Backend as PocketBase Backend
    participant B2 as B2 Storage
    participant GitHub as GitHub Actions
    participant Builder as Build Server
    participant Proxmox as Proxmox VE
    participant NetBird

    User->>Console: Submit VM Request
    Console->>Backend: POST /api/device_requests
    Backend->>Backend: Validate request & Generate VM serial (VM-XXXXX)
    Backend->>Backend: Create device_settings record
    Backend->>Backend: Create vm_build_tasks record (status: queued)

    Backend->>GitHub: POST webhook (event_type: vm_imaging_request)
    GitHub->>Builder: Trigger GitHub Actions runner
    Builder->>Backend: GET /api/vm-build/tasks (poll for work)
    Backend-->>Builder: Return BuildTask with configs

    Builder->>B2: Download base image
    Builder->>Builder: Apply customizations via Ansible
    Builder->>Backend: POST /api/vm-build/task-status (progress updates)
    Builder->>Backend: WebSocket logs via /api/vm-build/logs/stream

    Builder->>Proxmox: Deploy customized VM
    Proxmox->>NetBird: Register VM as VPN peer
    NetBird->>Backend: Confirm peer registration

    Builder->>Backend: POST /api/vm-build/task-status (status: completed)
    Backend->>Backend: Update device_request status to "fulfilled"
    Backend->>Console: Real-time status update
    Console->>User: Display VM access details
```
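The backend assigns each request a serial in the VM-XXXXX format shown in the diagram. The backend's actual generation scheme is not documented here; this sketch simply produces a random five-digit suffix to illustrate the format.

```python
import secrets

def generate_vm_serial():
    """Produce a serial in the documented VM-XXXXX format.

    Hypothetical sketch: the real backend may derive the suffix
    differently (e.g. sequentially), but the shape is the same.
    """
    return f"VM-{secrets.randbelow(100000):05d}"
```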

Monitor your request status in the Device Requests page:

| Status | Description | Backend State |
| --- | --- | --- |
| Pending | Request submitted, awaiting approval | device_request.status = "pending" |
| Approved | Request approved, build queued | vm_build_tasks.status = "queued" |
| Imaging | VM image being customized | vm_build_tasks.status = "building" |
| Provisioning | VM being deployed to Proxmox | vm_build_tasks.status = "building" |
| Configuring | Post-deployment configuration in progress | vm_build_tasks.status = "building" |
| Fulfilled | VM ready for use | device_request.status = "fulfilled" |
| Failed | Build encountered an error | vm_build_tasks.status = "failed" |
```mermaid
stateDiagram-v2
    [*] --> queued: Task Created
    queued --> started: Builder Claims Task
    started --> building: Build Begins
    building --> completed: Success
    building --> failed: Error Occurred
    building --> cancelling: Cancel Requested
    cancelling --> cancelled: Cleanup Done
    failed --> queued: Rebuild Triggered
    completed --> [*]
    cancelled --> [*]
```
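The console statuses are derived from the backend records in the table. A minimal mapping sketch follows; the split of "building" into Imaging, Provisioning, and Configuring is assumed to come from a build stage field, which may differ from the real derivation.

```python
def console_status(device_request_status, build_task_status=None, stage=None):
    """Map backend record states to the status shown in the console.

    `stage` is a hypothetical discriminator for the three console
    statuses that share vm_build_tasks.status = "building".
    """
    if device_request_status == "fulfilled":
        return "Fulfilled"
    if build_task_status == "failed":
        return "Failed"
    if build_task_status == "building":
        return {"imaging": "Imaging", "provisioning": "Provisioning"}.get(stage, "Configuring")
    if build_task_status == "queued":
        return "Approved"
    return "Pending"
```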

Once approved, your VM request enters the automated provisioning pipeline:

  1. Build Task Creation: A build task is queued in the system
  2. GitHub Actions Trigger: The imaging workflow is triggered via webhook
  3. Image Customization: Base image is customized with organization settings
  4. Software Installation: Selected applications installed via Ansible
  5. Proxmox Deployment: Customized image deployed as a VM
  6. VPN Registration: VM registered with NetBird for secure access
  7. Access Configuration: Consultant access groups configured

VMs are provisioned with resources based on the selected configuration:

  • CPU: Virtual CPU cores allocated from Proxmox cluster
  • Memory: RAM allocation based on workload requirements
  • Storage: Disk space provisioned from shared storage pools
  • Network: Virtual network interfaces with VPN connectivity

After your VM is provisioned, access is provided through the NetBird VPN:

  1. Install NetBird Client: Download from the VPN management page
  2. Configure Management URL: Set your organization’s VPN endpoint
  3. Authenticate: Log in using your organization’s identity provider
  4. Connect: Your VPN client will automatically connect to assigned VMs

Once the VM is ready, connection information is available in the console:

  • VPN IP Address: Internal VPN address for the VM
  • Hostname: NetBird hostname for DNS resolution
  • Access Credentials: Initial login credentials (if applicable)

VM access is restricted to assigned consultants:

  • Only consultants listed on the device request can access the VM
  • Access is enforced through NetBird group policies
  • One-way access policies prevent VMs from connecting back to user workstations

For more details on access control, see Network Access Control.

Track VM provisioning progress through the console:

  • Status Updates: Real-time status changes displayed in the request list
  • Build Logs: View detailed logs through the build monitoring interface
  • Progress Indicators: Visual progress bars show completion percentage
  • Error Notifications: Immediate notification if build fails
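The progress indicators render the build task's completion percentage. A minimal sketch of such a textual rendering, assuming the percentage comes from the task's progress field:

```python
def render_progress(percent, width=20):
    """Render a textual progress bar for a build's completion percentage."""
    percent = max(0, min(100, percent))   # clamp to the valid 0-100 range
    filled = round(width * percent / 100)
    return f"[{'#' * filled}{'-' * (width - filled)}] {percent}%"
```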

Access build logs to troubleshoot issues or verify configuration:

  1. Navigate to your device request in the console
  2. Click View Build Logs to open the log viewer
  3. Logs stream in real-time during active builds
  4. Historical logs remain available after completion

For detailed build monitoring information, see Build Monitoring.

If your project requires additional time:

  1. Navigate to the Devices page
  2. Click the […] menu for your VM
  3. Select Edit Request Details
  4. Update the end date as needed

When your engagement ends, complete the VM to release resources:

  1. Ensure all data is backed up or exported
  2. Navigate to the device in the console
  3. Initiate the completion process

Completion Actions:

  • VM is shut down and removed from Proxmox
  • NetBird peer is deleted from the VPN
  • Access control groups are cleaned up
  • Resources are returned to the pool

You can complete a VM before the scheduled end date:

  1. Confirm you no longer need VM access
  2. Submit completion request through the console
  3. The system processes cleanup automatically

When a VM is provisioned, the following settings are stored in the device_settings collection:

| Field | Description | Example |
| --- | --- | --- |
| nb_device_name | NetBird VPN device name for peer identification | arrow-vm-001 |
| build_job_info | JSON object containing build status, progress, and server info | See below |
| serial_number | Unique VM identifier (format: VM-XXXXX) | VM-12345 |
| arrow_pw | Generated password for arrow user | Auto-generated |
| root_pw | Generated root password | Auto-generated |
| api_key | API key for builder authentication | Auto-generated |

Build Job Info Structure:

```json
{
  "status": "completed",
  "progress": 100,
  "stage": "deployment",
  "stage_name": "VM Ready",
  "current_step": "Completed",
  "build_server": "builder-1",
  "build_time_seconds": 1245,
  "download_url": "https://..."
}
```
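A build_job_info record can be summarized into a one-line status string, for example for display or logging. A sketch using only the fields shown above:

```python
import json

def summarize_build(build_job_info_json):
    """Condense a build_job_info JSON document into one status line."""
    info = json.loads(build_job_info_json)
    minutes, seconds = divmod(info.get("build_time_seconds", 0), 60)
    return (f"{info['stage_name']}: {info['status']} "
            f"({info['progress']}%, {minutes}m{seconds:02d}s on {info['build_server']})")
```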

ARROW supports two VM deployment configurations:

| Type | Code | VPN Group | Description |
| --- | --- | --- | --- |
| Proxmox VM | pvm | pvm | Deployed on ARROW’s managed Proxmox cluster |
| External VM | vm | vm | Deployed on client infrastructure (VMware, VirtualBox, QEMU) |

Each VM build creates a vm_build_tasks record:

| Field | Description |
| --- | --- |
| task_id | Unique task identifier with timestamp |
| device_settings | Reference to device_settings record |
| organization | Owning organization |
| status | Current build status |
| progress | Percentage complete (0-100) |
| assigned_server | Build server handling the task |
| build_time | Duration in seconds |
| error_message | Details if build failed |
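The fields above map naturally onto a typed record. A sketch mirroring the table, where the Python types are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BuildTask:
    """Mirror of the vm_build_tasks record; types are assumed."""
    task_id: str
    device_settings: str                # reference to device_settings record
    organization: str                   # owning organization
    status: str                         # e.g. queued, building, completed
    progress: int = 0                   # percentage complete, 0-100
    assigned_server: Optional[str] = None
    build_time: Optional[int] = None    # duration in seconds
    error_message: Optional[str] = None

    def is_terminal(self):
        """True once the task can no longer change state (see state diagram)."""
        return self.status in ("completed", "failed", "cancelled")
```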

The VM completion endpoint (POST /api/vm/complete) performs automated cleanup:

```mermaid
flowchart TD
    A[POST /api/vm/complete] --> B[Authenticate User]
    B --> C[Find VM by serial_number]
    C --> D[Verify Organization Match]
    D --> E[Get device_request]
    E --> F[Retrieve device_settings]
    F --> G[Get NetBird Integration Config]
    G --> H[Create NetBirdClient]
    H --> I[Fetch All VPN Peers]
    I --> J[Match Peer by nb_device_name]
    J --> K{Peer Found?}
    K -->|Yes| L[Delete VPN Peer]
    K -->|No| M[Log Warning - Continue]
    L --> N[Delete device-{id} Group]
    M --> N
    N --> O[Remove Users from consultants-{id} Group]
    O --> P[Delete Access Policies]
    P --> Q[Update device_request status = complete]
    Q --> R[Return Success Response]
```
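The cleanup order and the "log warning, continue" branch can be sketched as follows. The callables stand in for NetBird and PocketBase operations (the real handler is in Go); the key property shown is that a missing or failing VPN peer does not abort cleanup.

```python
import logging

logger = logging.getLogger("vm_complete")

def complete_vm(serial, find_peer, delete_peer, delete_group, update_request):
    """Sketch of the completion cleanup order from the flowchart above.

    `find_peer`, `delete_peer`, `delete_group`, and `update_request` are
    hypothetical stand-ins for the NetBird and backend operations.
    """
    try:
        peer = find_peer(serial)
        if peer is None:
            logger.warning("no VPN peer found for %s; continuing", serial)
        else:
            delete_peer(peer)
    except Exception as exc:               # graceful cleanup: log and continue
        logger.warning("peer cleanup failed for %s: %s", serial, exc)
    delete_group(serial)                   # remove device-{id} group and policies
    update_request(serial, "complete")     # mark the device_request complete
    return {"status": "ok", "serial": serial}
```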

The VM completion workflow is handled by backend/api/vm_complete/handlers.go:

  • Authentication: Requires valid user session
  • Organization Isolation: Users can only complete VMs in their organization
  • Graceful Cleanup: Continues even if NetBird operations fail
  • Audit Trail: All actions logged via app.Logger()
Best Practices:

When requesting a VM:

  • Plan Resources: Estimate CPU, memory, and storage needs
  • Select Images: Choose appropriate base images for your use case
  • Assign Consultants: Ensure all team members are listed
  • Set Dates: Allow sufficient time for the engagement

During usage:

  • Monitor Resources: Check VM performance through available tools
  • Maintain Access: Keep the VPN client updated and connected
  • Backup Data: Regularly export important data
  • Report Issues: Contact support for any VM problems

Before completion:

  • Export Data: Save all necessary data before completion
  • Notify Team: Inform assigned consultants of pending completion
  • Complete Promptly: Release resources when no longer needed