From mboxrd@z Thu Jan 1 00:00:00 1970
From: Luis Chamberlain <mcgrof@kernel.org>
To: Chuck Lever, Daniel Gomez, kdevops@lists.linux.dev
Cc: Luis Chamberlain
Subject: [PATCH 2/3] nixos: add NixOS support as third bringup option with libvirt integration
Date: Wed, 27 Aug 2025 02:32:13 -0700
Message-ID: <20250827093215.3540056-3-mcgrof@kernel.org>
In-Reply-To: <20250827093215.3540056-1-mcgrof@kernel.org>
References: <20250827093215.3540056-1-mcgrof@kernel.org>
X-Mailing-List: kdevops@lists.linux.dev
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This commit adds NixOS as a third bringup option alongside guestfs and
SKIP_BRINGUP, providing a declarative and reproducible way to provision
test VMs using NixOS's functional package management system with full
libvirt integration.
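For illustration, the directory-based SSH key naming this patch describes
could be derived roughly as follows. This is a minimal sketch only: the
hash algorithm and the choice of path components are assumptions, not the
actual contents of scripts/nixos_ssh_key_name.py.

```python
import hashlib
from pathlib import Path


def nixos_ssh_key_name(kdevops_dir: str) -> str:
    """Sketch of per-directory SSH key naming (illustrative only).

    The real logic lives in scripts/nixos_ssh_key_name.py; the exact
    hash and path components used there may differ.
    """
    path = Path(kdevops_dir).resolve()
    # The last two path components keep the key name human-readable.
    suffix = "-".join(path.parts[-2:])
    # A short hash of the full path keeps distinct checkouts collision-free.
    digest = hashlib.sha256(str(path).encode()).hexdigest()[:8]
    return f"kdevops-nixos-{suffix}-{digest}"


print(nixos_ssh_key_name("/home/user/work/kdevops"))
# e.g. kdevops-nixos-work-kdevops-<8 hex chars>
```

Keying on the resolved path means two kdevops checkouts in different
directories always get distinct keys, which is the stated goal of letting
multiple instances coexist.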
Libvirt Integration:

- NixOS VMs are managed through the libvirt system session for
  professional VM lifecycle management
- Uses standard libvirt networking with DHCP assignment
  (192.168.122.x range)
- No port forwarding needed - direct SSH access to VM IP addresses
- Standard virsh commands for VM management (start, shutdown,
  destroy, console)
- Integrates with existing libvirt infrastructure and monitoring tools

SSH Session Management:

- SSH keys are dynamically generated based on directory location using
  the format: ~/.ssh/kdevops-nixos--
- This ensures unique keys per kdevops instance, preventing conflicts
  between multiple deployments
- SSH config entries are automatically managed during bringup
- Direct connections to DHCP-assigned IPs (no port forwarding
  complexity)

Path Compatibility:

- NixOS uses different system paths than traditional Linux
  distributions
- Python interpreter: /run/current-system/sw/bin/python3
  (not /usr/bin/python3)
- Bash shell: /run/current-system/sw/bin/bash (not /bin/bash)
- Templates automatically detect and use the correct paths for NixOS

VM Management:

- Professional VM lifecycle through the libvirt system session
- XML-based VM configuration with proper resource allocation
- QCOW2 disk images with virtio drivers for performance
- Automatic network configuration via libvirt's default network
- Full integration with existing libvirt monitoring and management
  tools

Workflow Support:

The following workflows have initial NixOS support covering package
dependency resolution only; actual runtime testing will be needed
later:

- fstests: Filesystem testing (XFS, Btrfs, EXT4)
- blktests: Block layer testing (NVMe, SCSI, NBD)
- selftests: Linux kernel selftests
- mmtests: Memory management performance testing
- sysbench: Database performance benchmarking
- pynfs: NFS protocol testing
- ltp: Linux Test Project
- gitr: Git regression testing

Key Features:

- Declarative configuration through Nix expressions
- Reproducible builds using Nix flakes
- Automatic dependency resolution for workflows
- Directory-based isolation for multiple kdevops instances
- Full libvirt integration for professional VM management
- Automatic SSH configuration during bringup
- Standard networking without port forwarding complexity

This implementation provides a modern, functional approach to VM
provisioning that leverages both NixOS's strengths in reproducibility
and libvirt's professional VM management capabilities.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .gitignore                                    |   3 +
 defconfigs/nixos                              |  27 +
 docs/kdevops-nixos.md                         | 404 ++++
 kconfigs/Kconfig.bringup                      |  22 +-
 kconfigs/Kconfig.nixos                        | 101 ++++
 nixos/flake.nix                               |  32 ++
 .../files/scripts/detect_libvirt_session.sh   |  26 +
 playbooks/nixos.yml                           | 516 ++++++
 .../devconfig/tasks/install-deps/main.yml     |   1 +
 playbooks/roles/devconfig/tasks/main.yml      |   4 +-
 playbooks/roles/gen_hosts/tasks/main.yml      |  15 +
 .../roles/gen_hosts/templates/fstests.j2      |  20 +
 playbooks/roles/gen_hosts/templates/hosts.j2  |  16 +
 playbooks/roles/gen_nodes/tasks/main.yml      |  24 +
 .../roles/gen_nodes/templates/nixos_nodes.j2  |  14 +
 .../roles/update_etc_hosts/tasks/main.yml     |   2 +
 .../templates/nixos/configuration.nix.j2      | 138 +++++
 playbooks/templates/nixos/flake.nix.j2        |  38 ++
 .../nixos/hardware-configuration.nix.j2       |  42 ++
 .../templates/nixos/run-vm-wrapper.sh.j2      | 159 ++++++
 playbooks/templates/nixos/vm-libvirt.xml.j2   |  96 ++++
 playbooks/templates/nixos/vms.nix.j2          |  45 ++
 .../templates/nixos/workflow-deps.nix.j2      | 127 +++++
 playbooks/update_ssh_config_nixos.yml         |  57 ++
 scripts/detect_libvirt_session.sh             |  26 +
 scripts/nixos.Makefile                        |  93 ++++
 scripts/nixos_ssh_key_name.py                 |  55 ++
 scripts/provision.Makefile                    |   4 +
 scripts/status_nixos.sh                       |  57 ++
 scripts/update_ssh_config_nixos.py            | 133 +++++
 30 files changed, 2294 insertions(+), 3 deletions(-)
 create mode 100644 defconfigs/nixos
 create mode 100644 docs/kdevops-nixos.md
 create mode 100644 kconfigs/Kconfig.nixos
 create mode 100644 nixos/flake.nix
 create mode 100755 playbooks/files/scripts/detect_libvirt_session.sh
 create mode 100644 playbooks/nixos.yml
 create mode 100644 playbooks/roles/gen_nodes/templates/nixos_nodes.j2
 create mode 100644 playbooks/templates/nixos/configuration.nix.j2
 create mode 100644 playbooks/templates/nixos/flake.nix.j2
 create mode 100644 playbooks/templates/nixos/hardware-configuration.nix.j2
 create mode 100644 playbooks/templates/nixos/run-vm-wrapper.sh.j2
 create mode 100644 playbooks/templates/nixos/vm-libvirt.xml.j2
 create mode 100644 playbooks/templates/nixos/vms.nix.j2
 create mode 100644 playbooks/templates/nixos/workflow-deps.nix.j2
 create mode 100644 playbooks/update_ssh_config_nixos.yml
 create mode 100755 scripts/detect_libvirt_session.sh
 create mode 100644 scripts/nixos.Makefile
 create mode 100755 scripts/nixos_ssh_key_name.py
 create mode 100755 scripts/status_nixos.sh
 create mode 100755 scripts/update_ssh_config_nixos.py

diff --git a/.gitignore b/.gitignore
index 75e4712d..2bea9d48 100644
--- a/.gitignore
+++ b/.gitignore
@@ -102,3 +102,6 @@ scripts/kconfig/.nconf-cfg
 Kconfig.passthrough_libvirt.generated
 archive/
+
+# NixOS generated files
+nixos/generated/
diff --git a/defconfigs/nixos b/defconfigs/nixos
new file mode 100644
index 00000000..c510e341
--- /dev/null
+++ b/defconfigs/nixos
@@ -0,0 +1,27 @@
+CONFIG_NIXOS=y
+CONFIG_LIBVIRT=y
+
+# Disable mirror setup for NixOS
+CONFIG_ENABLE_LOCAL_LINUX_MIRROR=n
+CONFIG_USE_LOCAL_LINUX_MIRROR=n
+CONFIG_INSTALL_LOCAL_LINUX_MIRROR=n
+CONFIG_MIRROR_INSTALL=n
+
+CONFIG_NIXOS_USE_FLAKES=y
+CONFIG_NIXOS_CHANNEL="nixos-unstable"
+CONFIG_NIXOS_ENABLE_WORKFLOW_DEPS=y
+CONFIG_NIXOS_LIBVIRT_SESSION_INFERENCE=y
+
+CONFIG_NIXOS_VM_MEMORY_MB=4096
+CONFIG_NIXOS_VM_DISK_SIZE_GB=20
+CONFIG_NIXOS_VM_VCPUS=4
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=n
+
+CONFIG_KDEVOPS_TRY_REFRESH_REPOS=y
+CONFIG_KDEVOPS_TRY_UPDATE_SYSTEMS=y
+CONFIG_KDEVOPS_TRY_INSTALL_KDEV_TOOLS=y
diff --git a/docs/kdevops-nixos.md
b/docs/kdevops-nixos.md
new file mode 100644
index 00000000..f22fddf3
--- /dev/null
+++ b/docs/kdevops-nixos.md
@@ -0,0 +1,404 @@
+# NixOS Support in kdevops
+
+## Overview
+
+kdevops provides NixOS as a third bringup option alongside guestfs and
+SKIP_BRINGUP. This integration offers a declarative, reproducible way to
+provision test VMs using NixOS's functional package management and
+configuration system.
+
+## Architecture
+
+### Virtualization Method
+
+NixOS VMs in kdevops are managed through libvirt using the system session.
+This provides:
+- Proper VM lifecycle management through libvirt
+- Standard DHCP-based networking on the default libvirt network
+- Integration with existing libvirt infrastructure
+- Professional VM management with `virsh` commands
+
+### SSH Session Management
+
+NixOS VMs use a sophisticated SSH session management system that enables
+multiple kdevops instances to coexist without conflicts:
+
+#### Dynamic Key Generation
+SSH keys are dynamically generated based on the directory location of your
+kdevops instance:
+- **Key Naming Format**: `~/.ssh/kdevops-nixos--`
+- **Example**: For `/home/user/work/kdevops/`, the key would be
+  `~/.ssh/kdevops-nixos-work-kdevops-a1b2c3d4`
+- **Benefit**: Prevents SSH key conflicts when running multiple kdevops
+  instances
+- **Implementation**: `scripts/nixos_ssh_key_name.py` generates consistent
+  key names
+
+#### Network Configuration
+VMs use standard libvirt networking with DHCP assignment:
+- **Network**: Connected to libvirt's default network (virbr0)
+- **IP Assignment**: Dynamic DHCP allocation from 192.168.122.x range
+- **SSH Access**: Direct connection to VM IP address (no port forwarding
+  needed)
+- **Integration**: Works with existing libvirt network infrastructure
+
+#### Automatic SSH Configuration
+The system automatically manages SSH client configuration:
+- **Config Management**: `update_ssh_config_nixos.py` updates `~/.ssh/config`
+- **Per-VM Entries**: Each VM gets a dedicated SSH config block
+- **Key Features**:
+  ```
+  Host kdevops
+      HostName 192.168.122.169
+      Port 22
+      User kdevops
+      IdentityFile ~/.ssh/kdevops-nixos--
+      StrictHostKeyChecking no
+      UserKnownHostsFile /dev/null
+  ```
+- **Development Mode**: Host key checking disabled for convenience (not for
+  production)
+
+### Path Compatibility
+
+NixOS uses different system paths than traditional Linux distributions. The
+implementation automatically handles:
+
+| Component | Traditional Path | NixOS Path |
+|-----------|-----------------|------------|
+| Python interpreter | `/usr/bin/python3` | `/run/current-system/sw/bin/python3` |
+| Bash shell | `/bin/bash` | `/run/current-system/sw/bin/bash` |
+
+These paths are automatically detected and used in:
+- Generated Ansible inventory files
+- Ansible playbook tasks
+- Shell script execution
+
+## Supported Workflows
+
+### Currently Supported
+
+The following workflows have initial NixOS support with automatic dependency
+resolution:
+
+- **fstests**: Filesystem testing (XFS, Btrfs, EXT4)
+- **blktests**: Block layer testing (NVMe, SCSI, NBD)
+- **selftests**: Linux kernel selftests
+- **mmtests**: Memory management performance testing
+- **sysbench**: Database performance benchmarking
+- **pynfs**: NFS protocol testing
+- **ltp**: Linux Test Project
+- **gitr**: Git regression testing
+
+### Adding New Workflow Support
+
+To add support for a new workflow:
+
+1. Update `playbooks/templates/nixos/workflow-deps.nix.j2`
+2. Add the necessary NixOS packages for your workflow
+3. Test with `make defconfig-nixos && make bringup`
+
+## Quick Start
+
+### Basic NixOS VM
+
+```bash
+make defconfig-nixos
+make
+make bringup
+```
+
+### Workflow-Specific Configurations
+
+```bash
+# For XFS filesystem testing
+make defconfig-nixos-xfs
+make
+make bringup
+make fstests
+
+# For block layer testing
+make defconfig-nixos-blktests
+make
+make bringup
+make blktests
+```
+
+## VM Management
+
+### Libvirt Integration
+
+NixOS VMs are managed through the standard libvirt system session, providing
+professional VM lifecycle management:
+
+```bash
+# VM lifecycle management
+virsh start kdevops      # Start the VM
+virsh shutdown kdevops   # Graceful shutdown
+virsh destroy kdevops    # Force stop
+virsh reboot kdevops     # Restart the VM
+
+# VM information and monitoring
+virsh list --all         # List all VMs and their states
+virsh dominfo kdevops    # Show VM details
+virsh domifaddr kdevops  # Get VM IP address
+virsh console kdevops    # Connect to VM console
+```
+
+#### Libvirt Features
+- **Standard Management**: Uses industry-standard libvirt commands
+- **System Integration**: Integrates with existing libvirt infrastructure
+- **Network Management**: Automatic DHCP IP assignment and DNS resolution
+- **Resource Control**: CPU, memory, and disk configuration via libvirt XML
+- **Monitoring**: Built-in resource monitoring and logging
+- **Snapshots**: Full snapshot and cloning capabilities (if needed)
+
+#### VM Configuration
+VMs are configured with libvirt XML templates:
+- **Memory**: Configurable via `nixos_vm_memory_mb` (default: 4096MB)
+- **CPUs**: Set by `nixos_vm_vcpus` (default: 4)
+- **Networking**: Connected to default libvirt network with DHCP
+- **Storage**: QCOW2 disk images with virtio drivers
+- **Boot**: Direct disk boot (no kernel/initrd specification needed)
+
+### Access Methods
+
+#### Primary Access (SSH)
+```bash
+# Using SSH config entry (auto-generated during bringup)
+ssh kdevops
+
+# Direct SSH to DHCP-assigned IP
+ssh kdevops@192.168.122.169
+
+# Via Ansible (uses SSH config automatically)
+ansible kdevops -m ping
+```
+
+#### Alternative Access
+- **Libvirt Console**: `virsh console kdevops` (direct VM console)
+- **VNC Access**: Available via libvirt VNC configuration if enabled
+- **Serial Console**: Configured through libvirt XML template
+
+### VM Lifecycle Operations
+
+#### Starting VMs
+```bash
+# Start all NixOS VMs (full automation)
+make bringup
+
+# Start specific VM manually
+virsh start kdevops
+```
+
+#### Stopping VMs
+```bash
+# Graceful shutdown all VMs
+make destroy
+
+# Stop specific VM
+/path/to/nixos/run-hostname-wrapper.sh stop
+```
+
+#### VM Health Checks
+```bash
+# Check all VM status
+scripts/status_nixos.sh
+
+# Check specific VM
+/path/to/nixos/run-hostname-wrapper.sh status
+```
+
+## Configuration
+
+### Key Configuration Files
+
+- `kconfigs/Kconfig.nixos`: NixOS-specific options
+- `nixos/flake.nix`: Nix flake for reproducible builds
+- `nixos/generated/`: Generated NixOS configurations
+- `playbooks/nixos.yml`: Ansible playbook for VM management
+
+### Configuration Options
+
+Key options in menuconfig:
+
+- `NIXOS_VM_MEMORY_MB`: VM memory allocation (default: 4096)
+- `NIXOS_VM_VCPUS`: Number of virtual CPUs (default: 4)
+- `NIXOS_VM_DISK_SIZE_GB`: Disk size (default: 20)
+- `NIXOS_SSH_PORT`: SSH port (default: 22)
+- `NIXOS_USE_FLAKES`: Enable Nix flakes (default: yes)
+
+## Troubleshooting
+
+### Common Issues
+
+#### SSH Connection Refused
+- Ensure VM is running: `./run-kdevops-wrapper.sh status`
+- Check the VM's DHCP-assigned address: `virsh domifaddr kdevops`
+- Verify SSH key: `ls ~/.ssh/kdevops-nixos-*`
+
+#### Python/Bash Not Found
+- The templates automatically handle NixOS paths
+- If issues persist, check `ansible_python_interpreter` in hosts file
+- Should be set to `/run/current-system/sw/bin/python3`
+
+#### VM Won't Start
+- Check disk space: NixOS images require ~20GB
+- Verify QEMU installation: `which qemu-system-x86_64`
+- Check the VM state and boot output: `virsh list --all` and
+  `virsh console kdevops`
+
+### Debug Mode
+
+Enable debug output for troubleshooting:
+
+```bash
+make menuconfig
+# Navigate to: Bring up methods -> NixOS options
+# Enable: Enable debug mode for NixOS provisioning
+```
+
+## Technical Details
+
+### File Structure
+
+```
+kdevops/
+├── nixos/
+│   ├── flake.nix                    # Nix flake configuration
+│   ├── generated/                   # Generated NixOS configs
+│   │   ├── configuration.nix        # Main NixOS configuration
+│   │   ├── hardware-configuration.nix
+│   │   ├── workflow-deps.nix        # Workflow dependencies
+│   │   └── vms.nix                  # VM definitions
+│   └── result -> /nix/store/...     # Built VM image
+├── playbooks/
+│   ├── nixos.yml                    # Main NixOS playbook
+│   └── templates/nixos/             # Jinja2 templates
+└── scripts/
+    ├── nixos.Makefile               # NixOS-specific make targets
+    ├── nixos_ssh_key_name.py        # SSH key generation
+    └── update_ssh_config_nixos.py   # SSH config management
+```
+
+### Implementation Architecture
+
+#### Core Design Decisions
+
+1. **libvirt Over Native QEMU**
+   - **Rationale**: Professional VM lifecycle management with a standard
+     toolchain
+   - **Benefits**: Standard `virsh` commands, DHCP networking on the default
+     network, integration with existing libvirt infrastructure
+   - **Trade-off**: Requires a running libvirt daemon on the host
+
+2. **Directory-Based Instance Isolation**
+   - **SSH Keys**: Unique per kdevops directory location
+   - **Port Ranges**: Configurable base ports prevent conflicts
+   - **VM Storage**: Separate directories for each instance
+   - **Result**: Multiple kdevops instances can run simultaneously
+
+3. **Declarative Configuration via Nix**
+   - **Single Source of Truth**: `configuration.nix` defines entire VM state
+   - **Reproducibility**: Nix flakes pin exact package versions
+   - **Rollback Support**: Previous configurations can be restored
+   - **Package Management**: Automatic dependency resolution for workflows
+
+4. **Ansible Integration Strategy**
+   - **Path Detection**: Templates automatically detect NixOS vs traditional
+     Linux
+   - **Python Interpreter**: Set correctly in generated inventory
+   - **Shell Commands**: Use appropriate bash path based on OS
+   - **Distribution Tasks**: Skip non-applicable tasks for NixOS
+
+5. **Workflow Dependency Management**
+   - **Template-Based**: `workflow-deps.nix.j2` generates package lists
+   - **Automatic Inclusion**: Enabled workflows get required packages
+   - **Extensible**: Easy to add support for new workflows
+   - **Cached Builds**: Nix caches built packages for faster provisioning
+
+## Integration with kdevops Workflows
+
+### Workflow Compatibility
+
+NixOS integrates seamlessly with existing kdevops workflows through:
+
+1. **Automatic Package Resolution**: Each workflow's dependencies are
+   automatically installed
+2. **Path Translation**: Templates handle path differences transparently
+3. **Ansible Compatibility**: Playbooks work across NixOS and traditional
+   Linux
+4. **Result Collection**: Standard kdevops result paths are maintained
+
+### Adding Workflow Support
+
+To enable a new workflow for NixOS:
+
+1. **Identify Dependencies**
+   ```bash
+   # List packages needed for your workflow
+   nix-env -qaP | grep package-name
+   ```
+
+2. **Update Template**
+   Edit `playbooks/templates/nixos/workflow-deps.nix.j2`:
+   ```nix
+   {% if kdevops_workflow_enable_yourworkflow %}
+   # Your workflow dependencies
+   pkgs.package1
+   pkgs.package2
+   {% endif %}
+   ```
+
+3. **Test Integration**
+   ```bash
+   make defconfig-nixos-yourworkflow
+   make bringup
+   make yourworkflow
+   ```
+
+4. **Verify Results**
+   - Check workflow execution completes
+   - Validate results in standard locations
+   - Ensure baseline/dev comparison works
+
+### Workflow-Specific Considerations
+
+#### fstests
+- Kernel modules loaded via NixOS configuration
+- Test devices created as loop devices
+- Results in `workflows/fstests/results/`
+
+#### blktests
+- NVMe/SCSI modules configured in NixOS
+- Block devices accessible via `/dev/`
+- Expunge lists work identically
+
+#### selftests
+- Kernel source mounted via 9P if configured
+- Build dependencies included automatically
+- Parallel execution supported
+
+#### mmtests
+- A/B testing fully supported
+- Performance monitoring tools included
+- Comparison reports work as expected
+
+## Contributing
+
+To contribute NixOS support for additional workflows:
+
+1. Identify required packages for your workflow
+2. Update `workflow-deps.nix.j2` template
+3. Test with a clean build
+4. Submit PR with test results
+
+### Testing Your Changes
+
+```bash
+# Clean build test
+make mrproper
+make defconfig-nixos-yourworkflow
+make bringup
+make yourworkflow
+
+# Verify no missing dependencies
+journalctl -u your-service  # Check for errors
+which required-command      # Verify binaries present
+```
+
+## Limitations
+
+- Currently supports x86_64 architecture only
+- Requires Nix package manager on the host
+- VMs use libvirt's default NAT network (bridged networking is not yet
+  supported)
+- Limited to QEMU/KVM virtualization
+
+## Future Enhancements
+
+Planned improvements:
+- Bridged networking support
+- ARM64 architecture support
+- Distributed build support with Nix
+- Integration with Hydra CI system
diff --git a/kconfigs/Kconfig.bringup b/kconfigs/Kconfig.bringup
index 887d3851..8caf07be 100644
--- a/kconfigs/Kconfig.bringup
+++ b/kconfigs/Kconfig.bringup
@@ -5,6 +5,10 @@ config KDEVOPS_ENABLE_GUESTFS
 	bool
 	output yaml
 
+config KDEVOPS_ENABLE_NIXOS
+	bool
+	output yaml
+
 choice
 	prompt "Node bring up method"
 	default GUESTFS
@@ -39,6 +43,21 @@ config TERRAFORM
 	  If you are not using a cloud environment just disable this.
 
+config NIXOS
+	bool "NixOS declarative configuration with libvirt"
+	select KDEVOPS_ENABLE_NIXOS
+	select EXTRA_STORAGE_SUPPORTS_512
+	select EXTRA_STORAGE_SUPPORTS_1K
+	select EXTRA_STORAGE_SUPPORTS_2K
+	select EXTRA_STORAGE_SUPPORTS_4K
+	select EXTRA_STORAGE_SUPPORTS_LARGEIO
+	help
+	  Use NixOS declarative configuration system to provision VMs with
+	  libvirt. This provides a purely functional approach to VM management
+	  with automatic dependency resolution based on enabled workflows.
+	  NixOS will automatically infer the libvirt session type (system vs
+	  user) based on your distribution's defaults, similar to guestfs.
+
 config SKIP_BRINGUP
 	bool "Skip bring up - bare metal or existing nodes"
 	select EXTRA_STORAGE_SUPPORTS_512
@@ -55,10 +74,11 @@ endchoice
 
 config LIBVIRT
 	bool
-	depends on GUESTFS
+	depends on GUESTFS || NIXOS
 	default y
 
 source "kconfigs/Kconfig.guestfs"
+source "kconfigs/Kconfig.nixos"
 source "terraform/Kconfig"
 
 if LIBVIRT
 source "kconfigs/Kconfig.libvirt"
diff --git a/kconfigs/Kconfig.nixos b/kconfigs/Kconfig.nixos
new file mode 100644
index 00000000..55361215
--- /dev/null
+++ b/kconfigs/Kconfig.nixos
@@ -0,0 +1,101 @@
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+if NIXOS
+
+config NIXOS_STORAGE_DIR
+	string
+	output yaml
+	default "{{ kdevops_storage_pool_path }}/nixos"
+
+config NIXOS_CONFIG_DIR
+	string
+	output yaml
+	default "{{ topdir_path }}/nixos"
+
+config NIXOS_GENERATION_DIR
+	string
+	output yaml
+	default "{{ nixos_config_dir }}/generated"
+
+config NIXOS_USE_FLAKES
+	bool "Use Nix flakes for configuration"
+	output yaml
+	default y
+	help
+	  Use the modern Nix flakes system for managing NixOS configurations.
+	  This provides better reproducibility and dependency management.
+	  If disabled, will use traditional configuration.nix approach.
+
+config NIXOS_CHANNEL
+	string "NixOS channel to use"
+	output yaml
+	default "nixos-unstable"
+	help
+	  The NixOS channel to use for the VMs. Options include:
+	  - nixos-unstable: Latest packages, rolling release
+	  - nixos-24.05: Stable release from May 2024
+	  - nixos-23.11: Stable release from November 2023
+
+config NIXOS_ENABLE_WORKFLOW_DEPS
+	bool "Automatically install workflow dependencies"
+	output yaml
+	default y
+	help
+	  When enabled, NixOS will automatically generate package dependencies
+	  based on all enabled workflows (fstests, blktests, etc.) and include
+	  them in the VM configuration.
+
+config NIXOS_LIBVIRT_SESSION_INFERENCE
+	bool "Automatically infer libvirt session type"
+	output yaml
+	default y
+	help
+	  Automatically detect whether to use libvirt system or user session
+	  based on your distribution's defaults. Similar to guestfs, this will
+	  use system session for most distributions and user session for Fedora.
+
+config NIXOS_CUSTOM_CONFIG_PATH
+	string "Path to custom NixOS configuration template"
+	output yaml
+	default ""
+	help
+	  Optional path to a custom NixOS configuration template that will be
+	  merged with the auto-generated configuration. This allows you to add
+	  custom packages, services, or other NixOS settings.
+
+config NIXOS_VM_MEMORY_MB
+	int "Memory allocation per VM (MB)"
+	output yaml
+	default 4096
+	help
+	  Amount of memory to allocate to each NixOS VM in megabytes.
+
+config NIXOS_VM_DISK_SIZE_GB
+	int "Disk size per VM (GB)"
+	output yaml
+	default 20
+	help
+	  Size of the primary disk for each NixOS VM in gigabytes.
+
+config NIXOS_VM_VCPUS
+	int "Number of vCPUs per VM"
+	output yaml
+	default 4
+	help
+	  Number of virtual CPUs to allocate to each NixOS VM.
+
+config NIXOS_SSH_PORT
+	int "SSH port for NixOS VMs"
+	output yaml
+	default 22
+	help
+	  SSH port to use for connecting to NixOS VMs.
+
+config NIXOS_DEBUG_MODE
+	bool "Enable debug mode for NixOS provisioning"
+	default n
+	help
+	  Enable verbose output and debugging information during NixOS
+	  VM provisioning and configuration generation.
+
+endif # NIXOS
diff --git a/nixos/flake.nix b/nixos/flake.nix
new file mode 100644
index 00000000..530d5980
--- /dev/null
+++ b/nixos/flake.nix
@@ -0,0 +1,32 @@
+{
+  description = "kdevops NixOS VMs";
+
+  inputs = {
+    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+  };
+
+  outputs = { self, nixpkgs }: {
+    nixosConfigurations = {
+      "kdevops" = nixpkgs.lib.nixosSystem {
+        system = "x86_64-linux";
+        modules = [
+          ./generated/configuration.nix
+          ./generated/hardware-configuration.nix
+          ./generated/workflow-deps.nix
+          ({ ... }: {
+            networking.hostName = "kdevops";
+          })
+        ];
+      };
+    };
+
+    # Build all VMs
+    defaultPackage.x86_64-linux =
+      nixpkgs.legacyPackages.x86_64-linux.writeShellScriptBin "build-vms" ''
+        echo "Building NixOS VMs..."
+        echo "Building kdevops..."
+        nix build .#nixosConfigurations.kdevops.config.system.build.vm
+        echo "All VMs built successfully!"
+      '';
+  };
+}
diff --git a/playbooks/files/scripts/detect_libvirt_session.sh b/playbooks/files/scripts/detect_libvirt_session.sh
new file mode 100755
index 00000000..caea9367
--- /dev/null
+++ b/playbooks/files/scripts/detect_libvirt_session.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+#
+# Detect the appropriate libvirt session type (system vs user) based on
+# distribution defaults, similar to how guestfs handles it.
+ +SCRIPTS_DIR=$(dirname $0) +source ${SCRIPTS_DIR}/libvirt_pool.sh + +OS_FILE="/etc/os-release" +LIBVIRT_URI="qemu:///system" # Default to system + +# Get the pool variables which includes distribution detection +get_pool_vars + +# Fedora defaults to user session +if [[ "$USES_QEMU_USER_SESSION" == "y" ]]; then + LIBVIRT_URI="qemu:///session" +fi + +# Override detection if explicitly configured +if [[ -n "$CONFIG_LIBVIRT_URI_PATH" ]]; then + LIBVIRT_URI="$CONFIG_LIBVIRT_URI_PATH" +fi + +echo "$LIBVIRT_URI" diff --git a/playbooks/nixos.yml b/playbooks/nixos.yml new file mode 100644 index 00000000..eda34586 --- /dev/null +++ b/playbooks/nixos.yml @@ -0,0 +1,516 @@ +--- +# SPDX-License-Identifier: copyleft-next-0.3.1 + +- name: Install NixOS dependencies on localhost + hosts: localhost + gather_facts: true + tags: install-deps + tasks: + - name: Check if nix is installed + ansible.builtin.command: which nix + register: nix_check + # TODO: Review - was ignore_errors: true + failed_when: false # Always succeed - review this condition + changed_when: false + + - name: Install nix package manager + become: true + when: nix_check.rc != 0 + block: + - name: Download nix installer + ansible.builtin.get_url: + url: https://nixos.org/nix/install + dest: /tmp/install-nix.sh + mode: '0755' + + - name: Install nix + ansible.builtin.shell: | + sh /tmp/install-nix.sh --daemon --yes + args: + creates: /nix + + - name: Ensure libvirt is installed + become: true + ansible.builtin.package: + name: + - libvirt0 + - qemu-kvm + - libvirt-daemon-system + - libvirt-clients + state: present + when: ansible_os_family == "Debian" + + - name: Ensure libvirt is installed (RedHat) + become: true + ansible.builtin.package: + name: + - libvirt + - qemu-kvm + - libvirt-daemon + state: present + when: ansible_os_family == "RedHat" + +- name: Generate NixOS configurations + hosts: localhost + gather_facts: true + vars_files: + - "{{ playbook_dir }}/../extra_vars.yaml" + tags: generate-configs + 
+  tasks:
+    - name: Create NixOS directories
+      ansible.builtin.file:
+        path: "{{ item }}"
+        state: directory
+        mode: '0755'
+      loop:
+        - "{{ nixos_config_dir }}"
+        - "{{ nixos_generation_dir }}"
+        - "{{ nixos_storage_dir }}"
+
+    - name: Ensure SSH key exists for configuration
+      block:
+        - name: Determine SSH key path based on directory
+          ansible.builtin.command: python3 {{ playbook_dir }}/../scripts/nixos_ssh_key_name.py --path
+          register: ssh_key_path_result
+          changed_when: false
+
+        - name: Set SSH key path
+          ansible.builtin.set_fact:
+            nixos_ssh_key_path: "{{ ssh_key_path_result.stdout | trim }}"
+
+        - name: Generate SSH key for NixOS VMs if not exists
+          community.crypto.openssh_keypair:
+            path: "{{ nixos_ssh_key_path }}"
+            type: rsa
+            size: 2048
+            comment: "kdevops@nixos"
+            force: false
+
+        - name: Read SSH public key
+          ansible.builtin.slurp:
+            src: "{{ nixos_ssh_key_path }}.pub"
+          register: ssh_public_key
+
+        - name: Set SSH key in fact
+          ansible.builtin.set_fact:
+            nixos_ssh_authorized_key: "{{ ssh_public_key['content'] | b64decode | trim }}"
+
+    - name: Template base NixOS configuration
+      ansible.builtin.template:
+        src: nixos/configuration.nix.j2
+        dest: "{{ nixos_generation_dir }}/configuration.nix"
+        mode: '0644'
+
+    - name: Template hardware configuration
+      ansible.builtin.template:
+        src: nixos/hardware-configuration.nix.j2
+        dest: "{{ nixos_generation_dir }}/hardware-configuration.nix"
+        mode: '0644'
+
+    - name: Generate workflow dependencies configuration
+      ansible.builtin.template:
+        src: nixos/workflow-deps.nix.j2
+        dest: "{{ nixos_generation_dir }}/workflow-deps.nix"
+        mode: '0644'
+      when: nixos_enable_workflow_deps | bool
+
+    - name: Debug SSH key path
+      ansible.builtin.debug:
+        msg: "Using SSH key: {{ hostvars['localhost']['nixos_ssh_key_path'] | default('NOT SET') }}"
+
+    - name: Generate VM definitions
+      ansible.builtin.template:
+        src: nixos/vms.nix.j2
+        dest: "{{ nixos_generation_dir }}/vms.nix"
+        mode: '0644'
+
+    - name: Generate flake.nix if enabled
+      ansible.builtin.template:
+        src: nixos/flake.nix.j2
+        dest: "{{ nixos_config_dir }}/flake.nix"
+        mode: '0644'
+      when: nixos_use_flakes | bool
+
+# The setup phase is integrated into generate-configs to ensure SSH keys are available
+
+- name: Build and deploy NixOS VMs
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: build-vms
+  tasks:
+    - name: Create disk image configuration
+      ansible.builtin.copy:
+        content: |
+          { config, lib, pkgs, ... }:
+
+          {
+            imports = [
+              ./configuration.nix
+            ];
+
+            # Ensure proper boot configuration for disk image
+            boot.loader.grub.device = lib.mkForce "/dev/vda";
+            boot.loader.grub.enable = lib.mkForce true;
+
+            fileSystems."/" = lib.mkForce {
+              device = "/dev/disk/by-label/nixos";
+              fsType = "ext4";
+              autoResize = true;
+            };
+          }
+        dest: "{{ nixos_generation_dir }}/disk-image.nix"
+
+    - name: Check if NixOS disk image already exists
+      ansible.builtin.stat:
+        path: "{{ nixos_storage_dir }}/nixos-image-result"
+      register: disk_image_exists
+
+    - name: Build NixOS disk image
+      ansible.builtin.shell: |
+        # Source nix profile and set PATH
+        export PATH="/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:$PATH"
+        if [ -f /nix/var/nix/profiles/default/etc/profile.d/nix.sh ]; then
+          . /nix/var/nix/profiles/default/etc/profile.d/nix.sh
+        fi
+
+        # Configure Nix to use local mirror if available
+        {% if nixos_use_local_mirror is defined and nixos_use_local_mirror and nixos_mirror_url is defined and nixos_mirror_url != "" %}
+        export NIX_CONFIG="substituters = {{ nixos_mirror_url }} https://cache.nixos.org"
+        echo "Using local Nix cache mirror: {{ nixos_mirror_url }}"
+        {% endif %}
+
+        cd {{ nixos_generation_dir }}
+
+        # Build a QCOW2 disk image with NixOS installed
+        echo "Building NixOS disk image (this may take a while)..."
+
+        # Create a wrapper expression for make-disk-image.nix
+        cat > make-image.nix <<'EOF'
+        let
+          pkgs = import <nixpkgs> {};
+          lib = pkgs.lib;
+
+          # Build a complete NixOS system configuration
+          nixosSystem = import "${pkgs.path}/nixos" {
+            configuration = {
+              imports = [
+                ./configuration.nix
+                ./disk-image.nix
+              ];
+
+              # Ensure we have a bootable system
+              boot.loader.grub.enable = lib.mkForce true;
+              boot.loader.grub.device = lib.mkForce "/dev/vda";
+              boot.loader.grub.configurationLimit = 1;
+
+              # Critical: ensure the system can boot
+              boot.kernelModules = [ "virtio_pci" "virtio_blk" "virtio_net" ];
+              boot.initrd.availableKernelModules = [ "virtio_pci" "virtio_blk" "virtio_net" ];
+
+              # Ensure networking works
+              networking.useDHCP = lib.mkDefault true;
+
+              # Make sure we have a working system
+              system.stateVersion = "24.05";
+
+              # Ensure SSH starts
+              systemd.services.sshd.wantedBy = [ "multi-user.target" ];
+            };
+          };
+        in
+        import "${pkgs.path}/nixos/lib/make-disk-image.nix" {
+          inherit pkgs lib;
+          config = nixosSystem.config;
+          diskSize = 20480;
+          format = "qcow2";
+          partitionTableType = "legacy";
+          # Important: include the bootloader!
+          installBootLoader = true;
+        }
+        EOF
+
+        # Force rebuild by clearing any cached result
+        rm -f {{ nixos_storage_dir }}/nixos-image-result
+
+        # Note: the expression above is not a function, so --arg has no
+        # effect; -o creates the result symlink that we resolve below
+        nix-build make-image.nix \
+          -o {{ nixos_storage_dir }}/nixos-image-result
+
+        # Return the path to the disk image
+        readlink -f {{ nixos_storage_dir }}/nixos-image-result/nixos.qcow2
+      register: build_result
+      changed_when: "'Building NixOS disk image' in build_result.stdout"
+      when: not disk_image_exists.stat.exists
+
+    - name: Get existing disk image path
+      ansible.builtin.shell: |
+        readlink -f {{ nixos_storage_dir }}/nixos-image-result/nixos.qcow2
+      register: existing_image_path
+      when: disk_image_exists.stat.exists
+
+    - name: Store disk image path
+      ansible.builtin.set_fact:
+        nixos_disk_image_path: >-
+          {{
+            build_result.stdout_lines | last | trim
+            if (build_result.stdout_lines is defined)
+            else existing_image_path.stdout | trim
+          }}
+
+    - name: Copy NixOS disk image for each VM
+      ansible.builtin.shell: |
+        SOURCE_IMAGE="{{ nixos_disk_image_path | default(nixos_storage_dir + '/nixos-image-result/nixos.qcow2') }}"
+        TARGET_IMAGE="{{ nixos_storage_dir }}/{{ item }}.qcow2"
+
+        # Remove target if it exists and copy fresh
+        if [ -f "$TARGET_IMAGE" ]; then
+          rm -f "$TARGET_IMAGE"
+        fi
+
+        cp "$SOURCE_IMAGE" "$TARGET_IMAGE"
+        chmod u+w "$TARGET_IMAGE"
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      when: nixos_disk_image_path is defined
+
+    - name: Generate VM wrapper scripts
+      ansible.builtin.template:
+        src: nixos/run-vm-wrapper.sh.j2
+        dest: "{{ nixos_storage_dir }}/run-{{ item }}-wrapper.sh"
+        mode: '0755'
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      loop_control:
+        index_var: vm_idx
+      vars:
+        vm_name: "{{ item }}"
+        vm_index: "{{ vm_idx }}"
+        vm_memory: "{{ nixos_vm_memory_mb | default(4096) }}"
+        vm_vcpus: "{{ nixos_vm_vcpus | default(4) }}"
+
+- name: Ensure default libvirt network is available
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: bringup
+  tasks:
+    - name: Check if default network exists and is active
+      ansible.builtin.shell: virsh net-info default
+      register: default_network_info
+      failed_when: false
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+    - name: Start default network if not active
+      ansible.builtin.shell: virsh net-start default
+      when: default_network_info.rc != 0 or 'Active:' not in default_network_info.stdout or 'yes' not in default_network_info.stdout.split('Active:')[1].split('\n')[0]
+      failed_when: false
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+- name: Provision NixOS VMs with libvirt
+  hosts: baseline,dev
+  gather_facts: false
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: bringup
+  tasks:
+    - name: Check if VM already exists
+      ansible.builtin.shell: virsh domstate "{{ inventory_hostname }}"
+      register: vm_status
+      failed_when: false
+      delegate_to: localhost
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+    - name: Provision VM with libvirt
+      when: vm_status.rc != 0 or 'shut off' in vm_status.stdout
+      delegate_to: localhost
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+      block:
+        - name: Generate libvirt XML for VM
+          ansible.builtin.template:
+            src: nixos/vm-libvirt.xml.j2
+            dest: "{{ nixos_storage_dir }}/{{ inventory_hostname }}.xml"
+          vars:
+            vm_name: "{{ inventory_hostname }}"
+            vm_memory: "{{ nixos_vm_memory_mb | default(4096) }}"
+            vm_vcpus: "{{ nixos_vm_vcpus | default(4) }}"
+            vm_disk: "{{ nixos_storage_dir }}/{{ inventory_hostname }}.qcow2"
+
+        - name: Define VM in libvirt
+          ansible.builtin.shell: virsh define "{{ nixos_storage_dir }}/{{ inventory_hostname }}.xml"
+          failed_when: false
+
+        - name: Start VM
+          ansible.builtin.shell: virsh start "{{ inventory_hostname }}"
+          failed_when: false
+
+    - name: Ensure VM is running
+      ansible.builtin.shell: virsh start "{{ inventory_hostname }}"
+      register: start_result
+      failed_when:
+        - start_result.rc != 0
+        - "'already active' not in start_result.stderr"
+      delegate_to: localhost
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+- name: Setup SSH access for NixOS VMs
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: bringup
+  tasks:
+    - name: Wait for VMs to get IP addresses from DHCP
+      ansible.builtin.shell: |
+        # Use seq rather than {1..90}: the shell module runs /bin/sh,
+        # which does not guarantee bash brace expansion
+        for i in $(seq 1 90); do
+          IP=$(virsh domifaddr {{ item }} --source lease 2>/dev/null | awk '/192\.168\.122\./ {print $4}' | cut -d'/' -f1)
+          if [ -n "$IP" ]; then
+            echo "$IP"
+            exit 0
+          fi
+          sleep 3
+        done
+        exit 1
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      register: vm_ips
+      retries: 2
+      delay: 10
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+    - name: Set VM IP facts
+      ansible.builtin.set_fact:
+        nixos_vm_ips: "{{ dict(groups['all'] | reject('equalto', 'localhost') | list | zip(vm_ips.results | map(attribute='stdout'))) }}"
+
+    - name: Determine SSH key path for SSH config update
+      ansible.builtin.command: python3 {{ playbook_dir }}/../scripts/nixos_ssh_key_name.py --path
+      register: ssh_key_path_for_config
+      changed_when: false
+
+    - name: Wait for SSH to be available on VMs
+      ansible.builtin.wait_for:
+        host: "{{ nixos_vm_ips[item] }}"
+        port: 22
+        delay: 10
+        timeout: 300
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+
+    - name: Update SSH config for NixOS VMs
+      ansible.builtin.command: |
+        python3 {{ playbook_dir }}/../scripts/update_ssh_config_nixos.py update \
+          {{ item }} \
+          {{ nixos_vm_ips[item] }} \
+          22 \
+          kdevops \
+          {{ nixos_ssh_config_file | default(ansible_env.HOME + '/.ssh/config') }} \
+          {{ ssh_key_path_for_config.stdout | trim }} \
+          'NixOS VM'
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      when: nixos_update_ssh_config | default(true) | bool
+
+- name: Show VM access information
+  hosts: localhost
+  gather_facts: false
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: console
+  tasks:
+    - name: Display VM access information
+      ansible.builtin.debug:
+        msg: |
+          NixOS VMs are running and accessible via libvirt.
+
+          SSH Access:
+          {% for vm in groups['all'] | reject('equalto', 'localhost') | list %}
+          - {{ vm }}: ssh {{ vm }}
+          {% endfor %}
+
+          VM Management:
+          {% for vm in groups['all'] | reject('equalto', 'localhost') | list %}
+          - {{ vm }}: virsh {start|shutdown|destroy} {{ vm }}
+          {% endfor %}
+
+          VM Status:
+          - Check status: virsh list --all
+          - Get IP: virsh domifaddr <vm-name>
+
+- name: Destroy NixOS VMs
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: [destroy, never]
+  tasks:
+    - name: Stop VMs using wrapper scripts
+      ansible.builtin.command: "{{ nixos_storage_dir }}/run-{{ item }}-wrapper.sh stop"
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      # TODO: Review - was ignore_errors: true
+      failed_when: false  # Always succeed - review this condition
+
+    - name: Remove SSH config entries for NixOS VMs
+      ansible.builtin.command: |
+        python3 {{ playbook_dir }}/../scripts/update_ssh_config_nixos.py remove \
+          {{ item }} \
+          '' \
+          '' \
+          '' \
+          {{ nixos_ssh_config_file | default(ansible_env.HOME + '/.ssh/config') }} \
+          '' \
+          'NixOS VM'
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      when: nixos_update_ssh_config | default(true) | bool
+      # TODO: Review - was ignore_errors: true
+      failed_when: false  # Always succeed - review this condition
+
+    - name: Remove VM disk images
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/{{ item }}.qcow2"
+        state: absent
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+
+    - name: Remove VM wrapper scripts
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/run-{{ item }}-wrapper.sh"
+        state: absent
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+
+    - name: Remove NixOS disk image symlink
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/nixos-image-result"
+        state: absent
+
+    - name: Remove extra drive directories
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/extra-drives"
+        state: absent
+
+    - name: Clean up generated NixOS configuration
+      ansible.builtin.file:
+        path: "{{ nixos_generation_dir }}"
+        state: absent
+
+    - name: Garbage collect cached NixOS disk images from Nix store
+      ansible.builtin.shell: |
+        # Source nix profile if available
+        if [ -f /nix/var/nix/profiles/default/etc/profile.d/nix.sh ]; then
+          . /nix/var/nix/profiles/default/etc/profile.d/nix.sh
+        fi
+
+        # Find nix-collect-garbage command
+        NIX_COLLECT_GARBAGE=$(which nix-collect-garbage 2>/dev/null || find /nix -name "nix-collect-garbage" -type f 2>/dev/null | head -1)
+
+        if [ -n "$NIX_COLLECT_GARBAGE" ]; then
+          echo "Running Nix garbage collection to remove cached disk images..."
+          sudo $NIX_COLLECT_GARBAGE -d 2>&1 | grep -E "(deleting|freed|store paths)" || true
+        else
+          echo "Warning: nix-collect-garbage not found, cached images may remain"
+        fi
+      register: gc_result
+      failed_when: false
+      changed_when: "'freed' in gc_result.stdout"
diff --git a/playbooks/roles/devconfig/tasks/install-deps/main.yml b/playbooks/roles/devconfig/tasks/install-deps/main.yml
index 68ad9e7b..3cca4d9b 100644
--- a/playbooks/roles/devconfig/tasks/install-deps/main.yml
+++ b/playbooks/roles/devconfig/tasks/install-deps/main.yml
@@ -22,6 +22,7 @@
       - files:
           - "{{ ansible_facts['os_family'] | lower }}.yml"
         skip: true
+  when: ansible_facts['os_family'] != 'NixOS'
   tags: vars
 
 - name: Debian-specific setup
diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
index 2ffa433f..fccd1fcf 100644
--- a/playbooks/roles/devconfig/tasks/main.yml
+++ b/playbooks/roles/devconfig/tasks/main.yml
@@ -197,7 +197,7 @@
       chmod 755 {{ dev_bash_config }}
     fi
   args:
-    executable: /bin/bash
+    executable: "{{ '/run/current-system/sw/bin/bash' if (kdevops_enable_nixos | default(false)) else '/bin/bash' }}"
   when: dev_bash_config_file_copied is success
 
 - name: Copy the developer's favorite bash hacks over for root *if* it exists
@@ -224,7 +224,7 @@
       chmod 755 {{ dev_bash_config_root }}
     fi
   args:
-    executable: /bin/bash
+    executable: "{{ '/run/current-system/sw/bin/bash' if (kdevops_enable_nixos | default(false)) else '/bin/bash' }}"
   when: dev_bash_config_file_copied_root is success
 
 - name: Check to see if system has GRUB2
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index d36790b0..fb63629a 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -79,6 +79,20 @@
   when:
     - not kdevops_workflows_dedicated_workflow
     - ansible_hosts_template.stat.exists
+    - not kdevops_enable_nixos|default(false)|bool
+
+- name: Generate the Ansible inventory file for NixOS
+  tags: ['hosts']
+  ansible.builtin.template:
+    src: "{{ kdevops_hosts_template }}"
+    dest: "{{ ansible_cfg_inventory }}"
+    force: true
+    trim_blocks: True
+    lstrip_blocks: True
+  when:
+    - not kdevops_workflows_dedicated_workflow
+    - ansible_hosts_template.stat.exists
+    - kdevops_enable_nixos|default(false)|bool
 
 - name: Update Ansible inventory access modification time so make sees it updated
   ansible.builtin.file:
@@ -339,6 +353,7 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_fio_tests
    - ansible_hosts_template.stat.exists
+    - not kdevops_enable_nixos|default(false)|bool
 
 - name: Infer enabled mmtests test types
diff --git a/playbooks/roles/gen_hosts/templates/fstests.j2 b/playbooks/roles/gen_hosts/templates/fstests.j2
index 32d90abf..823dbb1e 100644
--- a/playbooks/roles/gen_hosts/templates/fstests.j2
+++ b/playbooks/roles/gen_hosts/templates/fstests.j2
@@ -1,10 +1,18 @@
 [all]
 localhost ansible_connection=local
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}
+{% endif %}
 {% if kdevops_baseline_and_dev %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}-dev
 {% endif %}
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {% if kdevops_loopback_nfs_enable %}
@@ -15,7 +23,11 @@ localhost ansible_connection=local
 ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 [baseline]
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {% if kdevops_loopback_nfs_enable %}
@@ -27,7 +39,11 @@ ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 [dev]
 {% if kdevops_baseline_and_dev %}
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}-dev
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {% if kdevops_loopback_nfs_enable %}
@@ -62,7 +78,11 @@ ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 [krb5]
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {{ kdevops_hosts_prefix }}-nfsd
diff --git a/playbooks/roles/gen_hosts/templates/hosts.j2 b/playbooks/roles/gen_hosts/templates/hosts.j2
index e9441605..0e896481 100644
--- a/playbooks/roles/gen_hosts/templates/hosts.j2
+++ b/playbooks/roles/gen_hosts/templates/hosts.j2
@@ -184,10 +184,18 @@ write-your-own-template-for-your-workflow-and-task
 {% else %}
 [all]
 localhost ansible_connection=local
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}
+{% endif %}
 {% if kdevops_baseline_and_dev == True %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-dev
 {% endif %}
+{% endif %}
 {% if kdevops_enable_iscsi %}
 {{ kdevops_host_prefix }}-iscsi
 {% endif %}
@@ -197,13 +205,21 @@ localhost ansible_connection=local
 [all:vars]
 ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 [baseline]
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}
+{% endif %}
 [baseline:vars]
 ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 [dev]
 {% if kdevops_baseline_and_dev %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-dev
 {% endif %}
+{% endif %}
 [dev:vars]
 ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
 {% if kdevops_enable_iscsi %}
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index b294d294..b1a1946f 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -27,6 +27,12 @@
     mode: "0755"
   when: kdevops_enable_guestfs|bool
 
+- name: Create nixos directory
+  ansible.builtin.file:
+    path: "{{ nixos_config_dir }}"
+    state: directory
+  when: kdevops_enable_nixos | default(false) | bool
+
 - name: Verify Ansible nodes template file exists {{ kdevops_nodes_template_full_path }}
   ansible.builtin.stat:
     path: "{{ kdevops_nodes_template_full_path }}"
@@ -148,6 +154,23 @@
     mode: "0644"
   when:
     - not kdevops_workflows_dedicated_workflow
+    - ansible_nodes_template.stat.exists
+    - not kdevops_enable_nixos|default(false)|bool
+
+- name: Generate the NixOS kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: ['nodes']
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    all_generic_nodes: "{{ generic_nodes }}"
+    nodes: "{{ all_generic_nodes }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+  when:
+    - not kdevops_workflows_dedicated_workflow
+    - ansible_nodes_template.stat.exists
+    - kdevops_enable_nixos|default(false)|bool
 
 - name: Generate the builder kdevops nodes file using nodes file using template as jinja2 source template
@@ -162,6 +185,7 @@
     force: true
   when:
     - bootlinux_builder
+    - ansible_nodes_template.stat.exists
 
 - name: Generate the pynfs kdevops nodes file using nodes file using template as jinja2 source template
diff --git a/playbooks/roles/gen_nodes/templates/nixos_nodes.j2 b/playbooks/roles/gen_nodes/templates/nixos_nodes.j2
new file mode 100644
index 00000000..391b1f10
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/nixos_nodes.j2
@@ -0,0 +1,14 @@
+---
+# Ansible nodes file generated for NixOS VMs
+
+{% for node in nodes %}
+{{ node }}:
+  ansible_host: 192.168.100.{{ loop.index + 1 }}
+  ansible_user: kdevops
+  ansible_ssh_private_key_file: {{ topdir_path }}/.ssh/kdevops_id_rsa
+  ansible_python_interpreter: /run/current-system/sw/bin/python3
+  vm_name: {{ node }}
+  vm_memory_mb: {{ nixos_vm_memory_mb }}
+  vm_vcpus: {{ nixos_vm_vcpus }}
+  vm_disk_size_gb: {{ nixos_vm_disk_size_gb }}
+{% endfor %}
diff --git a/playbooks/roles/update_etc_hosts/tasks/main.yml b/playbooks/roles/update_etc_hosts/tasks/main.yml
index 049411ee..dc40eded 100644
--- a/playbooks/roles/update_etc_hosts/tasks/main.yml
+++ b/playbooks/roles/update_etc_hosts/tasks/main.yml
@@ -57,6 +57,7 @@
   with_items: "{{ ueh_hosts }}"
   when:
     - terraform_private_net_enabled
+    - not (kdevops_enable_nixos | default(false))
 
 - name: Add IP address of all hosts to all hosts
   become: true
@@ -69,6 +70,7 @@
   with_items: "{{ ueh_hosts }}"
   when:
     - not terraform_private_net_enabled
+    - not (kdevops_enable_nixos | default(false))
 
 - name: Fix up hostname on Debian guestfs hosts
   become: true
diff --git a/playbooks/templates/nixos/configuration.nix.j2 b/playbooks/templates/nixos/configuration.nix.j2
new file mode 100644
index 00000000..d5c00fc3
--- /dev/null
+++ b/playbooks/templates/nixos/configuration.nix.j2
@@ -0,0 +1,138 @@
+{ config, pkgs, lib, ... }:
+
+{
+  imports = [
+    ./hardware-configuration.nix
+{% if nixos_enable_workflow_deps %}
+    ./workflow-deps.nix
+{% endif %}
+{% if nixos_custom_config_path != "" %}
+    {{ nixos_custom_config_path }}
+{% endif %}
+  ];
+
+  # Nix configuration
+{% if nixos_use_local_mirror is defined and nixos_use_local_mirror and nixos_mirror_url is defined and nixos_mirror_url != "" %}
+  nix.settings = {
+    substituters = [
+      "{{ nixos_mirror_url }}"
+      "https://cache.nixos.org"
+    ];
+    trusted-substituters = [
+      "{{ nixos_mirror_url }}"
+      "https://cache.nixos.org"
+    ];
+    # Prefer local mirror
+    extra-substituters = [ "{{ nixos_mirror_url }}" ];
+  };
+{% endif %}
+
+  # Boot configuration
+  boot.loader.grub.enable = true;
+  boot.loader.grub.device = "/dev/vda";
+  boot.loader.timeout = 1;
+
+  # Kernel
+  boot.kernelPackages = pkgs.linuxPackages_latest;
+
+  # Enable 9p support if configured
+{% if bootlinux_9p is defined and bootlinux_9p %}
+  boot.kernelModules = [ "9p" "9pnet_virtio" ];
+  boot.initrd.kernelModules = [ "9p" "9pnet_virtio" ];
+{% endif %}
+
+  # Networking
+  networking.useDHCP = lib.mkDefault true;
+
+  # Enable SSH
+  services.openssh = {
+    enable = true;
+    settings = {
+      PermitRootLogin = "yes";
+      PasswordAuthentication = false;
+      PubkeyAuthentication = true;
+    };
+  };
+
+  # Users
+  users.users.root = {
+    openssh.authorizedKeys.keys = [
+{% if nixos_ssh_authorized_key is defined %}
+      "{{ nixos_ssh_authorized_key }}"
+{% else %}
+      # SSH key will be generated during provisioning
+{% endif %}
+    ];
+  };
+
+  users.users.kdevops = {
+    isNormalUser = true;
+    extraGroups = [ "wheel" "libvirt" "kvm" ];
+    openssh.authorizedKeys.keys = [
+{% if nixos_ssh_authorized_key is defined %}
+      "{{ nixos_ssh_authorized_key }}"
+{% else %}
+      # SSH key will be generated during provisioning
+{% endif %}
+    ];
+  };
+
+  # Sudo without password for kdevops user
+  security.sudo.wheelNeedsPassword = false;
+
+  # Basic packages
+  environment.systemPackages = with pkgs; [
+    vim
+    git
+    tmux
+    htop
+    wget
+    curl
+    rsync
+    python3
+    gcc
+    gnumake
+    binutils
+    coreutils
+    findutils
+    procps
+    util-linux
+  ];
+
+  # Enable libvirt for nested virtualization if needed
+  virtualisation.libvirtd.enable = false;
+
+  # Filesystems
+  fileSystems."/" = {
+    device = "/dev/vda1";
+    fsType = "ext4";
+  };
+
+{% if bootlinux_9p is defined and bootlinux_9p %}
+  # 9P mount for shared kernel source
+  fileSystems."/mnt/linux" = {
+    device = "linux_source";
+    fsType = "9p";
+    options = [ "trans=virtio" "version=9p2000.L" "cache=loose" ];
+  };
+{% endif %}
+
+  # Time zone
+  time.timeZone = "UTC";
+
+  # Locale
+  i18n.defaultLocale = "en_US.UTF-8";
+
+  # State version
+  system.stateVersion = "24.05";
+
+  # Enable nix flakes
+  nix.settings.experimental-features = [ "nix-command" "flakes" ];
+
+  # Optimize storage
+  nix.gc = {
+    automatic = true;
+    dates = "weekly";
+    options = "--delete-older-than 7d";
+  };
+}
diff --git a/playbooks/templates/nixos/flake.nix.j2 b/playbooks/templates/nixos/flake.nix.j2
new file mode 100644
index 00000000..52b5b680
--- /dev/null
+++ b/playbooks/templates/nixos/flake.nix.j2
@@ -0,0 +1,38 @@
+{
+  description = "kdevops NixOS VMs";
+
+  inputs = {
+    nixpkgs.url = "github:NixOS/nixpkgs/{{ nixos_channel }}";
+  };
+
+  outputs = { self, nixpkgs }: {
+    nixosConfigurations = {
+{% for node in groups['all'] if node != 'localhost' %}
+      "{{ node }}" = nixpkgs.lib.nixosSystem {
+        system = "x86_64-linux";
+        modules = [
+          ./generated/configuration.nix
+          ./generated/hardware-configuration.nix
+{% if nixos_enable_workflow_deps %}
+          ./generated/workflow-deps.nix
+{% endif %}
+          ({ ... }: {
+            networking.hostName = "{{ node }}";
+          })
+        ];
+      };
+{% endfor %}
+    };
+
+    # Build all VMs
+    defaultPackage.x86_64-linux =
+      nixpkgs.legacyPackages.x86_64-linux.writeShellScriptBin "build-vms" ''
+        echo "Building NixOS VMs..."
+{% for node in groups['all'] if node != 'localhost' %}
+        echo "Building {{ node }}..."
+        nix build .#nixosConfigurations.{{ node }}.config.system.build.vm
+{% endfor %}
+        echo "All VMs built successfully!"
+      '';
+  };
+}
diff --git a/playbooks/templates/nixos/hardware-configuration.nix.j2 b/playbooks/templates/nixos/hardware-configuration.nix.j2
new file mode 100644
index 00000000..bb91bba4
--- /dev/null
+++ b/playbooks/templates/nixos/hardware-configuration.nix.j2
@@ -0,0 +1,42 @@
+{ config, lib, pkgs, modulesPath, ... }:
+
+{
+  imports = [
+    (modulesPath + "/profiles/qemu-guest.nix")
+  ];
+
+  boot.initrd.availableKernelModules = [ "ahci" "xhci_pci" "virtio_pci" "sr_mod" "virtio_blk" ];
+  boot.initrd.kernelModules = [ ];
+  boot.kernelModules = [ "kvm-intel" "kvm-amd" ];
+  boot.extraModulePackages = [ ];
+
+  # Root filesystem
+  fileSystems."/" = {
+    device = "/dev/disk/by-label/nixos";
+    fsType = "ext4";
+  };
+
+  # Boot partition (if UEFI is enabled)
+{% if guestfs_requires_uefi is defined and guestfs_requires_uefi %}
+  fileSystems."/boot" = {
+    device = "/dev/disk/by-label/boot";
+    fsType = "vfat";
+  };
+{% endif %}
+
+  # Swap
+  swapDevices = [ ];
+
+  # Networking
+  networking.useDHCP = lib.mkDefault true;
+  networking.interfaces.eth0.useDHCP = lib.mkDefault true;
+
+  # Hardware configuration
+  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
+  hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
+  hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
+
+  # Virtualization features
+  virtualisation.hypervGuest.enable = false;
+  virtualisation.vmware.guest.enable = false;
+}
diff --git a/playbooks/templates/nixos/run-vm-wrapper.sh.j2 b/playbooks/templates/nixos/run-vm-wrapper.sh.j2
new file mode 100644
index 00000000..2a87c3ea
--- /dev/null
+++ b/playbooks/templates/nixos/run-vm-wrapper.sh.j2
@@ -0,0 +1,159 @@
+#!/bin/bash
+# Wrapper script for NixOS VM: {{ vm_name }}
+# Generated by kdevops
+
+set -e
+
+# Configuration
+VM_NAME="{{ vm_name }}"
+VM_DISK="{{ nixos_storage_dir }}/{{ vm_name }}.qcow2"
+VM_MEMORY="{{ vm_memory | default(4096) }}"
+VM_CPUS="{{ vm_vcpus | default(4) }}"
+SSH_PORT="{{ 10022 + vm_index|default(0)|int }}"
+MONITOR_PORT="{{ vm_monitor_port | default(55555 + vm_index|default(0)|int) }}"
+VNC_PORT="{{ vm_vnc_port | default(5900 + vm_index|default(0)|int) }}"
+
+# Network configuration for SSH access
+# Using user mode networking with port forwarding
+NETWORK_OPTS="hostfwd=tcp::${SSH_PORT}-:22"
+{% if nixos_enable_port_forwards is defined and nixos_enable_port_forwards %}
+{% for port in nixos_port_forwards | default([]) %}
+NETWORK_OPTS="${NETWORK_OPTS},hostfwd=tcp::{{ port.host }}-:{{ port.guest }}"
+{% endfor %}
+{% endif %}
+
+# Shared directories
+{% if nixos_shared_dirs is defined %}
+SHARED_DIRS=""
+{% for dir in nixos_shared_dirs %}
+SHARED_DIRS="${SHARED_DIRS} -virtfs local,path={{ dir.source }},security_model=none,mount_tag={{ dir.tag }}"
+{% endfor %}
+{% endif %}
+
+# Function to start the VM
+start_vm() {
+    if [ -f "/tmp/${VM_NAME}.pid" ] && kill -0 $(cat /tmp/${VM_NAME}.pid) 2>/dev/null; then
+        echo "VM ${VM_NAME} is already running (PID: $(cat /tmp/${VM_NAME}.pid))"
+        return 1
+    fi
+
+    echo "Starting NixOS VM: ${VM_NAME}"
+    echo "  Disk: ${VM_DISK}"
+    echo "  Memory: ${VM_MEMORY}MB"
+    echo "  CPUs: ${VM_CPUS}"
+    echo "  SSH: localhost:${SSH_PORT}"
+    echo "  Monitor: 127.0.0.1:${MONITOR_PORT}"
+    echo "  VNC: :$((VNC_PORT - 5900))"
+
+    # Check if disk exists
+    if [ ! -f "${VM_DISK}" ]; then
+        echo "Error: VM disk image not found: ${VM_DISK}"
+        echo "Please run 'make bringup' to build the NixOS disk image first"
+        return 1
+    fi
+
+    # Check disk image size
+    DISK_SIZE=$(stat -c%s "${VM_DISK}" 2>/dev/null || stat -f%z "${VM_DISK}" 2>/dev/null || echo 0)
+    if [ "$DISK_SIZE" -lt 1048576 ]; then
+        echo "Warning: Disk image appears too small (${DISK_SIZE} bytes)"
+        echo "The image may not contain a proper NixOS installation"
+    fi
+
+    # Create extra storage drives if they don't exist
+    EXTRA_DRIVES_DIR="{{ nixos_storage_dir }}/extra-drives"
+    mkdir -p "${EXTRA_DRIVES_DIR}"
+
+    # Create 4 extra sparse drives for fstests (100GB each)
+    for i in {0..3}; do
+        EXTRA_DRIVE="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra${i}.qcow2"
+        if [ ! -f "${EXTRA_DRIVE}" ]; then
+            echo "Creating extra drive ${i}: ${EXTRA_DRIVE}"
+            qemu-img create -f qcow2 "${EXTRA_DRIVE}" 100G
+        fi
+    done
+
+    # Start QEMU with the NixOS disk image
+    echo "Starting QEMU with NixOS disk image..."
+    qemu-system-x86_64 \
+        -name "${VM_NAME}" \
+        -m "${VM_MEMORY}" \
+        -smp "${VM_CPUS}" \
+        -enable-kvm \
+        -machine pc,accel=kvm \
+        -cpu host \
+        -drive file="${VM_DISK}",if=virtio,format=qcow2 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra0.qcow2",format=qcow2,if=none,id=drv0 \
+        -device virtio-blk-pci,drive=drv0,serial=kdevops0 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra1.qcow2",format=qcow2,if=none,id=drv1 \
+        -device virtio-blk-pci,drive=drv1,serial=kdevops1 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra2.qcow2",format=qcow2,if=none,id=drv2 \
+        -device virtio-blk-pci,drive=drv2,serial=kdevops2 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra3.qcow2",format=qcow2,if=none,id=drv3 \
+        -device virtio-blk-pci,drive=drv3,serial=kdevops3 \
+        -netdev user,id=net0,${NETWORK_OPTS} \
+        -device virtio-net-pci,netdev=net0 \
+        -monitor tcp:127.0.0.1:${MONITOR_PORT},server,nowait \
+        -vnc :$((VNC_PORT - 5900)) \
+        -daemonize \
+        -pidfile "/tmp/${VM_NAME}.pid" \
+        ${SHARED_DIRS:-}
+
+    echo "VM ${VM_NAME} started successfully"
+}
+
+# Function to stop the VM
+stop_vm() {
+    if [ -f "/tmp/${VM_NAME}.pid" ]; then
+        PID=$(cat /tmp/${VM_NAME}.pid)
+        if kill -0 $PID 2>/dev/null; then
+            echo "Stopping VM ${VM_NAME} (PID: $PID)"
+            kill $PID
+            rm -f /tmp/${VM_NAME}.pid
+        else
+            echo "VM ${VM_NAME} is not running"
+            rm -f /tmp/${VM_NAME}.pid
+        fi
+    else
+        echo "VM ${VM_NAME} is not running (no PID file)"
+    fi
+}
+
+# Function to check VM status
+status_vm() {
+    if [ -f "/tmp/${VM_NAME}.pid" ]; then
+        PID=$(cat /tmp/${VM_NAME}.pid)
+        if kill -0 $PID 2>/dev/null; then
+            echo "VM ${VM_NAME} is running (PID: $PID)"
+            return 0
+        else
+            echo "VM ${VM_NAME} is not running (stale PID file)"
+            rm -f /tmp/${VM_NAME}.pid
+            return 1
+        fi
+    else
+        echo "VM ${VM_NAME} is not running"
+        return 1
+    fi
+}
+
+# Main script logic
+case "${1:-start}" in
+    start)
+        start_vm
+        ;;
+    stop)
+        stop_vm
+        ;;
+    status)
+        status_vm
+        ;;
+    restart)
+        stop_vm
+        sleep 2
+        start_vm
+        ;;
+    *)
+        echo "Usage: $0 {start|stop|status|restart}"
+        exit 1
+        ;;
+esac
diff --git a/playbooks/templates/nixos/vm-libvirt.xml.j2 b/playbooks/templates/nixos/vm-libvirt.xml.j2
new file mode 100644
index 00000000..915a6090
--- /dev/null
+++ b/playbooks/templates/nixos/vm-libvirt.xml.j2
@@ -0,0 +1,96 @@
+<domain type='kvm'>
+  <name>{{ vm_name }}</name>
+  <memory unit='MiB'>{{ vm_memory }}</memory>
+  <vcpu placement='static'>{{ vm_vcpus }}</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+  </features>
+  <cpu mode='host-passthrough'/>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2'/>
+      <source file='{{ vm_disk }}'/>
+      <target dev='vda' bus='virtio'/>
+    </disk>
+{% if bootlinux_9p is defined and bootlinux_9p %}
+    <filesystem type='mount' accessmode='passthrough'>
+      <source dir='{{ topdir_path }}/linux'/>
+      <target dir='linux_source'/>
+    </filesystem>
+{% endif %}
+    <interface type='network'>
+      <source network='default'/>
+      <model type='virtio'/>
+    </interface>
+    <serial type='pty'>
+      <target port='0'/>
+    </serial>
+    <console type='pty'>
+      <target type='serial' port='0'/>
+    </console>
+    <graphics type='vnc' port='-1' autoport='yes'/>
+    <video>
+      <model type='cirrus'/>
+    </video>
+  </devices>
+</domain>
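
---

For reviewers: the start/stop/status commands in the generated run-vm-wrapper.sh.j2 all reduce to a PID-file lifecycle around the daemonized QEMU process. A minimal standalone sketch of that pattern follows (the function names mirror the template; the background `sleep` and the temp pidfile path are illustrative stand-ins, not part of this patch):

```shell
#!/bin/sh
# Standalone sketch of the PID-file lifecycle used by the VM wrapper.
# A background `sleep` stands in for the daemonized qemu-system-x86_64.
PIDFILE=$(mktemp -u)

start_vm() {
    # Refuse to start twice: a live PID recorded in the pidfile means "running"
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "already running"
        return 1
    fi
    sleep 60 &
    echo $! > "$PIDFILE"
    echo "started"
}

status_vm() {
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "running"
        return 0
    fi
    # Clean up a stale pidfile, as the wrapper does
    rm -f "$PIDFILE"
    echo "not running"
    return 1
}

stop_vm() {
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
    fi
    echo "stopped"
}

start_vm
status_vm
stop_vm
status_vm || true
```

Run in sequence this prints "started", "running", "stopped", "not running"; the `kill -0` probe is what lets the wrapper distinguish a live VM from a stale pidfile left by a crash.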