public inbox for kdevops@lists.linux.dev
From: Luis Chamberlain <mcgrof@kernel.org>
To: Chuck Lever <cel@kernel.org>, Daniel Gomez <da.gomez@kruces.com>,
	kdevops@lists.linux.dev
Cc: Luis Chamberlain <mcgrof@kernel.org>
Subject: [PATCH 2/3] nixos: add NixOS support as third bringup option with libvirt integration
Date: Wed, 27 Aug 2025 02:32:13 -0700	[thread overview]
Message-ID: <20250827093215.3540056-3-mcgrof@kernel.org> (raw)
In-Reply-To: <20250827093215.3540056-1-mcgrof@kernel.org>

This commit adds NixOS as a third bringup option alongside guestfs and
SKIP_BRINGUP, providing a declarative and reproducible way to provision
test VMs using NixOS's functional package management system with full
libvirt integration.

Libvirt Integration:
- NixOS VMs are managed through the libvirt system session for full VM lifecycle control
- Uses standard libvirt networking with DHCP assignment (192.168.122.x range)
- No port forwarding needed - direct SSH access to VM IP addresses
- Standard virsh commands for VM management (start, shutdown, destroy, console)
- Integrates with existing libvirt infrastructure and monitoring tools

SSH Session Management:
- SSH keys are dynamically generated based on directory location using
  format: ~/.ssh/kdevops-nixos-<directory>-<hash>
- This ensures unique keys per kdevops instance, preventing conflicts
  between multiple deployments
- SSH config entries are automatically managed during bringup
- Direct connections to DHCP-assigned IPs (no port forwarding complexity)
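
The per-directory key derivation described above can be sketched roughly
as follows (a hypothetical re-implementation for illustration; the actual
logic lives in scripts/nixos_ssh_key_name.py and may differ in detail):

```python
import hashlib
import os


def nixos_ssh_key_name(kdevops_dir: str) -> str:
    """Derive a per-instance SSH key path from the kdevops directory."""
    # Use the last two path components to keep the key name readable.
    parts = [p for p in kdevops_dir.rstrip("/").split("/") if p]
    suffix = "-".join(parts[-2:]) if len(parts) >= 2 else parts[-1]
    # A short hash of the full path keeps names unique even when two
    # instances share the same trailing directory names.
    digest = hashlib.sha256(kdevops_dir.encode()).hexdigest()[:8]
    return os.path.expanduser(f"~/.ssh/kdevops-nixos-{suffix}-{digest}")


print(nixos_ssh_key_name("/home/user/work/kdevops"))
```

The same directory always maps to the same key, so repeated bringups
reuse the existing key instead of generating a new one.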

Path Compatibility:
- NixOS uses different system paths than traditional Linux distributions
- Python interpreter: /run/current-system/sw/bin/python3 (not /usr/bin/python3)
- Bash shell: /run/current-system/sw/bin/bash (not /bin/bash)
- Templates automatically detect and use correct paths for NixOS
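
A minimal sketch of that path detection (hypothetical; the real logic is
done in the Jinja2 templates when generating the Ansible inventory):

```shell
#!/bin/sh
# Pick the Python interpreter path for the target host. NixOS does not
# ship /usr/bin/python3; its binaries live under /run/current-system.
pick_python() {
    if [ -x /run/current-system/sw/bin/python3 ]; then
        echo /run/current-system/sw/bin/python3
    else
        echo /usr/bin/python3
    fi
}

pick_python
```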

VM Management:
- Professional VM lifecycle through libvirt system session
- XML-based VM configuration with proper resource allocation
- QCOW2 disk images with virtio drivers for performance
- Automatic network configuration via libvirt's default network
- Full integration with existing libvirt monitoring and management tools
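
The generated domain XML looks roughly like the fragment below (an
illustrative sketch only, not the actual vm-libvirt.xml.j2 output; it
shows the QCOW2 disk, virtio devices, and default-network attachment
mentioned above):

```xml
<domain type='kvm'>
  <name>kdevops</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>4</vcpu>
  <devices>
    <!-- QCOW2 disk image attached via virtio for performance -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- DHCP networking on libvirt's default network -->
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```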

Workflow Support:
The following workflows have initial NixOS support covering package
dependency resolution only; actual runtime testing will be needed
later:
- fstests: Filesystem testing (XFS, Btrfs, EXT4)
- blktests: Block layer testing (NVMe, SCSI, NBD)
- selftests: Linux kernel selftests
- mmtests: Memory management performance testing
- sysbench: Database performance benchmarking
- pynfs: NFS protocol testing
- ltp: Linux Test Project
- gitr: Git regression testing

Key Features:
- Declarative configuration through Nix expressions
- Reproducible builds using Nix flakes
- Automatic dependency resolution for workflows
- Directory-based isolation for multiple kdevops instances
- Full libvirt integration for professional VM management
- Automatic SSH configuration during bringup
- Standard networking without port forwarding complexity

This implementation provides a modern, functional approach to VM
provisioning that leverages both NixOS's strengths in reproducibility
and libvirt's professional VM management capabilities.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .gitignore                                    |   3 +
 defconfigs/nixos                              |  27 +
 docs/kdevops-nixos.md                         | 404 ++++++++++++++
 kconfigs/Kconfig.bringup                      |  22 +-
 kconfigs/Kconfig.nixos                        | 101 ++++
 nixos/flake.nix                               |  32 ++
 .../files/scripts/detect_libvirt_session.sh   |  26 +
 playbooks/nixos.yml                           | 516 ++++++++++++++++++
 .../devconfig/tasks/install-deps/main.yml     |   1 +
 playbooks/roles/devconfig/tasks/main.yml      |   4 +-
 playbooks/roles/gen_hosts/tasks/main.yml      |  15 +
 .../roles/gen_hosts/templates/fstests.j2      |  20 +
 playbooks/roles/gen_hosts/templates/hosts.j2  |  16 +
 playbooks/roles/gen_nodes/tasks/main.yml      |  24 +
 .../roles/gen_nodes/templates/nixos_nodes.j2  |  14 +
 .../roles/update_etc_hosts/tasks/main.yml     |   2 +
 .../templates/nixos/configuration.nix.j2      | 138 +++++
 playbooks/templates/nixos/flake.nix.j2        |  38 ++
 .../nixos/hardware-configuration.nix.j2       |  42 ++
 .../templates/nixos/run-vm-wrapper.sh.j2      | 159 ++++++
 playbooks/templates/nixos/vm-libvirt.xml.j2   |  96 ++++
 playbooks/templates/nixos/vms.nix.j2          |  45 ++
 .../templates/nixos/workflow-deps.nix.j2      | 127 +++++
 playbooks/update_ssh_config_nixos.yml         |  57 ++
 scripts/detect_libvirt_session.sh             |  26 +
 scripts/nixos.Makefile                        |  93 ++++
 scripts/nixos_ssh_key_name.py                 |  55 ++
 scripts/provision.Makefile                    |   4 +
 scripts/status_nixos.sh                       |  57 ++
 scripts/update_ssh_config_nixos.py            | 133 +++++
 30 files changed, 2294 insertions(+), 3 deletions(-)
 create mode 100644 defconfigs/nixos
 create mode 100644 docs/kdevops-nixos.md
 create mode 100644 kconfigs/Kconfig.nixos
 create mode 100644 nixos/flake.nix
 create mode 100755 playbooks/files/scripts/detect_libvirt_session.sh
 create mode 100644 playbooks/nixos.yml
 create mode 100644 playbooks/roles/gen_nodes/templates/nixos_nodes.j2
 create mode 100644 playbooks/templates/nixos/configuration.nix.j2
 create mode 100644 playbooks/templates/nixos/flake.nix.j2
 create mode 100644 playbooks/templates/nixos/hardware-configuration.nix.j2
 create mode 100644 playbooks/templates/nixos/run-vm-wrapper.sh.j2
 create mode 100644 playbooks/templates/nixos/vm-libvirt.xml.j2
 create mode 100644 playbooks/templates/nixos/vms.nix.j2
 create mode 100644 playbooks/templates/nixos/workflow-deps.nix.j2
 create mode 100644 playbooks/update_ssh_config_nixos.yml
 create mode 100755 scripts/detect_libvirt_session.sh
 create mode 100644 scripts/nixos.Makefile
 create mode 100755 scripts/nixos_ssh_key_name.py
 create mode 100755 scripts/status_nixos.sh
 create mode 100755 scripts/update_ssh_config_nixos.py

diff --git a/.gitignore b/.gitignore
index 75e4712d..2bea9d48 100644
--- a/.gitignore
+++ b/.gitignore
@@ -102,3 +102,6 @@ scripts/kconfig/.nconf-cfg
 Kconfig.passthrough_libvirt.generated
 
 archive/
+
+# NixOS generated files
+nixos/generated/
diff --git a/defconfigs/nixos b/defconfigs/nixos
new file mode 100644
index 00000000..c510e341
--- /dev/null
+++ b/defconfigs/nixos
@@ -0,0 +1,27 @@
+CONFIG_NIXOS=y
+CONFIG_LIBVIRT=y
+
+# Disable mirror setup for NixOS
+CONFIG_ENABLE_LOCAL_LINUX_MIRROR=n
+CONFIG_USE_LOCAL_LINUX_MIRROR=n
+CONFIG_INSTALL_LOCAL_LINUX_MIRROR=n
+CONFIG_MIRROR_INSTALL=n
+
+CONFIG_NIXOS_USE_FLAKES=y
+CONFIG_NIXOS_CHANNEL="nixos-unstable"
+CONFIG_NIXOS_ENABLE_WORKFLOW_DEPS=y
+CONFIG_NIXOS_LIBVIRT_SESSION_INFERENCE=y
+
+CONFIG_NIXOS_VM_MEMORY_MB=4096
+CONFIG_NIXOS_VM_DISK_SIZE_GB=20
+CONFIG_NIXOS_VM_VCPUS=4
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=n
+
+CONFIG_KDEVOPS_TRY_REFRESH_REPOS=y
+CONFIG_KDEVOPS_TRY_UPDATE_SYSTEMS=y
+CONFIG_KDEVOPS_TRY_INSTALL_KDEV_TOOLS=y
diff --git a/docs/kdevops-nixos.md b/docs/kdevops-nixos.md
new file mode 100644
index 00000000..f22fddf3
--- /dev/null
+++ b/docs/kdevops-nixos.md
@@ -0,0 +1,404 @@
+# NixOS Support in kdevops
+
+## Overview
+
+kdevops provides NixOS as a third bringup option alongside guestfs and SKIP_BRINGUP. This integration offers a declarative, reproducible way to provision test VMs using NixOS's functional package management and configuration system.
+
+## Architecture
+
+### Virtualization Method
+
+NixOS VMs in kdevops are managed through libvirt using the system session. This provides:
+- Proper VM lifecycle management through libvirt
+- Standard DHCP-based networking on the default libvirt network  
+- Integration with existing libvirt infrastructure
+- Professional VM management with `virsh` commands
+
+### SSH Session Management
+
+NixOS VMs use a sophisticated SSH session management system that enables multiple kdevops instances to coexist without conflicts:
+
+#### Dynamic Key Generation
+SSH keys are dynamically generated based on the directory location of your kdevops instance:
+- **Key Naming Format**: `~/.ssh/kdevops-nixos-<directory>-<hash>`
+- **Example**: For `/home/user/work/kdevops/`, the key would be `~/.ssh/kdevops-nixos-work-kdevops-a1b2c3d4`
+- **Benefit**: Prevents SSH key conflicts when running multiple kdevops instances
+- **Implementation**: `scripts/nixos_ssh_key_name.py` generates consistent key names
+
+#### Network Configuration
+VMs use standard libvirt networking with DHCP assignment:
+- **Network**: Connected to libvirt's default network (virbr0)
+- **IP Assignment**: Dynamic DHCP allocation from 192.168.122.x range
+- **SSH Access**: Direct connection to VM IP address (no port forwarding needed)
+- **Integration**: Works with existing libvirt network infrastructure
+
+#### Automatic SSH Configuration
+The system automatically manages SSH client configuration:
+- **Config Management**: `update_ssh_config_nixos.py` updates `~/.ssh/config`
+- **Per-VM Entries**: Each VM gets a dedicated SSH config block
+- **Key Features**:
+  ```
+  Host kdevops
+      HostName 192.168.122.169
+      Port 22
+      User kdevops
+      IdentityFile ~/.ssh/kdevops-nixos-<dir>-<hash>
+      StrictHostKeyChecking no
+      UserKnownHostsFile /dev/null
+  ```
+- **Development Mode**: Host key checking disabled for convenience (not for production)
+
+### Path Compatibility
+
+NixOS uses different system paths than traditional Linux distributions. The implementation automatically handles:
+
+| Component | Traditional Path | NixOS Path |
+|-----------|-----------------|------------|
+| Python interpreter | `/usr/bin/python3` | `/run/current-system/sw/bin/python3` |
+| Bash shell | `/bin/bash` | `/run/current-system/sw/bin/bash` |
+
+These paths are automatically detected and used in:
+- Generated Ansible inventory files
+- Ansible playbook tasks
+- Shell script execution
+
+## Supported Workflows
+
+### Currently Supported
+
+The following workflows have initial NixOS support with automatic package dependency resolution; runtime test validation is still in progress:
+
+- **fstests**: Filesystem testing (XFS, Btrfs, EXT4)
+- **blktests**: Block layer testing (NVMe, SCSI, NBD)
+- **selftests**: Linux kernel selftests
+- **mmtests**: Memory management performance testing
+- **sysbench**: Database performance benchmarking
+- **pynfs**: NFS protocol testing
+- **ltp**: Linux Test Project
+- **gitr**: Git regression testing
+
+### Adding New Workflow Support
+
+To add support for a new workflow:
+
+1. Update `playbooks/templates/nixos/workflow-deps.nix.j2`
+2. Add the necessary NixOS packages for your workflow
+3. Test with `make defconfig-nixos && make bringup`
+
+## Quick Start
+
+### Basic NixOS VM
+
+```bash
+make defconfig-nixos
+make
+make bringup
+```
+
+### Workflow-Specific Configurations
+
+```bash
+# For XFS filesystem testing
+make defconfig-nixos-xfs
+make
+make bringup
+make fstests
+
+# For block layer testing
+make defconfig-nixos-blktests
+make
+make bringup
+make blktests
+```
+
+## VM Management
+
+### Libvirt Integration
+
+NixOS VMs are managed through the standard libvirt system session, providing professional VM lifecycle management:
+
+```bash
+# VM lifecycle management
+virsh start kdevops        # Start the VM
+virsh shutdown kdevops     # Graceful shutdown
+virsh destroy kdevops      # Force stop
+virsh reboot kdevops       # Restart the VM
+
+# VM information and monitoring
+virsh list --all          # List all VMs and their states
+virsh dominfo kdevops      # Show VM details
+virsh domifaddr kdevops    # Get VM IP address
+virsh console kdevops      # Connect to VM console
+```
+
+#### Libvirt Features
+- **Standard Management**: Uses industry-standard libvirt commands
+- **System Integration**: Integrates with existing libvirt infrastructure  
+- **Network Management**: Automatic DHCP IP assignment and DNS resolution
+- **Resource Control**: CPU, memory, and disk configuration via libvirt XML
+- **Monitoring**: Built-in resource monitoring and logging
+- **Snapshots**: Full snapshot and cloning capabilities (if needed)
+
+#### VM Configuration
+VMs are configured with libvirt XML templates:
+- **Memory**: Configurable via `nixos_vm_memory_mb` (default: 4096MB)
+- **CPUs**: Set by `nixos_vm_vcpus` (default: 4)  
+- **Networking**: Connected to default libvirt network with DHCP
+- **Storage**: QCOW2 disk images with virtio drivers
+- **Boot**: Direct disk boot (no kernel/initrd specification needed)
+
+### Access Methods
+
+#### Primary Access (SSH)
+```bash
+# Using SSH config entry (auto-generated during bringup)
+ssh kdevops
+
+# Direct SSH to DHCP-assigned IP
+ssh kdevops@192.168.122.169
+
+# Via Ansible (uses SSH config automatically)
+ansible kdevops -m ping
+```
+
+#### Alternative Access
+- **Libvirt Console**: `virsh console kdevops` (direct VM console)
+- **VNC Access**: Available via libvirt VNC configuration if enabled
+- **Serial Console**: Configured through libvirt XML template
+
+### VM Lifecycle Operations
+
+#### Starting VMs
+```bash
+# Start all NixOS VMs (full automation)
+make bringup
+
+# Start specific VM manually
+virsh start kdevops
+```
+
+#### Stopping VMs
+```bash
+# Graceful shutdown all VMs
+make destroy
+
+# Stop specific VM
+/path/to/nixos/run-hostname-wrapper.sh stop
+```
+
+#### VM Health Checks
+```bash
+# Check all VM status
+scripts/status_nixos.sh
+
+# Check specific VM
+/path/to/nixos/run-hostname-wrapper.sh status
+```
+
+## Configuration
+
+### Key Configuration Files
+
+- `kconfigs/Kconfig.nixos`: NixOS-specific options
+- `nixos/flake.nix`: Nix flake for reproducible builds
+- `nixos/generated/`: Generated NixOS configurations
+- `playbooks/nixos.yml`: Ansible playbook for VM management
+
+### Configuration Options
+
+Key options in menuconfig:
+
+- `NIXOS_VM_MEMORY_MB`: VM memory allocation (default: 4096)
+- `NIXOS_VM_VCPUS`: Number of virtual CPUs (default: 4)
+- `NIXOS_VM_DISK_SIZE_GB`: Disk size (default: 20)
+- `NIXOS_SSH_PORT`: SSH port for VM access (default: 22)
+- `NIXOS_USE_FLAKES`: Enable Nix flakes (default: yes)
+
+## Troubleshooting
+
+### Common Issues
+
+#### SSH Connection Refused
+- Ensure VM is running: `./run-kdevops-wrapper.sh status`
+- Check the VM's IP address: `virsh domifaddr kdevops`
+- Verify SSH key: `ls ~/.ssh/kdevops-nixos-*`
+
+#### Python/Bash Not Found
+- The templates automatically handle NixOS paths
+- If issues persist, check `ansible_python_interpreter` in hosts file
+- Should be set to `/run/current-system/sw/bin/python3`
+
+#### VM Won't Start
+- Check disk space: NixOS images require ~20GB
+- Verify QEMU installation: `which qemu-system-x86_64`
+- Check that the libvirt default network is active: `virsh net-list`
+
+### Debug Mode
+
+Enable debug output for troubleshooting:
+
+```bash
+make menuconfig
+# Navigate to: Bring up methods -> NixOS options
+# Enable: Enable debug mode for NixOS provisioning
+```
+
+## Technical Details
+
+### File Structure
+
+```
+kdevops/
+├── nixos/
+│   ├── flake.nix                 # Nix flake configuration
+│   ├── generated/                 # Generated NixOS configs
+│   │   ├── configuration.nix      # Main NixOS configuration
+│   │   ├── hardware-configuration.nix
+│   │   ├── workflow-deps.nix      # Workflow dependencies
+│   │   └── vms.nix               # VM definitions
+│   └── result -> /nix/store/...  # Built VM image
+├── playbooks/
+│   ├── nixos.yml                 # Main NixOS playbook
+│   └── templates/nixos/          # Jinja2 templates
+└── scripts/
+    ├── nixos.Makefile            # NixOS-specific make targets
+    ├── nixos_ssh_key_name.py     # SSH key generation
+    └── update_ssh_config_nixos.py # SSH config management
+```
+
+### Implementation Architecture
+
+#### Core Design Decisions
+
+1. **libvirt Over Native QEMU**
+   - **Rationale**: Standard VM lifecycle management with `virsh` tooling
+   - **Benefits**: DHCP networking on the default network, monitoring integration
+   - **Trade-off**: Requires the libvirt daemon running on the host
+
+2. **Directory-Based Instance Isolation**
+   - **SSH Keys**: Unique per kdevops directory location
+   - **Port Ranges**: Configurable base ports prevent conflicts
+   - **VM Storage**: Separate directories for each instance
+   - **Result**: Multiple kdevops instances can run simultaneously
+
+3. **Declarative Configuration via Nix**
+   - **Single Source of Truth**: `configuration.nix` defines entire VM state
+   - **Reproducibility**: Nix flakes pin exact package versions
+   - **Rollback Support**: Previous configurations can be restored
+   - **Package Management**: Automatic dependency resolution for workflows
+
+4. **Ansible Integration Strategy**
+   - **Path Detection**: Templates automatically detect NixOS vs traditional Linux
+   - **Python Interpreter**: Set correctly in generated inventory
+   - **Shell Commands**: Use appropriate bash path based on OS
+   - **Distribution Tasks**: Skip non-applicable tasks for NixOS
+
+5. **Workflow Dependency Management**
+   - **Template-Based**: `workflow-deps.nix.j2` generates package lists
+   - **Automatic Inclusion**: Enabled workflows get required packages
+   - **Extensible**: Easy to add support for new workflows
+   - **Cached Builds**: Nix caches built packages for faster provisioning
+
+## Integration with kdevops Workflows
+
+### Workflow Compatibility
+
+NixOS integrates seamlessly with existing kdevops workflows through:
+
+1. **Automatic Package Resolution**: Each workflow's dependencies are automatically installed
+2. **Path Translation**: Templates handle path differences transparently
+3. **Ansible Compatibility**: Playbooks work across NixOS and traditional Linux
+4. **Result Collection**: Standard kdevops result paths are maintained
+
+### Adding Workflow Support
+
+To enable a new workflow for NixOS:
+
+1. **Identify Dependencies**
+   ```bash
+   # List packages needed for your workflow
+   nix-env -qaP | grep package-name
+   ```
+
+2. **Update Template**
+   Edit `playbooks/templates/nixos/workflow-deps.nix.j2`:
+   ```nix
+   {% if kdevops_workflow_enable_yourworkflow %}
+   # Your workflow dependencies
+   pkgs.package1
+   pkgs.package2
+   {% endif %}
+   ```
+
+3. **Test Integration**
+   ```bash
+   make defconfig-nixos-yourworkflow
+   make bringup
+   make yourworkflow
+   ```
+
+4. **Verify Results**
+   - Check workflow execution completes
+   - Validate results in standard locations
+   - Ensure baseline/dev comparison works
+
+### Workflow-Specific Considerations
+
+#### fstests
+- Kernel modules loaded via NixOS configuration
+- Test devices created as loop devices
+- Results in `workflows/fstests/results/`
+
+#### blktests
+- NVMe/SCSI modules configured in NixOS
+- Block devices accessible via `/dev/`
+- Expunge lists work identically
+
+#### selftests
+- Kernel source mounted via 9P if configured
+- Build dependencies included automatically
+- Parallel execution supported
+
+#### mmtests
+- A/B testing fully supported
+- Performance monitoring tools included
+- Comparison reports work as expected
+
+## Contributing
+
+To contribute NixOS support for additional workflows:
+
+1. Identify required packages for your workflow
+2. Update `workflow-deps.nix.j2` template
+3. Test with a clean build
+4. Submit PR with test results
+
+### Testing Your Changes
+
+```bash
+# Clean build test
+make mrproper
+make defconfig-nixos-yourworkflow
+make bringup
+make yourworkflow
+
+# Verify no missing dependencies
+journalctl -u your-service  # Check for errors
+which required-command       # Verify binaries present
+```
+
+## Limitations
+
+- Currently supports x86_64 architecture only
+- Requires Nix package manager on the host
+- VMs use libvirt's default NAT network (no bridged networking yet)
+- Limited to QEMU/KVM virtualization
+
+## Future Enhancements
+
+Planned improvements:
+
+- Bridged networking support
+- ARM64 architecture support
+- Distributed build support with Nix
+- Integration with Hydra CI system
diff --git a/kconfigs/Kconfig.bringup b/kconfigs/Kconfig.bringup
index 887d3851..8caf07be 100644
--- a/kconfigs/Kconfig.bringup
+++ b/kconfigs/Kconfig.bringup
@@ -5,6 +5,10 @@ config KDEVOPS_ENABLE_GUESTFS
 	bool
 	output yaml
 
+config KDEVOPS_ENABLE_NIXOS
+	bool
+	output yaml
+
 choice
 	prompt "Node bring up method"
 	default GUESTFS
@@ -39,6 +43,21 @@ config TERRAFORM
 
 	  If you are not using a cloud environment just disable this.
 
+config NIXOS
+	bool "NixOS declarative configuration with libvirt"
+	select KDEVOPS_ENABLE_NIXOS
+	select EXTRA_STORAGE_SUPPORTS_512
+	select EXTRA_STORAGE_SUPPORTS_1K
+	select EXTRA_STORAGE_SUPPORTS_2K
+	select EXTRA_STORAGE_SUPPORTS_4K
+	select EXTRA_STORAGE_SUPPORTS_LARGEIO
+	help
+	  Use NixOS declarative configuration system to provision VMs with
+	  libvirt. This provides a purely functional approach to VM management
+	  with automatic dependency resolution based on enabled workflows.
+	  NixOS will automatically infer the libvirt session type (system vs
+	  user) based on your distribution's defaults, similar to guestfs.
+
 config SKIP_BRINGUP
 	bool "Skip bring up - bare metal or existing nodes"
 	select EXTRA_STORAGE_SUPPORTS_512
@@ -55,10 +74,11 @@ endchoice
 
 config LIBVIRT
 	bool
-	depends on GUESTFS
+	depends on GUESTFS || NIXOS
 	default y
 
 source "kconfigs/Kconfig.guestfs"
+source "kconfigs/Kconfig.nixos"
 source "terraform/Kconfig"
 if LIBVIRT
 source "kconfigs/Kconfig.libvirt"
diff --git a/kconfigs/Kconfig.nixos b/kconfigs/Kconfig.nixos
new file mode 100644
index 00000000..55361215
--- /dev/null
+++ b/kconfigs/Kconfig.nixos
@@ -0,0 +1,101 @@
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+if NIXOS
+
+config NIXOS_STORAGE_DIR
+	string
+	output yaml
+	default "{{ kdevops_storage_pool_path }}/nixos"
+
+config NIXOS_CONFIG_DIR
+	string
+	output yaml
+	default "{{ topdir_path }}/nixos"
+
+config NIXOS_GENERATION_DIR
+	string
+	output yaml
+	default "{{ nixos_config_dir }}/generated"
+
+config NIXOS_USE_FLAKES
+	bool "Use Nix flakes for configuration"
+	output yaml
+	default y
+	help
+	  Use the modern Nix flakes system for managing NixOS configurations.
+	  This provides better reproducibility and dependency management.
+	  If disabled, will use traditional configuration.nix approach.
+
+config NIXOS_CHANNEL
+	string "NixOS channel to use"
+	output yaml
+	default "nixos-unstable"
+	help
+	  The NixOS channel to use for the VMs. Options include:
+	  - nixos-unstable: Latest packages, rolling release
+	  - nixos-24.05: Stable release from May 2024
+	  - nixos-23.11: Stable release from November 2023
+
+config NIXOS_ENABLE_WORKFLOW_DEPS
+	bool "Automatically install workflow dependencies"
+	output yaml
+	default y
+	help
+	  When enabled, NixOS will automatically generate package dependencies
+	  based on all enabled workflows (fstests, blktests, etc.) and include
+	  them in the VM configuration.
+
+config NIXOS_LIBVIRT_SESSION_INFERENCE
+	bool "Automatically infer libvirt session type"
+	output yaml
+	default y
+	help
+	  Automatically detect whether to use libvirt system or user session
+	  based on your distribution's defaults. Similar to guestfs, this will
+	  use system session for most distributions and user session for Fedora.
+
+config NIXOS_CUSTOM_CONFIG_PATH
+	string "Path to custom NixOS configuration template"
+	output yaml
+	default ""
+	help
+	  Optional path to a custom NixOS configuration template that will be
+	  merged with the auto-generated configuration. This allows you to add
+	  custom packages, services, or other NixOS settings.
+
+config NIXOS_VM_MEMORY_MB
+	int "Memory allocation per VM (MB)"
+	output yaml
+	default 4096
+	help
+	  Amount of memory to allocate to each NixOS VM in megabytes.
+
+config NIXOS_VM_DISK_SIZE_GB
+	int "Disk size per VM (GB)"
+	output yaml
+	default 20
+	help
+	  Size of the primary disk for each NixOS VM in gigabytes.
+
+config NIXOS_VM_VCPUS
+	int "Number of vCPUs per VM"
+	output yaml
+	default 4
+	help
+	  Number of virtual CPUs to allocate to each NixOS VM.
+
+config NIXOS_SSH_PORT
+	int "SSH port for NixOS VMs"
+	output yaml
+	default 22
+	help
+	  SSH port to use for connecting to NixOS VMs.
+
+config NIXOS_DEBUG_MODE
+	bool "Enable debug mode for NixOS provisioning"
+	default n
+	help
+	  Enable verbose output and debugging information during NixOS
+	  VM provisioning and configuration generation.
+
+endif # NIXOS
diff --git a/nixos/flake.nix b/nixos/flake.nix
new file mode 100644
index 00000000..530d5980
--- /dev/null
+++ b/nixos/flake.nix
@@ -0,0 +1,32 @@
+{
+  description = "kdevops NixOS VMs";
+
+  inputs = {
+    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+  };
+
+  outputs = { self, nixpkgs }: {
+    nixosConfigurations = {
+      "kdevops" = nixpkgs.lib.nixosSystem {
+        system = "x86_64-linux";
+        modules = [
+          ./generated/configuration.nix
+          ./generated/hardware-configuration.nix
+          ./generated/workflow-deps.nix
+          ({ ... }: {
+            networking.hostName = "kdevops";
+          })
+        ];
+      };
+    };
+
+    # Build all VMs
+    defaultPackage.x86_64-linux =
+      nixpkgs.legacyPackages.x86_64-linux.writeShellScriptBin "build-vms" ''
+        echo "Building NixOS VMs..."
+        echo "Building kdevops..."
+        nix build .#nixosConfigurations.kdevops.config.system.build.vm
+        echo "All VMs built successfully!"
+      '';
+  };
+}
diff --git a/playbooks/files/scripts/detect_libvirt_session.sh b/playbooks/files/scripts/detect_libvirt_session.sh
new file mode 100755
index 00000000..caea9367
--- /dev/null
+++ b/playbooks/files/scripts/detect_libvirt_session.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+#
+# Detect the appropriate libvirt session type (system vs user) based on
+# distribution defaults, similar to how guestfs handles it.
+
+SCRIPTS_DIR=$(dirname $0)
+source ${SCRIPTS_DIR}/libvirt_pool.sh
+
+OS_FILE="/etc/os-release"
+LIBVIRT_URI="qemu:///system"  # Default to system
+
+# Get the pool variables which includes distribution detection
+get_pool_vars
+
+# Fedora defaults to user session
+if [[ "$USES_QEMU_USER_SESSION" == "y" ]]; then
+    LIBVIRT_URI="qemu:///session"
+fi
+
+# Override detection if explicitly configured
+if [[ -n "$CONFIG_LIBVIRT_URI_PATH" ]]; then
+    LIBVIRT_URI="$CONFIG_LIBVIRT_URI_PATH"
+fi
+
+echo "$LIBVIRT_URI"
diff --git a/playbooks/nixos.yml b/playbooks/nixos.yml
new file mode 100644
index 00000000..eda34586
--- /dev/null
+++ b/playbooks/nixos.yml
@@ -0,0 +1,516 @@
+---
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+- name: Install NixOS dependencies on localhost
+  hosts: localhost
+  gather_facts: true
+  tags: install-deps
+  tasks:
+    - name: Check if nix is installed
+      ansible.builtin.command: which nix
+      register: nix_check
+      # 'which' returns non-zero when nix is absent; that is expected here
+      failed_when: false
+      changed_when: false
+
+    - name: Install nix package manager
+      become: true
+      when: nix_check.rc != 0
+      block:
+        - name: Download nix installer
+          ansible.builtin.get_url:
+            url: https://nixos.org/nix/install
+            dest: /tmp/install-nix.sh
+            mode: '0755'
+
+        - name: Install nix
+          ansible.builtin.shell: |
+            sh /tmp/install-nix.sh --daemon --yes
+          args:
+            creates: /nix
+
+    - name: Ensure libvirt is installed
+      become: true
+      ansible.builtin.package:
+        name:
+          - libvirt0
+          - qemu-kvm
+          - libvirt-daemon-system
+          - libvirt-clients
+        state: present
+      when: ansible_os_family == "Debian"
+
+    - name: Ensure libvirt is installed (RedHat)
+      become: true
+      ansible.builtin.package:
+        name:
+          - libvirt
+          - qemu-kvm
+          - libvirt-daemon
+        state: present
+      when: ansible_os_family == "RedHat"
+
+- name: Generate NixOS configurations
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: generate-configs
+  tasks:
+    - name: Create NixOS directories
+      ansible.builtin.file:
+        path: "{{ item }}"
+        state: directory
+        mode: '0755'
+      loop:
+        - "{{ nixos_config_dir }}"
+        - "{{ nixos_generation_dir }}"
+        - "{{ nixos_storage_dir }}"
+
+    - name: Ensure SSH key exists for configuration
+      block:
+        - name: Determine SSH key path based on directory
+          ansible.builtin.command: python3 {{ playbook_dir }}/../scripts/nixos_ssh_key_name.py --path
+          register: ssh_key_path_result
+          changed_when: false
+
+        - name: Set SSH key path
+          ansible.builtin.set_fact:
+            nixos_ssh_key_path: "{{ ssh_key_path_result.stdout | trim }}"
+
+        - name: Generate SSH key for NixOS VMs if not exists
+          openssh_keypair:
+            path: "{{ nixos_ssh_key_path }}"
+            type: rsa
+            size: 2048
+            comment: "kdevops@nixos"
+            force: false
+
+        - name: Read SSH public key
+          ansible.builtin.slurp:
+            src: "{{ nixos_ssh_key_path }}.pub"
+          register: ssh_public_key
+
+        - name: Set SSH key in fact
+          ansible.builtin.set_fact:
+            nixos_ssh_authorized_key: "{{ ssh_public_key['content'] | b64decode | trim }}"
+
+    - name: Template base NixOS configuration
+      ansible.builtin.template:
+        src: nixos/configuration.nix.j2
+        dest: "{{ nixos_generation_dir }}/configuration.nix"
+        mode: '0644'
+
+    - name: Template hardware configuration
+      ansible.builtin.template:
+        src: nixos/hardware-configuration.nix.j2
+        dest: "{{ nixos_generation_dir }}/hardware-configuration.nix"
+        mode: '0644'
+
+    - name: Generate workflow dependencies configuration
+      ansible.builtin.template:
+        src: nixos/workflow-deps.nix.j2
+        dest: "{{ nixos_generation_dir }}/workflow-deps.nix"
+        mode: '0644'
+      when: nixos_enable_workflow_deps | bool
+
+    - name: Debug SSH key path
+      ansible.builtin.debug:
+        msg: "Using SSH key: {{ hostvars['localhost']['nixos_ssh_key_path'] | default('NOT SET') }}"
+
+    - name: Generate VM definitions
+      ansible.builtin.template:
+        src: nixos/vms.nix.j2
+        dest: "{{ nixos_generation_dir }}/vms.nix"
+        mode: '0644'
+
+    - name: Generate flake.nix if enabled
+      ansible.builtin.template:
+        src: nixos/flake.nix.j2
+        dest: "{{ nixos_config_dir }}/flake.nix"
+        mode: '0644'
+      when: nixos_use_flakes | bool
+
+# The setup phase is integrated into generate-configs to ensure SSH keys are available
+
+- name: Build and deploy NixOS VMs
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: build-vms
+  tasks:
+    - name: Create disk image configuration
+      ansible.builtin.copy:
+        content: |
+          { config, lib, pkgs, ... }:
+
+          {
+            imports = [
+              ./configuration.nix
+            ];
+
+            # Ensure proper boot configuration for disk image
+            boot.loader.grub.device = lib.mkForce "/dev/vda";
+            boot.loader.grub.enable = lib.mkForce true;
+
+            fileSystems."/" = lib.mkForce {
+              device = "/dev/disk/by-label/nixos";
+              fsType = "ext4";
+              autoResize = true;
+            };
+          }
+        dest: "{{ nixos_generation_dir }}/disk-image.nix"
+
+    - name: Check if NixOS disk image already exists
+      ansible.builtin.stat:
+        path: "{{ nixos_storage_dir }}/nixos-image-result"
+      register: disk_image_exists
+
+    - name: Build NixOS disk image
+      ansible.builtin.shell: |
+        # Source nix profile and set PATH
+        export PATH="/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:$PATH"
+        if [ -f /nix/var/nix/profiles/default/etc/profile.d/nix.sh ]; then
+          . /nix/var/nix/profiles/default/etc/profile.d/nix.sh
+        fi
+
+        # Configure Nix to use local mirror if available
+        {% if nixos_use_local_mirror is defined and nixos_use_local_mirror and nixos_mirror_url is defined and nixos_mirror_url != "" %}
+        export NIX_CONFIG="substituters = {{ nixos_mirror_url }} https://cache.nixos.org"
+        echo "Using local Nix cache mirror: {{ nixos_mirror_url }}"
+        {% endif %}
+
+        cd {{ nixos_generation_dir }}
+
+        # Build a QCOW2 disk image with NixOS installed
+        echo "Building NixOS disk image (this may take a while)..."
+
+        # Create a wrapper expression for make-disk-image.nix
+        cat > make-image.nix <<'EOF'
+        let
+          pkgs = import <nixpkgs> {};
+          lib = pkgs.lib;
+
+          # Build a complete NixOS system configuration
+          nixosSystem = import "${pkgs.path}/nixos" {
+            configuration = {
+              imports = [
+                ./configuration.nix
+                ./disk-image.nix
+              ];
+
+              # Ensure we have a bootable system
+              boot.loader.grub.enable = lib.mkForce true;
+              boot.loader.grub.device = lib.mkForce "/dev/vda";
+              boot.loader.grub.configurationLimit = 1;
+
+              # Critical: ensure the system can boot
+              boot.kernelModules = [ "virtio_pci" "virtio_blk" "virtio_net" ];
+              boot.initrd.availableKernelModules = [ "virtio_pci" "virtio_blk" "virtio_net" ];
+
+              # Ensure networking works
+              networking.useDHCP = lib.mkDefault true;
+
+              # Make sure we have a working system
+              system.stateVersion = "24.05";
+
+              # Ensure SSH starts
+              systemd.services.sshd.wantedBy = [ "multi-user.target" ];
+            };
+          };
+        in
+        import "${pkgs.path}/nixos/lib/make-disk-image.nix" {
+          inherit pkgs lib;
+          config = nixosSystem.config;
+          diskSize = 20480; # in MiB (20 GiB)
+          format = "qcow2";
+          partitionTableType = "legacy";
+          # Important: include the bootloader!
+          installBootLoader = true;
+        }
+        EOF
+
+        # Force rebuild by clearing any cached result
+        rm -f {{ nixos_storage_dir }}/nixos-image-result
+
+        # Note: make-image.nix is not a function, so --arg flags have no
+        # effect, and --no-out-link would conflict with -o; -o creates the
+        # result symlink that is read back below
+        nix-build make-image.nix \
+          -o {{ nixos_storage_dir }}/nixos-image-result
+
+        # Return the path to the disk image
+        readlink -f {{ nixos_storage_dir }}/nixos-image-result/nixos.qcow2
+      register: build_result
+      changed_when: "'Building NixOS disk image' in build_result.stdout"
+      when: not disk_image_exists.stat.exists
+
+    - name: Get existing disk image path
+      ansible.builtin.shell: |
+        readlink -f {{ nixos_storage_dir }}/nixos-image-result/nixos.qcow2
+      register: existing_image_path
+      when: disk_image_exists.stat.exists
+
+    - name: Store disk image path
+      ansible.builtin.set_fact:
+        nixos_disk_image_path: >-
+          {{
+            build_result.stdout_lines | last | trim
+            if (build_result.stdout_lines is defined)
+            else existing_image_path.stdout | trim
+          }}
+
+    - name: Copy NixOS disk image for each VM
+      ansible.builtin.shell: |
+        SOURCE_IMAGE="{{ nixos_disk_image_path | default(nixos_storage_dir + '/nixos-image-result/nixos.qcow2') }}"
+        TARGET_IMAGE="{{ nixos_storage_dir }}/{{ item }}.qcow2"
+
+        # Remove target if it exists and copy fresh
+        if [ -f "$TARGET_IMAGE" ]; then
+          rm -f "$TARGET_IMAGE"
+        fi
+
+        cp "$SOURCE_IMAGE" "$TARGET_IMAGE"
+        chmod u+w "$TARGET_IMAGE"
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      when: nixos_disk_image_path is defined
+
+    - name: Generate VM wrapper scripts
+      ansible.builtin.template:
+        src: nixos/run-vm-wrapper.sh.j2
+        dest: "{{ nixos_storage_dir }}/run-{{ item }}-wrapper.sh"
+        mode: '0755'
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      loop_control:
+        index_var: vm_idx
+      vars:
+        vm_name: "{{ item }}"
+        vm_index: "{{ vm_idx }}"
+        vm_memory: "{{ nixos_vm_memory_mb | default(4096) }}"
+        vm_vcpus: "{{ nixos_vm_vcpus | default(4) }}"
+
+- name: Ensure default libvirt network is available
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: bringup
+  tasks:
+    - name: Check if default network exists and is active
+      ansible.builtin.shell: virsh net-info default
+      register: default_network_info
+      failed_when: false
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+    - name: Start default network if not active
+      ansible.builtin.shell: virsh net-start default
+      when: >-
+        default_network_info.rc != 0
+        or 'Active:' not in default_network_info.stdout
+        or 'yes' not in default_network_info.stdout.split('Active:')[1].split('\n')[0]
+      failed_when: false
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+- name: Provision NixOS VMs with libvirt
+  hosts: baseline,dev
+  gather_facts: false
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: bringup
+  tasks:
+    - name: Check if VM already exists
+      ansible.builtin.shell: virsh domstate "{{ inventory_hostname }}"
+      register: vm_status
+      failed_when: false
+      delegate_to: localhost
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+    - name: Provision VM with libvirt
+      when: vm_status.rc != 0 or 'shut off' in vm_status.stdout
+      delegate_to: localhost
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+      block:
+        - name: Generate libvirt XML for VM
+          ansible.builtin.template:
+            src: nixos/vm-libvirt.xml.j2
+            dest: "{{ nixos_storage_dir }}/{{ inventory_hostname }}.xml"
+          vars:
+            vm_name: "{{ inventory_hostname }}"
+            vm_memory: "{{ nixos_vm_memory_mb | default(4096) }}"
+            vm_vcpus: "{{ nixos_vm_vcpus | default(4) }}"
+            vm_disk: "{{ nixos_storage_dir }}/{{ inventory_hostname }}.qcow2"
+
+        - name: Define VM in libvirt
+          ansible.builtin.shell: virsh define "{{ nixos_storage_dir }}/{{ inventory_hostname }}.xml"
+          failed_when: false
+
+        - name: Start VM
+          ansible.builtin.shell: virsh start "{{ inventory_hostname }}"
+          failed_when: false
+
+    - name: Ensure VM is running
+      ansible.builtin.shell: virsh start "{{ inventory_hostname }}"
+      register: start_result
+      failed_when:
+        - start_result.rc != 0
+        - "'already active' not in start_result.stderr"
+      delegate_to: localhost
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+- name: Setup SSH access for NixOS VMs
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: bringup
+  tasks:
+    - name: Wait for VMs to get IP addresses from DHCP
+      ansible.builtin.shell: |
+        for i in $(seq 1 90); do  # seq, not {1..90}: brace ranges are a bashism and the shell module defaults to /bin/sh
+          IP=$(virsh domifaddr {{ item }} --source lease 2>/dev/null | awk '/192\.168\.122\./ {print $4}' | cut -d'/' -f1)
+          if [ -n "$IP" ]; then
+            echo "$IP"
+            exit 0
+          fi
+          sleep 3
+        done
+        exit 1
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      register: vm_ips
+      # No Ansible-level retries: the shell loop above already polls 90 times,
+      # and 'retries' without an 'until' condition has no effect
+      environment:
+        LIBVIRT_DEFAULT_URI: "{{ libvirt_uri }}"
+
+    - name: Set VM IP facts
+      ansible.builtin.set_fact:
+        nixos_vm_ips: "{{ dict(groups['all'] | reject('equalto', 'localhost') | list | zip(vm_ips.results | map(attribute='stdout'))) }}"
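+    # Example of the resulting fact (hostnames and addresses are
+    # illustrative; actual IPs come from the libvirt DHCP leases):
+    #   nixos_vm_ips:
+    #     demo-xfs: 192.168.122.45
+    #     demo-xfs-dev: 192.168.122.46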
+
+    - name: Determine SSH key path for SSH config update
+      ansible.builtin.command: python3 {{ playbook_dir }}/../scripts/nixos_ssh_key_name.py --path
+      register: ssh_key_path_for_config
+      changed_when: false
+
+    - name: Wait for SSH to be available on VMs
+      ansible.builtin.wait_for:
+        host: "{{ nixos_vm_ips[item] }}"
+        port: 22
+        delay: 10
+        timeout: 300
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+
+    - name: Update SSH config for NixOS VMs
+      ansible.builtin.command: |
+        python3 {{ playbook_dir }}/../scripts/update_ssh_config_nixos.py update \
+          {{ item }} \
+          {{ nixos_vm_ips[item] }} \
+          22 \
+          kdevops \
+          {{ nixos_ssh_config_file | default(ansible_env.HOME + '/.ssh/config') }} \
+          {{ ssh_key_path_for_config.stdout | trim }} \
+          'NixOS VM'
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      when: nixos_update_ssh_config | default(true) | bool
+
+- name: Show VM access information
+  hosts: localhost
+  gather_facts: false
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: console
+  tasks:
+    - name: Display VM access information
+      ansible.builtin.debug:
+        msg: |
+          NixOS VMs are running and accessible via libvirt.
+
+          SSH Access:
+          {% for vm in groups['all'] | reject('equalto', 'localhost') | list %}
+          - {{ vm }}: ssh {{ vm }}
+          {% endfor %}
+
+          VM Management:
+          {% for vm in groups['all'] | reject('equalto', 'localhost') | list %}
+          - {{ vm }}: virsh {start|shutdown|destroy} {{ vm }}
+          {% endfor %}
+
+          VM Status:
+          - Check status: virsh list --all
+          - Get IP: virsh domifaddr <vm_name>
+
+- name: Destroy NixOS VMs
+  hosts: localhost
+  gather_facts: true
+  vars_files:
+    - "{{ playbook_dir }}/../extra_vars.yaml"
+  tags: [destroy, never]
+  tasks:
+    - name: Stop VMs using wrapper scripts
+      ansible.builtin.command: "{{ nixos_storage_dir }}/run-{{ item }}-wrapper.sh stop"
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      # TODO: revisit; failures here were previously masked with ignore_errors: true
+      failed_when: false  # non-fatal during teardown
+
+    - name: Remove SSH config entries for NixOS VMs
+      ansible.builtin.command: |
+        python3 {{ playbook_dir }}/../scripts/update_ssh_config_nixos.py remove \
+          {{ item }} \
+          '' \
+          '' \
+          '' \
+          {{ nixos_ssh_config_file | default(ansible_env.HOME + '/.ssh/config') }} \
+          '' \
+          'NixOS VM'
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+      when: nixos_update_ssh_config | default(true) | bool
+      # TODO: revisit; failures here were previously masked with ignore_errors: true
+      failed_when: false  # non-fatal during teardown
+
+    - name: Remove VM disk images
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/{{ item }}.qcow2"
+        state: absent
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+
+    - name: Remove VM wrapper scripts
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/run-{{ item }}-wrapper.sh"
+        state: absent
+      loop: "{{ groups['all'] | reject('equalto', 'localhost') | list }}"
+
+    - name: Remove NixOS disk image symlink
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/nixos-image-result"
+        state: absent
+
+    - name: Remove extra drive directories
+      ansible.builtin.file:
+        path: "{{ nixos_storage_dir }}/extra-drives"
+        state: absent
+
+    - name: Clean up generated NixOS configuration
+      ansible.builtin.file:
+        path: "{{ nixos_generation_dir }}"
+        state: absent
+
+    - name: Garbage collect cached NixOS disk images from Nix store
+      ansible.builtin.shell: |
+        # Source nix profile if available
+        if [ -f /nix/var/nix/profiles/default/etc/profile.d/nix.sh ]; then
+          . /nix/var/nix/profiles/default/etc/profile.d/nix.sh
+        fi
+
+        # Find nix-collect-garbage command
+        NIX_COLLECT_GARBAGE=$(which nix-collect-garbage 2>/dev/null || find /nix -name "nix-collect-garbage" -type f 2>/dev/null | head -1)
+
+        if [ -n "$NIX_COLLECT_GARBAGE" ]; then
+          echo "Running Nix garbage collection to remove cached disk images..."
+          sudo $NIX_COLLECT_GARBAGE -d 2>&1 | grep -E "(deleting|freed|store paths)" || true
+        else
+          echo "Warning: nix-collect-garbage not found, cached images may remain"
+        fi
+      register: gc_result
+      failed_when: false
+      changed_when: "'freed' in gc_result.stdout"
diff --git a/playbooks/roles/devconfig/tasks/install-deps/main.yml b/playbooks/roles/devconfig/tasks/install-deps/main.yml
index 68ad9e7b..3cca4d9b 100644
--- a/playbooks/roles/devconfig/tasks/install-deps/main.yml
+++ b/playbooks/roles/devconfig/tasks/install-deps/main.yml
@@ -22,6 +22,7 @@
     - files:
         - "{{ ansible_facts['os_family'] | lower }}.yml"
       skip: true
+  when: ansible_facts['os_family'] != 'NixOS'
   tags: vars
 
 - name: Debian-specific setup
diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
index 2ffa433f..fccd1fcf 100644
--- a/playbooks/roles/devconfig/tasks/main.yml
+++ b/playbooks/roles/devconfig/tasks/main.yml
@@ -197,7 +197,7 @@
       chmod 755 {{ dev_bash_config }}
     fi
   args:
-    executable: /bin/bash
+    executable: "{{ '/run/current-system/sw/bin/bash' if (kdevops_enable_nixos | default(false)) else '/bin/bash' }}"
   when: dev_bash_config_file_copied is success
 
 - name: Copy the developer's favorite bash hacks over for root *if* it exists
@@ -224,7 +224,7 @@
       chmod 755 {{ dev_bash_config_root }}
     fi
   args:
-    executable: /bin/bash
+    executable: "{{ '/run/current-system/sw/bin/bash' if (kdevops_enable_nixos | default(false)) else '/bin/bash' }}"
   when: dev_bash_config_file_copied_root is success
 
 - name: Check to see if system has GRUB2
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index d36790b0..fb63629a 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -79,6 +79,20 @@
   when:
     - not kdevops_workflows_dedicated_workflow
     - ansible_hosts_template.stat.exists
+    - not kdevops_enable_nixos|default(false)|bool
+
+- name: Generate the Ansible inventory file for NixOS
+  tags: ['hosts']
+  ansible.builtin.template:
+    src: "{{ kdevops_hosts_template }}"
+    dest: "{{ ansible_cfg_inventory }}"
+    force: true
+    trim_blocks: True
+    lstrip_blocks: True
+  when:
+    - not kdevops_workflows_dedicated_workflow
+    - ansible_hosts_template.stat.exists
+    - kdevops_enable_nixos|default(false)|bool
 
 - name: Update Ansible inventory access modification time so make sees it updated
   ansible.builtin.file:
@@ -339,6 +353,7 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_fio_tests
     - ansible_hosts_template.stat.exists
+    - not kdevops_enable_nixos|default(false)|bool
 
 
 - name: Infer enabled mmtests test types
diff --git a/playbooks/roles/gen_hosts/templates/fstests.j2 b/playbooks/roles/gen_hosts/templates/fstests.j2
index 32d90abf..823dbb1e 100644
--- a/playbooks/roles/gen_hosts/templates/fstests.j2
+++ b/playbooks/roles/gen_hosts/templates/fstests.j2
@@ -1,10 +1,18 @@
 [all]
 localhost ansible_connection=local
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}
+{% endif %}
 {% if kdevops_baseline_and_dev %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}-dev
 {% endif %}
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {% if kdevops_loopback_nfs_enable %}
@@ -15,7 +23,11 @@ localhost ansible_connection=local
 ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 [baseline]
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {% if kdevops_loopback_nfs_enable %}
@@ -27,7 +39,11 @@ ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 [dev]
 {% if kdevops_baseline_and_dev %}
   {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}-dev
+{% endif %}
   {% endfor %}
 {% if kdevops_nfsd_enable %}
 {% if kdevops_loopback_nfs_enable %}
@@ -62,7 +78,11 @@ ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 [krb5]
 {% for s in fstests_enabled_test_types %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-{{ s }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-{{ s }}
+{% endif %}
 {% endfor %}
 {% if kdevops_nfsd_enable %}
 {{ kdevops_hosts_prefix }}-nfsd
diff --git a/playbooks/roles/gen_hosts/templates/hosts.j2 b/playbooks/roles/gen_hosts/templates/hosts.j2
index e9441605..0e896481 100644
--- a/playbooks/roles/gen_hosts/templates/hosts.j2
+++ b/playbooks/roles/gen_hosts/templates/hosts.j2
@@ -184,10 +184,18 @@ write-your-own-template-for-your-workflow-and-task
 {% else %}
 [all]
 localhost ansible_connection=local
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}
+{% endif %}
 {% if kdevops_baseline_and_dev == True %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-dev
 {% endif %}
+{% endif %}
 {% if kdevops_enable_iscsi %}
 {{ kdevops_host_prefix }}-iscsi
 {% endif %}
@@ -197,13 +205,21 @@ localhost ansible_connection=local
 [all:vars]
 ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 [baseline]
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}
+{% endif %}
 [baseline:vars]
 ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 [dev]
 {% if kdevops_baseline_and_dev %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
 {{ kdevops_host_prefix }}-dev
 {% endif %}
+{% endif %}
 [dev:vars]
 ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
 {% if kdevops_enable_iscsi %}
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index b294d294..b1a1946f 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -27,6 +27,12 @@
     mode: "0755"
   when: kdevops_enable_guestfs|bool
 
+- name: Create nixos directory
+  ansible.builtin.file:
+    path: "{{ nixos_config_dir }}"
+    state: directory
+  when: kdevops_enable_nixos | default(false) | bool
+
 - name: Verify Ansible nodes template file exists {{ kdevops_nodes_template_full_path }}
   ansible.builtin.stat:
     path: "{{ kdevops_nodes_template_full_path }}"
@@ -148,6 +154,23 @@
     mode: "0644"
   when:
     - not kdevops_workflows_dedicated_workflow
+    - ansible_nodes_template.stat.exists
+    - not kdevops_enable_nixos|default(false)|bool
+
+- name: Generate the NixOS kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: ['nodes']
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    all_generic_nodes: "{{ generic_nodes }}"
+    nodes: "{{ all_generic_nodes }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+  when:
+    - not kdevops_workflows_dedicated_workflow
+    - ansible_nodes_template.stat.exists
+    - kdevops_enable_nixos|default(false)|bool
 
 
 - name: Generate the builder kdevops nodes file using nodes file using template as jinja2 source template
@@ -162,6 +185,7 @@
     force: true
   when:
     - bootlinux_builder
+    - ansible_nodes_template.stat.exists
 
 
 - name: Generate the pynfs kdevops nodes file using nodes file using template as jinja2 source template
diff --git a/playbooks/roles/gen_nodes/templates/nixos_nodes.j2 b/playbooks/roles/gen_nodes/templates/nixos_nodes.j2
new file mode 100644
index 00000000..391b1f10
--- /dev/null
+++ b/playbooks/roles/gen_nodes/templates/nixos_nodes.j2
@@ -0,0 +1,14 @@
+---
+# Ansible nodes file generated for NixOS VMs
+
+{% for node in nodes %}
+{{ node }}:
+  ansible_host: 192.168.100.{{ loop.index + 1 }}
+  ansible_user: kdevops
+  ansible_ssh_private_key_file: {{ topdir_path }}/.ssh/kdevops_id_rsa
+  ansible_python_interpreter: /run/current-system/sw/bin/python3
+  vm_name: {{ node }}
+  vm_memory_mb: {{ nixos_vm_memory_mb }}
+  vm_vcpus: {{ nixos_vm_vcpus }}
+  vm_disk_size_gb: {{ nixos_vm_disk_size_gb }}
+{% endfor %}
diff --git a/playbooks/roles/update_etc_hosts/tasks/main.yml b/playbooks/roles/update_etc_hosts/tasks/main.yml
index 049411ee..dc40eded 100644
--- a/playbooks/roles/update_etc_hosts/tasks/main.yml
+++ b/playbooks/roles/update_etc_hosts/tasks/main.yml
@@ -57,6 +57,7 @@
   with_items: "{{ ueh_hosts }}"
   when:
     - terraform_private_net_enabled
+    - not (kdevops_enable_nixos | default(false))
 
 - name: Add IP address of all hosts to all hosts
   become: true
@@ -69,6 +70,7 @@
   with_items: "{{ ueh_hosts }}"
   when:
     - not terraform_private_net_enabled
+    - not (kdevops_enable_nixos | default(false))
 
 - name: Fix up hostname on Debian guestfs hosts
   become: true
diff --git a/playbooks/templates/nixos/configuration.nix.j2 b/playbooks/templates/nixos/configuration.nix.j2
new file mode 100644
index 00000000..d5c00fc3
--- /dev/null
+++ b/playbooks/templates/nixos/configuration.nix.j2
@@ -0,0 +1,138 @@
+{ config, pkgs, lib, ... }:
+
+{
+  imports = [
+    ./hardware-configuration.nix
+{% if nixos_enable_workflow_deps %}
+    ./workflow-deps.nix
+{% endif %}
+{% if nixos_custom_config_path != "" %}
+    {{ nixos_custom_config_path }}
+{% endif %}
+  ];
+
+  # Nix configuration
+{% if nixos_use_local_mirror is defined and nixos_use_local_mirror and nixos_mirror_url is defined and nixos_mirror_url != "" %}
+  nix.settings = {
+    substituters = [
+      "{{ nixos_mirror_url }}"
+      "https://cache.nixos.org"
+    ];
+    trusted-substituters = [
+      "{{ nixos_mirror_url }}"
+      "https://cache.nixos.org"
+    ];
+    # Prefer local mirror
+    extra-substituters = [ "{{ nixos_mirror_url }}" ];
+  };
+{% endif %}
+
+  # Boot configuration
+  boot.loader.grub.enable = true;
+  boot.loader.grub.device = "/dev/vda";
+  boot.loader.timeout = 1;
+
+  # Kernel
+  boot.kernelPackages = pkgs.linuxPackages_latest;
+
+  # Enable 9p support if configured
+{% if bootlinux_9p is defined and bootlinux_9p %}
+  boot.kernelModules = [ "9p" "9pnet_virtio" ];
+  boot.initrd.kernelModules = [ "9p" "9pnet_virtio" ];
+{% endif %}
+
+  # Networking
+  networking.useDHCP = lib.mkDefault true;
+
+  # Enable SSH
+  services.openssh = {
+    enable = true;
+    settings = {
+      PermitRootLogin = "yes";
+      PasswordAuthentication = false;
+      PubkeyAuthentication = true;
+    };
+  };
+
+  # Users
+  users.users.root = {
+    openssh.authorizedKeys.keys = [
+{% if nixos_ssh_authorized_key is defined %}
+      "{{ nixos_ssh_authorized_key }}"
+{% else %}
+      # SSH key will be generated during provisioning
+{% endif %}
+    ];
+  };
+
+  users.users.kdevops = {
+    isNormalUser = true;
+    extraGroups = [ "wheel" "libvirt" "kvm" ];
+    openssh.authorizedKeys.keys = [
+{% if nixos_ssh_authorized_key is defined %}
+      "{{ nixos_ssh_authorized_key }}"
+{% else %}
+      # SSH key will be generated during provisioning
+{% endif %}
+    ];
+  };
+
+  # Sudo without password for kdevops user
+  security.sudo.wheelNeedsPassword = false;
+
+  # Basic packages
+  environment.systemPackages = with pkgs; [
+    vim
+    git
+    tmux
+    htop
+    wget
+    curl
+    rsync
+    python3
+    gcc
+    gnumake
+    binutils
+    coreutils
+    findutils
+    procps
+    util-linux
+  ];
+
+  # Enable libvirt for nested virtualization if needed
+  virtualisation.libvirtd.enable = false;
+
+  # Filesystems
+  fileSystems."/" = {
+    device = "/dev/vda1";
+    fsType = "ext4";
+  };
+
+{% if bootlinux_9p is defined and bootlinux_9p %}
+  # 9P mount for shared kernel source
+  fileSystems."/mnt/linux" = {
+    device = "linux_source";
+    fsType = "9p";
+    options = [ "trans=virtio" "version=9p2000.L" "cache=loose" ];
+  };
+{% endif %}
+
+  # Time zone
+  time.timeZone = "UTC";
+
+  # Locale
+  i18n.defaultLocale = "en_US.UTF-8";
+
+  # State version
+  system.stateVersion = "24.05";
+
+  # Enable nix flakes
+  nix.settings.experimental-features = [ "nix-command" "flakes" ];
+
+  # Optimize storage
+  nix.gc = {
+    automatic = true;
+    dates = "weekly";
+    options = "--delete-older-than 7d";
+  };
+}
diff --git a/playbooks/templates/nixos/flake.nix.j2 b/playbooks/templates/nixos/flake.nix.j2
new file mode 100644
index 00000000..52b5b680
--- /dev/null
+++ b/playbooks/templates/nixos/flake.nix.j2
@@ -0,0 +1,38 @@
+{
+  description = "kdevops NixOS VMs";
+
+  inputs = {
+    nixpkgs.url = "github:NixOS/nixpkgs/{{ nixos_channel }}";
+  };
+
+  outputs = { self, nixpkgs }: {
+    nixosConfigurations = {
+{% for node in groups['all'] if node != 'localhost' %}
+      "{{ node }}" = nixpkgs.lib.nixosSystem {
+        system = "x86_64-linux";
+        modules = [
+          ./generated/configuration.nix
+          ./generated/hardware-configuration.nix
+{% if nixos_enable_workflow_deps %}
+          ./generated/workflow-deps.nix
+{% endif %}
+          ({ ... }: {
+            networking.hostName = "{{ node }}";
+          })
+        ];
+      };
+{% endfor %}
+    };
+
+    # Build all VMs
+    defaultPackage.x86_64-linux =
+      nixpkgs.legacyPackages.x86_64-linux.writeShellScriptBin "build-vms" ''
+        echo "Building NixOS VMs..."
+{% for node in groups['all'] if node != 'localhost' %}
+        echo "Building {{ node }}..."
+        nix build .#nixosConfigurations.{{ node }}.config.system.build.vm
+{% endfor %}
+        echo "All VMs built successfully!"
+      '';
+  };
+}
diff --git a/playbooks/templates/nixos/hardware-configuration.nix.j2 b/playbooks/templates/nixos/hardware-configuration.nix.j2
new file mode 100644
index 00000000..bb91bba4
--- /dev/null
+++ b/playbooks/templates/nixos/hardware-configuration.nix.j2
@@ -0,0 +1,42 @@
+{ config, lib, pkgs, modulesPath, ... }:
+
+{
+  imports = [
+    (modulesPath + "/profiles/qemu-guest.nix")
+  ];
+
+  boot.initrd.availableKernelModules = [ "ahci" "xhci_pci" "virtio_pci" "sr_mod" "virtio_blk" ];
+  boot.initrd.kernelModules = [ ];
+  boot.kernelModules = [ "kvm-intel" "kvm-amd" ];
+  boot.extraModulePackages = [ ];
+
+  # Root filesystem
+  fileSystems."/" = {
+    device = "/dev/disk/by-label/nixos";
+    fsType = "ext4";
+  };
+
+  # Boot partition (if UEFI is enabled)
+{% if guestfs_requires_uefi is defined and guestfs_requires_uefi %}
+  fileSystems."/boot" = {
+    device = "/dev/disk/by-label/boot";
+    fsType = "vfat";
+  };
+{% endif %}
+
+  # Swap
+  swapDevices = [ ];
+
+  # Networking
+  networking.useDHCP = lib.mkDefault true;
+  networking.interfaces.eth0.useDHCP = lib.mkDefault true;
+
+  # Hardware configuration
+  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
+  hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
+  hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
+
+  # Virtualization features
+  virtualisation.hypervGuest.enable = false;
+  virtualisation.vmware.guest.enable = false;
+}
diff --git a/playbooks/templates/nixos/run-vm-wrapper.sh.j2 b/playbooks/templates/nixos/run-vm-wrapper.sh.j2
new file mode 100644
index 00000000..2a87c3ea
--- /dev/null
+++ b/playbooks/templates/nixos/run-vm-wrapper.sh.j2
@@ -0,0 +1,159 @@
+#!/bin/bash
+# Wrapper script for NixOS VM: {{ vm_name }}
+# Generated by kdevops
+
+set -e
+
+# Configuration
+VM_NAME="{{ vm_name }}"
+VM_DISK="{{ nixos_storage_dir }}/{{ vm_name }}.qcow2"
+VM_MEMORY="{{ vm_memory | default(4096) }}"
+VM_CPUS="{{ vm_vcpus | default(4) }}"
+SSH_PORT="{{ 10022 + vm_index|default(0)|int }}"
+MONITOR_PORT="{{ vm_monitor_port | default(55555 + vm_index|default(0)|int) }}"
+VNC_PORT="{{ vm_vnc_port | default(5900 + vm_index|default(0)|int) }}"
+
+# Network configuration for SSH access
+# Using user mode networking with port forwarding
+NETWORK_OPTS="hostfwd=tcp::${SSH_PORT}-:22"
+{% if nixos_enable_port_forwards is defined and nixos_enable_port_forwards %}
+{% for port in nixos_port_forwards | default([]) %}
+NETWORK_OPTS="${NETWORK_OPTS},hostfwd=tcp::{{ port.host }}-:{{ port.guest }}"
+{% endfor %}
+{% endif %}
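+# Example (hypothetical port): with SSH_PORT=10022 the guest is reachable as
+#   ssh -p 10022 kdevops@localhost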
+
+# Shared directories
+{% if nixos_shared_dirs is defined %}
+SHARED_DIRS=""
+{% for dir in nixos_shared_dirs %}
+SHARED_DIRS="${SHARED_DIRS} -virtfs local,path={{ dir.source }},security_model=none,mount_tag={{ dir.tag }}"
+{% endfor %}
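+# Each -virtfs export above can be mounted inside the guest via its
+# mount_tag, for example:
+#   mount -t 9p -o trans=virtio,version=9p2000.L <mount_tag> /mnt/share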
+{% endif %}
+
+# Function to start the VM
+start_vm() {
+    if [ -f "/tmp/${VM_NAME}.pid" ] && kill -0 $(cat /tmp/${VM_NAME}.pid) 2>/dev/null; then
+        echo "VM ${VM_NAME} is already running (PID: $(cat /tmp/${VM_NAME}.pid))"
+        return 1
+    fi
+
+    echo "Starting NixOS VM: ${VM_NAME}"
+    echo "  Disk: ${VM_DISK}"
+    echo "  Memory: ${VM_MEMORY}MB"
+    echo "  CPUs: ${VM_CPUS}"
+    echo "  SSH: localhost:${SSH_PORT}"
+    echo "  Monitor: 127.0.0.1:${MONITOR_PORT}"
+    echo "  VNC: :$((VNC_PORT - 5900))"
+
+    # Check if disk exists
+    if [ ! -f "${VM_DISK}" ]; then
+        echo "Error: VM disk image not found: ${VM_DISK}"
+        echo "Please run 'make bringup' to build the NixOS disk image first"
+        return 1
+    fi
+
+    # Check disk image size
+    DISK_SIZE=$(stat -c%s "${VM_DISK}" 2>/dev/null || stat -f%z "${VM_DISK}" 2>/dev/null || echo 0)
+    if [ "$DISK_SIZE" -lt 1048576 ]; then
+        echo "Warning: Disk image appears too small (${DISK_SIZE} bytes)"
+        echo "The image may not contain a proper NixOS installation"
+    fi
+
+    # Create extra storage drives if they don't exist
+    EXTRA_DRIVES_DIR="{{ nixos_storage_dir }}/extra-drives"
+    mkdir -p "${EXTRA_DRIVES_DIR}"
+
+    # Create 4 extra sparse drives for fstests (100GB each)
+    for i in {0..3}; do
+        EXTRA_DRIVE="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra${i}.qcow2"
+        if [ ! -f "${EXTRA_DRIVE}" ]; then
+            echo "Creating extra drive ${i}: ${EXTRA_DRIVE}"
+            qemu-img create -f qcow2 "${EXTRA_DRIVE}" 100G
+        fi
+    done
+
+    # Start QEMU with the NixOS disk image
+    echo "Starting QEMU with NixOS disk image..."
+    qemu-system-x86_64 \
+        -name "${VM_NAME}" \
+        -m "${VM_MEMORY}" \
+        -smp "${VM_CPUS}" \
+        -enable-kvm \
+        -machine pc,accel=kvm \
+        -cpu host \
+        -drive file="${VM_DISK}",if=virtio,format=qcow2 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra0.qcow2",format=qcow2,if=none,id=drv0 \
+        -device virtio-blk-pci,drive=drv0,serial=kdevops0 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra1.qcow2",format=qcow2,if=none,id=drv1 \
+        -device virtio-blk-pci,drive=drv1,serial=kdevops1 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra2.qcow2",format=qcow2,if=none,id=drv2 \
+        -device virtio-blk-pci,drive=drv2,serial=kdevops2 \
+        -drive file="${EXTRA_DRIVES_DIR}/${VM_NAME}-extra3.qcow2",format=qcow2,if=none,id=drv3 \
+        -device virtio-blk-pci,drive=drv3,serial=kdevops3 \
+        -netdev user,id=net0,${NETWORK_OPTS} \
+        -device virtio-net-pci,netdev=net0 \
+        -monitor tcp:127.0.0.1:${MONITOR_PORT},server,nowait \
+        -vnc :$((VNC_PORT - 5900)) \
+        -daemonize \
+        -pidfile "/tmp/${VM_NAME}.pid" \
+        ${SHARED_DIRS:-}
+
+    echo "VM ${VM_NAME} started successfully"
+}
+
+# Function to stop the VM
+stop_vm() {
+    if [ -f "/tmp/${VM_NAME}.pid" ]; then
+        PID=$(cat "/tmp/${VM_NAME}.pid")
+        if kill -0 "$PID" 2>/dev/null; then
+            echo "Stopping VM ${VM_NAME} (PID: $PID)"
+            kill "$PID"
+            rm -f "/tmp/${VM_NAME}.pid"
+        else
+            echo "VM ${VM_NAME} is not running"
+            rm -f "/tmp/${VM_NAME}.pid"
+        fi
+    else
+        echo "VM ${VM_NAME} is not running (no PID file)"
+    fi
+}
+
+# Function to check VM status
+status_vm() {
+    if [ -f "/tmp/${VM_NAME}.pid" ]; then
+        PID=$(cat "/tmp/${VM_NAME}.pid")
+        if kill -0 "$PID" 2>/dev/null; then
+            echo "VM ${VM_NAME} is running (PID: $PID)"
+            return 0
+        else
+            echo "VM ${VM_NAME} is not running (stale PID file)"
+            rm -f "/tmp/${VM_NAME}.pid"
+            return 1
+        fi
+    else
+        echo "VM ${VM_NAME} is not running"
+        return 1
+    fi
+}
+
+# Main script logic
+case "${1:-start}" in
+    start)
+        start_vm
+        ;;
+    stop)
+        stop_vm
+        ;;
+    status)
+        status_vm
+        ;;
+    restart)
+        stop_vm
+        sleep 2
+        start_vm
+        ;;
+    *)
+        echo "Usage: $0 {start|stop|status|restart}"
+        exit 1
+        ;;
+esac
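[Editor's note: the start/stop/status handling above all hinges on one idiom — a PID file plus `kill -0` to probe liveness without signaling. A minimal Python sketch of the same classification logic follows; the helper names and paths are illustrative, not part of the patch.]

```python
import os


def pid_alive(pid: int) -> bool:
    """Return True if a process with this PID exists (mirrors `kill -0 $PID`)."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission probe, sends nothing
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True


def vm_status(pidfile: str) -> str:
    """Classify a VM the way status_vm does: running, stale PID file, or stopped."""
    if not os.path.exists(pidfile):
        return "stopped"
    with open(pidfile) as f:
        pid = int(f.read().strip())
    return "running" if pid_alive(pid) else "stale"
```

A stale PID file (process gone, file left behind) is the case the script cleans up with `rm -f` before reporting "not running".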
diff --git a/playbooks/templates/nixos/vm-libvirt.xml.j2 b/playbooks/templates/nixos/vm-libvirt.xml.j2
new file mode 100644
index 00000000..915a6090
--- /dev/null
+++ b/playbooks/templates/nixos/vm-libvirt.xml.j2
@@ -0,0 +1,96 @@
+<domain type='kvm'>
+  <name>{{ vm_name }}</name>
+  <memory unit='MiB'>{{ vm_memory }}</memory>
+  <vcpu placement='static'>{{ vm_vcpus }}</vcpu>
+
+  <os>
+    <type arch='x86_64' machine='q35'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+
+  <features>
+    <acpi/>
+    <apic/>
+    <vmport state='off'/>
+  </features>
+
+  <cpu mode='host-passthrough'>
+    <topology sockets='1' cores='{{ vm_vcpus }}' threads='1'/>
+  </cpu>
+
+  <clock offset='utc'>
+    <timer name='rtc' tickpolicy='catchup'/>
+    <timer name='pit' tickpolicy='delay'/>
+    <timer name='hpet' present='no'/>
+  </clock>
+
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+
+  <pm>
+    <suspend-to-mem enabled='no'/>
+    <suspend-to-disk enabled='no'/>
+  </pm>
+
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2' cache='none' io='native'/>
+      <source file='{{ vm_disk }}'/>
+      <target dev='vda' bus='virtio'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
+    </disk>
+
+{% if bootlinux_9p is defined and bootlinux_9p %}
+    <!-- 9P filesystem for kernel source sharing -->
+    <filesystem type='mount' accessmode='passthrough'>
+      <source dir='{{ topdir_path }}/linux'/>
+      <target dir='linux_source'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
+    </filesystem>
+{% endif %}
+
+    <interface type='network'>
+      <source network='default'/>
+      <model type='virtio'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+    </interface>
+
+    <serial type='pty'>
+      <target type='isa-serial' port='0'>
+        <model name='isa-serial'/>
+      </target>
+    </serial>
+
+    <console type='pty'>
+      <target type='serial' port='0'/>
+    </console>
+
+    <input type='tablet' bus='usb'>
+      <address type='usb' bus='0' port='1'/>
+    </input>
+
+    <input type='mouse' bus='ps2'/>
+    <input type='keyboard' bus='ps2'/>
+
+    <graphics type='vnc' port='-1' autoport='yes'>
+      <listen type='address' address='127.0.0.1'/>
+    </graphics>
+
+    <video>
+      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
+    </video>
+
+    <memballoon model='virtio'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
+    </memballoon>
+
+    <rng model='virtio'>
+      <backend model='random'>/dev/urandom</backend>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
+    </rng>
+  </devices>
+</domain>
diff --git a/playbooks/templates/nixos/vms.nix.j2 b/playbooks/templates/nixos/vms.nix.j2
new file mode 100644
index 00000000..f2af2b96
--- /dev/null
+++ b/playbooks/templates/nixos/vms.nix.j2
@@ -0,0 +1,45 @@
+# NixOS VM configurations for kdevops
+{ config, pkgs, ... }:
+
+{
+  # VM-specific configurations
+  virtualisation.memorySize = {{ nixos_vm_memory_mb }};
+  virtualisation.diskSize = {{ nixos_vm_disk_size_gb * 1024 }};  # Convert GB to MB
+  virtualisation.cores = {{ nixos_vm_vcpus }};
+
+  # Enable virtio for better performance
+  virtualisation.qemu.options = [
+    "-machine q35,accel=kvm"
+    "-cpu host"
+    "-smp {{ nixos_vm_vcpus }}"
+    "-m {{ nixos_vm_memory_mb }}M"
+  ];
+
+  # Network configuration for VMs
+  networking.useDHCP = true;
+  networking.nameservers = [ "8.8.8.8" "8.8.4.4" ];
+
+  # VM-specific services
+  services.qemuGuest.enable = true;
+
+  # Enable console access
+  services.getty.autologinUser = "kdevops";
+  boot.kernelParams = [ "console=ttyS0,115200n8" "console=tty0" ];
+
+  # Additional VM optimizations
+  boot.initrd.availableKernelModules = [ "virtio_net" "virtio_pci" "virtio_blk" "virtio_scsi" "9p" "9pnet_virtio" ];
+  boot.kernelModules = [ "virtio_balloon" "virtio_rng" ];
+
+{% if bootlinux_9p is defined and bootlinux_9p %}
+  # 9P filesystem support for kernel development
+  fileSystems."/mnt/linux" = {
+    device = "linux_source";
+    fsType = "9p";
+    options = [ "trans=virtio" "version=9p2000.L" "rw" ];
+  };
+{% endif %}
+
+  # Ensure VM can be reached via SSH
+  services.openssh.settings.PermitRootLogin = "yes";
+  services.openssh.settings.PasswordAuthentication = false;
+}
diff --git a/playbooks/templates/nixos/workflow-deps.nix.j2 b/playbooks/templates/nixos/workflow-deps.nix.j2
new file mode 100644
index 00000000..d01aac08
--- /dev/null
+++ b/playbooks/templates/nixos/workflow-deps.nix.j2
@@ -0,0 +1,127 @@
+{ config, pkgs, lib, ... }:
+
+{
+  # Workflow-specific dependencies based on enabled kdevops workflows
+  environment.systemPackages = with pkgs; [
+{% if kdevops_workflow_enable_fstests is defined and kdevops_workflow_enable_fstests %}
+    # fstests dependencies
+    xfsprogs
+    btrfs-progs
+    e2fsprogs
+    f2fs-tools
+    fio
+    dbench
+    stress-ng
+    attr
+    acl
+    quota
+    nfs-utils
+    cifs-utils
+{% endif %}
+
+{% if kdevops_workflow_enable_blktests is defined and kdevops_workflow_enable_blktests %}
+    # blktests dependencies
+    nvme-cli
+    sg3_utils
+    targetcli
+    multipath-tools
+    dmraid
+    lvm2
+    mdadm
+{% endif %}
+
+{% if kdevops_workflow_enable_selftests is defined and kdevops_workflow_enable_selftests %}
+    # selftests dependencies
+    perf-tools
+    numactl
+    libcap
+    libseccomp
+    keyutils
+    iproute2
+    ethtool
+    # tc ships with iproute2 (listed above)
+{% endif %}
+
+{% if kdevops_workflow_enable_mmtests is defined and kdevops_workflow_enable_mmtests %}
+    # mmtests dependencies
+    gnuplot
+    perl
+    cpupower
+    dmidecode
+    sysstat
+    iotop
+    powertop
+{% endif %}
+
+{% if kdevops_workflow_enable_pynfs is defined and kdevops_workflow_enable_pynfs %}
+    # pynfs dependencies
+    python3
+    python3Packages.ply
+    nfs-utils
+{% endif %}
+
+{% if kdevops_workflow_enable_ltp is defined and kdevops_workflow_enable_ltp %}
+    # LTP dependencies
+    autoconf
+    automake
+    m4
+    libtool
+    pkg-config
+    flex
+    bison
+    acl
+    libcap
+    libaio
+    numactl
+    libsepol
+    libselinux
+    openssl
+{% endif %}
+
+{% if kdevops_workflow_enable_sysbench is defined and kdevops_workflow_enable_sysbench %}
+    # sysbench dependencies
+    sysbench
+    mysql
+    postgresql
+{% endif %}
+
+{% if kdevops_workflow_enable_gitr is defined and kdevops_workflow_enable_gitr %}
+    # git regression testing dependencies
+    git
+    gitFull
+    perl
+    subversion
+    mercurial
+{% endif %}
+
+    # Common build tools often needed
+    autoconf
+    automake
+    libtool
+    pkg-config
+    flex
+    bison
+    bc
+    openssl
+    elfutils
+    libelf
+  ];
+
+{% if kdevops_workflow_enable_fstests is defined and kdevops_workflow_enable_fstests %}
+  # Enable required services for fstests
+  services.nfs.server.enable = true;
+  services.rpcbind.enable = true;
+{% endif %}
+
+{% if kdevops_workflow_enable_sysbench is defined and kdevops_workflow_enable_sysbench %}
+  # Database services for sysbench
+  services.mysql = {
+    enable = true;
+    package = pkgs.mariadb;
+  };
+
+  services.postgresql = {
+    enable = true;
+  };
+{% endif %}
+}
diff --git a/playbooks/update_ssh_config_nixos.yml b/playbooks/update_ssh_config_nixos.yml
new file mode 100644
index 00000000..e3275b80
--- /dev/null
+++ b/playbooks/update_ssh_config_nixos.yml
@@ -0,0 +1,57 @@
+---
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+- name: Update SSH configuration for NixOS VMs
+  hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Ensure .ssh directory exists
+      ansible.builtin.file:
+        path: "{{ topdir_path }}/.ssh"
+        state: directory
+        mode: "0700"
+
+    - name: Check if SSH key exists
+      ansible.builtin.stat:
+        path: "{{ topdir_path }}/.ssh/kdevops_id_rsa"
+      register: ssh_key
+
+    - name: Generate SSH key pair if not exists
+      ansible.builtin.command: |
+        ssh-keygen -t rsa -b 4096 -f {{ topdir_path }}/.ssh/kdevops_id_rsa -N '' -C 'kdevops@nixos'
+      when: not ssh_key.stat.exists
+
+    - name: Set proper permissions on SSH key
+      ansible.builtin.file:
+        path: "{{ item }}"
+        mode: "0600"
+      loop:
+        - "{{ topdir_path }}/.ssh/kdevops_id_rsa"
+        - "{{ topdir_path }}/.ssh/kdevops_id_rsa.pub"
+      # The key exists at this point: it was either found or just generated
+
+    - name: Get list of NixOS VMs
+      ansible.builtin.shell: |
+        virsh -c {{ libvirt_uri | default('qemu:///system') }} list --name | grep -E "({{ kdevops_host_prefix }}|nixos)" || true
+      register: nixos_vms
+      changed_when: false
+
+    - name: Update SSH config entries
+      ansible.builtin.blockinfile:
+        path: "{{ topdir_path }}/.ssh/config"
+        create: true
+        mode: "0600"
+        marker: "# {mark} ANSIBLE MANAGED BLOCK - NixOS VMs"
+        block: |
+          {% for vm in nixos_vms.stdout_lines %}
+          Host {{ vm }}
+              HostName {{ hostvars[vm]['ansible_host'] | default('192.168.122.2') }}
+              User kdevops
+              Port 22
+              IdentityFile {{ topdir_path }}/.ssh/kdevops_id_rsa
+              StrictHostKeyChecking no
+              UserKnownHostsFile /dev/null
+              LogLevel ERROR
+
+          {% endfor %}
+      when: nixos_vms.stdout_lines | length > 0
diff --git a/scripts/detect_libvirt_session.sh b/scripts/detect_libvirt_session.sh
new file mode 100755
index 00000000..caea9367
--- /dev/null
+++ b/scripts/detect_libvirt_session.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+#
+# Detect the appropriate libvirt session type (system vs user) based on
+# distribution defaults, similar to how guestfs handles it.
+
+SCRIPTS_DIR=$(dirname "$0")
+source "${SCRIPTS_DIR}/libvirt_pool.sh"
+
+# Default to the system session; detection or explicit config may override below
+LIBVIRT_URI="qemu:///system"
+
+# Get the pool variables which includes distribution detection
+get_pool_vars
+
+# Fedora defaults to user session
+if [[ "$USES_QEMU_USER_SESSION" == "y" ]]; then
+    LIBVIRT_URI="qemu:///session"
+fi
+
+# Override detection if explicitly configured
+if [[ -n "$CONFIG_LIBVIRT_URI_PATH" ]]; then
+    LIBVIRT_URI="$CONFIG_LIBVIRT_URI_PATH"
+fi
+
+echo "$LIBVIRT_URI"
diff --git a/scripts/nixos.Makefile b/scripts/nixos.Makefile
new file mode 100644
index 00000000..7a88c527
--- /dev/null
+++ b/scripts/nixos.Makefile
@@ -0,0 +1,93 @@
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+NIXOS_ARGS :=
+
+KDEVOPS_NODES_TEMPLATE :=	$(KDEVOPS_NODES_ROLE_TEMPLATE_DIR)/nixos_nodes.j2
+KDEVOPS_NODES :=		nixos/kdevops_nodes.yaml
+
+export KDEVOPS_PROVISIONED_SSH := $(KDEVOPS_PROVISIONED_SSH_DEFAULT_GUARD)
+
+NIXOS_ARGS += nixos_path='$(TOPDIR_PATH)/nixos'
+NIXOS_ARGS += data_home_dir=/home/kdevops
+NIXOS_ARGS += nixos_channel=$(CONFIG_NIXOS_CHANNEL)
+
+NIXOS_ARGS += libvirt_provider=True
+
+QEMU_GROUP:=$(subst ",,$(CONFIG_LIBVIRT_QEMU_GROUP))
+NIXOS_ARGS += kdevops_storage_pool_group='$(QEMU_GROUP)'
+NIXOS_ARGS += storage_pool_group='$(QEMU_GROUP)'
+
+9P_HOST_CLONE :=
+ifeq (y,$(CONFIG_BOOTLINUX_9P))
+9P_HOST_CLONE := 9p_linux_clone
+endif
+
+LIBVIRT_PCIE_PASSTHROUGH :=
+ifeq (y,$(CONFIG_KDEVOPS_LIBVIRT_PCIE_PASSTHROUGH))
+LIBVIRT_PCIE_PASSTHROUGH := libvirt_pcie_passthrough_permissions
+endif
+
+ANSIBLE_EXTRA_ARGS += $(NIXOS_ARGS)
+
+NIXOS_BRINGUP_DEPS :=
+NIXOS_BRINGUP_DEPS +=  $(9P_HOST_CLONE)
+NIXOS_BRINGUP_DEPS +=  $(LIBVIRT_PCIE_PASSTHROUGH)
+NIXOS_BRINGUP_DEPS +=  install_nixos_deps
+
+KDEVOPS_PROVISION_METHOD		:= bringup_nixos
+KDEVOPS_PROVISION_STATUS_METHOD		:= status_nixos
+KDEVOPS_PROVISION_DESTROY_METHOD	:= destroy_nixos
+
+9p_linux_clone:
+	$(Q)$(MAKE) linux-clone
+
+libvirt_pcie_passthrough_permissions:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		playbooks/libvirt_pcie_passthrough.yml
+
+$(KDEVOPS_PROVISIONED_SSH): $(KDEVOPS_HOSTS_PREFIX)
+	$(Q)# The SSH connectivity is verified during NixOS VM provisioning
+	$(Q)# VMs get DHCP IPs and SSH is tested directly in the playbook
+	$(Q)touch $(KDEVOPS_PROVISIONED_SSH)
+
+install_nixos_deps:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		--limit 'localhost' \
+		playbooks/nixos.yml \
+		--extra-vars=@./extra_vars.yaml \
+		--tags install-deps
+
+generate_nixos_configs:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		--limit 'localhost' \
+		playbooks/nixos.yml \
+		--extra-vars=@./extra_vars.yaml \
+		--tags generate-configs
+
+bringup_nixos: $(NIXOS_BRINGUP_DEPS) generate_nixos_configs
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		$(KDEVOPS_PLAYBOOKS_DIR)/nixos.yml \
+		--extra-vars=@./extra_vars.yaml \
+		--tags build-vms,bringup,console
+PHONY += bringup_nixos
+
+status_nixos:
+	$(Q)scripts/status_nixos.sh
+PHONY += status_nixos
+
+destroy_nixos:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		playbooks/nixos.yml \
+		--extra-vars=@./extra_vars.yaml \
+		--tags destroy
+PHONY += destroy_nixos
+
+clean_nixos_cache: destroy_nixos
+	$(Q)echo "Performing deep clean of NixOS cached images..."
+	$(Q)sudo /nix/store/*/bin/nix-collect-garbage -d 2>/dev/null || \
+		sudo nix-collect-garbage -d 2>/dev/null || \
+		echo "Warning: Could not run nix garbage collection"
+	$(Q)rm -rf nixos/generated nixos/result nixos/*.qcow2 2>/dev/null || true
+	$(Q)rm -rf /xfs1/libvirt/kdevops/nixos/nixos-image-* 2>/dev/null || true
+	$(Q)echo "NixOS cache cleaned"
+PHONY += clean_nixos_cache
diff --git a/scripts/nixos_ssh_key_name.py b/scripts/nixos_ssh_key_name.py
new file mode 100755
index 00000000..8dffb993
--- /dev/null
+++ b/scripts/nixos_ssh_key_name.py
@@ -0,0 +1,55 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+"""Generate SSH key name for NixOS VMs based on directory location."""
+
+import os
+import sys
+import hashlib
+
+
+def get_ssh_key_name():
+    """Generate SSH key name based on kdevops project directory."""
+    # Find the kdevops root directory
+    # Start from the script's location
+    script_dir = os.path.dirname(os.path.abspath(__file__))
+
+    # The script is in kdevops/scripts/, so go up one level
+    kdevops_root = os.path.dirname(script_dir)
+
+    # Use the kdevops root directory for consistent key naming
+    # This ensures the same key is used regardless of where the script is called from
+    cwd = kdevops_root
+
+    # Get the last two directory components for the key name
+    path_parts = cwd.split("/")
+    if len(path_parts) >= 2:
+        # Use last two directories
+        key_suffix = "-".join(path_parts[-2:])
+    else:
+        # Use just the last directory
+        key_suffix = path_parts[-1] if path_parts else "kdevops"
+
+    # Create a short hash to ensure uniqueness
+    path_hash = hashlib.sha256(cwd.encode()).hexdigest()[:8]
+
+    # Construct the key name
+    key_name = f"kdevops-nixos-{key_suffix}-{path_hash}"
+
+    return key_name
+
+
+def main():
+    """Main function."""
+    if len(sys.argv) > 1 and sys.argv[1] == "--path":
+        # Return full path to key
+        key_name = get_ssh_key_name()
+        key_path = os.path.expanduser(f"~/.ssh/{key_name}")
+        print(key_path)
+    else:
+        # Return just the key name
+        print(get_ssh_key_name())
+
+
+if __name__ == "__main__":
+    main()
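[Editor's note: the key-naming scheme above is easy to reason about in isolation — the last two path components give a human-readable suffix, and an 8-character SHA-256 prefix of the full path guarantees uniqueness across deployments. A standalone sketch mirroring that derivation follows; the sample paths are illustrative.]

```python
import hashlib


def nixos_ssh_key_name(project_dir: str) -> str:
    """Derive the per-instance SSH key name, mirroring nixos_ssh_key_name.py."""
    parts = [p for p in project_dir.split("/") if p]
    if len(parts) >= 2:
        suffix = "-".join(parts[-2:])          # last two directory components
    else:
        suffix = parts[-1] if parts else "kdevops"
    # Short hash of the full path keeps keys unique per checkout location
    digest = hashlib.sha256(project_dir.encode()).hexdigest()[:8]
    return f"kdevops-nixos-{suffix}-{digest}"
```

Two checkouts named `kdevops` under different parents thus get distinct keys, which is exactly the conflict the commit message calls out.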
diff --git a/scripts/provision.Makefile b/scripts/provision.Makefile
index a5ab8b84..f04264f6 100644
--- a/scripts/provision.Makefile
+++ b/scripts/provision.Makefile
@@ -60,6 +60,10 @@ ifeq (y,$(CONFIG_GUESTFS))
 include scripts/guestfs.Makefile
 endif
 
+ifeq (y,$(CONFIG_NIXOS))
+include scripts/nixos.Makefile
+endif
+
 KDEVOPS_MRPROPER += $(KDEVOPS_PROVISIONED_SSH)
 KDEVOPS_MRPROPER += $(KDEVOPS_PROVISIONED_DEVCONFIG)
 
diff --git a/scripts/status_nixos.sh b/scripts/status_nixos.sh
new file mode 100755
index 00000000..0ee9e09f
--- /dev/null
+++ b/scripts/status_nixos.sh
@@ -0,0 +1,57 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+#
+# Show status of NixOS VMs
+
+SCRIPTS_DIR=$(dirname "$0")
+source "${SCRIPTS_DIR}/libvirt_pool.sh"
+
+# Get libvirt session settings
+get_pool_vars
+
+# Detect libvirt URI
+if [[ -x "${SCRIPTS_DIR}/detect_libvirt_session.sh" ]]; then
+    LIBVIRT_URI=$("${SCRIPTS_DIR}/detect_libvirt_session.sh")
+else
+    LIBVIRT_URI="qemu:///system"
+fi
+
+export LIBVIRT_DEFAULT_URI="$LIBVIRT_URI"
+
+echo "NixOS VM Status (using $LIBVIRT_URI):"
+echo "======================================"
+
+# Check if virsh is available
+if ! command -v virsh &> /dev/null; then
+    echo "Error: virsh command not found. Please install libvirt."
+    exit 1
+fi
+
+# List all VMs with nixos prefix or from kdevops
+if [[ "$USES_QEMU_USER_SESSION" != "y" && "$CAN_SUDO" == "y" ]]; then
+    sudo virsh list --all | grep -E "(nixos|kdevops)" || echo "No NixOS VMs found."
+else
+    virsh list --all | grep -E "(nixos|kdevops)" || echo "No NixOS VMs found."
+fi
+
+echo ""
+echo "Network Status:"
+echo "==============="
+
+# Show network status
+if [[ "$USES_QEMU_USER_SESSION" != "y" && "$CAN_SUDO" == "y" ]]; then
+    sudo virsh net-list --all | grep nixos || echo "No NixOS networks found."
+else
+    virsh net-list --all | grep nixos || echo "No NixOS networks found."
+fi
+
+echo ""
+echo "Storage Pool Status:"
+echo "===================="
+
+# Show storage pool status
+if [[ "$USES_QEMU_USER_SESSION" != "y" && "$CAN_SUDO" == "y" ]]; then
+    sudo virsh pool-list --all | grep nixos || echo "No NixOS storage pools found."
+else
+    virsh pool-list --all | grep nixos || echo "No NixOS storage pools found."
+fi
diff --git a/scripts/update_ssh_config_nixos.py b/scripts/update_ssh_config_nixos.py
new file mode 100755
index 00000000..206c69cc
--- /dev/null
+++ b/scripts/update_ssh_config_nixos.py
@@ -0,0 +1,133 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+"""Update SSH config for NixOS VMs.
+
+This script manages SSH configuration entries for NixOS VMs running
+via native QEMU virtualization (not libvirt). It handles both adding
+and removing SSH config entries.
+
+Usage:
+    update_ssh_config_nixos.py update <hostname> <host> <port> <user> <ssh_config> <privkey> <tag>
+    update_ssh_config_nixos.py remove <hostname> '' '' '' <ssh_config> '' <tag>
+"""
+
+import os
+import sys
+import re
+from pathlib import Path
+
+
+def update_ssh_config(
+    action, hostname, host_ip, port, username, ssh_config_path, ssh_key_path, tag
+):
+    """Update or remove SSH config entries for NixOS VMs.
+
+    Args:
+        action: 'update' to add/update entry, 'remove' to remove entry
+        hostname: VM hostname
+        host_ip: Host IP (usually localhost for NixOS VMs)
+        port: SSH port number
+        username: SSH username
+        ssh_config_path: Path to SSH config file
+        ssh_key_path: Path to SSH private key
+        tag: Tag to identify entries (e.g., 'NixOS VM')
+    """
+
+    ssh_config_path = os.path.expanduser(ssh_config_path)
+
+    # Ensure SSH config directory exists
+    os.makedirs(os.path.dirname(ssh_config_path), exist_ok=True)
+
+    # Read existing config
+    config_content = ""
+    if os.path.exists(ssh_config_path):
+        with open(ssh_config_path, "r") as f:
+            config_content = f.read()
+
+    # Pattern to match our managed entries
+    entry_pattern = re.compile(
+        rf"^# kdevops-managed: {re.escape(tag)} - {re.escape(hostname)}\n"
+        r"Host [^\n]+\n"
+        r"(?:[ \t]+[^\n]+\n)*",
+        re.MULTILINE,
+    )
+
+    if action == "remove":
+        # Remove existing entry
+        config_content = entry_pattern.sub("", config_content)
+        print(f"Removed SSH config entry for {hostname}")
+
+    elif action == "update":
+        # Remove existing entry first
+        config_content = entry_pattern.sub("", config_content)
+
+        # Create new entry
+        new_entry = f"""# kdevops-managed: {tag} - {hostname}
+Host {hostname}
+    HostName {host_ip}
+    Port {port}
+    User {username}
+    IdentityFile {ssh_key_path}
+    StrictHostKeyChecking no
+    UserKnownHostsFile /dev/null
+    LogLevel ERROR
+
+"""
+
+        # Add new entry at the end
+        config_content = config_content.rstrip() + "\n\n" + new_entry
+        print(f"Updated SSH config entry for {hostname} (port {port})")
+
+    # Write updated config
+    with open(ssh_config_path, "w") as f:
+        f.write(config_content)
+
+
+def main():
+    """Main function to handle command line arguments."""
+    if len(sys.argv) < 8:
+        print(
+            "Usage: update_ssh_config_nixos.py <action> <hostname> <host> <port> <user> <ssh_config> <privkey> <tag>"
+        )
+        print("  action: 'update' or 'remove'")
+        print("  hostname: VM hostname")
+        print("  host: Host IP (use 'localhost' for local VMs)")
+        print("  port: SSH port number")
+        print("  user: SSH username")
+        print("  ssh_config: Path to SSH config file")
+        print("  privkey: Path to SSH private key")
+        print("  tag: Tag to identify entries (e.g., 'NixOS VM')")
+        sys.exit(1)
+
+    action = sys.argv[1]
+    hostname = sys.argv[2]
+    host_ip = sys.argv[3] if sys.argv[3] else "localhost"
+    port = sys.argv[4] if sys.argv[4] else "22"
+    username = sys.argv[5] if sys.argv[5] else "kdevops"
+    ssh_config_path = sys.argv[6]
+    ssh_key_path = sys.argv[7] if len(sys.argv) > 7 else ""
+    tag = sys.argv[8] if len(sys.argv) > 8 else "NixOS VM"
+
+    if action not in ["update", "remove"]:
+        print(f"Error: Invalid action '{action}'. Use 'update' or 'remove'.")
+        sys.exit(1)
+
+    try:
+        update_ssh_config(
+            action,
+            hostname,
+            host_ip,
+            port,
+            username,
+            ssh_config_path,
+            ssh_key_path,
+            tag,
+        )
+    except Exception as e:
+        print(f"Error: {e}")
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()
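[Editor's note: the managed-block regex in update_ssh_config_nixos.py makes `update` idempotent — each update first strips any prior entry for the host, then appends a fresh one, so repeated bringups never accumulate duplicates. The round trip can be exercised in isolation; hostnames and ports below are illustrative.]

```python
import re


def entry_pattern(tag: str, hostname: str) -> "re.Pattern":
    """Match one kdevops-managed Host block, as in update_ssh_config_nixos.py."""
    return re.compile(
        rf"^# kdevops-managed: {re.escape(tag)} - {re.escape(hostname)}\n"
        r"Host [^\n]+\n"
        r"(?:[ \t]+[^\n]+\n)*",   # indented option lines belong to the block
        re.MULTILINE,
    )


def upsert(config: str, tag: str, hostname: str, port: int) -> str:
    """Remove any existing managed entry for hostname, then append a new one."""
    config = entry_pattern(tag, hostname).sub("", config)
    entry = (
        f"# kdevops-managed: {tag} - {hostname}\n"
        f"Host {hostname}\n"
        f"    HostName localhost\n"
        f"    Port {port}\n"
    )
    return (config.rstrip() + "\n\n" + entry) if config.strip() else entry
```

Because the pattern anchors on the marker comment, hand-written `Host` entries in the same file are never touched.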
-- 
2.50.1



Thread overview: 5+ messages
2025-08-27  9:32 [PATCH 0/3] kdevops: add initial nixos support Luis Chamberlain
2025-08-27  9:32 ` [PATCH 1/3] common: use fallback for group inference on remote systems Luis Chamberlain
2025-08-27  9:32 ` Luis Chamberlain [this message]
2025-08-27  9:32 ` [PATCH 3/3] mirror: add Nix binary cache mirroring support Luis Chamberlain
2025-08-29  7:50 ` [PATCH 0/3] kdevops: add initial nixos support Luis Chamberlain
