public inbox for kvm@vger.kernel.org
* [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites
@ 2026-04-02 16:13 tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 1/4] vfio: Create vfio_fs_type with inode per device tugrul.kukul
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: tugrul.kukul @ 2026-04-02 16:13 UTC (permalink / raw)
  To: gregkh, sashal, stable
  Cc: alex.williamson, kevin.tian, jgg, lorenzo.stoakes, david, akpm,
	mike.kravetz, linmiaohe, yi.l.liu, axelrasmussen, leah.rumancik,
	kvm, linux-kernel, david.nystrom

From: Tugrul Kukul <tugrul.kukul@est.tech>

This series fixes CVE-2024-27022 on 6.6 stable by first backporting the
necessary vfio refactoring, then applying the fork fix.

== Background ==

CVE-2024-27022 is a race condition in dup_mmap() during fork() where a
file-backed VMA becomes visible through the i_mmap tree before it is
fully initialized. A concurrent hugetlbfs operation (fallocate/punch_hole)
can access the VMA with a NULL or inconsistent vma_lock, causing a kernel
deadlock or WARNING.

The mainline fix (35e351780fa9, v6.9-rc5) defers linking the file VMA
into the i_mmap tree until the VMA is fully initialized.

== Why this hasn't been fixed in 6.6 until now ==

This CVE has had a troubled backport history on 6.6 stable:

1. cec11fa2eb51 - An incomplete backport to 6.6: it moved only
   hugetlb_dup_vma_private() and vm_ops->open(), leaving
   vma_iter_bulk_store() and mm->map_count++ in place, which
   caused xfstests failures.

2. dd782da47076 - Sam James reverted the incomplete backport. [1]

3. Leah Rumancik attempted a correct backport but discovered it
   introduced a vfio-pci ordering issue: vm_ops->open() being called
   before copy_page_range() breaks vfio-pci's zap-then-fault mechanism.
   Leah withdrew the patch. [2]

4. Axel Rasmussen backported Alex Williamson's 3 vfio refactor
   commits to both 6.9 and 6.6 stable [3][4]. The 6.9 backport was
   accepted [5], but for 6.6 Alex Williamson pointed out that the
   fork fix was still reverted; without it, the vfio patches alone
   are unnecessary. Axel withdrew the 6.6 series.

5. 6.6 stable has remained unfixed since July 2024.

== This series ==

This series picks up Axel's withdrawn 6.6 backport of the vfio
refactor patches [4] and adds the missing fork fix on top, completing
the work that was left unfinished. Patches 1-3 are Alex Williamson's
vfio refactor (backported by Axel Rasmussen), patch 4 is the CVE fix
adapted for 6.6 stable.

  1/4 vfio: Create vfio_fs_type with inode per device
  2/4 vfio/pci: Use unmap_mapping_range()
  3/4 vfio/pci: Insert full vma on mmap'd MMIO fault
  4/4 fork: defer linking file vma until vma is fully initialized

== 6.6 stable adaptations ==

Patch 4/4 (fork: defer linking file vma):
 - 6.6 uses vma_iter_bulk_store(), which can fail, unlike mainline's
   __mt_dup(). The existing error handling via goto
   fail_nomem_vmi_store is preserved.

== Testing ==

CVE reproducer (custom fork/punch_hole stress test, 60s):
 - Unpatched: deadlock in hugetlb_fault within seconds
 - Patched: 2174 forks completed, zero warnings (KASAN+LOCKDEP enabled)

xfstests quick group (672 tests, ext4, virtme-ng):
 - 65 failures, all pre-existing or KASAN-overhead timeouts
 - Zero patch-attributable regressions
 - Leah's 4 specific tests that caused the original revert
   (ext4/303, generic/051, generic/054, generic/069) all pass

VFIO + fork stress test (CONFIG_VFIO=y, hugetlbfs):
 - CVE reproducer with vfio modules active: zero warnings

Yocto CI integration (~87,900 tests per build, LTP+ptest+runtime):
 - No known regressions

dmesg analysis (KASAN, LOCKDEP, PROVE_LOCKING, DEBUG_VM, DEBUG_LIST):
 - Zero memory safety, locking, or VMA state issues across ~38 hours
   of testing

== References ==

[1] Revert discussion:
    https://lore.kernel.org/stable/20240604004751.3883227-1-leah.rumancik@gmail.com/

[2] Leah's backport attempt and vfio discovery:
    https://lore.kernel.org/stable/CACzhbgRjDNkpaQOYsUN+v+jn3E2DVxX0Q4WuQWNjfwEx4Fps6g@mail.gmail.com/T/#u

[3] Axel's vfio series and Alex's feedback:
    https://lore.kernel.org/stable/20240716112530.2562c41b.alex.williamson@redhat.com/T/#u

[4] Axel's 6.6 vfio series (withdrawn):
    https://lore.kernel.org/stable/20240717222429.2011540-1-axelrasmussen@google.com/T/#u

[5] Axel's 6.9 vfio series (accepted):
    https://lore.kernel.org/stable/20240717213339.1921530-1-axelrasmussen@google.com/T/#u

[6] CVE details:
    https://nvd.nist.gov/vuln/detail/CVE-2024-27022

[7] Original report:
    https://lore.kernel.org/linux-mm/20240129161735.6gmjsswx62o4pbja@revolver/T/

[8] Mainline fix:
    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=35e351780fa9d8240dd6f7e4f245f9ea37e96c19


Alex Williamson (3):
  vfio: Create vfio_fs_type with inode per device
  vfio/pci: Use unmap_mapping_range()
  vfio/pci: Insert full vma on mmap'd MMIO fault

Miaohe Lin (1):
  fork: defer linking file vma until vma is fully initialized

 drivers/vfio/device_cdev.c       |   7 +
 drivers/vfio/group.c             |   7 +
 drivers/vfio/pci/vfio_pci_core.c | 271 ++++++++-----------------------
 drivers/vfio/vfio_main.c         |  44 +++++
 include/linux/vfio.h             |   1 +
 include/linux/vfio_pci_core.h    |   2 -
 kernel/fork.c                    |  29 ++--
 7 files changed, 140 insertions(+), 221 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH 6.6.y 1/4] vfio: Create vfio_fs_type with inode per device
  2026-04-02 16:13 [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites tugrul.kukul
@ 2026-04-02 16:13 ` tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 2/4] vfio/pci: Use unmap_mapping_range() tugrul.kukul
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: tugrul.kukul @ 2026-04-02 16:13 UTC (permalink / raw)
  To: gregkh, sashal, stable
  Cc: alex.williamson, kevin.tian, jgg, lorenzo.stoakes, david, akpm,
	mike.kravetz, linmiaohe, yi.l.liu, axelrasmussen, leah.rumancik,
	kvm, linux-kernel, david.nystrom

From: Alex Williamson <alex.williamson@redhat.com>

commit b7c5e64fecfa88764791679cca4786ac65de739e upstream.

By linking all the device fds we provide to userspace to an
address space through a new pseudo fs, we can use tools like
unmap_mapping_range() to zap all vmas associated with a device.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240530045236.1005864-2-alex.williamson@redhat.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: Tugrul Kukul <tugrul.kukul@est.tech>
---
 drivers/vfio/device_cdev.c |  7 ++++++
 drivers/vfio/group.c       |  7 ++++++
 drivers/vfio/vfio_main.c   | 44 ++++++++++++++++++++++++++++++++++++++
 include/linux/vfio.h       |  1 +
 4 files changed, 59 insertions(+)

diff --git a/drivers/vfio/device_cdev.c b/drivers/vfio/device_cdev.c
index e75da0a70d1f8..bb1817bd4ff31 100644
--- a/drivers/vfio/device_cdev.c
+++ b/drivers/vfio/device_cdev.c
@@ -39,6 +39,13 @@ int vfio_device_fops_cdev_open(struct inode *inode, struct file *filep)
 
 	filep->private_data = df;
 
+	/*
+	 * Use the pseudo fs inode on the device to link all mmaps
+	 * to the same address space, allowing us to unmap all vmas
+	 * associated to this device using unmap_mapping_range().
+	 */
+	filep->f_mapping = device->inode->i_mapping;
+
 	return 0;
 
 err_put_registration:
diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
index 54c3079031e16..4cd857ff0259b 100644
--- a/drivers/vfio/group.c
+++ b/drivers/vfio/group.c
@@ -285,6 +285,13 @@ static struct file *vfio_device_open_file(struct vfio_device *device)
 	 */
 	filep->f_mode |= (FMODE_PREAD | FMODE_PWRITE);
 
+	/*
+	 * Use the pseudo fs inode on the device to link all mmaps
+	 * to the same address space, allowing us to unmap all vmas
+	 * associated to this device using unmap_mapping_range().
+	 */
+	filep->f_mapping = device->inode->i_mapping;
+
 	if (device->group->type == VFIO_NO_IOMMU)
 		dev_warn(device->dev, "vfio-noiommu device opened by user "
 			 "(%s:%d)\n", current->comm, task_pid_nr(current));
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 6dfb290c339f9..ec4fbd993bf00 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -22,8 +22,10 @@
 #include <linux/list.h>
 #include <linux/miscdevice.h>
 #include <linux/module.h>
+#include <linux/mount.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/pseudo_fs.h>
 #include <linux/rwsem.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -43,9 +45,13 @@
 #define DRIVER_AUTHOR	"Alex Williamson <alex.williamson@redhat.com>"
 #define DRIVER_DESC	"VFIO - User Level meta-driver"
 
+#define VFIO_MAGIC 0x5646494f /* "VFIO" */
+
 static struct vfio {
 	struct class			*device_class;
 	struct ida			device_ida;
+	struct vfsmount			*vfs_mount;
+	int				fs_count;
 } vfio;
 
 #ifdef CONFIG_VFIO_NOIOMMU
@@ -186,6 +192,8 @@ static void vfio_device_release(struct device *dev)
 	if (device->ops->release)
 		device->ops->release(device);
 
+	iput(device->inode);
+	simple_release_fs(&vfio.vfs_mount, &vfio.fs_count);
 	kvfree(device);
 }
 
@@ -228,6 +236,34 @@ struct vfio_device *_vfio_alloc_device(size_t size, struct device *dev,
 }
 EXPORT_SYMBOL_GPL(_vfio_alloc_device);
 
+static int vfio_fs_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, VFIO_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type vfio_fs_type = {
+	.name = "vfio",
+	.owner = THIS_MODULE,
+	.init_fs_context = vfio_fs_init_fs_context,
+	.kill_sb = kill_anon_super,
+};
+
+static struct inode *vfio_fs_inode_new(void)
+{
+	struct inode *inode;
+	int ret;
+
+	ret = simple_pin_fs(&vfio_fs_type, &vfio.vfs_mount, &vfio.fs_count);
+	if (ret)
+		return ERR_PTR(ret);
+
+	inode = alloc_anon_inode(vfio.vfs_mount->mnt_sb);
+	if (IS_ERR(inode))
+		simple_release_fs(&vfio.vfs_mount, &vfio.fs_count);
+
+	return inode;
+}
+
 /*
  * Initialize a vfio_device so it can be registered to vfio core.
  */
@@ -246,6 +282,11 @@ static int vfio_init_device(struct vfio_device *device, struct device *dev,
 	init_completion(&device->comp);
 	device->dev = dev;
 	device->ops = ops;
+	device->inode = vfio_fs_inode_new();
+	if (IS_ERR(device->inode)) {
+		ret = PTR_ERR(device->inode);
+		goto out_inode;
+	}
 
 	if (ops->init) {
 		ret = ops->init(device);
@@ -260,6 +301,9 @@ static int vfio_init_device(struct vfio_device *device, struct device *dev,
 	return 0;
 
 out_uninit:
+	iput(device->inode);
+	simple_release_fs(&vfio.vfs_mount, &vfio.fs_count);
+out_inode:
 	vfio_release_device_set(device);
 	ida_free(&vfio.device_ida, device->index);
 	return ret;
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 5ac5f182ce0bb..514a7f9b3ef4b 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -64,6 +64,7 @@ struct vfio_device {
 	struct completion comp;
 	struct iommufd_access *iommufd_access;
 	void (*put_kvm)(struct kvm *kvm);
+	struct inode *inode;
 #if IS_ENABLED(CONFIG_IOMMUFD)
 	struct iommufd_device *iommufd_device;
 	u8 iommufd_attached:1;
-- 
2.34.1



* [PATCH 6.6.y 2/4] vfio/pci: Use unmap_mapping_range()
  2026-04-02 16:13 [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 1/4] vfio: Create vfio_fs_type with inode per device tugrul.kukul
@ 2026-04-02 16:13 ` tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 3/4] vfio/pci: Insert full vma on mmap'd MMIO fault tugrul.kukul
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: tugrul.kukul @ 2026-04-02 16:13 UTC (permalink / raw)
  To: gregkh, sashal, stable
  Cc: alex.williamson, kevin.tian, jgg, lorenzo.stoakes, david, akpm,
	mike.kravetz, linmiaohe, yi.l.liu, axelrasmussen, leah.rumancik,
	kvm, linux-kernel, david.nystrom

From: Alex Williamson <alex.williamson@redhat.com>

commit aac6db75a9fc2c7a6f73e152df8f15101dda38e6 upstream.

With the vfio device fd tied to the address space of the pseudo fs
inode, we can use the mm to track all vmas that might be mmap'ing
device BARs, which removes our vma_list and all the complicated lock
ordering necessary to manually zap each related vma.

Note that we can no longer store the pfn in vm_pgoff if we want to use
unmap_mapping_range() to zap a selective portion of the device fd
corresponding to BAR mappings.

This also converts our mmap fault handler to use vmf_insert_pfn()
because we no longer have a vma_list to avoid the concurrency problem
with io_remap_pfn_range().  The goal is to eventually use the vm_ops
huge_fault handler to avoid the additional faulting overhead, but
vmf_insert_pfn_{pmd,pud}() need to learn about pfnmaps first.

Also, Jason notes that a race exists between unmap_mapping_range() and
the fops mmap callback if we were to call io_remap_pfn_range() to
populate the vma on mmap.  Specifically, mmap_region() does call_mmap()
before it does vma_link_file() which gives a window where the vma is
populated but invisible to unmap_mapping_range().

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240530045236.1005864-3-alex.williamson@redhat.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: Tugrul Kukul <tugrul.kukul@est.tech>
---
 drivers/vfio/pci/vfio_pci_core.c | 264 +++++++------------------------
 include/linux/vfio_pci_core.h    |   2 -
 2 files changed, 55 insertions(+), 211 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 3f139360752e2..e05d6ee9d4cab 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1599,100 +1599,20 @@ ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *bu
 }
 EXPORT_SYMBOL_GPL(vfio_pci_core_write);
 
-/* Return 1 on zap and vma_lock acquired, 0 on contention (only with @try) */
-static int vfio_pci_zap_and_vma_lock(struct vfio_pci_core_device *vdev, bool try)
+static void vfio_pci_zap_bars(struct vfio_pci_core_device *vdev)
 {
-	struct vfio_pci_mmap_vma *mmap_vma, *tmp;
+	struct vfio_device *core_vdev = &vdev->vdev;
+	loff_t start = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_BAR0_REGION_INDEX);
+	loff_t end = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_ROM_REGION_INDEX);
+	loff_t len = end - start;
 
-	/*
-	 * Lock ordering:
-	 * vma_lock is nested under mmap_lock for vm_ops callback paths.
-	 * The memory_lock semaphore is used by both code paths calling
-	 * into this function to zap vmas and the vm_ops.fault callback
-	 * to protect the memory enable state of the device.
-	 *
-	 * When zapping vmas we need to maintain the mmap_lock => vma_lock
-	 * ordering, which requires using vma_lock to walk vma_list to
-	 * acquire an mm, then dropping vma_lock to get the mmap_lock and
-	 * reacquiring vma_lock.  This logic is derived from similar
-	 * requirements in uverbs_user_mmap_disassociate().
-	 *
-	 * mmap_lock must always be the top-level lock when it is taken.
-	 * Therefore we can only hold the memory_lock write lock when
-	 * vma_list is empty, as we'd need to take mmap_lock to clear
-	 * entries.  vma_list can only be guaranteed empty when holding
-	 * vma_lock, thus memory_lock is nested under vma_lock.
-	 *
-	 * This enables the vm_ops.fault callback to acquire vma_lock,
-	 * followed by memory_lock read lock, while already holding
-	 * mmap_lock without risk of deadlock.
-	 */
-	while (1) {
-		struct mm_struct *mm = NULL;
-
-		if (try) {
-			if (!mutex_trylock(&vdev->vma_lock))
-				return 0;
-		} else {
-			mutex_lock(&vdev->vma_lock);
-		}
-		while (!list_empty(&vdev->vma_list)) {
-			mmap_vma = list_first_entry(&vdev->vma_list,
-						    struct vfio_pci_mmap_vma,
-						    vma_next);
-			mm = mmap_vma->vma->vm_mm;
-			if (mmget_not_zero(mm))
-				break;
-
-			list_del(&mmap_vma->vma_next);
-			kfree(mmap_vma);
-			mm = NULL;
-		}
-		if (!mm)
-			return 1;
-		mutex_unlock(&vdev->vma_lock);
-
-		if (try) {
-			if (!mmap_read_trylock(mm)) {
-				mmput(mm);
-				return 0;
-			}
-		} else {
-			mmap_read_lock(mm);
-		}
-		if (try) {
-			if (!mutex_trylock(&vdev->vma_lock)) {
-				mmap_read_unlock(mm);
-				mmput(mm);
-				return 0;
-			}
-		} else {
-			mutex_lock(&vdev->vma_lock);
-		}
-		list_for_each_entry_safe(mmap_vma, tmp,
-					 &vdev->vma_list, vma_next) {
-			struct vm_area_struct *vma = mmap_vma->vma;
-
-			if (vma->vm_mm != mm)
-				continue;
-
-			list_del(&mmap_vma->vma_next);
-			kfree(mmap_vma);
-
-			zap_vma_ptes(vma, vma->vm_start,
-				     vma->vm_end - vma->vm_start);
-		}
-		mutex_unlock(&vdev->vma_lock);
-		mmap_read_unlock(mm);
-		mmput(mm);
-	}
+	unmap_mapping_range(core_vdev->inode->i_mapping, start, len, true);
 }
 
 void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device *vdev)
 {
-	vfio_pci_zap_and_vma_lock(vdev, false);
 	down_write(&vdev->memory_lock);
-	mutex_unlock(&vdev->vma_lock);
+	vfio_pci_zap_bars(vdev);
 }
 
 u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev)
@@ -1714,99 +1634,41 @@ void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev, u16 c
 	up_write(&vdev->memory_lock);
 }
 
-/* Caller holds vma_lock */
-static int __vfio_pci_add_vma(struct vfio_pci_core_device *vdev,
-			      struct vm_area_struct *vma)
-{
-	struct vfio_pci_mmap_vma *mmap_vma;
-
-	mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL_ACCOUNT);
-	if (!mmap_vma)
-		return -ENOMEM;
-
-	mmap_vma->vma = vma;
-	list_add(&mmap_vma->vma_next, &vdev->vma_list);
-
-	return 0;
-}
-
-/*
- * Zap mmaps on open so that we can fault them in on access and therefore
- * our vma_list only tracks mappings accessed since last zap.
- */
-static void vfio_pci_mmap_open(struct vm_area_struct *vma)
-{
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-}
-
-static void vfio_pci_mmap_close(struct vm_area_struct *vma)
+static unsigned long vma_to_pfn(struct vm_area_struct *vma)
 {
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
-	struct vfio_pci_mmap_vma *mmap_vma;
+	int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
+	u64 pgoff;
 
-	mutex_lock(&vdev->vma_lock);
-	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
-		if (mmap_vma->vma == vma) {
-			list_del(&mmap_vma->vma_next);
-			kfree(mmap_vma);
-			break;
-		}
-	}
-	mutex_unlock(&vdev->vma_lock);
+	pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+
+	return (pci_resource_start(vdev->pdev, index) >> PAGE_SHIFT) + pgoff;
 }
 
 static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
-	struct vfio_pci_mmap_vma *mmap_vma;
-	vm_fault_t ret = VM_FAULT_NOPAGE;
+	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
 
-	mutex_lock(&vdev->vma_lock);
-	down_read(&vdev->memory_lock);
+	pfn = vma_to_pfn(vma);
 
-	/*
-	 * Memory region cannot be accessed if the low power feature is engaged
-	 * or memory access is disabled.
-	 */
-	if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev)) {
-		ret = VM_FAULT_SIGBUS;
-		goto up_out;
-	}
+	down_read(&vdev->memory_lock);
 
-	/*
-	 * We populate the whole vma on fault, so we need to test whether
-	 * the vma has already been mapped, such as for concurrent faults
-	 * to the same vma.  io_remap_pfn_range() will trigger a BUG_ON if
-	 * we ask it to fill the same range again.
-	 */
-	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
-		if (mmap_vma->vma == vma)
-			goto up_out;
-	}
+	if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
+		goto out_disabled;
 
-	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
-			       vma->vm_end - vma->vm_start,
-			       vma->vm_page_prot)) {
-		ret = VM_FAULT_SIGBUS;
-		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-		goto up_out;
-	}
+	ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
 
-	if (__vfio_pci_add_vma(vdev, vma)) {
-		ret = VM_FAULT_OOM;
-		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-	}
-
-up_out:
+out_disabled:
 	up_read(&vdev->memory_lock);
-	mutex_unlock(&vdev->vma_lock);
+
 	return ret;
 }
 
 static const struct vm_operations_struct vfio_pci_mmap_ops = {
-	.open = vfio_pci_mmap_open,
-	.close = vfio_pci_mmap_close,
 	.fault = vfio_pci_mmap_fault,
 };
 
@@ -1869,11 +1731,12 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
 
 	vma->vm_private_data = vdev;
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
+	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	/*
-	 * See remap_pfn_range(), called from vfio_pci_fault() but we can't
-	 * change vm_flags within the fault handler.  Set them now.
+	 * Set vm_flags now, they should not be changed in the fault handler.
+	 * We want the same flags and page protection (decrypted above) as
+	 * io_remap_pfn_range() would set.
 	 */
 	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &vfio_pci_mmap_ops;
@@ -2173,8 +2036,6 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
 	mutex_init(&vdev->ioeventfds_lock);
 	INIT_LIST_HEAD(&vdev->dummy_resources_list);
 	INIT_LIST_HEAD(&vdev->ioeventfds_list);
-	mutex_init(&vdev->vma_lock);
-	INIT_LIST_HEAD(&vdev->vma_list);
 	INIT_LIST_HEAD(&vdev->sriov_pfs_item);
 	init_rwsem(&vdev->memory_lock);
 	xa_init(&vdev->ctx);
@@ -2190,7 +2051,6 @@ void vfio_pci_core_release_dev(struct vfio_device *core_vdev)
 
 	mutex_destroy(&vdev->igate);
 	mutex_destroy(&vdev->ioeventfds_lock);
-	mutex_destroy(&vdev->vma_lock);
 	kfree(vdev->region);
 	kfree(vdev->pm_save);
 }
@@ -2468,26 +2328,15 @@ static int vfio_pci_dev_set_pm_runtime_get(struct vfio_device_set *dev_set)
 	return ret;
 }
 
-/*
- * We need to get memory_lock for each device, but devices can share mmap_lock,
- * therefore we need to zap and hold the vma_lock for each device, and only then
- * get each memory_lock.
- */
 static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 				      struct vfio_pci_group_info *groups,
 				      struct iommufd_ctx *iommufd_ctx)
 {
-	struct vfio_pci_core_device *cur_mem;
-	struct vfio_pci_core_device *cur_vma;
-	struct vfio_pci_core_device *cur;
+	struct vfio_pci_core_device *vdev;
 	struct pci_dev *pdev;
-	bool is_mem = true;
 	int ret;
 
 	mutex_lock(&dev_set->lock);
-	cur_mem = list_first_entry(&dev_set->device_list,
-				   struct vfio_pci_core_device,
-				   vdev.dev_set_list);
 
 	pdev = vfio_pci_dev_set_resettable(dev_set);
 	if (!pdev) {
@@ -2504,7 +2353,7 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 	if (ret)
 		goto err_unlock;
 
-	list_for_each_entry(cur_vma, &dev_set->device_list, vdev.dev_set_list) {
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list) {
 		bool owned;
 
 		/*
@@ -2528,38 +2377,38 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 		 * Otherwise, reset is not allowed.
 		 */
 		if (iommufd_ctx) {
-			int devid = vfio_iommufd_get_dev_id(&cur_vma->vdev,
+			int devid = vfio_iommufd_get_dev_id(&vdev->vdev,
 							    iommufd_ctx);
 
 			owned = (devid > 0 || devid == -ENOENT);
 		} else {
-			owned = vfio_dev_in_groups(&cur_vma->vdev, groups);
+			owned = vfio_dev_in_groups(&vdev->vdev, groups);
 		}
 
 		if (!owned) {
 			ret = -EINVAL;
-			goto err_undo;
+			break;
 		}
 
 		/*
-		 * Locking multiple devices is prone to deadlock, runaway and
-		 * unwind if we hit contention.
+		 * Take the memory write lock for each device and zap BAR
+		 * mappings to prevent the user accessing the device while in
+		 * reset.  Locking multiple devices is prone to deadlock,
+		 * runaway and unwind if we hit contention.
 		 */
-		if (!vfio_pci_zap_and_vma_lock(cur_vma, true)) {
+		if (!down_write_trylock(&vdev->memory_lock)) {
 			ret = -EBUSY;
-			goto err_undo;
+			break;
 		}
+
+		vfio_pci_zap_bars(vdev);
 	}
-	cur_vma = NULL;
 
-	list_for_each_entry(cur_mem, &dev_set->device_list, vdev.dev_set_list) {
-		if (!down_write_trylock(&cur_mem->memory_lock)) {
-			ret = -EBUSY;
-			goto err_undo;
-		}
-		mutex_unlock(&cur_mem->vma_lock);
+	if (!list_entry_is_head(vdev,
+				&dev_set->device_list, vdev.dev_set_list)) {
+		vdev = list_prev_entry(vdev, vdev.dev_set_list);
+		goto err_undo;
 	}
-	cur_mem = NULL;
 
 	/*
 	 * The pci_reset_bus() will reset all the devices in the bus.
@@ -2570,25 +2419,22 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 	 * cause the PCI config space reset without restoring the original
 	 * state (saved locally in 'vdev->pm_save').
 	 */
-	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
-		vfio_pci_set_power_state(cur, PCI_D0);
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list)
+		vfio_pci_set_power_state(vdev, PCI_D0);
 
 	ret = pci_reset_bus(pdev);
 
+	vdev = list_last_entry(&dev_set->device_list,
+			       struct vfio_pci_core_device, vdev.dev_set_list);
+
 err_undo:
-	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) {
-		if (cur == cur_mem)
-			is_mem = false;
-		if (cur == cur_vma)
-			break;
-		if (is_mem)
-			up_write(&cur->memory_lock);
-		else
-			mutex_unlock(&cur->vma_lock);
-	}
+	list_for_each_entry_from_reverse(vdev, &dev_set->device_list,
+					 vdev.dev_set_list)
+		up_write(&vdev->memory_lock);
+
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list)
+		pm_runtime_put(&vdev->pdev->dev);
 
-	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
-		pm_runtime_put(&cur->pdev->dev);
 err_unlock:
 	mutex_unlock(&dev_set->lock);
 	return ret;
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index 562e8754869da..4f283514a1ed6 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -93,8 +93,6 @@ struct vfio_pci_core_device {
 	struct list_head		sriov_pfs_item;
 	struct vfio_pci_core_device	*sriov_pf_core_dev;
 	struct notifier_block	nb;
-	struct mutex		vma_lock;
-	struct list_head	vma_list;
 	struct rw_semaphore	memory_lock;
 };
 
-- 
2.34.1



* [PATCH 6.6.y 3/4] vfio/pci: Insert full vma on mmap'd MMIO fault
  2026-04-02 16:13 [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 1/4] vfio: Create vfio_fs_type with inode per device tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 2/4] vfio/pci: Use unmap_mapping_range() tugrul.kukul
@ 2026-04-02 16:13 ` tugrul.kukul
  2026-04-02 16:13 ` [PATCH 6.6.y 4/4] fork: defer linking file vma until vma is fully initialized tugrul.kukul
  2026-04-07 18:16 ` [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites Alex Williamson
  4 siblings, 0 replies; 6+ messages in thread
From: tugrul.kukul @ 2026-04-02 16:13 UTC (permalink / raw)
  To: gregkh, sashal, stable
  Cc: alex.williamson, kevin.tian, jgg, lorenzo.stoakes, david, akpm,
	mike.kravetz, linmiaohe, yi.l.liu, axelrasmussen, leah.rumancik,
	kvm, linux-kernel, david.nystrom

From: Alex Williamson <alex.williamson@redhat.com>

commit d71a989cf5d961989c273093cdff2550acdde314 upstream.

In order to improve performance of typical scenarios we can try to insert
the entire vma on fault.  This accelerates typical cases, such as when
the MMIO region is DMA mapped by QEMU.  The vfio_iommu_type1 driver will
fault in the entire DMA mapped range through fixup_user_fault().

In synthetic testing, this improves the time required to walk a PCI BAR
mapping from userspace by roughly 1/3rd.

This is likely an interim solution until vmf_insert_pfn_{pmd,pud}() gain
support for pfnmaps.

Suggested-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/all/Zl6XdUkt%2FzMMGOLF@yzhao56-desk.sh.intel.com/
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20240607035213.2054226-1-alex.williamson@redhat.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: Tugrul Kukul <tugrul.kukul@est.tech>
---
 drivers/vfio/pci/vfio_pci_core.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index e05d6ee9d4cab..55e28feba475e 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1651,6 +1651,7 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
 	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
+	unsigned long addr = vma->vm_start;
 	vm_fault_t ret = VM_FAULT_SIGBUS;
 
 	pfn = vma_to_pfn(vma);
@@ -1658,11 +1659,25 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 	down_read(&vdev->memory_lock);
 
 	if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
-		goto out_disabled;
+		goto out_unlock;
 
 	ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
+	if (ret & VM_FAULT_ERROR)
+		goto out_unlock;
 
-out_disabled:
+	/*
+	 * Pre-fault the remainder of the vma, abort further insertions and
+	 * supress error if fault is encountered during pre-fault.
+	 */
+	for (; addr < vma->vm_end; addr += PAGE_SIZE, pfn++) {
+		if (addr == vmf->address)
+			continue;
+
+		if (vmf_insert_pfn(vma, addr, pfn) & VM_FAULT_ERROR)
+			break;
+	}
+
+out_unlock:
 	up_read(&vdev->memory_lock);
 
 	return ret;
-- 
2.34.1



* [PATCH 6.6.y 4/4] fork: defer linking file vma until vma is fully initialized
  2026-04-02 16:13 [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites tugrul.kukul
                   ` (2 preceding siblings ...)
  2026-04-02 16:13 ` [PATCH 6.6.y 3/4] vfio/pci: Insert full vma on mmap'd MMIO fault tugrul.kukul
@ 2026-04-02 16:13 ` tugrul.kukul
  2026-04-07 18:16 ` [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites Alex Williamson
  4 siblings, 0 replies; 6+ messages in thread
From: tugrul.kukul @ 2026-04-02 16:13 UTC (permalink / raw)
  To: gregkh, sashal, stable
  Cc: alex.williamson, kevin.tian, jgg, lorenzo.stoakes, david, akpm,
	mike.kravetz, linmiaohe, yi.l.liu, axelrasmussen, leah.rumancik,
	kvm, linux-kernel, david.nystrom

From: Miaohe Lin <linmiaohe@huawei.com>

[ Upstream commit 35e351780fa9d8240dd6f7e4f245f9ea37e96c19 ]

Thorvald reported a WARNING [1]. And the root cause is below race:

 CPU 1					CPU 2
 fork					hugetlbfs_fallocate
  dup_mmap				 hugetlbfs_punch_hole
   i_mmap_lock_write(mapping);
   vma_interval_tree_insert_after -- Child vma is visible through i_mmap tree.
   i_mmap_unlock_write(mapping);
   hugetlb_dup_vma_private -- Clear vma_lock outside i_mmap_rwsem!
					 i_mmap_lock_write(mapping);
   					 hugetlb_vmdelete_list
					  vma_interval_tree_foreach
					   hugetlb_vma_trylock_write -- Vma_lock is cleared.
   tmp->vm_ops->open -- Alloc new vma_lock outside i_mmap_rwsem!
					   hugetlb_vma_unlock_write -- Vma_lock is assigned!!!
					 i_mmap_unlock_write(mapping);

hugetlb_dup_vma_private() and hugetlb_vm_op_open() are called outside
i_mmap_rwsem lock while vma lock can be used in the same time.  Fix this
by deferring linking file vma until vma is fully initialized.  Those vmas
should be initialized first before they can be used.

[tk: Adapted to 6.6 stable where vma_iter_bulk_store() can fail
(unlike mainline which uses __mt_dup() for pre-allocation).
Preserved error handling via goto fail_nomem_vmi_store. Previous
backport (cec11fa2eb512) was reverted (dd782da470761) due to
xfstests failures.]

Link: https://lkml.kernel.org/r/20240410091441.3539905-1-linmiaohe@huawei.com
Fixes: 8d9bfb260814 ("hugetlb: add vma based lock for pmd sharing")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reported-by: Thorvald Natvig <thorvald@google.com>
Closes: https://lore.kernel.org/linux-mm/20240129161735.6gmjsswx62o4pbja@revolver/T/ [1]
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Cc: Tycho Andersen <tandersen@netflix.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Assisted-by: Claude:claude-opus-4.6
Suggested-by: David Nyström <david.nystrom@est.tech>
Signed-off-by: Tugrul Kukul <tugrul.kukul@est.tech>
---
 kernel/fork.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index ce6f6e1e39057..5b60692b1a4ea 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -733,6 +733,21 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		} else if (anon_vma_fork(tmp, mpnt))
 			goto fail_nomem_anon_vma_fork;
 		vm_flags_clear(tmp, VM_LOCKED_MASK);
+		/*
+		 * Copy/update hugetlb private vma information.
+		 */
+		if (is_vm_hugetlb_page(tmp))
+			hugetlb_dup_vma_private(tmp);
+
+		/* Link the vma into the MT */
+		if (vma_iter_bulk_store(&vmi, tmp))
+			goto fail_nomem_vmi_store;
+
+		mm->map_count++;
+
+		if (tmp->vm_ops && tmp->vm_ops->open)
+			tmp->vm_ops->open(tmp);
+
 		file = tmp->vm_file;
 		if (file) {
 			struct address_space *mapping = file->f_mapping;
@@ -749,23 +764,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 			i_mmap_unlock_write(mapping);
 		}
 
-		/*
-		 * Copy/update hugetlb private vma information.
-		 */
-		if (is_vm_hugetlb_page(tmp))
-			hugetlb_dup_vma_private(tmp);
-
-		/* Link the vma into the MT */
-		if (vma_iter_bulk_store(&vmi, tmp))
-			goto fail_nomem_vmi_store;
-
-		mm->map_count++;
 		if (!(tmp->vm_flags & VM_WIPEONFORK))
 			retval = copy_page_range(tmp, mpnt);
 
-		if (tmp->vm_ops && tmp->vm_ops->open)
-			tmp->vm_ops->open(tmp);
-
 		if (retval)
 			goto loop_out;
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites
  2026-04-02 16:13 [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites tugrul.kukul
                   ` (3 preceding siblings ...)
  2026-04-02 16:13 ` [PATCH 6.6.y 4/4] fork: defer linking file vma until vma is fully initialized tugrul.kukul
@ 2026-04-07 18:16 ` Alex Williamson
  4 siblings, 0 replies; 6+ messages in thread
From: Alex Williamson @ 2026-04-07 18:16 UTC (permalink / raw)
  To: tugrul.kukul
  Cc: gregkh, sashal, stable, kevin.tian, jgg, lorenzo.stoakes, david,
	akpm, mike.kravetz, linmiaohe, yi.l.liu, axelrasmussen,
	leah.rumancik, kvm, linux-kernel, david.nystrom, alex

On Thu,  2 Apr 2026 18:13:07 +0200
tugrul.kukul@est.tech wrote:

> From: Tugrul Kukul <tugrul.kukul@est.tech>
> 
> This series fixes CVE-2024-27022 on 6.6 stable by first backporting the
> necessary vfio refactoring, then applying the fork fix.
> 
> == Background ==
> 
> CVE-2024-27022 is a race condition in dup_mmap() during fork() where a
> file-backed VMA becomes visible through the i_mmap tree before it is
> fully initialized. A concurrent hugetlbfs operation (fallocate/punch_hole)
> can access the VMA with a NULL or inconsistent vma_lock, causing a kernel
> deadlock or WARNING.
> 
> The mainline fix (35e351780fa9, v6.9-rc5) defers linking the file VMA
> into the i_mmap tree until the VMA is fully initialized.
> 
> == Why this hasn't been fixed in 6.6 until now ==
> 
> This CVE has had a troubled backport history on 6.6 stable:
> 
> 1. cec11fa2eb51 - Incomplete backport to 6.6, only moved
>    hugetlb_dup_vma_private() and vm_ops->open() but left
>    vma_iter_bulk_store() and mm->map_count++ in place.
>    Caused xfstests failures.
> 
> 2. dd782da47076 - Sam James reverted the incomplete backport. [1]
> 
> 3. Leah Rumancik attempted a correct backport but discovered it
>    introduced a vfio-pci ordering issue: vm_ops->open() being called
>    before copy_page_range() breaks vfio-pci's zap-then-fault mechanism.
>    Leah withdrew the patch. [2]
> 
> 4. Axel Rasmussen backported Alex Williamson's 3 vfio refactor
>    commits to both 6.9 and 6.6 stable [3][4]. The 6.9 backport was
>    accepted [5], but for 6.6 Alex Williamson pointed out that the
>    fork fix was still reverted — without it, the vfio patches alone
>    are unnecessary. Axel withdrew the 6.6 series.
> 
> 5. 6.6 stable has remained unfixed since July 2024.
> 
> == This series ==
> 
> This series picks up Axel's withdrawn 6.6 backport of the vfio
> refactor patches [4] and adds the missing fork fix on top, completing
> the work that was left unfinished. Patches 1-3 are Alex Williamson's
> vfio refactor (backported by Axel Rasmussen), patch 4 is the CVE fix
> adapted for 6.6 stable.
> 
>   1/4 vfio: Create vfio_fs_type with inode per device
>   2/4 vfio/pci: Use unmap_mapping_range()
>   3/4 vfio/pci: Insert full vma on mmap'd MMIO fault
>   4/4 fork: defer linking file vma until vma is fully initialized
> 
> == 6.6 stable adaptations ==
> 
> Patch 4/4 (fork: defer linking file vma):
>  - 6.6 uses vma_iter_bulk_store() which can fail, unlike mainline's
>    __mt_dup(). Error handling via goto fail_nomem_vmi_store is preserved.
> 
> == Testing ==
> 
> CVE reproducer (custom fork/punch_hole stress test, 60s):
>  - Unpatched: deadlock in hugetlb_fault within seconds
>  - Patched: 2174 forks completed, zero warnings (KASAN+LOCKDEP enabled)
> 
> xfstests quick group (672 tests, ext4, virtme-ng):
>  - 65 failures, all pre-existing or KASAN-overhead timeouts
>  - Zero patch-attributable regressions
>  - Leah's 4 specific tests that caused the original revert
>    (ext4/303, generic/051, generic/054, generic/069) all pass
> 
> VFIO + fork stress test (CONFIG_VFIO=y, hugetlbfs):
>  - CVE reproducer with vfio modules active: zero warnings
> 
> Yocto CI integration (~87,900 tests per build, LTP+ptest+runtime):
>  - No known regressions
> 
> dmesg analysis (KASAN, LOCKDEP, PROVE_LOCKING, DEBUG_VM, DEBUG_LIST):
>  - Zero memory safety, locking, or VMA state issues across ~38 hours
>    of testing
> 
> == References ==
> 
> [1] Revert discussion:
>     https://lore.kernel.org/stable/20240604004751.3883227-1-leah.rumancik@gmail.com/
> 
> [2] Leah's backport attempt and vfio discovery:
>     https://lore.kernel.org/stable/CACzhbgRjDNkpaQOYsUN+v+jn3E2DVxX0Q4WuQWNjfwEx4Fps6g@mail.gmail.com/T/#u
> 
> [3] Axel's vfio series and Alex's feedback:
>     https://lore.kernel.org/stable/20240716112530.2562c41b.alex.williamson@redhat.com/T/#u
> 
> [4] Axel's 6.6 vfio series (withdrawn):
>     https://lore.kernel.org/stable/20240717222429.2011540-1-axelrasmussen@google.com/T/#u
> 
> [5] Axel's 6.9 vfio series (accepted):
>     https://lore.kernel.org/stable/20240717213339.1921530-1-axelrasmussen@google.com/T/#u
> 
> [6] CVE details:
>     https://nvd.nist.gov/vuln/detail/CVE-2024-27022
> 
> [7] Original report:
>     https://lore.kernel.org/linux-mm/20240129161735.6gmjsswx62o4pbja@revolver/T/
> 
> [8] Mainline fix:
>     https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=35e351780fa9d8240dd6f7e4f245f9ea37e96c19
> 
> 
> Alex Williamson (3):
>   vfio: Create vfio_fs_type with inode per device
>   vfio/pci: Use unmap_mapping_range()
>   vfio/pci: Insert full vma on mmap'd MMIO fault
> 
> Miaohe Lin (1):
>   fork: defer linking file vma until vma is fully initialized
> 
>  drivers/vfio/device_cdev.c       |   7 +
>  drivers/vfio/group.c             |   7 +
>  drivers/vfio/pci/vfio_pci_core.c | 271 ++++++++-----------------------
>  drivers/vfio/vfio_main.c         |  44 +++++
>  include/linux/vfio.h             |   1 +
>  include/linux/vfio_pci_core.h    |   2 -
>  kernel/fork.c                    |  29 ++--
>  7 files changed, 140 insertions(+), 221 deletions(-)
> 

For vfio bits:

Acked-by: Alex Williamson <alex@shazbot.org>

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2026-04-07 18:16 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-02 16:13 [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites tugrul.kukul
2026-04-02 16:13 ` [PATCH 6.6.y 1/4] vfio: Create vfio_fs_type with inode per device tugrul.kukul
2026-04-02 16:13 ` [PATCH 6.6.y 2/4] vfio/pci: Use unmap_mapping_range() tugrul.kukul
2026-04-02 16:13 ` [PATCH 6.6.y 3/4] vfio/pci: Insert full vma on mmap'd MMIO fault tugrul.kukul
2026-04-02 16:13 ` [PATCH 6.6.y 4/4] fork: defer linking file vma until vma is fully initialized tugrul.kukul
2026-04-07 18:16 ` [PATCH 6.6.y 0/4] Fix CVE-2024-27022: fork/hugetlb race with vfio prerequisites Alex Williamson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox