* [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs
@ 2026-04-16 13:17 Matt Evans
2026-04-16 13:17 ` [PATCH 1/9] vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put Matt Evans
` (8 more replies)
0 siblings, 9 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
Hi all,
This series is based on previous RFCs/discussions:
Tech topic: https://lore.kernel.org/linux-iommu/20250918214425.2677057-1-amastro@fb.com/
RFCv1: https://lore.kernel.org/all/20260226202211.929005-1-mattev@meta.com/
RFCv2: https://lore.kernel.org/kvm/20260312184613.3710705-1-mattev@meta.com/
The background/rationale is covered in more detail in the RFC cover
letters. The TL;DR is:
The goal is to enable userspace driver designs that use VFIO to export
DMABUFs representing subsets of PCI device BARs, and "vend" those
buffers from a primary process to other subordinate processes by fd.
These processes then mmap() the buffers, and their access to the device
is confined to the exported ranges. This is an improvement over
sharing the VFIO device fd with subordinate processes, which would
allow unfettered access.
This is achieved, first, by enabling mmap() of vfio-pci DMABUFs.
Second, a new ioctl()-based revocation mechanism is added to allow the primary
process to forcibly revoke access to previously-shared BAR spans, even
if the subordinate processes haven't cleanly exited.
(The related topic of safe delegation of iommufd control to the
subordinate processes is not addressed here, and is follow-up work.)
Besides isolation and revocation, another advantage of accessing a
BAR through a DMABUF-backed VMA is that it's straightforward to
create the buffer with access attributes, such as write-combining.
Notes on patches
================
Feedback on the RFCs requested that, instead of creating
DMABUF-specific vm_ops and .fault paths, the series go the whole way
and migrate the existing VFIO PCI BAR mmap() to be backed by a DMABUF
too, resulting in a common vm_ops and fault handler for mmap()s of
both the VFIO device and explicitly-exported DMABUFs. This has been
done for vfio-pci, but not for sub-drivers (nvgrace-gpu's
special-case mappings are unchanged).
vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put
A bug fix in a related area, whose context is a dependency for
later patches.
vfio/pci: Add a helper to look up PFNs for DMABUFs
vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA
The first lets a DMABUF VMA fault handler determine
arbitrary-sized PFNs from ranges in a DMABUF. The second
refactors DMABUF export into code shared by the existing export
feature and a new helper that creates a DMABUF corresponding to a
VFIO BAR mmap() request.
vfio/pci: Convert BAR mmap() to use a DMABUF
The vfio-pci core mmap() creates a DMABUF with the helper, and the
vm_ops fault handler uses the other helper to resolve the fault.
Because this depends on DMABUF structs/code, CONFIG_VFIO_PCI_CORE
needs to depend on CONFIG_DMA_SHARED_BUFFER. The
CONFIG_VFIO_PCI_DMABUF still conditionally enables the export
support code.
NOTE: The user mmap()s a device fd, but the resulting VMA's vm_file
becomes that of the DMABUF which takes ownership of the device and
puts it on release. This maintains the existing behaviour of a VMA
keeping the VFIO device open.
BAR zapping then happens via the existing vfio_pci_dma_buf_move()
path, which now needs to unmap PTEs in the DMABUF's address_space.
vfio/pci: Provide a user-facing name for BAR mappings
There was a request for decent debug naming in /proc/<pid>/maps
etc., comparable to the existing VFIO names: since the VMAs are
now DMABUFs, they get a "dmabuf:" prefix and can't be 100%
identical to before. This is a user-visible change, but this
patch at least gives extra info on the BDF & BAR being mapped.
vfio/pci: Clean up BAR zap and revocation
In general (see NOTE below), vfio_pci_zap_bars() is now obsolete,
since it unmaps PTEs in the VFIO device address_space, which is
now unused. This patch consolidates all its call sites (e.g.
around reset) with the neighbouring vfio_pci_dma_buf_move()s into
new revoke-zap/unrevoke functions.
NOTE: the nvgrace-gpu driver continues to use its own private
vm_ops, fault handler, etc. for its special memregions, and these
DO still add PTEs to the VFIO device address_space. So, a
temporary flag, vdev->bar_needs_zap, maintains the old behaviour
for this use. At least this patch's consolidation makes it easy
to remove the remaining zap when this need goes away.
A FIXME is added: if nvgrace-gpu is converted to DMABUFs, remove
the flag and final zap.
vfio/pci: Support mmap() of a VFIO DMABUF
Adds mmap() for a DMABUF fd exported from vfio-pci.
It was a goal to keep the VFIO device fd lifetime behaviour
unchanged with respect to the DMABUFs. An application can close
all device fds, and this revokes/cleans up all DMABUFs; no
mappings or other access can be performed thereafter. With
mmap() of the DMABUFs enabled, this means access through the VMA
is also revoked. This complicates the fault handler because,
whilst the DMABUF exists, there is no guarantee that the
corresponding VFIO device is still alive. This patch adds
synchronisation ensuring the vdev is available before
vdev->memory_lock is touched.
(I decided against the alternative of preventing cleanup by holding
the VFIO device open if any DMABUFs exist, because it's both a
change of behaviour and less clean overall.)
I've added a chonky comment in place, happy to clarify more if you
have ideas.
vfio/pci: Permanently revoke a DMABUF on request
By weight, this is mostly a rename of 'revoked' to an enum,
'status': a buffer now has three states, usable and revoked
temporarily/permanently. A new VFIO device ioctl is added,
VFIO_DEVICE_PCI_DMABUF_REVOKE, which passes a DMABUF (exported from
that device) and permanently revokes it. Thus a userspace driver
can guarantee any downstream consumers of a shared fd are prevented
from accessing a BAR range, and that range can be reused.
The code doing revocation in vfio_pci_dma_buf_move() is moved,
unchanged, to a common function for use by _move() and the new
ioctl path.
Q: I can't think of a good reason to temporarily revoke/unrevoke
buffers from userspace, so didn't add a 'flags' field to the ioctl
struct. Easy to add if people think it's worthwhile for future
use.
vfio/pci: Add mmap() attributes to DMABUF feature
Reserves bits [31:28] in vfio_device_feature_dma_buf to allow a
(CPU) mapping attribute to be specified for an exported set of
ranges. The default is the current UC, and a new flag can specify
CPU access as WC.
Q: I've taken 4 bits; the intention is for this field to be a
scalar not a bitmap (i.e. mutually-exclusive access properties).
Perhaps 4 is a bit too many?
Testing
=======
(The [RFC ONLY] userspace test program, for QEMU edu-plus, has been
dropped, but can be found in the GitHub branch below.)
This code has been tested with DMABUFs mapping single/multiple
ranges, aliasing mmap()s, aliasing ranges across DMABUFs, vm_pgoff >
0, revocation, shutdown/cleanup scenarios, and hugepage mappings; all
appear to work correctly. I've also lightly tested WC mappings (by
observing that the resulting PTEs have the correct attributes). No
regressions observed on the VFIO selftests, or on our internal
vfio-pci applications.
End
===
This is based on -next (next-20260414 but will merge earlier), as it
depends on Leon's series "vfio: Wait for dma-buf invalidation to
complete":
https://lore.kernel.org/linux-iommu/20260205-nocturnal-poetic-chamois-f566ad@houat/T/#m310cd07011e3a1461b6fda45e3f9b886ba76571a
These commits are on GitHub, along with "[RFC ONLY] selftests: vfio: Add
standalone vfio_dmabuf_mmap_test":
https://github.com/metamev/linux/compare/next-20260414...metamev:linux:dev/mev/vfio-dmabuf-mmap
Thanks for reading,
Matt
================================================================================
Change log:
v1:
- Cleanup of the common DMABUF-aware VMA vm_ops fault handler and
export code.
- Fixed a lot of races, particularly faults racing with DMABUF
cleanup (if the VFIO device fds close, for example).
- Added nicer human-readable names for VFIO mmap() VMAs
RFCv2: Respin based on the feedback/suggestions:
https://lore.kernel.org/kvm/20260312184613.3710705-1-mattev@meta.com/
- Transform the existing VFIO BAR mmap path to also use DMABUFs
behind the scenes, and then simply share that code for
explicitly-mapped DMABUFs. Jason wanted to go that direction to
enable iommufd VFIO type 1 emulation to pick up a DMABUF for an IO
mapping.
- Revoke buffers using a VFIO device fd ioctl
RFCv1:
https://lore.kernel.org/all/20260226202211.929005-1-mattev@meta.com/
Matt Evans (9):
vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put
vfio/pci: Add a helper to look up PFNs for DMABUFs
vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA
vfio/pci: Convert BAR mmap() to use a DMABUF
vfio/pci: Provide a user-facing name for BAR mappings
vfio/pci: Clean up BAR zap and revocation
vfio/pci: Support mmap() of a VFIO DMABUF
vfio/pci: Permanently revoke a DMABUF on request
vfio/pci: Add mmap() attributes to DMABUF feature
drivers/vfio/pci/Kconfig | 3 +-
drivers/vfio/pci/Makefile | 3 +-
drivers/vfio/pci/nvgrace-gpu/main.c | 5 +
drivers/vfio/pci/vfio_pci_config.c | 30 +-
drivers/vfio/pci/vfio_pci_core.c | 224 ++++++++++---
drivers/vfio/pci/vfio_pci_dmabuf.c | 500 +++++++++++++++++++++++-----
drivers/vfio/pci/vfio_pci_priv.h | 49 ++-
include/linux/vfio_pci_core.h | 1 +
include/uapi/linux/vfio.h | 42 ++-
9 files changed, 690 insertions(+), 167 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/9] vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 2/9] vfio/pci: Add a helper to look up PFNs for DMABUFs Matt Evans
` (7 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
vfio_pci_dma_buf_cleanup() assumed all VFIO device DMABUFs need to be
revoked. However, if vfio_pci_dma_buf_move() revokes DMABUFs before
the fd/device closes, then vfio_pci_dma_buf_cleanup() would do a
second/underflowing kref_put() then wait_for_completion() on a
completion that never fires. Fixed by predicating on revocation
status.
This could happen if PCI_COMMAND_MEMORY is cleared before closing the
device fd (but the scenario is more likely to hit when future commits
add more methods to revoke DMABUFs).
Fixes: 1a8a5227f2299 ("vfio: Wait for dma-buf invalidation to complete")
Signed-off-by: Matt Evans <mattev@meta.com>
---
(Just a fix, but later "vfio/pci: Convert BAR mmap() to use a DMABUF"
and "vfio/pci: Permanently revoke a DMABUF on request" depend on this
context, so including in this series.)
drivers/vfio/pci/vfio_pci_dmabuf.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index 281ba7d69567..04478b7415a0 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -395,20 +395,25 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
down_write(&vdev->memory_lock);
list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+ bool was_revoked;
+
if (!get_file_active(&priv->dmabuf->file))
continue;
dma_resv_lock(priv->dmabuf->resv, NULL);
list_del_init(&priv->dmabufs_elm);
priv->vdev = NULL;
+ was_revoked = priv->revoked;
priv->revoked = true;
dma_buf_invalidate_mappings(priv->dmabuf);
dma_resv_wait_timeout(priv->dmabuf->resv,
DMA_RESV_USAGE_BOOKKEEP, false,
MAX_SCHEDULE_TIMEOUT);
dma_resv_unlock(priv->dmabuf->resv);
- kref_put(&priv->kref, vfio_pci_dma_buf_done);
- wait_for_completion(&priv->comp);
+ if (!was_revoked) {
+ kref_put(&priv->kref, vfio_pci_dma_buf_done);
+ wait_for_completion(&priv->comp);
+ }
vfio_device_put_registration(&vdev->vdev);
fput(priv->dmabuf->file);
}
--
2.47.3
* [PATCH 2/9] vfio/pci: Add a helper to look up PFNs for DMABUFs
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
2026-04-16 13:17 ` [PATCH 1/9] vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 3/9] vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA Matt Evans
` (6 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
Add vfio_pci_dma_buf_find_pfn(), which a VMA fault handler can use to
find a PFN.
This supports multi-range DMABUFs, which typically would be used to
represent scattered spans but might even represent overlapping or
aliasing spans of PFNs.
Because this is intended to be used in vfio_pci_core.c, we also need
to expose the struct vfio_pci_dma_buf in the vfio_pci_priv.h header.
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/vfio_pci_dmabuf.c | 124 ++++++++++++++++++++++++++---
drivers/vfio/pci/vfio_pci_priv.h | 19 +++++
2 files changed, 130 insertions(+), 13 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index 04478b7415a0..8b6bae56bbf2 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -9,19 +9,6 @@
MODULE_IMPORT_NS("DMA_BUF");
-struct vfio_pci_dma_buf {
- struct dma_buf *dmabuf;
- struct vfio_pci_core_device *vdev;
- struct list_head dmabufs_elm;
- size_t size;
- struct phys_vec *phys_vec;
- struct p2pdma_provider *provider;
- u32 nr_ranges;
- struct kref kref;
- struct completion comp;
- u8 revoked : 1;
-};
-
static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
{
@@ -106,6 +93,117 @@ static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
.release = vfio_pci_dma_buf_release,
};
+int vfio_pci_dma_buf_find_pfn(struct vfio_pci_dma_buf *vpdmabuf,
+ struct vm_area_struct *vma,
+ unsigned long address,
+ unsigned int order,
+ unsigned long *out_pfn)
+{
+ /*
+ * Given a VMA (start, end, pgoffs) and a fault address,
+ * search the corresponding DMABUF's phys_vec[] to find the
+ * range representing the address's offset into the VMA, and
+ * its PFN.
+ *
+ * The phys_vec[] ranges represent contiguous spans of VAs
+ * upwards from the buffer offset 0; the actual PFNs might be
+ * in any order, overlap/alias, etc. Calculate an offset of
+ * the desired page given VMA start/pgoff and address, then
+ * search upwards from 0 to find which span contains it.
+ *
+ * On success, a valid PFN for a page sized by 'order' is
+ * returned into out_pfn.
+ *
+ * Failure occurs if:
+ * - The page would cross the edge of the VMA
+ * - The page isn't entirely contained within a range
+ * - We find a range, but the final PFN isn't aligned to the
+ * requested order.
+ *
+ * (Upon failure, the caller is expected to try again with a
+ * smaller order; the tests above will always succeed for
+ * order=0 as the limit case.)
+ *
+ * It's suboptimal if DMABUFs are created with neighbouring
+ * ranges that are physically contiguous, since hugepages
+ * can't straddle range boundaries. (The construction of the
+ * ranges vector should merge such ranges.)
+ */
+
+ const unsigned long pagesize = PAGE_SIZE << order;
+ unsigned long rounded_page_addr = address & ~(pagesize - 1);
+ unsigned long rounded_page_end = rounded_page_addr + pagesize;
+ unsigned long buf_page_offset;
+ unsigned long buf_offset = 0;
+ unsigned int i;
+
+ if (rounded_page_addr < vma->vm_start || rounded_page_end > vma->vm_end) {
+ if (order > 0)
+ return -EAGAIN;
+
+ /* A fault address outside of the VMA is absurd. */
+ WARN(1, "Fault addr 0x%lx outside VMA 0x%lx-0x%lx\n",
+ address, vma->vm_start, vma->vm_end);
+ return -EFAULT;
+ }
+
+ if (unlikely(check_add_overflow(rounded_page_addr - vma->vm_start,
+ vma->vm_pgoff << PAGE_SHIFT, &buf_page_offset)))
+ return -EFAULT;
+
+ for (i = 0; i < vpdmabuf->nr_ranges; i++) {
+ size_t range_len = vpdmabuf->phys_vec[i].len;
+ phys_addr_t range_start = vpdmabuf->phys_vec[i].paddr;
+
+ /*
+ * If the current range starts after the page's span,
+ * this and any future range won't match. Bail early.
+ */
+ if (buf_page_offset + pagesize <= buf_offset)
+ break;
+
+ if (buf_page_offset >= buf_offset &&
+ buf_page_offset + pagesize <= buf_offset + range_len) {
+ /*
+ * The faulting page is wholly contained
+ * within the span represented by the range.
+ * Validate PFN alignment for the order:
+ */
+ unsigned long pfn = (range_start >> PAGE_SHIFT) +
+ ((buf_page_offset - buf_offset) >> PAGE_SHIFT);
+
+ if (IS_ALIGNED(pfn, 1 << order)) {
+ *out_pfn = pfn;
+ return 0;
+ }
+ /* Retry with smaller order */
+ return -EAGAIN;
+ }
+ buf_offset += range_len;
+ }
+
+ /*
+ * A hugepage straddling a range boundary will fail to match a
+ * range, but the address will (eventually) match when retried
+ * with a smaller page.
+ */
+ if (order > 0)
+ return -EAGAIN;
+
+ /*
+ * If we get here, the address fell outside of the span
+ * represented by the (concatenated) ranges. Setup of a
+ * mapping must ensure that the VMA is <= the total size of
+ * the ranges, so this should never happen. But, if it does,
+ * force SIGBUS for the access and warn.
+ */
+ WARN_ONCE(1, "No range for addr 0x%lx, order %d: VMA 0x%lx-0x%lx pgoff 0x%lx, %u ranges, size 0x%zx\n",
+ address, order, vma->vm_start, vma->vm_end, vma->vm_pgoff,
+ vpdmabuf->nr_ranges, vpdmabuf->size);
+
+ return -EFAULT;
+}
+
/*
* This is a temporary "private interconnect" between VFIO DMABUF and iommufd.
* It allows the two co-operating drivers to exchange the physical address of
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index fca9d0dfac90..317170a5b407 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -23,6 +23,19 @@ struct vfio_pci_ioeventfd {
bool test_mem;
};
+struct vfio_pci_dma_buf {
+ struct dma_buf *dmabuf;
+ struct vfio_pci_core_device *vdev;
+ struct list_head dmabufs_elm;
+ size_t size;
+ struct phys_vec *phys_vec;
+ struct p2pdma_provider *provider;
+ u32 nr_ranges;
+ struct kref kref;
+ struct completion comp;
+ u8 revoked : 1;
+};
+
bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
@@ -114,6 +127,12 @@ static inline bool vfio_pci_is_vga(struct pci_dev *pdev)
return (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
}
+int vfio_pci_dma_buf_find_pfn(struct vfio_pci_dma_buf *vpdmabuf,
+ struct vm_area_struct *vma,
+ unsigned long address,
+ unsigned int order,
+ unsigned long *out_pfn);
+
#ifdef CONFIG_VFIO_PCI_DMABUF
int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
struct vfio_device_feature_dma_buf __user *arg,
--
2.47.3
* [PATCH 3/9] vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
2026-04-16 13:17 ` [PATCH 1/9] vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put Matt Evans
2026-04-16 13:17 ` [PATCH 2/9] vfio/pci: Add a helper to look up PFNs for DMABUFs Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 4/9] vfio/pci: Convert BAR mmap() to use a DMABUF Matt Evans
` (5 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
This helper, vfio_pci_core_mmap_prep_dmabuf(), creates a single-range
DMABUF for the purpose of mapping a PCI BAR. This is used in a future
commit by VFIO's ordinary mmap() path.
This function transfers ownership of the VFIO device fd to the
DMABUF, which fput()s when it's released.
Refactor the existing vfio_pci_core_feature_dma_buf() to split out
export code common to the two paths, VFIO_DEVICE_FEATURE_DMA_BUF and
this new VFIO_BAR mmap().
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/vfio_pci_dmabuf.c | 143 +++++++++++++++++++++++------
drivers/vfio/pci/vfio_pci_priv.h | 5 +
2 files changed, 118 insertions(+), 30 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index 8b6bae56bbf2..3554afbc8ebc 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -82,6 +82,8 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
up_write(&priv->vdev->memory_lock);
vfio_device_put_registration(&priv->vdev->vdev);
}
+ if (priv->vfile)
+ fput(priv->vfile);
kfree(priv->phys_vec);
kfree(priv);
}
@@ -204,6 +206,45 @@ int vfio_pci_dma_buf_find_pfn(struct vfio_pci_dma_buf *vpdmabuf,
return -EFAULT;
}
+/*
+ * Create a DMABUF corresponding to priv, add it to vdev->dmabufs list
+ * for tracking (meaning cleanup or revocation will zap it), and take
+ * a vfio_device registration.
+ */
+static int vfio_pci_dmabuf_export(struct vfio_pci_core_device *vdev,
+ struct vfio_pci_dma_buf *priv, uint32_t flags)
+{
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+ if (!vfio_device_try_get_registration(&vdev->vdev))
+ return -ENODEV;
+
+ exp_info.ops = &vfio_pci_dmabuf_ops;
+ exp_info.size = priv->size;
+ exp_info.flags = flags;
+ exp_info.priv = priv;
+
+ priv->dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(priv->dmabuf)) {
+ vfio_device_put_registration(&vdev->vdev);
+ return PTR_ERR(priv->dmabuf);
+ }
+
+ kref_init(&priv->kref);
+ init_completion(&priv->comp);
+
+ /* dma_buf_put() now frees priv */
+ INIT_LIST_HEAD(&priv->dmabufs_elm);
+ down_write(&vdev->memory_lock);
+ dma_resv_lock(priv->dmabuf->resv, NULL);
+ priv->revoked = !__vfio_pci_memory_enabled(vdev);
+ list_add_tail(&priv->dmabufs_elm, &vdev->dmabufs);
+ dma_resv_unlock(priv->dmabuf->resv);
+ up_write(&vdev->memory_lock);
+
+ return 0;
+}
+
/*
* This is a temporary "private interconnect" between VFIO DMABUF and iommufd.
* It allows the two co-operating drivers to exchange the physical address of
@@ -322,7 +363,6 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
{
struct vfio_device_feature_dma_buf get_dma_buf = {};
struct vfio_region_dma_range *dma_ranges;
- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct vfio_pci_dma_buf *priv;
size_t length;
int ret;
@@ -392,34 +432,9 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
kfree(dma_ranges);
dma_ranges = NULL;
- if (!vfio_device_try_get_registration(&vdev->vdev)) {
- ret = -ENODEV;
+ ret = vfio_pci_dmabuf_export(vdev, priv, get_dma_buf.open_flags);
+ if (ret)
goto err_free_phys;
- }
-
- exp_info.ops = &vfio_pci_dmabuf_ops;
- exp_info.size = priv->size;
- exp_info.flags = get_dma_buf.open_flags;
- exp_info.priv = priv;
-
- priv->dmabuf = dma_buf_export(&exp_info);
- if (IS_ERR(priv->dmabuf)) {
- ret = PTR_ERR(priv->dmabuf);
- goto err_dev_put;
- }
-
- kref_init(&priv->kref);
- init_completion(&priv->comp);
-
- /* dma_buf_put() now frees priv */
- INIT_LIST_HEAD(&priv->dmabufs_elm);
- down_write(&vdev->memory_lock);
- dma_resv_lock(priv->dmabuf->resv, NULL);
- priv->revoked = !__vfio_pci_memory_enabled(vdev);
- list_add_tail(&priv->dmabufs_elm, &vdev->dmabufs);
- dma_resv_unlock(priv->dmabuf->resv);
- up_write(&vdev->memory_lock);
-
/*
* dma_buf_fd() consumes the reference, when the file closes the dmabuf
* will be released.
@@ -430,8 +445,6 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
return ret;
-err_dev_put:
- vfio_device_put_registration(&vdev->vdev);
err_free_phys:
kfree(priv->phys_vec);
err_free_priv:
@@ -441,6 +454,76 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
return ret;
}
+int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
+ struct vm_area_struct *vma,
+ u64 phys_start, u64 req_len,
+ unsigned int res_index)
+{
+ struct vfio_pci_dma_buf *priv;
+ const unsigned int nr_ranges = 1;
+ int ret;
+
+ priv = kzalloc_obj(*priv);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->phys_vec = kzalloc_obj(*priv->phys_vec);
+ if (!priv->phys_vec) {
+ ret = -ENOMEM;
+ goto err_free_priv;
+ }
+
+ /*
+ * The mmap() request's vma->vm_pgoff might be non-zero, but
+ * the DMABUF is created from _offset zero_ of the BAR. The
+ * portion between zero and the vm_pgoff offset is inaccessible
+ * through this VMA, but this approach keeps the
+ * /proc/<pid>/maps offset somewhat consistent with the
+ * pre-DMABUF code. Size includes the offset portion.
+ *
+ * This differs from an mmap() of an explicitly-exported
+ * DMABUF which is an arbitrary slice of the BAR, would be
+ * created with the desired offset+size, and would usually be
+ * mmap()ed with pgoff = 0.
+ *
+ * Both are equivalent and vfio_pci_dma_buf_find_pfn() finds
+ * the same PFNs.
+ */
+ priv->vdev = vdev;
+ priv->nr_ranges = nr_ranges;
+ priv->size = (vma->vm_pgoff << PAGE_SHIFT) + req_len;
+ priv->provider = pcim_p2pdma_provider(vdev->pdev, res_index);
+ if (!priv->provider) {
+ ret = -EINVAL;
+ goto err_free_phys;
+ }
+
+ priv->phys_vec[0].paddr = phys_start;
+ priv->phys_vec[0].len = priv->size;
+
+ ret = vfio_pci_dmabuf_export(vdev, priv, O_CLOEXEC | O_RDWR);
+ if (ret)
+ goto err_free_phys;
+
+ /*
+ * The VMA gets the DMABUF file so that other users can locate
+ * the DMABUF via a VA. Ownership of the original VFIO device
+ * file being mmap()ed transfers to priv, and is put when the
+ * DMABUF is released.
+ */
+ priv->vfile = vma->vm_file;
+ vma->vm_file = priv->dmabuf->file;
+ vma->vm_private_data = priv;
+
+ return 0;
+
+err_free_phys:
+ kfree(priv->phys_vec);
+err_free_priv:
+ kfree(priv);
+ return ret;
+}
+
void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
{
struct vfio_pci_dma_buf *priv;
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index 317170a5b407..3cff1b7eb47b 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -30,6 +30,7 @@ struct vfio_pci_dma_buf {
size_t size;
struct phys_vec *phys_vec;
struct p2pdma_provider *provider;
+ struct file *vfile;
u32 nr_ranges;
struct kref kref;
struct completion comp;
@@ -132,6 +133,10 @@ int vfio_pci_dma_buf_find_pfn(struct vfio_pci_dma_buf *vpdmabuf,
unsigned long address,
unsigned int order,
unsigned long *out_pfn);
+int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
+ struct vm_area_struct *vma,
+ u64 phys_start, u64 req_len,
+ unsigned int res_index);
#ifdef CONFIG_VFIO_PCI_DMABUF
int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
--
2.47.3
* [PATCH 4/9] vfio/pci: Convert BAR mmap() to use a DMABUF
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
` (2 preceding siblings ...)
2026-04-16 13:17 ` [PATCH 3/9] vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 5/9] vfio/pci: Provide a user-facing name for BAR mappings Matt Evans
` (4 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
Convert the VFIO device fd fops->mmap to create a DMABUF representing
the BAR mapping, and make the VMA fault handler look up PFNs from the
corresponding DMABUF. This supports future code mmap()ing BAR
DMABUFs, and iommufd work to support Type1 P2P.
First, vfio_pci_core_mmap() uses the new
vfio_pci_core_mmap_prep_dmabuf() helper to export a DMABUF
representing a single BAR range. Then, the vfio_pci_mmap_huge_fault()
callback is updated to understand revoked buffers, and uses the new
vfio_pci_dma_buf_find_pfn() helper to determine the PFN for a given
fault address.
Now that the VFIO DMABUFs can be mmap()ed, vfio_pci_dma_buf_move() and
vfio_pci_dma_buf_cleanup() need to zap PTEs on revocation and cleanup
paths.
CONFIG_VFIO_PCI_CORE now unconditionally depends on
CONFIG_DMA_SHARED_BUFFER. CONFIG_VFIO_PCI_DMABUF remains, to
conditionally include support for VFIO_DEVICE_FEATURE_DMA_BUF, and
depends on CONFIG_PCI_P2PDMA.
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/Kconfig | 3 +-
drivers/vfio/pci/Makefile | 3 +-
drivers/vfio/pci/vfio_pci_core.c | 86 ++++++++++++++++++------------
drivers/vfio/pci/vfio_pci_dmabuf.c | 14 +++++
drivers/vfio/pci/vfio_pci_priv.h | 11 +---
5 files changed, 71 insertions(+), 46 deletions(-)
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 296bf01e185e..2074f2a941e1 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -6,6 +6,7 @@ config VFIO_PCI_CORE
tristate
select VFIO_VIRQFD
select IRQ_BYPASS_MANAGER
+ select DMA_SHARED_BUFFER
config VFIO_PCI_INTX
def_bool y if !S390
@@ -56,7 +57,7 @@ config VFIO_PCI_ZDEV_KVM
To enable s390x KVM vfio-pci extensions, say Y.
config VFIO_PCI_DMABUF
- def_bool y if VFIO_PCI_CORE && PCI_P2PDMA && DMA_SHARED_BUFFER
+ def_bool y if PCI_P2PDMA
source "drivers/vfio/pci/mlx5/Kconfig"
diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index 6138f1bf241d..881452ea89be 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -1,8 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
-vfio-pci-core-y := vfio_pci_core.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
+vfio-pci-core-y := vfio_pci_core.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o vfio_pci_dmabuf.o
vfio-pci-core-$(CONFIG_VFIO_PCI_ZDEV_KVM) += vfio_pci_zdev.o
-vfio-pci-core-$(CONFIG_VFIO_PCI_DMABUF) += vfio_pci_dmabuf.o
obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
vfio-pci-y := vfio_pci.o
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 4e9091e5fcc2..c00a61d61250 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1648,18 +1648,6 @@ void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev, u16 c
up_write(&vdev->memory_lock);
}
-static unsigned long vma_to_pfn(struct vm_area_struct *vma)
-{
- struct vfio_pci_core_device *vdev = vma->vm_private_data;
- int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
- u64 pgoff;
-
- pgoff = vma->vm_pgoff &
- ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
-
- return (pci_resource_start(vdev->pdev, index) >> PAGE_SHIFT) + pgoff;
-}
-
vm_fault_t vfio_pci_vmf_insert_pfn(struct vfio_pci_core_device *vdev,
struct vm_fault *vmf,
unsigned long pfn,
@@ -1687,23 +1675,42 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
unsigned int order)
{
struct vm_area_struct *vma = vmf->vma;
- struct vfio_pci_core_device *vdev = vma->vm_private_data;
- unsigned long addr = vmf->address & ~((PAGE_SIZE << order) - 1);
- unsigned long pgoff = (addr - vma->vm_start) >> PAGE_SHIFT;
- unsigned long pfn = vma_to_pfn(vma) + pgoff;
- vm_fault_t ret = VM_FAULT_FALLBACK;
-
- if (is_aligned_for_order(vma, addr, pfn, order)) {
- scoped_guard(rwsem_read, &vdev->memory_lock)
- ret = vfio_pci_vmf_insert_pfn(vdev, vmf, pfn, order);
- }
+ struct vfio_pci_dma_buf *priv = vma->vm_private_data;
+ struct vfio_pci_core_device *vdev;
+ unsigned long pfn = 0;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
+
+ /*
+ * We can rely on the existence of both a DMABUF (priv) and
+ * the VFIO device it was exported from (vdev). This fault's
+ * VMA was established using vfio_pci_core_mmap_prep_dmabuf()
+ * which transfers ownership of the VFIO device fd to the
+ * DMABUF, and so the VFIO device is held open because the
+ * VMA's vm_file (DMABUF) is open.
+ *
+ * Since vfio_pci_dma_buf_cleanup() cannot have happened,
+ * vdev must be valid; we can take memory_lock.
+ */
+ vdev = READ_ONCE(priv->vdev);
+
+ scoped_guard(rwsem_read, &vdev->memory_lock) {
+ if (!priv->revoked) {
+ int pres = vfio_pci_dma_buf_find_pfn(priv, vma,
+ vmf->address,
+ order, &pfn);
+
+ if (pres == 0)
+ ret = vfio_pci_vmf_insert_pfn(vdev, vmf,
+ pfn, order);
+ else if (pres == -EAGAIN)
+ ret = VM_FAULT_FALLBACK;
+ }
- dev_dbg_ratelimited(&vdev->pdev->dev,
- "%s(,order = %d) BAR %ld page offset 0x%lx: 0x%x\n",
- __func__, order,
- vma->vm_pgoff >>
- (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT),
- pgoff, (unsigned int)ret);
+ dev_dbg_ratelimited(&vdev->pdev->dev,
+ "%s(order = %d) PFN 0x%lx, VA 0x%lx, pgoff 0x%lx: 0x%x\n",
+ __func__, order, pfn, vmf->address,
+ vma->vm_pgoff, (unsigned int)ret);
+ }
return ret;
}
@@ -1726,7 +1733,7 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
container_of(core_vdev, struct vfio_pci_core_device, vdev);
struct pci_dev *pdev = vdev->pdev;
unsigned int index;
- u64 phys_len, req_len, pgoff, req_start;
+ u64 phys_len, req_len;
int ret;
index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
@@ -1753,11 +1760,9 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
phys_len = PAGE_ALIGN(pci_resource_len(pdev, index));
req_len = vma->vm_end - vma->vm_start;
- pgoff = vma->vm_pgoff &
- ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
- req_start = pgoff << PAGE_SHIFT;
+ vma->vm_pgoff &= VFIO_PCI_OFFSET_MASK >> PAGE_SHIFT;
- if (req_start + req_len > phys_len)
+ if ((vma->vm_pgoff << PAGE_SHIFT) + req_len > phys_len)
return -EINVAL;
/*
@@ -1768,7 +1773,20 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
if (ret)
return ret;
- vma->vm_private_data = vdev;
+ /*
+ * Create a DMABUF with a single range corresponding to this
+ * mapping, and wire it into vma->vm_private_data. The VMA's
+ * vm_file becomes that of the DMABUF, and the DMABUF takes
+ * ownership of the VFIO device file (put upon DMABUF
+ * release). This maintains the behaviour of a live VMA
+ * mapping holding the VFIO device file open.
+ */
+ ret = vfio_pci_core_mmap_prep_dmabuf(vdev, vma,
+ pci_resource_start(pdev, index),
+ req_len, index);
+ if (ret)
+ return ret;
+
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index 3554afbc8ebc..a12432825e5e 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -9,6 +9,7 @@
MODULE_IMPORT_NS("DMA_BUF");
+#ifdef CONFIG_VFIO_PCI_DMABUF
static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
{
@@ -25,6 +26,7 @@ static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
return 0;
}
+#endif /* CONFIG_VFIO_PCI_DMABUF */
static void vfio_pci_dma_buf_done(struct kref *kref)
{
@@ -89,7 +91,9 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
}
static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
+#ifdef CONFIG_VFIO_PCI_DMABUF
.attach = vfio_pci_dma_buf_attach,
+#endif
.map_dma_buf = vfio_pci_dma_buf_map,
.unmap_dma_buf = vfio_pci_dma_buf_unmap,
.release = vfio_pci_dma_buf_release,
@@ -245,6 +249,7 @@ static int vfio_pci_dmabuf_export(struct vfio_pci_core_device *vdev,
return 0;
}
+#ifdef CONFIG_VFIO_PCI_DMABUF
/*
* This is a temporary "private interconnect" between VFIO DMABUF and iommufd.
* It allows the two co-operating drivers to exchange the physical address of
@@ -453,6 +458,7 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
kfree(dma_ranges);
return ret;
}
+#endif /* CONFIG_VFIO_PCI_DMABUF */
int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
struct vm_area_struct *vma,
@@ -530,6 +536,10 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
struct vfio_pci_dma_buf *tmp;
lockdep_assert_held_write(&vdev->memory_lock);
+ /*
+ * Holding memory_lock ensures a racing VMA fault observes
+ * priv->revoked properly.
+ */
list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
if (!get_file_active(&priv->dmabuf->file))
@@ -547,6 +557,8 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
if (revoked) {
kref_put(&priv->kref, vfio_pci_dma_buf_done);
wait_for_completion(&priv->comp);
+ unmap_mapping_range(priv->dmabuf->file->f_mapping,
+ 0, priv->size, 1);
} else {
/*
* Kref is initialize again, because when revoke
@@ -594,6 +606,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
if (!was_revoked) {
kref_put(&priv->kref, vfio_pci_dma_buf_done);
wait_for_completion(&priv->comp);
+ unmap_mapping_range(priv->dmabuf->file->f_mapping,
+ 0, priv->size, 1);
}
vfio_device_put_registration(&vdev->vdev);
fput(priv->dmabuf->file);
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index 3cff1b7eb47b..868a54ba482c 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -137,13 +137,13 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
struct vm_area_struct *vma,
u64 phys_start, u64 req_len,
unsigned int res_index);
+void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev);
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked);
#ifdef CONFIG_VFIO_PCI_DMABUF
int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
struct vfio_device_feature_dma_buf __user *arg,
size_t argsz);
-void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev);
-void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked);
#else
static inline int
vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
@@ -152,13 +152,6 @@ vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
{
return -ENOTTY;
}
-static inline void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
-{
-}
-static inline void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev,
- bool revoked)
-{
-}
#endif
#endif
--
2.47.3
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 5/9] vfio/pci: Provide a user-facing name for BAR mappings
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
` (3 preceding siblings ...)
2026-04-16 13:17 ` [PATCH 4/9] vfio/pci: Convert BAR mmap() to use a DMABUF Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 6/9] vfio/pci: Clean up BAR zap and revocation Matt Evans
` (3 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
Since converting BAR mmap()s to use DMABUFs, the original device path
is lost in /proc/<pid>/maps, lsof, etc. Generate a debug-oriented
synthetic 'filename' from the cdev name, plus the BDF, plus the
resource index.
This applies only to BAR mappings via the VFIO device fd, as
explicitly-exported DMABUFs are named by userspace via the
DMA_BUF_SET_NAME ioctl.
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/vfio_pci_dmabuf.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index a12432825e5e..04c7733fe712 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -4,6 +4,7 @@
#include <linux/dma-buf-mapping.h>
#include <linux/pci-p2pdma.h>
#include <linux/dma-resv.h>
+#include <uapi/linux/dma-buf.h>
#include "vfio_pci_priv.h"
@@ -467,6 +468,7 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
{
struct vfio_pci_dma_buf *priv;
const unsigned int nr_ranges = 1;
+ char *bufname;
int ret;
priv = kzalloc_obj(*priv);
@@ -479,6 +481,20 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
goto err_free_priv;
}
+ bufname = kzalloc(DMA_BUF_NAME_LEN, GFP_KERNEL);
+ if (!bufname) {
+ ret = -ENOMEM;
+ goto err_free_phys;
+ }
+
+ /*
+ * Maximum size of the friendly debug name is
+ * vfio1234567890:ffff:ff:3f.7-9 = 30, which fits within
+ * DMA_BUF_NAME_LEN.
+ */
+ snprintf(bufname, DMA_BUF_NAME_LEN, "%s:%s/%x",
+ dev_name(&vdev->vdev.device), pci_name(vdev->pdev), res_index);
+
/*
* The mmap() request's vma->vm_offs might be non-zero, but
* the DMABUF is created from _offset zero_ of the BAR. The
@@ -501,7 +517,7 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
priv->provider = pcim_p2pdma_provider(vdev->pdev, res_index);
if (!priv->provider) {
ret = -EINVAL;
- goto err_free_phys;
+ goto err_free_name;
}
priv->phys_vec[0].paddr = phys_start;
@@ -509,7 +525,7 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
ret = vfio_pci_dmabuf_export(vdev, priv, O_CLOEXEC | O_RDWR);
if (ret)
- goto err_free_phys;
+ goto err_free_name;
/*
* The VMA gets the DMABUF file so that other users can locate
@@ -521,8 +537,15 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
vma->vm_file = priv->dmabuf->file;
vma->vm_private_data = priv;
+ spin_lock(&priv->dmabuf->name_lock);
+ kfree(priv->dmabuf->name);
+ priv->dmabuf->name = bufname;
+ spin_unlock(&priv->dmabuf->name_lock);
+
return 0;
+err_free_name:
+ kfree(bufname);
err_free_phys:
kfree(priv->phys_vec);
err_free_priv:
--
2.47.3
* [PATCH 6/9] vfio/pci: Clean up BAR zap and revocation
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
` (4 preceding siblings ...)
2026-04-16 13:17 ` [PATCH 5/9] vfio/pci: Provide a user-facing name for BAR mappings Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 7/9] vfio/pci: Support mmap() of a VFIO DMABUF Matt Evans
` (2 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
Previously, vfio_pci_zap_bars() (and the wrapper
vfio_pci_zap_and_down_write_memory_lock()) calls were paired with
calls of vfio_pci_dma_buf_move().
This commit replaces them with a unified function,
vfio_pci_zap_revoke_bars(), containing both the vfio_pci_dma_buf_move()
and the unmap_mapping_range(), making it harder for callers to omit
one. It adds a wrapper, vfio_pci_lock_zap_revoke_bars(), which takes
the memory_lock for write before zapping, and a new
vfio_pci_unrevoke_bars() for the re-enable path.
However, as of "vfio/pci: Convert BAR mmap() to use a DMABUF" the
unmap_mapping_range() to zap is entirely redundant for plain vfio-pci,
since the DMABUFs used for BAR mappings already zap PTEs when the
vfio_pci_dma_buf_move() occurs.
One exception remains as a FIXME: in nvgrace-gpu, some BAR VMAs
conditionally use custom vm_ops and have not been converted to be
backed by DMABUFs. If these BARs are mmap()ed, the vdev retains the
existing behaviour of unmap_mapping_range() on the device fd address
space.
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/nvgrace-gpu/main.c | 5 +++
drivers/vfio/pci/vfio_pci_config.c | 30 ++++++--------
drivers/vfio/pci/vfio_pci_core.c | 62 +++++++++++++++++++----------
drivers/vfio/pci/vfio_pci_priv.h | 3 +-
include/linux/vfio_pci_core.h | 1 +
5 files changed, 62 insertions(+), 39 deletions(-)
diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
index c1df437754f9..5304d15b9a2b 100644
--- a/drivers/vfio/pci/nvgrace-gpu/main.c
+++ b/drivers/vfio/pci/nvgrace-gpu/main.c
@@ -358,6 +358,8 @@ static int nvgrace_gpu_mmap(struct vfio_device *core_vdev,
struct nvgrace_gpu_pci_core_device *nvdev =
container_of(core_vdev, struct nvgrace_gpu_pci_core_device,
core_device.vdev);
+ struct vfio_pci_core_device *vdev =
+ container_of(core_vdev, struct vfio_pci_core_device, vdev);
struct mem_region *memregion;
u64 req_len, pgoff, end;
unsigned int index;
@@ -368,6 +370,9 @@ static int nvgrace_gpu_mmap(struct vfio_device *core_vdev,
if (!memregion)
return vfio_pci_core_mmap(core_vdev, vma);
+ /* Non-DMABUF BAR mappings need an extra zap */
+ vdev->bar_needs_zap = true;
+
/*
* Request to mmap the BAR. Map to the CPU accessible memory on the
* GPU using the memory information gathered from the system ACPI
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index a10ed733f0e3..8bfab0da481c 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -590,12 +590,10 @@ static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
virt_mem = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_MEMORY);
new_mem = !!(new_cmd & PCI_COMMAND_MEMORY);
- if (!new_mem) {
- vfio_pci_zap_and_down_write_memory_lock(vdev);
- vfio_pci_dma_buf_move(vdev, true);
- } else {
+ if (!new_mem)
+ vfio_pci_lock_zap_revoke_bars(vdev);
+ else
down_write(&vdev->memory_lock);
- }
/*
* If the user is writing mem/io enable (new_mem/io) and we
@@ -631,7 +629,7 @@ static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
*virt_cmd |= cpu_to_le16(new_cmd & mask);
if (__vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
}
@@ -712,16 +710,14 @@ static int __init init_pci_cap_basic_perm(struct perm_bits *perm)
static void vfio_lock_and_set_power_state(struct vfio_pci_core_device *vdev,
pci_power_t state)
{
- if (state >= PCI_D3hot) {
- vfio_pci_zap_and_down_write_memory_lock(vdev);
- vfio_pci_dma_buf_move(vdev, true);
- } else {
+ if (state >= PCI_D3hot)
+ vfio_pci_lock_zap_revoke_bars(vdev);
+ else
down_write(&vdev->memory_lock);
- }
vfio_pci_set_power_state(vdev, state);
if (__vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
}
@@ -908,11 +904,10 @@ static int vfio_exp_config_write(struct vfio_pci_core_device *vdev, int pos,
&cap);
if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
- vfio_pci_zap_and_down_write_memory_lock(vdev);
- vfio_pci_dma_buf_move(vdev, true);
+ vfio_pci_lock_zap_revoke_bars(vdev);
pci_try_reset_function(vdev->pdev);
if (__vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
}
}
@@ -993,11 +988,10 @@ static int vfio_af_config_write(struct vfio_pci_core_device *vdev, int pos,
&cap);
if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
- vfio_pci_zap_and_down_write_memory_lock(vdev);
- vfio_pci_dma_buf_move(vdev, true);
+ vfio_pci_lock_zap_revoke_bars(vdev);
pci_try_reset_function(vdev->pdev);
if (__vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
}
}
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index c00a61d61250..464b63585bef 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -319,8 +319,7 @@ static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev,
* The vdev power related flags are protected with 'memory_lock'
* semaphore.
*/
- vfio_pci_zap_and_down_write_memory_lock(vdev);
- vfio_pci_dma_buf_move(vdev, true);
+ vfio_pci_lock_zap_revoke_bars(vdev);
if (vdev->pm_runtime_engaged) {
up_write(&vdev->memory_lock);
@@ -406,7 +405,7 @@ static void vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev)
down_write(&vdev->memory_lock);
__vfio_pci_runtime_pm_exit(vdev);
if (__vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
}
@@ -1229,7 +1228,7 @@ static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
if (!vdev->reset_works)
return -EINVAL;
- vfio_pci_zap_and_down_write_memory_lock(vdev);
+ vfio_pci_lock_zap_revoke_bars(vdev);
/*
* This function can be invoked while the power state is non-D0. If
@@ -1242,10 +1241,9 @@ static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
*/
vfio_pci_set_power_state(vdev, PCI_D0);
- vfio_pci_dma_buf_move(vdev, true);
ret = pci_try_reset_function(vdev->pdev);
if (__vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
return ret;
@@ -1613,20 +1611,44 @@ ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *bu
}
EXPORT_SYMBOL_GPL(vfio_pci_core_write);
-static void vfio_pci_zap_bars(struct vfio_pci_core_device *vdev)
+static void vfio_pci_zap_revoke_bars(struct vfio_pci_core_device *vdev)
{
- struct vfio_device *core_vdev = &vdev->vdev;
- loff_t start = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_BAR0_REGION_INDEX);
- loff_t end = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_ROM_REGION_INDEX);
- loff_t len = end - start;
+ lockdep_assert_held_write(&vdev->memory_lock);
+ vfio_pci_dma_buf_move(vdev, true);
- unmap_mapping_range(core_vdev->inode->i_mapping, start, len, true);
+ /*
+ * All VFIO PCI BARs are backed by DMABUFs, with the current
+ * exception of the nvgrace-gpu device which uses its own
+ * vm_ops for a subset of BARs. For this, BAR mappings are
+ * still made in the vdev's address_space, and a zap is
+ * required. The tracking is crude, and will (harmlessly)
+ * continue to zap if the special BAR is unmapped, but that
+ * behaviour isn't the common case.
+ *
+ * FIXME: This can go away if the special nvgrace-gpu mapping
+ * is converted to use DMABUF.
+ */
+ if (vdev->bar_needs_zap) {
+ struct vfio_device *core_vdev = &vdev->vdev;
+ loff_t start = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_BAR0_REGION_INDEX);
+ loff_t end = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_ROM_REGION_INDEX);
+ loff_t len = end - start;
+
+ unmap_mapping_range(core_vdev->inode->i_mapping,
+ start, len, true);
+ }
}
-void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device *vdev)
+void vfio_pci_lock_zap_revoke_bars(struct vfio_pci_core_device *vdev)
{
down_write(&vdev->memory_lock);
- vfio_pci_zap_bars(vdev);
+ vfio_pci_zap_revoke_bars(vdev);
+}
+
+void vfio_pci_unrevoke_bars(struct vfio_pci_core_device *vdev)
+{
+ lockdep_assert_held_write(&vdev->memory_lock);
+ vfio_pci_dma_buf_move(vdev, false);
}
u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev)
@@ -2480,9 +2502,10 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
}
/*
- * Take the memory write lock for each device and zap BAR
- * mappings to prevent the user accessing the device while in
- * reset. Locking multiple devices is prone to deadlock,
+ * Take the memory write lock for each device and
+ * zap/revoke BAR mappings to prevent the user (or
+ * peers) accessing the device while in reset.
+ * Locking multiple devices is prone to deadlock,
* runaway and unwind if we hit contention.
*/
if (!down_write_trylock(&vdev->memory_lock)) {
@@ -2490,8 +2513,7 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
break;
}
- vfio_pci_dma_buf_move(vdev, true);
- vfio_pci_zap_bars(vdev);
+ vfio_pci_zap_revoke_bars(vdev);
}
if (!list_entry_is_head(vdev,
@@ -2521,7 +2543,7 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
list_for_each_entry_from_reverse(vdev, &dev_set->device_list,
vdev.dev_set_list) {
if (vdev->vdev.open_count && __vfio_pci_memory_enabled(vdev))
- vfio_pci_dma_buf_move(vdev, false);
+ vfio_pci_unrevoke_bars(vdev);
up_write(&vdev->memory_lock);
}
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index 868a54ba482c..a8edbee6ce56 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -82,7 +82,8 @@ void vfio_config_free(struct vfio_pci_core_device *vdev);
int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev,
pci_power_t state);
-void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device *vdev);
+void vfio_pci_lock_zap_revoke_bars(struct vfio_pci_core_device *vdev);
+void vfio_pci_unrevoke_bars(struct vfio_pci_core_device *vdev);
u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev);
void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev,
u16 cmd);
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index 2ea4e773c121..c1cd67741125 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -127,6 +127,7 @@ struct vfio_pci_core_device {
bool needs_pm_restore:1;
bool pm_intx_masked:1;
bool pm_runtime_engaged:1;
+ bool bar_needs_zap:1;
struct pci_saved_state *pci_saved_state;
struct pci_saved_state *pm_save;
int ioeventfds_nr;
--
2.47.3
* [PATCH 7/9] vfio/pci: Support mmap() of a VFIO DMABUF
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
` (5 preceding siblings ...)
2026-04-16 13:17 ` [PATCH 6/9] vfio/pci: Clean up BAR zap and revocation Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 8/9] vfio/pci: Permanently revoke a DMABUF on request Matt Evans
2026-04-16 13:17 ` [PATCH 9/9] vfio/pci: Add mmap() attributes to DMABUF feature Matt Evans
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
A VFIO DMABUF can export a subset of a BAR to userspace by fd; add
support for mmap() of this fd. This provides another route for a
process to map BARs, but one in which the process can only map the
specific subset of the BAR represented by the exported DMABUF.
mmap() support enables userspace driver designs that safely delegate
access to BAR sub-ranges to other client processes by sharing a DMABUF
fd, without having to share the (omnipotent) VFIO device fd with them.
Since the main VFIO BAR mmap() is now DMABUF-aware, this path reuses
the existing vm_ops. But, since the lifecycle of an exported DMABUF
is still decoupled from that of the device fd it came from, the device
fd might now be closed concurrently with a VMA fault.
Extra synchronisation is added to deal with the possibility of a fault
racing with the DMABUF cleanup path. (Note that this differs from a
DMABUF implicitly created on the mmap() path, which holds ownership of
the device fd and so prevents close-during-fault scenarios, in order
to maintain the same user-facing behaviour on close.) It does this by
temporarily taking a VFIO device registration to ensure vdev remains
valid, after which vdev->memory_lock can be taken.
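For illustration, a subordinate process consuming one of these DMABUF
fds might map it as follows. This is a minimal sketch, assuming the fd
was received from the primary process (e.g. over a UNIX socket with
SCM_RIGHTS); map_bar_window() is a hypothetical helper, not an API from
this series:

```c
/*
 * Sketch: map a BAR window through a DMABUF fd vended by a primary
 * process. map_bar_window() is illustrative only; fd passing and
 * error reporting are omitted.
 */
#include <stddef.h>
#include <sys/mman.h>

static void *map_bar_window(int dmabuf_fd, size_t len, off_t offset)
{
	/*
	 * MAP_SHARED is required: vfio_pci_dma_buf_mmap() rejects
	 * mappings without VM_SHARED. offset/len must stay within the
	 * exported range, or mmap() fails with EINVAL.
	 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		       dmabuf_fd, offset);
	return p == MAP_FAILED ? NULL : p;
}
```

Accesses through the returned mapping fault in PFNs from the exported
BAR range; once the primary revokes the buffer, further faults raise
SIGBUS rather than reaching the device.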
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/vfio_pci_core.c | 79 ++++++++++++++++++++++++++----
drivers/vfio/pci/vfio_pci_dmabuf.c | 28 +++++++++++
drivers/vfio/pci/vfio_pci_priv.h | 2 +
3 files changed, 99 insertions(+), 10 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 464b63585bef..cad126cf8737 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -12,6 +12,8 @@
#include <linux/aperture.h>
#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-resv.h>
#include <linux/eventfd.h>
#include <linux/file.h>
#include <linux/interrupt.h>
@@ -1703,20 +1705,76 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
vm_fault_t ret = VM_FAULT_SIGBUS;
/*
- * We can rely on the existence of both a DMABUF (priv) and
- * the VFIO device it was exported from (vdev). This fault's
- * VMA was established using vfio_pci_core_mmap_prep_dmabuf()
- * which transfers ownership of the VFIO device fd to the
- * DMABUF, and so the VFIO device is held open because the
- * VMA's vm_file (DMABUF) is open.
+ * The only thing this can rely on is that the DMABUF relating
+ * to the VMA's vm_file exists (priv).
*
- * Since vfio_pci_dma_buf_cleanup() cannot have happened,
- * vdev must be valid; we can take memory_lock.
+ * A DMABUF for a VFIO device fd mmap() holds a reference to
+ * the original VFIO device fd, but an explicitly-exported
+ * DMABUF does not. The original fd might have closed,
+ * meaning this fault can race with
+ * vfio_pci_dma_buf_cleanup(), meaning priv->vdev might be
+ * NULL, and the VFIO device registration might have been
+ * dropped.
+ *
+ * With the goal of taking vdev->memory_lock in a world where
+ * vdev might not still exist:
+ *
+ * 1. Take the resv lock on the DMABUF:
+ * - If racing cleanup got in first, vdev == NULL and buffer
+ * is revoked; stop/exit if so.
+ * - If we got in first, vdev is non-NULL, accessible, and
+ * cleanup _has not yet put the VFIO device registration_,
+ * so the device refcount must be >0.
+ *
+ * 2. Take vfio_device registration (refcount guaranteed >0
+ * hereafter).
+ *
+ * 3. Unlock the DMABUF's resv lock:
+ * - A racing cleanup can now complete.
+ * - But, the device refcount >0, meaning the vfio_device
+ * (and vfio_pcie_core device vdev) have not yet been
+ * freed. vdev is accessible, even if the DMABUF has been
+ * revoked or cleanup has happened, because
+ * vfio_unregister_group_dev() can't complete.
+ *
+ * 4. Take the vdev->memory_lock
+ * - Either the DMABUF is usable, or has been cleaned up.
+ * Whichever, it can no longer change under us.
+ * - Test the DMABUF revocation status again: if it was
+ * revoked between 1 and 4 return a SIGBUS. Otherwise,
+ * return a PFN.
+ * - It's not necessary to also take the resv lock, because
+ * the status/vdev can't change while memory_lock is held.
+ *
+ * 5. Unlock, done.
*/
+
+ dma_resv_lock(priv->dmabuf->resv, NULL);
vdev = READ_ONCE(priv->vdev);
+ if (READ_ONCE(priv->revoked) || !vdev) {
+ pr_debug_ratelimited("%s VA 0x%lx, pgoff 0x%lx: DMABUF revoked/cleaned up\n",
+ __func__, vmf->address, vma->vm_pgoff);
+ dma_resv_unlock(priv->dmabuf->resv);
+ return VM_FAULT_SIGBUS;
+ }
+ /* vdev is usable */
+
+ if (!vfio_device_try_get_registration(&vdev->vdev)) {
+ /*
+ * If vdev != NULL (above), the registration should
+ * already be >0 and so this try_get should never
+ * fail.
+ */
+ dev_warn(&vdev->pdev->dev, "%s: Unexpected registration failure\n",
+ __func__);
+ dma_resv_unlock(priv->dmabuf->resv);
+ return VM_FAULT_SIGBUS;
+ }
+ dma_resv_unlock(priv->dmabuf->resv);
+
scoped_guard(rwsem_read, &vdev->memory_lock) {
- if (!priv->revoked) {
+ if (!READ_ONCE(priv->revoked)) {
int pres = vfio_pci_dma_buf_find_pfn(priv, vma,
vmf->address,
order, &pfn);
@@ -1734,6 +1792,7 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
vma->vm_pgoff, (unsigned int)ret);
}
+ vfio_device_put_registration(&vdev->vdev);
return ret;
}
@@ -1742,7 +1801,7 @@ static vm_fault_t vfio_pci_mmap_page_fault(struct vm_fault *vmf)
return vfio_pci_mmap_huge_fault(vmf, 0);
}
-static const struct vm_operations_struct vfio_pci_mmap_ops = {
+const struct vm_operations_struct vfio_pci_mmap_ops = {
.fault = vfio_pci_mmap_page_fault,
#ifdef CONFIG_ARCH_SUPPORTS_HUGE_PFNMAP
.huge_fault = vfio_pci_mmap_huge_fault,
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index 04c7733fe712..cc477f46a7d5 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -27,6 +27,33 @@ static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
return 0;
}
+
+static int vfio_pci_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct vfio_pci_dma_buf *priv = dmabuf->priv;
+ u64 req_len, req_start;
+
+ if (priv->revoked)
+ return -ENODEV;
+ if ((vma->vm_flags & VM_SHARED) == 0)
+ return -EINVAL;
+
+ req_len = vma->vm_end - vma->vm_start;
+ req_start = vma->vm_pgoff << PAGE_SHIFT;
+ if (req_start + req_len > priv->size)
+ return -EINVAL;
+
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+
+ /* See comments in vfio_pci_core_mmap() re VM_ALLOW_ANY_UNCACHED. */
+ vm_flags_set(vma, VM_ALLOW_ANY_UNCACHED | VM_IO | VM_PFNMAP |
+ VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_private_data = priv;
+ vma->vm_ops = &vfio_pci_mmap_ops;
+
+ return 0;
+}
#endif /* CONFIG_VFIO_PCI_DMABUF */
static void vfio_pci_dma_buf_done(struct kref *kref)
@@ -94,6 +121,7 @@ static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
#ifdef CONFIG_VFIO_PCI_DMABUF
.attach = vfio_pci_dma_buf_attach,
+ .mmap = vfio_pci_dma_buf_mmap,
#endif
.map_dma_buf = vfio_pci_dma_buf_map,
.unmap_dma_buf = vfio_pci_dma_buf_unmap,
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index a8edbee6ce56..f837d6c8bddc 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -37,6 +37,8 @@ struct vfio_pci_dma_buf {
u8 revoked : 1;
};
+extern const struct vm_operations_struct vfio_pci_mmap_ops;
+
bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
--
2.47.3
* [PATCH 8/9] vfio/pci: Permanently revoke a DMABUF on request
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
` (6 preceding siblings ...)
2026-04-16 13:17 ` [PATCH 7/9] vfio/pci: Support mmap() of a VFIO DMABUF Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
2026-04-16 13:17 ` [PATCH 9/9] vfio/pci: Add mmap() attributes to DMABUF feature Matt Evans
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC (permalink / raw)
To: Alex Williamson, Leon Romanovsky, Jason Gunthorpe, Alex Mastro,
Christian König
Cc: Mahmoud Adam, David Matlack, Björn Töpel, Sumit Semwal,
Kevin Tian, Ankit Agrawal, Pranjal Shrivastava, Alistair Popple,
Vivek Kasireddy, linux-kernel, linux-media, dri-devel,
linaro-mm-sig, kvm
Expand the VFIO DMABUF revocation state to three states:
not revoked, temporarily revoked, and permanently revoked.
The first two cover the existing transient revocation, e.g. across a
function reset; the DMABUF is put into the last in response to a
new ioctl(VFIO_DEVICE_PCI_DMABUF_REVOKE) request.
This VFIO device fd ioctl passes a DMABUF by fd and requests that it
be permanently revoked. On success, it is guaranteed that the buffer
can never be imported/attached/mmap()ed in future, that dynamic
imports have been cleanly detached, and that all mappings have been
made inaccessible and their PTEs zapped.
This is useful for lifecycle management, to reclaim VFIO PCI BAR
ranges previously delegated to a subordinate client process: the
driver process can ensure that the loaned resources are revoked when
the client is deemed "done", and the exported ranges can safely be
re-used elsewhere.
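As a hedged sketch of driving this from the primary process: the struct
below mirrors only the fields visible in this patch (argsz and
dmabuf_fd, with the kernel checking argsz against
offsetofend(struct vfio_pci_dmabuf_revoke, dmabuf_fd)); the name
vfio_pci_dmabuf_revoke_req and any layout beyond those fields are
assumptions for illustration, not the uapi definition:

```c
/*
 * Sketch: build the argument for VFIO_DEVICE_PCI_DMABUF_REVOKE.
 * Field layout beyond argsz/dmabuf_fd is assumed; see the uapi
 * header added by this patch for the real definition.
 */
#include <stdint.h>
#include <string.h>

struct vfio_pci_dmabuf_revoke_req {
	uint32_t argsz;
	int32_t  dmabuf_fd;
};

/*
 * Fill the request; the kernel returns -EINVAL if argsz is smaller
 * than the end of the dmabuf_fd field.
 */
static uint32_t prepare_revoke(struct vfio_pci_dmabuf_revoke_req *req,
			       int dmabuf_fd)
{
	memset(req, 0, sizeof(*req));
	req->argsz = sizeof(*req);
	req->dmabuf_fd = dmabuf_fd;
	return req->argsz;
}
```

The caller would then issue
ioctl(vfio_device_fd, VFIO_DEVICE_PCI_DMABUF_REVOKE, &req) and, on
success, can safely re-use the exported BAR range elsewhere.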
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/vfio_pci_core.c | 21 +++-
drivers/vfio/pci/vfio_pci_dmabuf.c | 158 +++++++++++++++++++++--------
drivers/vfio/pci/vfio_pci_priv.h | 14 ++-
include/uapi/linux/vfio.h | 30 ++++++
4 files changed, 179 insertions(+), 44 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index cad126cf8737..59582fcfba97 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1461,6 +1461,21 @@ static int vfio_pci_ioctl_ioeventfd(struct vfio_pci_core_device *vdev,
ioeventfd.fd);
}
+static int vfio_pci_ioctl_dmabuf_revoke(struct vfio_pci_core_device *vdev,
+ struct vfio_pci_dmabuf_revoke __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_pci_dmabuf_revoke, dmabuf_fd);
+ struct vfio_pci_dmabuf_revoke revoke;
+
+ if (copy_from_user(&revoke, arg, minsz))
+ return -EFAULT;
+
+ if (revoke.argsz < minsz)
+ return -EINVAL;
+
+ return vfio_pci_dma_buf_revoke(vdev, revoke.dmabuf_fd);
+}
+
long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
unsigned long arg)
{
@@ -1483,6 +1498,8 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
return vfio_pci_ioctl_reset(vdev, uarg);
case VFIO_DEVICE_SET_IRQS:
return vfio_pci_ioctl_set_irqs(vdev, uarg);
+ case VFIO_DEVICE_PCI_DMABUF_REVOKE:
+ return vfio_pci_ioctl_dmabuf_revoke(vdev, uarg);
default:
return -ENOTTY;
}
@@ -1752,7 +1769,7 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
dma_resv_lock(priv->dmabuf->resv, NULL);
vdev = READ_ONCE(priv->vdev);
- if (READ_ONCE(priv->revoked) || !vdev) {
+ if (READ_ONCE(priv->status) != VFIO_PCI_DMABUF_OK || !vdev) {
pr_debug_ratelimited("%s VA 0x%lx, pgoff 0x%lx: DMABUF revoked/cleaned up\n",
__func__, vmf->address, vma->vm_pgoff);
dma_resv_unlock(priv->dmabuf->resv);
@@ -1774,7 +1791,7 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
dma_resv_unlock(priv->dmabuf->resv);
scoped_guard(rwsem_read, &vdev->memory_lock) {
- if (!READ_ONCE(priv->revoked)) {
+ if (READ_ONCE(priv->status) == VFIO_PCI_DMABUF_OK) {
int pres = vfio_pci_dma_buf_find_pfn(priv, vma,
vmf->address,
order, &pfn);
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index cc477f46a7d5..48ec4da2db8b 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -19,7 +19,7 @@ static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
if (!attachment->peer2peer)
return -EOPNOTSUPP;
- if (priv->revoked)
+ if (priv->status != VFIO_PCI_DMABUF_OK)
return -ENODEV;
if (!dma_buf_attach_revocable(attachment))
@@ -33,7 +33,7 @@ static int vfio_pci_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
struct vfio_pci_dma_buf *priv = dmabuf->priv;
u64 req_len, req_start;
- if (priv->revoked)
+ if (priv->status != VFIO_PCI_DMABUF_OK)
return -ENODEV;
if ((vma->vm_flags & VM_SHARED) == 0)
return -EINVAL;
@@ -73,7 +73,7 @@ vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
dma_resv_assert_held(priv->dmabuf->resv);
- if (priv->revoked)
+ if (priv->status != VFIO_PCI_DMABUF_OK)
return ERR_PTR(-ENODEV);
ret = dma_buf_phys_vec_to_sgt(attachment, priv->provider,
@@ -270,7 +270,8 @@ static int vfio_pci_dmabuf_export(struct vfio_pci_core_device *vdev,
INIT_LIST_HEAD(&priv->dmabufs_elm);
down_write(&vdev->memory_lock);
dma_resv_lock(priv->dmabuf->resv, NULL);
- priv->revoked = !__vfio_pci_memory_enabled(vdev);
+ priv->status = __vfio_pci_memory_enabled(vdev) ? VFIO_PCI_DMABUF_OK :
+ VFIO_PCI_DMABUF_TEMP_REVOKED;
list_add_tail(&priv->dmabufs_elm, &vdev->dmabufs);
dma_resv_unlock(priv->dmabuf->resv);
up_write(&vdev->memory_lock);
@@ -301,7 +302,7 @@ int vfio_pci_dma_buf_iommufd_map(struct dma_buf_attachment *attachment,
return -EOPNOTSUPP;
priv = attachment->dmabuf->priv;
- if (priv->revoked)
+ if (priv->status != VFIO_PCI_DMABUF_OK)
return -ENODEV;
/* More than one range to iommufd will require proper DMABUF support */
@@ -581,6 +582,64 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
return ret;
}
+static void __vfio_pci_dma_buf_revoke(struct vfio_pci_dma_buf *priv, bool revoked,
+ bool permanently)
+{
+ bool was_revoked;
+
+ lockdep_assert_held_write(&priv->vdev->memory_lock);
+
+ if ((priv->status == VFIO_PCI_DMABUF_PERM_REVOKED) ||
+ (priv->status == VFIO_PCI_DMABUF_OK && !revoked) ||
+ (priv->status == VFIO_PCI_DMABUF_TEMP_REVOKED && revoked && !permanently)) {
+ return;
+ }
+
+ dma_resv_lock(priv->dmabuf->resv, NULL);
+ was_revoked = priv->status != VFIO_PCI_DMABUF_OK;
+
+ if (revoked)
+ priv->status = permanently ?
+ VFIO_PCI_DMABUF_PERM_REVOKED : VFIO_PCI_DMABUF_TEMP_REVOKED;
+
+ /*
+ * If TEMP_REVOKED is being upgraded to PERM_REVOKED, the
+ * buffer is already gone. Don't wait on it again.
+ */
+ if (was_revoked && revoked) {
+ dma_resv_unlock(priv->dmabuf->resv);
+ return;
+ }
+
+ dma_buf_invalidate_mappings(priv->dmabuf);
+ dma_resv_wait_timeout(priv->dmabuf->resv,
+ DMA_RESV_USAGE_BOOKKEEP, false,
+ MAX_SCHEDULE_TIMEOUT);
+ dma_resv_unlock(priv->dmabuf->resv);
+ if (revoked) {
+ kref_put(&priv->kref, vfio_pci_dma_buf_done);
+ wait_for_completion(&priv->comp);
+ unmap_mapping_range(priv->dmabuf->file->f_mapping,
+ 0, priv->size, 1);
+ } else {
+ /*
+ * The kref is initialized again because, when the revoke
+ * was performed, the reference counter was decreased
+ * to zero to trigger the completion.
+ */
+ kref_init(&priv->kref);
+ /*
+ * There is no need to wait as no mapping was
+ * performed when the previous status was
+ * priv->status == *REVOKED.
+ */
+ reinit_completion(&priv->comp);
+ dma_resv_lock(priv->dmabuf->resv, NULL);
+ priv->status = VFIO_PCI_DMABUF_OK;
+ dma_resv_unlock(priv->dmabuf->resv);
+ }
+}
+
void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
{
struct vfio_pci_dma_buf *priv;
@@ -589,45 +648,13 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
lockdep_assert_held_write(&vdev->memory_lock);
/*
* Holding memory_lock ensures a racing VMA fault observes
- * priv->revoked properly.
+ * priv->status properly.
*/
list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
if (!get_file_active(&priv->dmabuf->file))
continue;
-
- if (priv->revoked != revoked) {
- dma_resv_lock(priv->dmabuf->resv, NULL);
- if (revoked)
- priv->revoked = true;
- dma_buf_invalidate_mappings(priv->dmabuf);
- dma_resv_wait_timeout(priv->dmabuf->resv,
- DMA_RESV_USAGE_BOOKKEEP, false,
- MAX_SCHEDULE_TIMEOUT);
- dma_resv_unlock(priv->dmabuf->resv);
- if (revoked) {
- kref_put(&priv->kref, vfio_pci_dma_buf_done);
- wait_for_completion(&priv->comp);
- unmap_mapping_range(priv->dmabuf->file->f_mapping,
- 0, priv->size, 1);
- } else {
- /*
- * Kref is initialize again, because when revoke
- * was performed the reference counter was decreased
- * to zero to trigger completion.
- */
- kref_init(&priv->kref);
- /*
- * There is no need to wait as no mapping was
- * performed when the previous status was
- * priv->revoked == true.
- */
- reinit_completion(&priv->comp);
- dma_resv_lock(priv->dmabuf->resv, NULL);
- priv->revoked = false;
- dma_resv_unlock(priv->dmabuf->resv);
- }
- }
+ __vfio_pci_dma_buf_revoke(priv, revoked, false);
fput(priv->dmabuf->file);
}
}
@@ -647,8 +674,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
dma_resv_lock(priv->dmabuf->resv, NULL);
list_del_init(&priv->dmabufs_elm);
priv->vdev = NULL;
- was_revoked = priv->revoked;
- priv->revoked = true;
+ was_revoked = (priv->status != VFIO_PCI_DMABUF_OK);
+ priv->status = VFIO_PCI_DMABUF_PERM_REVOKED;
dma_buf_invalidate_mappings(priv->dmabuf);
dma_resv_wait_timeout(priv->dmabuf->resv,
DMA_RESV_USAGE_BOOKKEEP, false,
@@ -665,3 +692,52 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
}
up_write(&vdev->memory_lock);
}
+
+#ifdef CONFIG_VFIO_PCI_DMABUF
+int vfio_pci_dma_buf_revoke(struct vfio_pci_core_device *vdev, int dmabuf_fd)
+{
+ struct dma_buf *dmabuf;
+ struct vfio_pci_dma_buf *priv;
+ int ret = 0;
+
+ dmabuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
+ /*
+ * Sanity-check the DMABUF is really a vfio_pci_dma_buf _and_
+ * (below) relates to the VFIO device it was provided with:
+ */
+ if (dmabuf->ops != &vfio_pci_dmabuf_ops) {
+ ret = -ENODEV;
+ goto out_put_buf;
+ }
+
+ priv = dmabuf->priv;
+
+ scoped_guard(rwsem_write, &vdev->memory_lock) {
+ struct vfio_pci_core_device *db_vdev = READ_ONCE(priv->vdev);
+
+ /*
+ * Reading priv->vdev inside the lock is conservative:
+ * cleanup (which changes vdev) is (today) prevented from
+ * running concurrently by the VFIO device fd being held
+ * open by the caller of this ioctl.
+ */
+ if (!db_vdev || db_vdev != vdev) {
+ ret = -ENODEV;
+ break;
+ }
+
+ if (priv->status == VFIO_PCI_DMABUF_PERM_REVOKED)
+ ret = -EBADFD;
+ else
+ __vfio_pci_dma_buf_revoke(priv, true, true);
+ }
+
+ out_put_buf:
+ dma_buf_put(dmabuf);
+
+ return ret;
+}
+#endif /* CONFIG_VFIO_PCI_DMABUF */
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index f837d6c8bddc..eac5606ca161 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -23,6 +23,12 @@ struct vfio_pci_ioeventfd {
bool test_mem;
};
+enum vfio_pci_dma_buf_status {
+ VFIO_PCI_DMABUF_OK = 0,
+ VFIO_PCI_DMABUF_TEMP_REVOKED = 1,
+ VFIO_PCI_DMABUF_PERM_REVOKED = 2,
+};
+
struct vfio_pci_dma_buf {
struct dma_buf *dmabuf;
struct vfio_pci_core_device *vdev;
@@ -34,7 +40,7 @@ struct vfio_pci_dma_buf {
u32 nr_ranges;
struct kref kref;
struct completion comp;
- u8 revoked : 1;
+ enum vfio_pci_dma_buf_status status;
};
extern const struct vm_operations_struct vfio_pci_mmap_ops;
@@ -147,6 +153,7 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked);
int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
struct vfio_device_feature_dma_buf __user *arg,
size_t argsz);
+int vfio_pci_dma_buf_revoke(struct vfio_pci_core_device *vdev, int dmabuf_fd);
#else
static inline int
vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
@@ -155,6 +162,11 @@ vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
{
return -ENOTTY;
}
+static inline int vfio_pci_dma_buf_revoke(struct vfio_pci_core_device *vdev,
+ int dmabuf_fd)
+{
+ return -ENODEV;
+}
#endif
#endif
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 5de618a3a5ee..77225ed8115f 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1321,6 +1321,36 @@ struct vfio_precopy_info {
#define VFIO_MIG_GET_PRECOPY_INFO _IO(VFIO_TYPE, VFIO_BASE + 21)
+/**
+ * VFIO_DEVICE_PCI_DMABUF_REVOKE - _IO(VFIO_TYPE, VFIO_BASE + 22)
+ *
+ * This ioctl is used on the device FD, and requests that access to
+ * the buffer corresponding to the DMABUF FD parameter is immediately
+ * and permanently revoked. On successful return, the buffer is not
+ * accessible through any mmap() or dma-buf import. The request fails
+ * if the buffer is pinned; otherwise, the exporter marks the buffer
+ * as inaccessible and uses the move_notify callback to inform
+ * importers of the change. The buffer is permanently disabled, and
+ * VFIO refuses all map, mmap, attach, etc. requests.
+ *
+ * Return: 0 on success, -1 and errno set on failure:
+ *
+ * ENODEV if the associated dmabuf FD no longer exists/is closed,
+ * or is not a DMABUF created for this device.
+ * EINVAL if the dmabuf_fd parameter isn't a DMABUF.
+ * EBADF if the dmabuf_fd parameter isn't a valid file number.
+ * EBADFD if the buffer has already been revoked.
+ *
+ */
+struct vfio_pci_dmabuf_revoke {
+ __u32 argsz;
+ __u32 dmabuf_fd;
+};
+
+#define VFIO_DEVICE_PCI_DMABUF_REVOKE _IO(VFIO_TYPE, VFIO_BASE + 22)
+
/*
* Upon VFIO_DEVICE_FEATURE_SET, allow the device to be moved into a low power
* state with the platform-based power management. Device use of lower power
--
2.47.3
* [PATCH 9/9] vfio/pci: Add mmap() attributes to DMABUF feature
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
` (7 preceding siblings ...)
2026-04-16 13:17 ` [PATCH 8/9] vfio/pci: Permanently revoke a DMABUF on request Matt Evans
@ 2026-04-16 13:17 ` Matt Evans
8 siblings, 0 replies; 10+ messages in thread
From: Matt Evans @ 2026-04-16 13:17 UTC
A new field is reserved in vfio_device_feature_dma_buf.flags to
request CPU-facing memory type attributes for mmap()s of the buffer.
Add a flag VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC, which results in WC
PTEs for the DMABUF's BAR region.
Signed-off-by: Matt Evans <mattev@meta.com>
---
drivers/vfio/pci/vfio_pci_dmabuf.c | 15 +++++++++++++--
drivers/vfio/pci/vfio_pci_priv.h | 1 +
include/uapi/linux/vfio.h | 12 +++++++++---
3 files changed, 23 insertions(+), 5 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index 48ec4da2db8b..00cedfe3a57d 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -43,7 +43,10 @@ static int vfio_pci_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
if (req_start + req_len > priv->size)
return -EINVAL;
- vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ if (priv->attrs == VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ else
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
/* See comments in vfio_pci_core_mmap() re VM_ALLOW_ANY_UNCACHED. */
@@ -370,6 +373,12 @@ static int validate_dmabuf_input(struct vfio_device_feature_dma_buf *dma_buf,
size_t length = 0;
u32 i;
+ if ((dma_buf->flags != 0) &&
+ ((dma_buf->flags & ~VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_MASK) ||
+ ((dma_buf->flags & VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_MASK) !=
+ VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC)))
+ return -EINVAL;
+
for (i = 0; i < dma_buf->nr_ranges; i++) {
u64 offset = dma_ranges[i].offset;
u64 len = dma_ranges[i].length;
@@ -413,7 +422,7 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
if (copy_from_user(&get_dma_buf, arg, sizeof(get_dma_buf)))
return -EFAULT;
- if (!get_dma_buf.nr_ranges || get_dma_buf.flags)
+ if (!get_dma_buf.nr_ranges)
return -EINVAL;
/*
@@ -457,6 +466,7 @@ int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
priv->vdev = vdev;
priv->nr_ranges = get_dma_buf.nr_ranges;
priv->size = length;
+ priv->attrs = get_dma_buf.flags & VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_MASK;
ret = vdev->pci_ops->get_dmabuf_phys(vdev, &priv->provider,
get_dma_buf.region_index,
priv->phys_vec, dma_ranges,
@@ -542,6 +552,7 @@ int vfio_pci_core_mmap_prep_dmabuf(struct vfio_pci_core_device *vdev,
*/
priv->vdev = vdev;
priv->nr_ranges = nr_ranges;
+ priv->attrs = 0;
priv->size = (vma->vm_pgoff << PAGE_SHIFT) + req_len;
priv->provider = pcim_p2pdma_provider(vdev->pdev, res_index);
if (!priv->provider) {
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index eac5606ca161..aeffd9f7f3b5 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -40,6 +40,7 @@ struct vfio_pci_dma_buf {
u32 nr_ranges;
struct kref kref;
struct completion comp;
+ u32 attrs;
enum vfio_pci_dma_buf_status status;
};
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 77225ed8115f..93eef95dc7f3 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1535,7 +1535,9 @@ struct vfio_device_feature_bus_master {
* etc. offset/length specify a slice of the region to create the dmabuf from.
* nr_ranges is the total number of (P2P DMA) ranges that comprise the dmabuf.
*
- * flags should be 0.
+ * flags contains:
+ * - A field selecting the userspace mapping attribute: the default (UC) is
+ *   suitable for regular MMIO; alternate attributes (such as WC) can be
+ *   selected.
*
* Return: The fd number on success, -1 and errno is set on failure.
*/
@@ -1549,8 +1551,12 @@ struct vfio_region_dma_range {
struct vfio_device_feature_dma_buf {
__u32 region_index;
__u32 open_flags;
- __u32 flags;
- __u32 nr_ranges;
+ __u32 flags;
+ /* Flags sub-field reserved for attribute enum */
+#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_MASK (0xfU << 28)
+#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_UC (0 << 28)
+#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC (1 << 28)
+ __u32 nr_ranges;
struct vfio_region_dma_range dma_ranges[] __counted_by(nr_ranges);
};
--
2.47.3
Thread overview: 10+ messages
2026-04-16 13:17 [PATCH 0/9] vfio/pci: Add mmap() for DMABUFs Matt Evans
2026-04-16 13:17 ` [PATCH 1/9] vfio/pci: Fix vfio_pci_dma_buf_cleanup() double-put Matt Evans
2026-04-16 13:17 ` [PATCH 2/9] vfio/pci: Add a helper to look up PFNs for DMABUFs Matt Evans
2026-04-16 13:17 ` [PATCH 3/9] vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA Matt Evans
2026-04-16 13:17 ` [PATCH 4/9] vfio/pci: Convert BAR mmap() to use a DMABUF Matt Evans
2026-04-16 13:17 ` [PATCH 5/9] vfio/pci: Provide a user-facing name for BAR mappings Matt Evans
2026-04-16 13:17 ` [PATCH 6/9] vfio/pci: Clean up BAR zap and revocation Matt Evans
2026-04-16 13:17 ` [PATCH 7/9] vfio/pci: Support mmap() of a VFIO DMABUF Matt Evans
2026-04-16 13:17 ` [PATCH 8/9] vfio/pci: Permanently revoke a DMABUF on request Matt Evans
2026-04-16 13:17 ` [PATCH 9/9] vfio/pci: Add mmap() attributes to DMABUF feature Matt Evans