From: "Cédric Le Goater" <clg@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Alex Williamson" <alex.williamson@redhat.com>,
	"John Levon" <john.levon@nutanix.com>,
	"John Johnson" <john.g.johnson@oracle.com>,
	"Jagannathan Raman" <jag.raman@oracle.com>,
	"Elena Ufimtseva" <elena.ufimtseva@oracle.com>,
	"Cédric Le Goater" <clg@redhat.com>,
	"Steve Sistare" <steven.sistare@oracle.com>
Subject: [PULL 09/16] vfio/container: pass MemoryRegion to DMA operations
Date: Thu,  5 Jun 2025 10:42:38 +0200
Message-ID: <20250605084245.1520562-10-clg@redhat.com>
In-Reply-To: <20250605084245.1520562-1-clg@redhat.com>

From: John Levon <john.levon@nutanix.com>

Pass the MemoryRegion through to the DMA operation handlers of vfio
containers. The vfio-user container will need this later, to translate
the vaddr into an offset for the vfio-user dma map message; CPR will
also need this.

Originally-by: John Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: John Levon <john.levon@nutanix.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Link: https://lore.kernel.org/qemu-devel/20250521215534.2688540-1-john.levon@nutanix.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
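Note (hypothetical sketch, not part of this patch): a vfio-user dma_map
handler could use the new @mr argument along these lines to recover the
file descriptor and offset needed for the vfio-user dma map message.
The function name and the message marshalling are made up for
illustration; only memory_region_get_fd() and memory_region_get_ram_ptr()
are existing QEMU API.

    static int vfio_user_dma_map(const VFIOContainerBase *bcontainer,
                                 hwaddr iova, ram_addr_t size,
                                 void *vaddr, bool readonly, MemoryRegion *mr)
    {
        /* fd backing the region's RAM block, or -1 for anonymous RAM */
        int fd = memory_region_get_fd(mr);
        /* offset of vaddr within the region's RAM block */
        uint64_t offset = (uint8_t *)vaddr -
                          (uint8_t *)memory_region_get_ram_ptr(mr);

        /* ... marshal iova/size/offset/fd into a dma map message ... */
        return 0;
    }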
 include/hw/vfio/vfio-container-base.h | 9 +++++----
 hw/vfio/container-base.c              | 4 ++--
 hw/vfio/container.c                   | 3 ++-
 hw/vfio/iommufd.c                     | 3 ++-
 hw/vfio/listener.c                    | 6 +++---
 5 files changed, 14 insertions(+), 11 deletions(-)
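Note on lifetime (simplified sketch, not part of this patch): as the
updated @dma_map documentation below states, the memory region is
referenced within an RCU read lock region across the call, so a caller
such as the IOMMU map notifier keeps @mr alive roughly like this:

    rcu_read_lock();
    /* translation yields vaddr, read_only and the backing MemoryRegion */
    ret = vfio_container_dma_map(bcontainer, iova, iotlb->addr_mask + 1,
                                 vaddr, read_only, mr);
    rcu_read_unlock();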

diff --git a/include/hw/vfio/vfio-container-base.h b/include/hw/vfio/vfio-container-base.h
index f9e561cb081b361291e72ffe228845bd6b157f92..3feb773e5f41706a310b676c088ff95a85304325 100644
--- a/include/hw/vfio/vfio-container-base.h
+++ b/include/hw/vfio/vfio-container-base.h
@@ -78,7 +78,7 @@ void vfio_address_space_insert(VFIOAddressSpace *space,
 
 int vfio_container_dma_map(VFIOContainerBase *bcontainer,
                            hwaddr iova, ram_addr_t size,
-                           void *vaddr, bool readonly);
+                           void *vaddr, bool readonly, MemoryRegion *mr);
 int vfio_container_dma_unmap(VFIOContainerBase *bcontainer,
                              hwaddr iova, ram_addr_t size,
                              IOMMUTLBEntry *iotlb, bool unmap_all);
@@ -151,20 +151,21 @@ struct VFIOIOMMUClass {
     /**
      * @dma_map
      *
-     * Map an address range into the container.
+     * Map an address range into the container. Note that the memory region is
+     * referenced within an RCU read lock region across this call.
      *
      * @bcontainer: #VFIOContainerBase to use
      * @iova: start address to map
      * @size: size of the range to map
      * @vaddr: process virtual address of mapping
      * @readonly: true if mapping should be readonly
+     * @mr: the memory region for this mapping
      *
      * Returns 0 to indicate success and -errno otherwise.
      */
     int (*dma_map)(const VFIOContainerBase *bcontainer,
                    hwaddr iova, ram_addr_t size,
-                   void *vaddr, bool readonly);
-
+                   void *vaddr, bool readonly, MemoryRegion *mr);
     /**
      * @dma_unmap
      *
diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
index 1c6ca94b6040813a63bb8fca07b25189a8d8cb76..d834bd482290a8b195f94c07832b7f8020504c3a 100644
--- a/hw/vfio/container-base.c
+++ b/hw/vfio/container-base.c
@@ -75,12 +75,12 @@ void vfio_address_space_insert(VFIOAddressSpace *space,
 
 int vfio_container_dma_map(VFIOContainerBase *bcontainer,
                            hwaddr iova, ram_addr_t size,
-                           void *vaddr, bool readonly)
+                           void *vaddr, bool readonly, MemoryRegion *mr)
 {
     VFIOIOMMUClass *vioc = VFIO_IOMMU_GET_CLASS(bcontainer);
 
     g_assert(vioc->dma_map);
-    return vioc->dma_map(bcontainer, iova, size, vaddr, readonly);
+    return vioc->dma_map(bcontainer, iova, size, vaddr, readonly, mr);
 }
 
 int vfio_container_dma_unmap(VFIOContainerBase *bcontainer,
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index a9f0dbaec4c8d83450048d9bc53b34d859731ebb..a8c76eb48165405e2b41186a266dbc69ae1b782a 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -207,7 +207,8 @@ static int vfio_legacy_dma_unmap(const VFIOContainerBase *bcontainer,
 }
 
 static int vfio_legacy_dma_map(const VFIOContainerBase *bcontainer, hwaddr iova,
-                               ram_addr_t size, void *vaddr, bool readonly)
+                               ram_addr_t size, void *vaddr, bool readonly,
+                               MemoryRegion *mr)
 {
     const VFIOContainer *container = container_of(bcontainer, VFIOContainer,
                                                   bcontainer);
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 6b2696793f82fd68c306b85407b5f98201b9f5c6..46a3b36301e9a690e04347005e64552ed0abede5 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -34,7 +34,8 @@
             TYPE_HOST_IOMMU_DEVICE_IOMMUFD "-vfio"
 
 static int iommufd_cdev_map(const VFIOContainerBase *bcontainer, hwaddr iova,
-                            ram_addr_t size, void *vaddr, bool readonly)
+                            ram_addr_t size, void *vaddr, bool readonly,
+                            MemoryRegion *mr)
 {
     const VFIOIOMMUFDContainer *container =
         container_of(bcontainer, VFIOIOMMUFDContainer, bcontainer);
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 38e3dc82cf3a3c46e4319e6245f6143749b3cccf..74958661236e230751e52dbfc7857426358dcaac 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -170,7 +170,7 @@ static void vfio_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
          */
         ret = vfio_container_dma_map(bcontainer, iova,
                                      iotlb->addr_mask + 1, vaddr,
-                                     read_only);
+                                     read_only, mr);
         if (ret) {
             error_report("vfio_container_dma_map(%p, 0x%"HWADDR_PRIx", "
                          "0x%"HWADDR_PRIx", %p) = %d (%s)",
@@ -240,7 +240,7 @@ static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
         vaddr = memory_region_get_ram_ptr(section->mr) + start;
 
         ret = vfio_container_dma_map(bcontainer, iova, next - start,
-                                     vaddr, section->readonly);
+                                     vaddr, section->readonly, section->mr);
         if (ret) {
             /* Rollback */
             vfio_ram_discard_notify_discard(rdl, section);
@@ -564,7 +564,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
     }
 
     ret = vfio_container_dma_map(bcontainer, iova, int128_get64(llsize),
-                                 vaddr, section->readonly);
+                                 vaddr, section->readonly, section->mr);
     if (ret) {
         error_setg(&err, "vfio_container_dma_map(%p, 0x%"HWADDR_PRIx", "
                    "0x%"HWADDR_PRIx", %p) = %d (%s)",
-- 
2.49.0


