From: Leon Romanovsky <leon@kernel.org>
To: "Bjorn Helgaas" <bhelgaas@google.com>,
"Logan Gunthorpe" <logang@deltatee.com>,
"Jens Axboe" <axboe@kernel.dk>,
"Robin Murphy" <robin.murphy@arm.com>,
"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
"Marek Szyprowski" <m.szyprowski@samsung.com>,
"Jason Gunthorpe" <jgg@ziepe.ca>,
"Leon Romanovsky" <leon@kernel.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Jonathan Corbet" <corbet@lwn.net>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Christian König" <christian.koenig@amd.com>,
"Kees Cook" <kees@kernel.org>,
"Gustavo A. R. Silva" <gustavoars@kernel.org>,
"Ankit Agrawal" <ankita@nvidia.com>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Shameer Kolothum" <skolothumtho@nvidia.com>,
"Kevin Tian" <kevin.tian@intel.com>,
"Alex Williamson" <alex@shazbot.org>
Cc: Krishnakant Jaju <kjaju@nvidia.com>, Matt Ochs <mochs@nvidia.com>,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, iommu@lists.linux.dev,
linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
linaro-mm-sig@lists.linaro.org, kvm@vger.kernel.org,
linux-hardening@vger.kernel.org, Alex Mastro <amastro@fb.com>,
Nicolin Chen <nicolinc@nvidia.com>
Subject: [PATCH v8 04/11] PCI/P2PDMA: Provide access to the pci_p2pdma_map_type() function
Date: Tue, 11 Nov 2025 11:57:46 +0200
Message-ID: <20251111-dmabuf-vfio-v8-4-fd9aa5df478f@nvidia.com>
In-Reply-To: <20251111-dmabuf-vfio-v8-0-fd9aa5df478f@nvidia.com>
From: Leon Romanovsky <leonro@nvidia.com>
Provide access to the pci_p2pdma_map_type() function so that subsystems
can determine the appropriate mapping type for P2PDMA transfers between
a provider and a target device.
pci_p2pdma_map_type() is the core P2P-layer counterpart of the existing
public, struct-page-focused pci_p2pdma_state() function, and it returns
the same result. It is required for using the P2P subsystem from drivers
that don't use the struct page layer.
Like __pci_p2pdma_update_state(), it is not an exported function. The
idea is that only subsystem code will implement mapping helpers that
take in phys_addr_t lists; this is deliberately not made accessible
to every driver, to prevent abuse.
Following patches will use this function to implement a shared DMA
mapping helper for DMABUF.
Tested-by: Alex Mastro <amastro@fb.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/pci/p2pdma.c | 14 ++++++--
include/linux/pci-p2pdma.h | 85 +++++++++++++++++++++++++---------------------
2 files changed, 58 insertions(+), 41 deletions(-)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 855d3493634c..981a76b6b7c0 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -1060,8 +1060,18 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
}
EXPORT_SYMBOL_GPL(pci_p2pmem_publish);
-static enum pci_p2pdma_map_type
-pci_p2pdma_map_type(struct p2pdma_provider *provider, struct device *dev)
+/**
+ * pci_p2pdma_map_type - Determine the mapping type for P2PDMA transfers
+ * @provider: P2PDMA provider structure
+ * @dev: Target device for the transfer
+ *
+ * Determines how peer-to-peer DMA transfers should be mapped between
+ * the provider and the target device. The mapping type indicates whether
+ * the transfer can be done directly through PCI switches or must go
+ * through the host bridge.
+ */
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct p2pdma_provider *provider,
+ struct device *dev)
{
enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
struct pci_dev *pdev = to_pci_dev(provider->owner);
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 15471252817b..517e121d2598 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -26,6 +26,45 @@ struct p2pdma_provider {
u64 bus_offset;
};
+enum pci_p2pdma_map_type {
+ /*
+ * PCI_P2PDMA_MAP_UNKNOWN: Used internally as an initial state before
+ * the mapping type has been calculated. Exported routines for the API
+ * will never return this value.
+ */
+ PCI_P2PDMA_MAP_UNKNOWN = 0,
+
+ /*
+ * Not a PCI P2PDMA transfer.
+ */
+ PCI_P2PDMA_MAP_NONE,
+
+ /*
+ * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
+ * traverse the host bridge and the host bridge is not in the
+ * allowlist. DMA Mapping routines should return an error when
+ * this is returned.
+ */
+ PCI_P2PDMA_MAP_NOT_SUPPORTED,
+
+ /*
+ * PCI_P2PDMA_MAP_BUS_ADDR: Indicates that two devices can talk to
+ * each other directly through a PCI switch and the transaction will
+ * not traverse the host bridge. Such a mapping should program
+ * the DMA engine with PCI bus addresses.
+ */
+ PCI_P2PDMA_MAP_BUS_ADDR,
+
+ /*
+ * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
+ * to each other, but the transaction traverses a host bridge on the
+ * allowlist. In this case, a normal mapping either with CPU physical
+ * addresses (in the case of dma-direct) or IOVA addresses (in the
+ * case of IOMMUs) should be used to program the DMA engine.
+ */
+ PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
+};
+
#ifdef CONFIG_PCI_P2PDMA
int pcim_p2pdma_init(struct pci_dev *pdev);
struct p2pdma_provider *pcim_p2pdma_provider(struct pci_dev *pdev, int bar);
@@ -45,6 +84,8 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
bool *use_p2pdma);
ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
bool use_p2pdma);
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct p2pdma_provider *provider,
+ struct device *dev);
#else /* CONFIG_PCI_P2PDMA */
static inline int pcim_p2pdma_init(struct pci_dev *pdev)
{
@@ -106,6 +147,11 @@ static inline ssize_t pci_p2pdma_enable_show(char *page,
{
return sprintf(page, "none\n");
}
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_map_type(struct p2pdma_provider *provider, struct device *dev)
+{
+ return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+}
#endif /* CONFIG_PCI_P2PDMA */
@@ -120,45 +166,6 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client)
return pci_p2pmem_find_many(&client, 1);
}
-enum pci_p2pdma_map_type {
- /*
- * PCI_P2PDMA_MAP_UNKNOWN: Used internally as an initial state before
- * the mapping type has been calculated. Exported routines for the API
- * will never return this value.
- */
- PCI_P2PDMA_MAP_UNKNOWN = 0,
-
- /*
- * Not a PCI P2PDMA transfer.
- */
- PCI_P2PDMA_MAP_NONE,
-
- /*
- * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
- * traverse the host bridge and the host bridge is not in the
- * allowlist. DMA Mapping routines should return an error when
- * this is returned.
- */
- PCI_P2PDMA_MAP_NOT_SUPPORTED,
-
- /*
- * PCI_P2PDMA_MAP_BUS_ADDR: Indicates that two devices can talk to
- * each other directly through a PCI switch and the transaction will
- * not traverse the host bridge. Such a mapping should program
- * the DMA engine with PCI bus addresses.
- */
- PCI_P2PDMA_MAP_BUS_ADDR,
-
- /*
- * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
- * to each other, but the transaction traverses a host bridge on the
- * allowlist. In this case, a normal mapping either with CPU physical
- * addresses (in the case of dma-direct) or IOVA addresses (in the
- * case of IOMMUs) should be used to program the DMA engine.
- */
- PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
-};
-
struct pci_p2pdma_map_state {
struct p2pdma_provider *mem;
enum pci_p2pdma_map_type map;
--
2.51.1
2025-11-11 9:57 [PATCH v8 00/11] vfio/pci: Allow MMIO regions to be exported through dma-buf Leon Romanovsky
2025-11-11 9:57 ` [PATCH v8 01/11] PCI/P2PDMA: Separate the mmap() support from the core logic Leon Romanovsky
2025-11-11 9:57 ` [PATCH v8 02/11] PCI/P2PDMA: Simplify bus address mapping API Leon Romanovsky
2025-11-11 9:57 ` [PATCH v8 03/11] PCI/P2PDMA: Refactor to separate core P2P functionality from memory allocation Leon Romanovsky
2025-11-11 9:57 ` Leon Romanovsky [this message]
2025-11-11 9:57 ` [PATCH v8 05/11] PCI/P2PDMA: Document DMABUF model Leon Romanovsky
2025-11-19 9:18 ` Christian König
2025-11-19 13:13 ` Leon Romanovsky
2025-11-19 13:35 ` Jason Gunthorpe
2025-11-19 14:06 ` Christian König
2025-11-19 19:45 ` Jason Gunthorpe
2025-11-19 20:45 ` Leon Romanovsky
2025-11-11 9:57 ` [PATCH v8 06/11] dma-buf: provide phys_vec to scatter-gather mapping routine Leon Romanovsky
2025-11-18 23:02 ` Jason Gunthorpe
2025-11-19 0:06 ` Nicolin Chen
2025-11-19 13:32 ` Leon Romanovsky
2025-11-19 5:54 ` Tian, Kevin
2025-11-19 13:30 ` Leon Romanovsky
2025-11-19 13:37 ` Jason Gunthorpe
2025-11-19 13:45 ` Leon Romanovsky
2025-11-19 13:16 ` [Linaro-mm-sig] " Christian König
2025-11-19 13:25 ` Jason Gunthorpe
2025-11-19 13:42 ` Christian König
2025-11-19 13:48 ` Leon Romanovsky
2025-11-19 19:31 ` Jason Gunthorpe
2025-11-19 20:54 ` Leon Romanovsky
2025-11-20 7:08 ` Christian König
2025-11-20 7:41 ` Leon Romanovsky
2025-11-20 7:54 ` Christian König
2025-11-20 8:06 ` Leon Romanovsky
2025-11-20 8:32 ` Christian König
2025-11-20 8:42 ` Leon Romanovsky
2025-11-20 13:20 ` Jason Gunthorpe
2025-11-19 13:42 ` Leon Romanovsky
2025-11-19 14:11 ` Christian König
2025-11-19 14:50 ` Leon Romanovsky
2025-11-19 14:53 ` Christian König
2025-11-19 15:41 ` Leon Romanovsky
2025-11-19 16:33 ` Leon Romanovsky
2025-11-20 7:03 ` Christian König
2025-11-20 7:38 ` Leon Romanovsky
2025-11-19 19:36 ` Jason Gunthorpe
2025-11-11 9:57 ` [PATCH v8 07/11] vfio: Export vfio device get and put registration helpers Leon Romanovsky
2025-11-18 7:10 ` Tian, Kevin
2025-11-11 9:57 ` [PATCH v8 08/11] vfio/pci: Share the core device pointer while invoking feature functions Leon Romanovsky
2025-11-18 7:11 ` Tian, Kevin
2025-11-11 9:57 ` [PATCH v8 09/11] vfio/pci: Enable peer-to-peer DMA transactions by default Leon Romanovsky
2025-11-18 7:18 ` Tian, Kevin
2025-11-18 20:10 ` Alex Williamson
2025-11-19 0:01 ` Tian, Kevin
2025-11-18 20:18 ` Keith Busch
2025-11-19 0:02 ` Tian, Kevin
2025-11-19 13:54 ` Leon Romanovsky
2025-11-11 9:57 ` [PATCH v8 10/11] vfio/pci: Add dma-buf export support for MMIO regions Leon Romanovsky
2025-11-18 7:33 ` Tian, Kevin
2025-11-18 14:28 ` Jason Gunthorpe
2025-11-18 23:56 ` Tian, Kevin
2025-11-19 19:41 ` Jason Gunthorpe
2025-11-19 20:50 ` Leon Romanovsky
2025-11-11 9:57 ` [PATCH v8 11/11] vfio/nvgrace: Support get_dmabuf_phys Leon Romanovsky
2025-11-18 7:34 ` Tian, Kevin
2025-11-18 7:59 ` Ankit Agrawal
2025-11-18 14:30 ` Jason Gunthorpe