From: Leon Romanovsky <leon@kernel.org>
To: Baolu Lu <baolu.lu@linux.intel.com>
Cc: "Marek Szyprowski" <m.szyprowski@samsung.com>,
	"Jens Axboe" <axboe@kernel.dk>, "Christoph Hellwig" <hch@lst.de>,
	"Keith Busch" <kbusch@kernel.org>, "Jake Edge" <jake@lwn.net>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Zhu Yanjun" <zyjzyj2000@gmail.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Logan Gunthorpe" <logang@deltatee.com>,
	"Yishai Hadas" <yishaih@nvidia.com>,
	"Shameer Kolothum" <shameerali.kolothum.thodi@huawei.com>,
	"Kevin Tian" <kevin.tian@intel.com>,
	"Alex Williamson" <alex.williamson@redhat.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, "Niklas Schnelle" <schnelle@linux.ibm.com>,
	"Chuck Lever" <chuck.lever@oracle.com>,
	"Luis Chamberlain" <mcgrof@kernel.org>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Kanchan Joshi" <joshi.k@samsung.com>,
	"Chaitanya Kulkarni" <kch@nvidia.com>
Subject: Re: [PATCH v10 06/24] iommu/dma: Factor out a iommu_dma_map_swiotlb helper
Date: Tue, 29 Apr 2025 09:18:49 +0300	[thread overview]
Message-ID: <20250429061849.GL5848@unreal> (raw)
In-Reply-To: <9d1abdbc-4b21-47e2-bcaf-6bc8ca365b01@linux.intel.com>

On Tue, Apr 29, 2025 at 01:58:06PM +0800, Baolu Lu wrote:
> On 4/29/25 13:53, Leon Romanovsky wrote:
> > On Tue, Apr 29, 2025 at 12:58:18PM +0800, Baolu Lu wrote:
> > > On 4/28/25 17:22, Leon Romanovsky wrote:
> > > > From: Christoph Hellwig <hch@lst.de>
> > > > 
> > > > Split the iommu logic from iommu_dma_map_page into a separate helper.
> > > > This not only keeps the code neatly separated, but will also allow for
> > > > reuse in another caller.
> > > > 
> > > > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > > > Tested-by: Jens Axboe <axboe@kernel.dk>
> > > > Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> > > > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > > 
> > > Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> > > 
> > > with a nit below ...
> > > 
> > > > ---
> > > >    drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++-----------------
> > > >    1 file changed, 41 insertions(+), 32 deletions(-)
> > > > 
> > > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > > index d3211a8d755e..d7684024c439 100644
> > > > --- a/drivers/iommu/dma-iommu.c
> > > > +++ b/drivers/iommu/dma-iommu.c
> > > > @@ -1138,6 +1138,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
> > > >    			arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
> > > >    }
> > > > +static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
> > > > +		size_t size, enum dma_data_direction dir, unsigned long attrs)
> > > > +{
> > > > +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > > > +	struct iova_domain *iovad = &domain->iova_cookie->iovad;
> > > > +
> > > > +	if (!is_swiotlb_active(dev)) {
> > > > +		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
> > > > +		return (phys_addr_t)DMA_MAPPING_ERROR;
> > > > +	}
> > > > +
> > > > +	trace_swiotlb_bounced(dev, phys, size);
> > > > +
> > > > +	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
> > > > +			attrs);
> > > > +
> > > > +	/*
> > > > +	 * Untrusted devices should not see padding areas with random leftover
> > > > +	 * kernel data, so zero the pre- and post-padding.
> > > > +	 * swiotlb_tbl_map_single() has initialized the bounce buffer proper to
> > > > +	 * the contents of the original memory buffer.
> > > > +	 */
> > > > +	if (phys != (phys_addr_t)DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
> > > > +		size_t start, virt = (size_t)phys_to_virt(phys);
> > > > +
> > > > +		/* Pre-padding */
> > > > +		start = iova_align_down(iovad, virt);
> > > > +		memset((void *)start, 0, virt - start);
> > > > +
> > > > +		/* Post-padding */
> > > > +		start = virt + size;
> > > > +		memset((void *)start, 0, iova_align(iovad, start) - start);
> > > > +	}
> > > > +
> > > > +	return phys;
> > > > +}
> > > > +
> > > >    dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> > > >    	      unsigned long offset, size_t size, enum dma_data_direction dir,
> > > >    	      unsigned long attrs)
> > > > @@ -1151,42 +1188,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> > > >    	dma_addr_t iova, dma_mask = dma_get_mask(dev);
> > > >    	/*
> > > > -	 * If both the physical buffer start address and size are
> > > > -	 * page aligned, we don't need to use a bounce page.
> > > > +	 * If both the physical buffer start address and size are page aligned,
> > > > +	 * we don't need to use a bounce page.
> > > >    	 */
> > > >    	if (dev_use_swiotlb(dev, size, dir) &&
> > > >    	    iova_offset(iovad, phys | size)) {
> > > > -		if (!is_swiotlb_active(dev)) {
> > > 
> > > ... Is it better to move this check into the helper? It could simply
> > > be a no-op if a bounce page is not needed:
> > > 
> > > 	if (!dev_use_swiotlb(dev, size, dir) ||
> > > 	    !iova_offset(iovad, phys | size))
> > > 		return phys;
> > 
> > Am I missing something? iommu_dma_map_page() has more code after this
> > check, so it is not correct to return immediately:
> > 
> >    1189 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> >    1190               unsigned long offset, size_t size, enum dma_data_direction dir,
> >    1191               unsigned long attrs)
> >    1192 {
> > 
> > <...>
> > 
> >    1201         /*
> >    1202          * If both the physical buffer start address and size are page aligned,
> >    1203          * we don't need to use a bounce page.
> >    1204          */
> >    1205         if (dev_use_swiotlb(dev, size, dir) &&
> >    1206             iova_unaligned(iovad, phys, size)) {
> >    1207                 phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
> >    1208                 if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
> >    1209                         return DMA_MAPPING_ERROR;
> >    1210         }
> >    1211
> >    1212         if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> >    1213                 arch_sync_dma_for_device(phys, size, dir);
> >    1214
> >    1215         iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
> >    1216         if (iova == DMA_MAPPING_ERROR)
> >    1217                 swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> >    1218         return iova;
> >    1219 }
> 
> static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
> 		size_t size, enum dma_data_direction dir, unsigned long attrs)
> {
> <...>
> 	/*
> 	 * If both the physical buffer start address and size are page aligned,
> 	 * we don't need to use a bounce page.
> 	 */
> 	if (!dev_use_swiotlb(dev, size, dir) ||
> 	    !iova_offset(iovad, phys | size))
> 		return phys;
> <...>
> }
> 
> Then,
> 
> dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> 	unsigned long offset, size_t size, enum dma_data_direction dir,
> 	unsigned long attrs)
> {
> <...>
> 	phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
> 	if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
> 		return DMA_MAPPING_ERROR;
> <...>
> }

Such a change would cause an extra function call for everyone who doesn't
use SWIOTLB (RDMA, HMM, etc.).

In addition, iommu_dma_map_swiotlb() is called through
dma_iova_link() ->
	iommu_dma_iova_link_swiotlb() ->
		iommu_dma_iova_bounce_and_link() ->
			iommu_dma_map_swiotlb()
and dma_iova_link() already performs this "if (dev_use_swiotlb(dev, size, dir) && iova_unaligned(iovad, phys, size))"
check very early in the call stack.

So, with the suggested change, dma_iova_link() would end up doing the same
check twice.
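
To make the duplication concrete, here is a rough sketch of that path
(the helper names are the ones from this series; the bodies are abbreviated
and illustrative, not the actual v10 code):

int dma_iova_link(struct device *dev, struct dma_iova_state *state,
		phys_addr_t phys, size_t offset, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	<...>
	/* Check #1, performed once, early in the call stack */
	if (dev_use_swiotlb(dev, size, dir) &&
	    iova_unaligned(iovad, phys, size))
		/*
		 * -> iommu_dma_iova_link_swiotlb()
		 *      -> iommu_dma_iova_bounce_and_link()
		 *           -> iommu_dma_map_swiotlb(), which would then
		 *              repeat the very same test (check #2) if the
		 *              suggested early return moved into it.
		 */
		return iommu_dma_iova_link_swiotlb(dev, state, phys,
				offset, size, dir, attrs);
	<...>
}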

Thanks

> 
> Thanks,
> baolu
> 
