From: Leon Romanovsky <leon@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: "Jens Axboe" <axboe@kernel.dk>, "Jason Gunthorpe" <jgg@ziepe.ca>,
"Robin Murphy" <robin.murphy@arm.com>,
"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
"Keith Busch" <kbusch@kernel.org>,
"Zeng, Oak" <oak.zeng@intel.com>,
"Chaitanya Kulkarni" <kch@nvidia.com>,
"Sagi Grimberg" <sagi@grimberg.me>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Logan Gunthorpe" <logang@deltatee.com>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Shameer Kolothum" <shameerali.kolothum.thodi@huawei.com>,
"Kevin Tian" <kevin.tian@intel.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Marek Szyprowski" <m.szyprowski@samsung.com>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
kvm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH v1 00/18] Provide a new two step DMA API mapping API
Date: Sun, 7 Jul 2024 12:45:46 +0300 [thread overview]
Message-ID: <20240707094546.GI6695@unreal> (raw)
In-Reply-To: <20240705063910.GA12337@lst.de>
On Fri, Jul 05, 2024 at 08:39:10AM +0200, Christoph Hellwig wrote:
> Review from the NVMe driver consumer perspective. I think if all these
> were implement we'd probably end up with less code than before the
> conversion.
Thanks for the review, I will try to address all the comments in the next version.
>
> The split between dma_iova_attrs, dma_memory_type and dma_iova_state is
> odd. I would have expected them to just be just a single object. While
> talking about this I think the domain field in dma_iova_state should
> probably be a private pointer instead of being tied to the iommu.
>
> Also do we need the attrs member in the iova_attrs structure? The
> "attrs" really are flags passed to the mapping routines that are
> per-operation and not persistent, so I'd expect them to be passed
> per-call and not stored in a structure.
It is left-over from my not-send version where I added new attribute
to indicate that dma_alloc_iova() can't support SWIOTLB to avoid
dev_use_swiotlb() mess. I will remove it.
>
> I'd also expect that the use_iova field to be in the mapping state
> and not separately provided by the driver.
>
> For nvme specific data structures I would have expected a dma_add/
> len pair in struct iod_dma_map, maybe even using a common type.
>
> Also the data structure split seems odd - I'd expect the actual
> mapping state and a small number (at least one) dma_addr/len pair
> to be inside the nvme_iod structure, and then only do the dynamic
> allocation if we need more of them because there are more segments
> and we are not using the iommu.
>
> If we had a common data structure for the dma_addr/len pairs
> dma_unlink_range could just take care of the unmap for the non-iommu
> case as well, which would be neat. I'd also expect that
> dma_free_iova would be covered by it.
Internally Jason asked for the same thing, but I didn't want to produce
asymmetric API where drivers have a call to dma_alloc_iova() but don't
have a call to dma_free_iova(). However, now, it is 2 versus 1, so I will
change it.
>
> I would have expected dma_link_range to return the dma_addr_t instead
> of poking into the iova structure in the callers.
>
> In __nvme_rq_dma_map the <= PAGE_SIZE case is pointless. In the
> existing code the reason for it is to avoid allocating and mapping the
> sg_table, but that code is still left before we even get to this code.
>
> My suggestion above to only allocate the dma_addr/len pairs when there
> is more than 1 or a few of it would allow to trivially implement that
> suggestion using the normal API without having to keep that special
> case and the dma_len parameter around.
>
> If this addes a version of dma_map_page_atttrs that directly took
> the physical address as a prep patch the callers would not have to
> bother with page pointer manipulations and just work on physical
> addresses for both the iommu and no-iommu cases. It would also help
> a little bit with the eventualy switch to store the physical address
> instead of page+offset in the bio_vec. Talking about that, I've
> been wanting to add a bvec_phys helper for to convert the
> page_phys(bv.bv_page) + bv.bv_offset calculations. This is becoming
> more urgent with more callers needing to that, I'll try to get it out
> to Jens ASAP so that it can make the 6.11 merge window.
>
> Can we make dma_start_range / dma_end_range simple no-ops for the
> non-iommu code to avoid boilerplate code in the callers to avoid
> boilerplate code in the callers to deal with the two cases?
Yes, sure.
Thanks
next prev parent reply other threads:[~2024-07-07 9:45 UTC|newest]
Thread overview: 51+ messages / expand[flat|nested] mbox.gz Atom feed top
2024-07-02 9:09 [RFC PATCH v1 00/18] Provide a new two step DMA API mapping API Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 01/18] dma-mapping: query DMA memory type Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 02/18] dma-mapping: provide an interface to allocate IOVA Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 03/18] dma-mapping: check if IOVA can be used Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 04/18] dma-mapping: implement link range API Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 05/18] mm/hmm: let users to tag specific PFN with DMA mapped bit Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 06/18] dma-mapping: provide callbacks to link/unlink HMM PFNs to specific IOVA Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 07/18] iommu/dma: Provide an interface to allow preallocate IOVA Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 08/18] iommu/dma: Implement link/unlink ranges callbacks Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 09/18] RDMA/umem: Preallocate and cache IOVA for UMEM ODP Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 10/18] RDMA/umem: Store ODP access mask information in PFN Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 11/18] RDMA/core: Separate DMA mapping to caching IOVA and page linkage Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 12/18] RDMA/umem: Prevent UMEM ODP creation with SWIOTLB Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 13/18] vfio/mlx5: Explicitly use number of pages instead of allocated length Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 14/18] vfio/mlx5: Rewrite create mkey flow to allow better code reuse Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 15/18] vfio/mlx5: Explicitly store page list Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 16/18] vfio/mlx5: Convert vfio to use DMA link API Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 17/18] block: export helper to get segment max size Leon Romanovsky
2024-07-02 9:09 ` [RFC PATCH v1 18/18] nvme-pci: use new dma API Leon Romanovsky
2024-07-04 15:23 ` Robin Murphy
2024-07-04 17:16 ` Leon Romanovsky
2024-07-05 5:58 ` Christoph Hellwig
2024-07-05 18:48 ` Leon Romanovsky
2024-07-06 6:08 ` Christoph Hellwig
2024-07-03 5:42 ` [RFC PATCH v1 00/18] Provide a new two step DMA API mapping API Christoph Hellwig
2024-07-03 10:42 ` Zhu Yanjun
2024-07-03 10:52 ` Leon Romanovsky
2024-07-03 14:35 ` Christoph Hellwig
2024-07-03 15:51 ` Leon Romanovsky
2024-07-04 7:48 ` Christoph Hellwig
2024-07-04 13:18 ` Leon Romanovsky
2024-07-05 6:00 ` Christoph Hellwig
2024-07-08 16:52 ` Jason Gunthorpe
2024-07-09 6:17 ` Christoph Hellwig
2024-07-09 18:53 ` Jason Gunthorpe
2024-07-10 6:27 ` Christoph Hellwig
2024-07-11 23:21 ` Jason Gunthorpe
2024-07-05 22:53 ` Chaitanya Kulkarni
2024-07-06 6:26 ` Christoph Hellwig
2024-07-07 9:16 ` Leon Romanovsky
2024-07-07 12:45 ` Leon Romanovsky
2024-07-05 6:39 ` Christoph Hellwig
2024-07-07 9:45 ` Leon Romanovsky [this message]
2024-07-08 23:57 ` Jason Gunthorpe
2024-07-09 6:20 ` Christoph Hellwig
2024-07-09 19:03 ` Jason Gunthorpe
2024-07-10 6:22 ` Christoph Hellwig
2024-07-11 23:29 ` Jason Gunthorpe
2024-07-12 4:54 ` Christoph Hellwig
2024-07-12 12:42 ` Jason Gunthorpe
2024-07-13 5:24 ` Christoph Hellwig
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20240707094546.GI6695@unreal \
--to=leon@kernel.org \
--cc=akpm@linux-foundation.org \
--cc=alex.williamson@redhat.com \
--cc=axboe@kernel.dk \
--cc=bhelgaas@google.com \
--cc=hch@lst.de \
--cc=iommu@lists.linux.dev \
--cc=jgg@ziepe.ca \
--cc=jglisse@redhat.com \
--cc=joro@8bytes.org \
--cc=kbusch@kernel.org \
--cc=kch@nvidia.com \
--cc=kevin.tian@intel.com \
--cc=kvm@vger.kernel.org \
--cc=linux-block@vger.kernel.org \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=linux-nvme@lists.infradead.org \
--cc=linux-pci@vger.kernel.org \
--cc=linux-rdma@vger.kernel.org \
--cc=logang@deltatee.com \
--cc=m.szyprowski@samsung.com \
--cc=oak.zeng@intel.com \
--cc=robin.murphy@arm.com \
--cc=sagi@grimberg.me \
--cc=shameerali.kolothum.thodi@huawei.com \
--cc=will@kernel.org \
--cc=yishaih@nvidia.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).