From: Christoph Hellwig <hch@lst.de>
To: Chuck Lever <cel@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
Leon Romanovsky <leon@kernel.org>, Christoph Hellwig <hch@lst.de>,
NeilBrown <neilb@ownmail.net>, Jeff Layton <jlayton@kernel.org>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org,
Chuck Lever <chuck.lever@oracle.com>
Subject: Re: [PATCH v2 2/4] RDMA/core: use IOVA-based DMA mapping for bvec RDMA operations
Date: Wed, 21 Jan 2026 09:51:59 +0100
Message-ID: <20260121085159.GB16458@lst.de>
In-Reply-To: <20260120143124.1822121-3-cel@kernel.org>
On Tue, Jan 20, 2026 at 09:31:22AM -0500, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> The bvec RDMA API maps each bvec individually via dma_map_phys(),
> requiring an IOTLB sync for each mapping. For large I/O operations
> with many bvecs, this overhead becomes significant.
>
> The two-step IOVA API (dma_iova_try_alloc / dma_iova_link /
> dma_iova_sync) allocates a contiguous IOVA range upfront, links
> all physical pages without IOTLB syncs, then performs a single
> sync at the end. This reduces IOTLB flushes from O(n) to O(1).
... and requires only a single output dma_addr_t compared to extra
per-input element storage in struct scatterlist.
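
For anyone reading along who hasn't used the new API yet, the overall
flow looks roughly like this -- just a sketch of the calling
convention, not the actual rw.c code; first_bv, total_len, dma_dev and
dir are the variables from the patch:

	struct dma_iova_state state = { };
	struct bvec_iter it = *iter;	/* local copy of the iterator */
	size_t off = 0;
	int ret;

	/* one IOVA allocation covering the whole transfer */
	if (!dma_iova_try_alloc(dma_dev, &state, bvec_phys(&first_bv),
				total_len))
		return -EOPNOTSUPP;	/* fall back to per-page mapping */

	/* link each physical segment, no per-segment IOTLB sync */
	while (it.bi_size) {
		struct bio_vec bv = mp_bvec_iter_bvec(bvec, it);

		ret = dma_iova_link(dma_dev, &state, bvec_phys(&bv),
				    off, bv.bv_len, dir, 0);
		if (ret)
			goto destroy;
		off += bv.bv_len;
		bvec_iter_advance_single(bvec, &it, bv.bv_len);
	}

	/* a single IOTLB sync covering the whole mapped range */
	ret = dma_iova_sync(dma_dev, &state, 0, off);
	if (ret)
		goto destroy;
	return 0;

destroy:
	dma_iova_destroy(dma_dev, &state, off, dir, 0);
	return ret;
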
> + const struct bio_vec *bvec, u32 nr_bvec,
> + struct bvec_iter *iter,
> + u64 remote_addr, u32 rkey, enum dma_data_direction dir)
Same minor nits as for the previous patch apply here as well.
> + struct ib_device *dev = qp->pd->device;
> + struct device *dma_dev = dev->dma_device;
> + struct bvec_iter link_iter;
> + struct bio_vec first_bv;
> + size_t total_len, mapped_len = 0;
> + int ret;
> +
> + /* Virtual DMA devices lack IOVA allocators */
> + if (ib_uses_virt_dma(dev))
> + return -EOPNOTSUPP;
Not only lacks it, but fundamentally can't support it.
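Maybe word the comment that way, e.g.:

	/* virtual DMA devices fundamentally can't support the IOVA API */
	if (ib_uses_virt_dma(dev))
		return -EOPNOTSUPP;
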
> + total_len = iter->bi_size;
I'd just initialize this at declaration time.
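I.e.:

	size_t total_len = iter->bi_size, mapped_len = 0;
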
> + /* Get the first (possibly offset-adjusted) bvec for starting phys addr */
I think this comment is kinda out of date now, as the offset adjustment
is transparently done by the bvec helpers, and there's no visible concept
of a start phys addr.
> + first_bv = mp_bvec_iter_bvec(bvec, *iter);
I'd also initialize first_bv at declaration time.  The compilers are
smart enough to defer the work past the virtual dma check.
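I.e.:

	struct bio_vec first_bv = mp_bvec_iter_bvec(bvec, *iter);
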
> + struct bio_vec bv = mp_bvec_iter_bvec(bvec, link_iter);
> +
> + ret = dma_iova_link(dma_dev, &ctx->iova.state, bvec_phys(&bv),
> + mapped_len, bv.bv_len, dir, 0);
> + if (ret)
> + goto out_destroy;
> +
> + if (check_add_overflow(mapped_len, bv.bv_len, &mapped_len)) {
> + ret = -EOVERFLOW;
> + goto out_destroy;
> + }
Do the overflow check before calling dma_iova_link as it's kinda
pointless to continue with that operation. But then again, I don't
really think we need the overflow check at all. The length is known
beforehand in bi_size, which is a u32, while mapped_len is a size_t,
so we can't really overflow here at all.
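I.e. the loop body can shrink to something like:

	ret = dma_iova_link(dma_dev, &ctx->iova.state, bvec_phys(&bv),
			    mapped_len, bv.bv_len, dir, 0);
	if (ret)
		goto out_destroy;
	mapped_len += bv.bv_len;
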
> + /*
> + * Try IOVA-based mapping first for multi-bvec transfers.
> + * This reduces IOTLB sync overhead by batching all mappings.
> + */
> + ret = rdma_rw_init_iova_wrs_bvec(ctx, qp, bvec, nr_bvec, &iter,
> + remote_addr, rkey, dir);
> + if (ret != -EOPNOTSUPP)
> + return ret;
> +
> + /* Fallback path requires iterator at initial state */
> + iter.bi_sector = 0;
> + iter.bi_size = total_len;
> + iter.bi_idx = 0;
> + iter.bi_bvec_done = offset;
rdma_rw_init_iova_wrs_bvec already avoids advancing the passed in
iter, and this rebuilds it. In addition, rdma_rw_init_iova_wrs_bvec
only returns -EOPNOTSUPP before advancing even the local iter.
So I think both the local iter copy in rdma_rw_init_iova_wrs_bvec
and this can go away. But it would be useful to capture that
rdma_rw_init_iova_wrs_bvec must leave the iter unmodified when
returning -EOPNOTSUPP in a comment.
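Something like this at the call site (untested), with the comment
carrying the contract:

	/*
	 * rdma_rw_init_iova_wrs_bvec() must leave *iter untouched when it
	 * returns -EOPNOTSUPP so that the fallback below can keep using it.
	 */
	ret = rdma_rw_init_iova_wrs_bvec(ctx, qp, bvec, nr_bvec, &iter,
					 remote_addr, rkey, dir);
	if (ret != -EOPNOTSUPP)
		return ret;
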
Thread overview: 15+ messages
2026-01-20 14:31 [PATCH v2 0/4] Add a bio_vec based API to core/rw.c Chuck Lever
2026-01-20 14:31 ` [PATCH v2 1/4] RDMA/core: add bio_vec based RDMA read/write API Chuck Lever
2026-01-21 8:42 ` Christoph Hellwig
2026-01-21 8:48 ` Leon Romanovsky
2026-01-21 8:57 ` Christoph Hellwig
2026-01-21 10:16 ` Leon Romanovsky
2026-01-21 8:56 ` Christoph Hellwig
2026-01-21 14:14 ` Chuck Lever
2026-01-21 14:57 ` Christoph Hellwig
2026-01-21 15:10 ` Chuck Lever
2026-01-20 14:31 ` [PATCH v2 2/4] RDMA/core: use IOVA-based DMA mapping for bvec RDMA operations Chuck Lever
2026-01-21 8:51 ` Christoph Hellwig [this message]
2026-01-20 14:31 ` [PATCH v2 3/4] RDMA/core: add MR support for bvec-based " Chuck Lever
2026-01-21 9:05 ` Christoph Hellwig
2026-01-20 14:31 ` [PATCH v2 4/4] svcrdma: use bvec-based RDMA read/write API Chuck Lever