From: Jason Gunthorpe <jgg@nvidia.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
Christoph Hellwig <hch@lst.de>, Maor Gottlieb <maorg@nvidia.com>,
Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>,
linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>,
Yishai Hadas <yishaih@nvidia.com>,
Zhu Yanjun <zyjzyj2000@gmail.com>
Subject: Re: [PATCH rdma-next 2/2] RDMA: Use dma_map_sgtable for map umem pages
Date: Wed, 23 Jun 2021 09:10:05 -0300 [thread overview]
Message-ID: <20210623121005.GK2371267@nvidia.com> (raw)
In-Reply-To: <YNLFRa75KQ+BO4rB@unreal>
On Wed, Jun 23, 2021 at 08:23:17AM +0300, Leon Romanovsky wrote:
> On Tue, Jun 22, 2021 at 10:18:16AM -0300, Jason Gunthorpe wrote:
> > On Tue, Jun 22, 2021 at 02:39:42PM +0300, Leon Romanovsky wrote:
> >
> > > diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> > > index 0eb40025075f..a76ef6a6bac5 100644
> > > +++ b/drivers/infiniband/core/umem.c
> > > @@ -51,11 +51,11 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
> > > struct scatterlist *sg;
> > > unsigned int i;
> > >
> > > - if (umem->nmap > 0)
> > > - ib_dma_unmap_sg(dev, umem->sg_head.sgl, umem->sg_nents,
> > > - DMA_BIDIRECTIONAL);
> > > + if (dirty)
> > > + ib_dma_unmap_sgtable_attrs(dev, &umem->sg_head,
> > > + DMA_BIDIRECTIONAL, 0);
> > >
> > > - for_each_sg(umem->sg_head.sgl, sg, umem->sg_nents, i)
> > > + for_each_sgtable_dma_sg(&umem->sg_head, sg, i)
> > > unpin_user_page_range_dirty_lock(sg_page(sg),
> > > DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);
> >
> > This isn't right, can't mix sg_page with a _dma_ API
>
> Jason, why is that?
>
> We use same pages that were passed to __sg_alloc_table_from_pages() in __ib_umem_get().
An sgl has two lists inside it: a 'dma' list and a 'page' list. They are
not the same length and are not interchangeable.
If you use for_each_sgtable_dma_sg() then you iterate over the 'dma'
list and have to use 'dma' accessors.
If you use for_each_sgtable_sg() then you iterate over the 'page' list
and have to use 'page' accessors.
Mixing dma iteration with page accessors, or vice versa, like above, is
always a bug.
You can also see it because the old code used umem->sg_nents, which is
the CPU list length, while this new code is using the dma list length.
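Concretely, the two valid pairings look something like this (an
illustrative kernel-style sketch, not code from the patch; sgt stands
in for the sg_table being walked):

```c
struct scatterlist *sg;
int i;

/* Iterating the DMA list: only the dma accessors are valid here. */
for_each_sgtable_dma_sg(sgt, sg, i) {
	dma_addr_t addr = sg_dma_address(sg);	/* dma accessor: OK */
	unsigned int len = sg_dma_len(sg);	/* dma accessor: OK */
	/* calling sg_page(sg) in this loop is the bug above */
}

/* Iterating the CPU/page list: only the page accessors are valid. */
for_each_sgtable_sg(sgt, sg, i) {
	struct page *pg = sg_page(sg);		/* page accessor: OK */
	unsigned int len = sg->length;		/* CPU length: OK */
}
```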
Jason
2021-06-22 11:39 [PATCH rdma-next 0/2] SG fix together with update to RDMA umem Leon Romanovsky
2021-06-22 11:39 ` [PATCH rdma-next 1/2] lib/scatterlist: Fix wrong update of orig_nents Leon Romanovsky
2021-06-22 11:39 ` [PATCH rdma-next 2/2] RDMA: Use dma_map_sgtable for map umem pages Leon Romanovsky
2021-06-22 13:18 ` Jason Gunthorpe
2021-06-23 5:23 ` Leon Romanovsky
2021-06-23 12:10 ` Jason Gunthorpe [this message]
2021-06-23 13:12 ` Leon Romanovsky