From: Jason Gunthorpe <jgg@ziepe.ca>
To: Logan Gunthorpe <logang@deltatee.com>
Cc: Christoph Hellwig <hch@lst.de>,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
linux-rdma@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
Bjorn Helgaas <bhelgaas@google.com>,
Dan Williams <dan.j.williams@intel.com>,
Sagi Grimberg <sagi@grimberg.me>, Keith Busch <kbusch@kernel.org>,
Stephen Bates <sbates@raithlin.com>
Subject: Re: [RFC PATCH 00/28] Removing struct page from P2PDMA
Date: Fri, 28 Jun 2019 14:29:26 -0300
Message-ID: <20190628172926.GA3877@ziepe.ca>
In-Reply-To: <8022a2a4-4069-d256-11da-e6d9b2ffbf60@deltatee.com>
On Fri, Jun 28, 2019 at 10:22:06AM -0600, Logan Gunthorpe wrote:
> > Why not? If we have a 'bar info' structure that could have data
> > transfer op callbacks; in fact, I think we might already have similar
> > callbacks for migrating to/from DEVICE_PRIVATE memory with DMA..
>
> Well, it could in theory be done, but it just seems wrong to set up and
> wait for more DMA requests while we are in the middle of setting up
> another DMA request, especially when the block layer has historically
> had issues with stack sizes. It's also possible you might have multiple
> bio_vecs that each have to do a migration, and with a hook here they'd
> have to be done serially.
*shrug* this is just standard bounce buffering stuff...
> > I think the best reason to prefer a uniform phys_addr_t is that it
> > does give us the option to copy the data to/from CPU memory. That
> > option goes away as soon as the bio sometimes provides a dma_addr_t.
>
> Not really. phys_addr_t alone doesn't give us a way to copy data. You
> need a lookup table on that address and a couple of hooks.
Yes, I'm not sure how you envision using phys_addr_t without a
lookup.. At the end of the day we must get the src and target 'struct
device' in the dma_map area (at a minimum to compute the offset to
translate phys_addr_t to dma_addr_t), and the only way to do that from
a phys_addr_t is via a lookup?
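To make that concrete, here is roughly the shape I have in mind - a
standalone sketch only, none of these names exist anywhere, and the
list walk just stands in for whatever the real lookup structure ends
up being:

/* Hypothetical sketch only - none of these names exist in the tree. */
#include <stddef.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;
typedef uint64_t dma_addr_t;

struct device;                          /* provider of the BAR */

/* One entry per registered P2P BAR ("bar info"). */
struct p2p_bar_info {
	struct device *provider;        /* device that owns the BAR */
	phys_addr_t phys_base;          /* CPU physical base of the BAR */
	dma_addr_t bus_base;            /* bus address base seen by peers */
	size_t size;
	struct p2p_bar_info *next;
};

static struct p2p_bar_info *p2p_bar_list;  /* filled at registration time */

/* The unavoidable lookup: phys_addr_t -> covering bar info. */
static struct p2p_bar_info *p2p_bar_lookup(phys_addr_t phys)
{
	struct p2p_bar_info *bi;

	for (bi = p2p_bar_list; bi; bi = bi->next)
		if (phys >= bi->phys_base && phys < bi->phys_base + bi->size)
			return bi;
	return NULL;            /* plain host memory, map as usual */
}

/* dma_map side: translate phys_addr_t to a peer-visible dma_addr_t. */
static int p2p_map_phys(phys_addr_t phys, dma_addr_t *out)
{
	struct p2p_bar_info *bi = p2p_bar_lookup(phys);

	if (!bi)
		return -1;      /* caller falls back to the normal path */
	*out = bi->bus_base + (phys - bi->phys_base);
	return 0;
}

Whether that lookup is a list, an interval tree or a hash is exactly
the layout question further down.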
> > At least for RDMA, we do have some cases (like siw/rxe, hfi) where
> > they sometimes need to do that copy. I suspect the block stack is
> > similar, in the general case.
>
> But the whole point of the use cases I'm trying to serve is to avoid the
> root complex.
Well, I think this is sort of a separate issue. Generically I think
the dma layer should continue to work largely transparently, and if I
feed in BAR memory that can't be P2P'd it should bounce, just like
all the other DMA limitations it already supports. That is pretty much
its whole purpose in life.
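Conceptually something like this (purely hypothetical names again,
with malloc/memcpy standing in for the real bounce buffer allocation
and for whatever copy op a 'bar info' would carry):

/* Hypothetical, self-contained sketch of the transparent-bounce idea:
 * if the DMA device can't reach the BAR, copy through host memory. */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint64_t phys_addr_t;

struct bar_region {
	void *cpu_vaddr;        /* CPU mapping of the BAR (stand-in) */
	phys_addr_t phys_base;
	size_t size;
};

/* Stand-in for the real topology check (p2pdma distance, ACS, etc.). */
static bool peer_reachable(const struct bar_region *bar, int dma_dev_id)
{
	(void)bar; (void)dma_dev_id;
	return false;           /* pretend the peer path is blocked */
}

/* Returns the buffer the caller should DMA map: either the BAR itself
 * or a freshly allocated host bounce buffer holding a copy. */
static void *map_or_bounce(struct bar_region *bar, phys_addr_t phys,
			   size_t len, int dma_dev_id)
{
	size_t off = phys - bar->phys_base;

	if (peer_reachable(bar, dma_dev_id))
		return (char *)bar->cpu_vaddr + off;    /* direct P2P */

	void *bounce = malloc(len);             /* host bounce buffer */
	if (bounce)
		memcpy(bounce, (char *)bar->cpu_vaddr + off, len);
	return bounce;
}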
The issue of having the caller optimize what it sends is kind of
separate - yes, you definitely still need the egress DMA device to
drive CMB buffer selection, and DEVICE_PRIVATE also needs it to decide
if it should migrate or not.
What I see as the question is how to layout the BIO.
If we agree the bio should only have phys_addr_t then we need some
'bar info' (ie at least the offset) in the dma map and some 'bar info'
(ie the DMA device) during the bio construction.
What you are trying to do is optimize the passing of that 'bar info'
with a limited number of bits in the BIO.
A single flag means an interval tree, 4-8 bits could build a probably
O(1) hash lookup, 64 bits could store a pointer, etc.
If we can spare 4-8 bits in the bio then I suggest a 'perfect hash
table': assign each registered P2P 'bar info' a small 4-bit id and
hash on that. It should be fast enough to not worry about the double
lookup.
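Sketched out (hypothetical names, nothing in the tree looks like
this): registration hands out the small id, the bio carries it in its
spare bits, and the dma map side is a direct array index:

/* Hypothetical sketch of the 4-bit id / perfect-hash idea. */
#include <stdint.h>

#define P2P_ID_BITS     4
#define P2P_MAX_BARS    (1u << P2P_ID_BITS)     /* 16 registered BARs */

struct p2p_bar_slot {
	uint64_t phys_base;     /* CPU physical base of the BAR */
	uint64_t bus_base;      /* bus address base for peer access */
	uint64_t size;
	int used;
};

static struct p2p_bar_slot p2p_slots[P2P_MAX_BARS];

/* Registration: hand out the next free 4-bit id, or -1 if full. */
static int p2p_register_bar(uint64_t phys_base, uint64_t bus_base,
			    uint64_t size)
{
	for (unsigned int id = 0; id < P2P_MAX_BARS; id++) {
		if (!p2p_slots[id].used) {
			p2p_slots[id] = (struct p2p_bar_slot){
				.phys_base = phys_base,
				.bus_base  = bus_base,
				.size      = size,
				.used      = 1,
			};
			return (int)id;  /* stored in the bio's spare bits */
		}
	}
	return -1;
}

/* dma_map side: the id from the bio indexes the table directly. */
static uint64_t p2p_phys_to_bus(unsigned int id, uint64_t phys)
{
	const struct p2p_bar_slot *s = &p2p_slots[id & (P2P_MAX_BARS - 1)];

	return s->bus_base + (phys - s->phys_base);
}

The second lookup is just an array index off the id, which is why I
don't think the double lookup matters.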
Jason