From: arnd@linaro.org (Arnd Bergmann)
To: linux-arm-kernel@lists.infradead.org
Subject: NVMe vs DMA addressing limitations
Date: Thu, 12 Jan 2017 12:56:07 +0100
Message-ID: <3306663.hKmLLq1hhl@wuerfel>
In-Reply-To: <80676b35-121b-0462-23fc-ed5608e1e671@grimberg.me>
On Thursday, January 12, 2017 12:09:11 PM CET Sagi Grimberg wrote:
> >> Another workaround we might need is to limit the amount of concurrent DMA
> >> in the NVMe driver based on some platform quirk. The way that NVMe works,
> >> it can have very large amounts of data that is concurrently mapped into
> >> the device.
> >
> > That's not really just NVMe - other storage and network controllers also
> > can DMA map giant amounts of memory. There are a couple of aspects to it:
> >
> > - dma coherent memory - right now NVMe doesn't use too much of it,
> > but upcoming low-end NVMe controllers will soon start to require
> > fairly large amounts of it for the host memory buffer feature that
> > allows for DRAM-less controller designs. As an interesting quirk,
> > that memory is only used by the PCIe device, and never accessed
> > by the Linux host at all.
>
> Would it make sense to convert the nvme driver to use normal allocations
> and use the DMA streaming APIs (dma_sync_single_for_[cpu|device]) for
> both queues and future HMB?
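A rough sketch of the pattern you are describing, with illustrative
names (the real nvme queue setup obviously differs):

```c
/* Sketch only: queue memory from a normal allocation, mapped with
 * the streaming DMA API instead of dma_alloc_coherent().
 */
void *cq = kmalloc(cq_size, GFP_KERNEL);
dma_addr_t cq_dma = dma_map_single(dev, cq, cq_size, DMA_FROM_DEVICE);

/* before the CPU polls completion entries written by the device: */
dma_sync_single_for_cpu(dev, cq_dma, cq_size, DMA_FROM_DEVICE);
/* ... read CQEs ... */

/* hand the ring back to the device for further writes: */
dma_sync_single_for_device(dev, cq_dma, cq_size, DMA_FROM_DEVICE);
```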
That is an interesting question: we actually have
DMA_ATTR_NO_KERNEL_MAPPING for this case, and ARM implements it
in the coherent interface, so that might be a good fit.
Implementing it in the streaming API makes no sense, since there
we already have a kernel mapping, but using a normal allocation
(possibly with DMA_ATTR_NON_CONSISTENT or DMA_ATTR_SKIP_CPU_SYNC,
I'd need to check) might help on other architectures that have
limited amounts of coherent memory and no CMA.
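For reference, using the coherent interface with
DMA_ATTR_NO_KERNEL_MAPPING for the host memory buffer would look
roughly like this (names are illustrative; hmb_size would be
whatever the controller requests):

```c
/* Sketch only: a buffer the CPU never touches, so no kernel
 * mapping is needed.  The returned cookie is opaque and only
 * valid for dma_free_attrs(), not for CPU access.
 */
dma_addr_t hmb_dma;
void *cookie = dma_alloc_attrs(dev, hmb_size, &hmb_dma, GFP_KERNEL,
			       DMA_ATTR_NO_KERNEL_MAPPING);

/* pass hmb_dma to the controller, e.g. in the Set Features
 * (Host Memory Buffer) command */

dma_free_attrs(dev, hmb_size, cookie, hmb_dma,
	       DMA_ATTR_NO_KERNEL_MAPPING);
```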
Another benefit of the coherent API for this kind of buffer is
that we can use CMA where available to get a large consecutive
chunk of RAM on architectures without an IOMMU when normal
memory is no longer available because of fragmentation.
Arnd