From: Cho KyongHo <pullip.cho@samsung.com>
To: Will Deacon <will@kernel.org>
Cc: janghyuck.kim@samsung.com, catalin.marinas@arm.com,
joro@8bytes.org, linux-kernel@vger.kernel.org,
hyesoo.yu@samsung.com, iommu@lists.linux-foundation.org,
robin.murphy@arm.com, linux-arm-kernel@lists.infradead.org,
m.szyprowski@samsung.com
Subject: Re: [PATCH 1/2] dma-mapping: introduce relaxed version of dma sync
Date: Wed, 19 Aug 2020 10:24:05 +0900 [thread overview]
Message-ID: <20200819012405.GA130135@KEI> (raw)
In-Reply-To: <20200818100756.GA15543@willie-the-truck>
On Tue, Aug 18, 2020 at 11:07:57AM +0100, Will Deacon wrote:
> On Tue, Aug 18, 2020 at 06:37:39PM +0900, Cho KyongHo wrote:
> > On Tue, Aug 18, 2020 at 09:28:53AM +0100, Will Deacon wrote:
> > > On Tue, Aug 18, 2020 at 04:43:10PM +0900, Cho KyongHo wrote:
> > > > Cache maintenance operations in most CPU architectures need a
> > > > memory barrier after the maintenance for DMA masters to view the
> > > > affected memory region correctly. The problem is that a memory
> > > > barrier is very expensive, and dma_[un]map_sg() and
> > > > dma_sync_sg_for_{device|cpu}() issue a memory barrier per sg
> > > > entry. In some CPU micro-architectures, a single memory barrier
> > > > consumes more time than a cache clean of 4KiB. It becomes more
> > > > serious as the number of CPU cores grows.
> > >
> > > Have you got higher-level performance data for this change? It's more likely
> > > that the DSB is what actually forces the prior cache maintenance to
> > > complete,
> >
> > This patch does not skip necessary DSB after cache maintenance. It just
> > remove repeated dsb per every single sg entry and call dsb just once
> > after cache maintenance on all sg entries is completed.
>
> Yes, I realise that, but what I'm saying is that a big part of your
> justification for this change is:
>
> | The problem is that a memory barrier is very expensive, and
> | dma_[un]map_sg() and dma_sync_sg_for_{device|cpu}() issue a memory
> | barrier per sg entry. In some CPU micro-architectures, a single memory
> | barrier consumes more time than a cache clean of 4KiB.
>
> and my point is that the DSB is likely completing the cache maintenance,
> so as cache maintenance instructions retire faster in the micro-architecture,
> the DSB becomes absolutely slower. In other words, it doesn't make much
> sense to me to compare the cost of the DSB with the cost of the cache
> maintenance; what matters more is the cost of the high-level unmap()
> operation for the sglist.
>
I now understand your point. But I still believe that the repeated DSBs in
the middle of the cache maintenance waste CPU cycles. Avoiding that
redundancy adds some complexity to the implementation of the DMA API, but
I think it is worthwhile.
> > > so it's important to look at the bigger picture, not just the
> > > apparent relative cost of these instructions.
> > >
> > If you mean bigger picture is the performance impact of this patch to a
> > complete user scenario, we are evaluating it in some latency-sensitive
> > scenarios. But I wonder if a performance gain in a platform/SoC-specific
> > scenario is also persuasive.
>
> Latency is fine too, but phrasing the numbers (and we really need those)
> in terms of things like "The interrupt response time for this in-tree
> driver is improved by xxx ns (yy %) after this change" or "Throughput
> for this in-tree driver goes from xxx mb/s to yyy mb/s" would be really
> helpful.
>
Unfortunately, we have no in-tree driver to demonstrate the performance gain.
Instead, we evaluated the speed of dma_sync_sg_for_device() to measure the
improvement from this patch.
For example, a Cortex-A55 in our 2-cluster, big-mid-little system gains 28%
(130.9 usec -> 94.5 usec) while running dma_sync_sg_for_device(sg, nents,
DMA_TO_DEVICE) with nents = 256 and each sg entry 4KiB long.
Let me describe the detailed performance results in the next patch
series, which will also fix some errata in the commit messages.
> > > Also, it's a miracle that non-coherent DMA even works,
> >
> > I am sorry, Will. I don't understand this. Can you let me know what
> > you mean by the above sentence?
>
> Non-coherent DMA sucks for software.
I agree. But due to the hardware cost, proposals for coherent DMA are
always challenging.
> For the most part, Linux does a nice
> job of hiding this from device drivers, and I think _that_ is the primary
> concern, rather than performance. If performance is a problem, then the
> solution is cache coherence or a shared non-cacheable buffer (rather than
> the streaming API).
We are also trying to use non-cacheable buffers for non-coherent DMA.
But the problem with non-cacheable buffers is slow CPU access.
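The trade-off between the two alternatives Will mentions can be sketched with the standard DMA-API calls. This is a non-runnable kernel fragment for illustration only; the device pointer and buffer size are made up, and real drivers would add error handling.

```c
#include <linux/dma-mapping.h>

#define BUF_SIZE (64 * 1024)	/* illustrative size */

/* Option A: coherent buffer (non-cacheable on most non-coherent arm64
 * systems). No per-transfer cache maintenance or barriers, but CPU
 * reads and writes are slow because they bypass the cache. */
static void *alloc_coherent_buf(struct device *dev, dma_addr_t *dma)
{
	return dma_alloc_coherent(dev, BUF_SIZE, dma, GFP_KERNEL);
}

/* Option B: streaming mapping of a cacheable buffer. CPU access is
 * fast, but every ownership transfer pays the cache maintenance plus
 * the barriers discussed in this thread. */
static dma_addr_t map_streaming_buf(struct device *dev, void *cpu_buf)
{
	return dma_map_single(dev, cpu_buf, BUF_SIZE, DMA_TO_DEVICE);
}
```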
>
> > > so I'm not sure
> > > that we should be complicating the implementation like this to try to
> > > make it "fast".
> > >
> > I agree that this patch makes the implementation of the DMA API a bit
> > more complex, but I don't think it seriously impacts its complexity.
>
> It's death by a thousand cuts; this patch further fragments the architecture
> backends and leads to arm64-specific behaviour which consequently won't get
> well tested by anybody else. Now, it might be worth it, but there's not
> enough information here to make that call.
> Will
>