From: thunder.leizhen@huawei.com (Leizhen (ThunderTown))
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 0/8] io-pgtable lock removal
Date: Mon, 26 Jun 2017 21:19:40 +0800 [thread overview]
Message-ID: <595109EC.5000201@huawei.com> (raw)
In-Reply-To: <15e7ce0a-bf4b-cc77-3600-c37ed865a4d7@huawei.com>
On 2017/6/26 21:12, John Garry wrote:
>
>>>
>>> I saw Will has already sent the pull request. But, FWIW, we are seeing
>>> roughly the same performance as with the v1 patchset. For the PCI NIC, Zhou
>>> again found the performance drop goes from ~15% to 8% with the SMMU enabled,
>>> and for the integrated storage controller [platform device] we still see a
>>> drop of about 50%, depending on data rates (Leizhen has been working on
>>> fixing this).
>>
>> Thanks for confirming. Following Joerg's suggestion that the storage
>> workloads may still depend on rbtree performance - it had slipped my
>> mind that even with small block sizes those could well be grouped into
>> scatterlists large enough to trigger a >64-page IOVA allocation - I've
>> taken the liberty of cooking up a simplified version of Leizhen's rbtree
>> optimisation series in the iommu/iova branch of my tree. I'll follow up
>> on that after the merge window, but if anyone wants to play with it in
>> the meantime feel free.
The main problem is lock contention on the command queue. I have prepared my
patchset and will send it later.
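
To make the contention pattern concrete, here is a minimal sketch of where that
serialisation comes from, assuming a single spinlock guarding command-queue
insertion (names and layout are illustrative, not the actual arm-smmu-v3 code):

#include <linux/spinlock.h>
#include <linux/types.h>

#define Q_DEPTH	1024	/* illustrative ring size */

/*
 * Every CPU that unmaps a buffer has to post a TLB invalidation command
 * through the one shared queue, so the queue lock remains a bottleneck
 * even once the io-pgtable walks themselves are lock-free.
 */
struct cmdq {
	spinlock_t	lock;		/* shared by all CPUs */
	u32		prod;
	u64		ents[Q_DEPTH];	/* command ring */
};

static void cmdq_issue(struct cmdq *q, u64 *cmd, int n_dw)
{
	unsigned long flags;
	int i;

	spin_lock_irqsave(&q->lock, flags);	/* contention point */
	for (i = 0; i < n_dw; i++)		/* copy the command into the ring */
		q->ents[(q->prod + i) % Q_DEPTH] = cmd[i];
	q->prod += n_dw;	/* publish; real code would also kick the hardware */
	spin_unlock_irqrestore(&q->lock, flags);
}

The sketch only shows the shape of the problem; reducing how often, or for how
long, that lock is held per unmap is what the upcoming patchset is aimed at.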
>
> Just a reminder that we also saw poor performance with our integrated NIC on your v1 patchset (I can push for v2 patchset testing, but expect the same).
>
> We might now be able to include an LSI 3108 PCI SAS card in our testing as well, to give a broader set of results.
>
> John
>
>>
>> Robin.
--
Thanks!
Best Regards
Thread overview: 22+ messages
2017-06-22 15:53 [PATCH v2 0/8] io-pgtable lock removal Robin Murphy
2017-06-22 15:53 ` [PATCH v2 1/8] iommu/io-pgtable-arm-v7s: Check table PTEs more precisely Robin Murphy
2017-06-22 15:53 ` [PATCH v2 2/8] iommu/io-pgtable-arm: Improve split_blk_unmap Robin Murphy
2017-06-22 15:53 ` [PATCH v2 3/8] iommu/io-pgtable-arm-v7s: Refactor split_blk_unmap Robin Murphy
2017-06-22 15:53 ` [PATCH v2 4/8] iommu/io-pgtable: Introduce explicit coherency Robin Murphy
2017-06-22 15:53 ` [PATCH v2 5/8] iommu/io-pgtable-arm: Support lockless operation Robin Murphy
2017-06-23 5:53 ` Linu Cherian
2017-06-23 8:56 ` Linu Cherian
2017-06-23 10:35 ` Robin Murphy
2017-06-23 11:34 ` Linu Cherian
2017-06-27 5:11 ` Linu Cherian
2017-06-27 8:39 ` Will Deacon
2017-06-27 9:08 ` Linu Cherian
2017-06-22 15:53 ` [PATCH v2 6/8] iommu/io-pgtable-arm-v7s: " Robin Murphy
2017-06-22 15:53 ` [PATCH v2 7/8] iommu/arm-smmu: Remove io-pgtable spinlock Robin Murphy
2017-06-22 15:53 ` [PATCH v2 8/8] iommu/arm-smmu-v3: " Robin Murphy
2017-06-23 8:47 ` [PATCH v2 0/8] io-pgtable lock removal John Garry
2017-06-23 9:58 ` Robin Murphy
2017-06-26 11:35 ` John Garry
2017-06-26 12:31 ` Robin Murphy
2017-06-26 13:12 ` John Garry
2017-06-26 13:19 ` Leizhen (ThunderTown) [this message]