From: Marc Zyngier <marc.zyngier@arm.com>
To: Ganapatrao Kulkarni <gklkml16@gmail.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>,
LKML <linux-kernel@vger.kernel.org>,
shankerd@codeaurora.org,
Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>,
Robert Richter <Robert.Richter@cavium.com>,
Hanjun Guo <guohanjun@huawei.com>,
John Garry <john.garry@huawei.com>,
linux-arm-kernel@lists.infradead.org,
Linuxarm <linuxarm@huawei.com>,
"Nair, Jayachandran" <Jayachandran.Nair@cavium.com>,
gkulkarni@marvell.com
Subject: Re: [PATCH v3] irqchip: gicv3-its: Use NUMA aware memory allocation for ITS tables
Date: Fri, 11 Jan 2019 09:01:30 +0000 [thread overview]
Message-ID: <6abeadae-99d7-bdb0-28bb-eb472cb7d783@arm.com> (raw)
In-Reply-To: <CAKTKpr4uvQtYKrcAMSoaA-sYv2MNCqZgRb2KQhYa23KaYptaUA@mail.gmail.com>
On 11/01/2019 03:53, Ganapatrao Kulkarni wrote:
> Hi Shameer,
>
> Patch looks OK to me, please feel free to add,
> Reviewed-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
>
> On Thu, Dec 13, 2018 at 5:25 PM Marc Zyngier <marc.zyngier@arm.com> wrote:
>>
>> On 13/12/2018 10:59, Shameer Kolothum wrote:
>>> From: Shanker Donthineni <shankerd@codeaurora.org>
>>>
>>> The NUMA node information is visible to the ITS driver but is not
>>> used for anything other than handling hardware errata. ITS/GICR
>>> hardware accesses to the local NUMA node are usually quicker than
>>> accesses to a remote node; how much slower the remote accesses are
>>> depends on the implementation details.
>>>
>>> This patch allocates memory for the ITS management tables and the
>>> command queue from the corresponding NUMA node, using the appropriate
>>> NUMA-aware functions. This change improves ITS table read latency on
>>> systems with more than one ITS block and slower inter-node accesses.
>>>
>>> Apache web server benchmarking using the ab tool on a HiSilicon D06
>>> board with multiple NUMA memory nodes shows improvements of ~3.6% in
>>> both time per request and transfer rate with this patch.
>>>
>>> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
>>> Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
>>> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
>>> ---
>>>
>>> This is to revive the patch originally sent by Shanker[1] and
>>> to back it up with a benchmark test. Any further testing of
>>> this is most welcome.
>>>
>>> v2-->v3
>>> -Addressed comments to use page_address().
>>> -Added Benchmark results to commit log.
>>> -Removed T-by from Ganapatrao for now.
>>>
>>> v1-->v2
>>> -Edited commit text.
>>> -Added Ganapatrao's tested-by.
>>>
>>> Benchmark test details:
>>> --------------------------------
>>> Test Setup:
>>> -D06 with DIMMs on nodes 0 (Sock#0) and 3 (Sock#1).
>>> -ITS belongs to NUMA node 0.
>>> -Filesystem mounted on a PCIe NVMe based disk.
>>> -Apache server installed on D06.
>>> -Running the ab benchmark in concurrency mode from a remote machine
>>> connected to D06 via an hns3 (PCIe) network port:
>>> "ab -k -c 750 -n 2000000 http://10.202.225.188/"
>>>
>>> Test results are avg. of 15 runs.
>>>
>>> For 4.20-rc1 Kernel,
>>> ----------------------------
>>> Time per request(mean, concurrent) = 0.02753[ms]
>>> Transfer Rate = 416501[Kbytes/sec]
>>>
>>> For 4.20-rc1 + this patch,
>>> ----------------------------------
>>> Time per request(mean, concurrent) = 0.02653[ms]
>>> Transfer Rate = 431954[Kbytes/sec]
>>>
>>> % improvement ~3.6%
>>>
>>> vmstat shows around 170K-200K interrupts per second.
>>>
>>> ~# vmstat 1 -w
>>> procs -----memory----- ... -system-
>>>  r  b  swpd      free  ...      in
>>>  5  0     0  30166724  ...  102794
>>>  9  0     0  30141828  ...  171148
>>>  5  0     0  30150160  ...  207185
>>> 13  0     0  30145924  ...  175691
>>> 15  0     0  30140792  ...  145250
>>> 13  0     0  30135556  ...  201879
>>> 13  0     0  30134864  ...  192391
>>> 10  0     0  30133632  ...  168880
>>> ....
>>>
>>> [1] https://patchwork.kernel.org/patch/9833339/
>>
>> The figures certainly look convincing. I'd need someone from Cavium to
>> benchmark it on their hardware and come back with results so that we can
>> make a decision on this.
>
> Hi Marc,
> My setup got altered during the lab migration from the Cavium to the
> Marvell office. I don't think I will have the same setup anytime soon.
Fair enough. If nobody objects, I'll take it.
Shameer, please repost this on top of 5.0-rc1, together with
Ganapatrao's RB, and we'll take it from there.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
Thread overview: 6+ messages
2018-12-13 10:59 [PATCH v3] irqchip: gicv3-its: Use NUMA aware memory allocation for ITS tables Shameer Kolothum
2018-12-13 11:54 ` Marc Zyngier
2019-01-11 3:53 ` Ganapatrao Kulkarni
2019-01-11 9:01 ` Marc Zyngier [this message]
2019-01-11 9:42 ` Suzuki K Poulose
2019-01-11 10:34 ` Shameerali Kolothum Thodi