From: Barry Song <21cnbao@gmail.com>
To: maz@kernel.org
Cc: 21cnbao@gmail.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linuxarm@huawei.com,
	song.bao.hua@hisilicon.com, tglx@linutronix.de, will@kernel.org
Subject: Re: [PATCH] irqchip/gic-v3: use dsb(ishst) to synchronize data to smp before issuing ipi
Date: Sun, 20 Feb 2022 12:46:00 +1300	[thread overview]
Message-ID: <20220219234600.304774-1-21cnbao@gmail.com> (raw)
In-Reply-To: <6432e7e97b828d887da8794c150161c4@kernel.org>

>> +	dsb(ishst);
>> 
>>  	for_each_cpu(cpu, mask) {
>>  		u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(cpu_logical_map(cpu));
>
> I'm not opposed to that change, but I'm pretty curious whether this 
> makes
> any visible difference in practice. Could you measure the effect of this 
> change
> for any sort of IPI heavy workload?
> 
> Thanks,
> 
>          M.

In practice, at least I don't see much difference on the hardware I am
using, so the result probably depends on the implementation of the real
hardware.

I wrote a micro-benchmark to measure the latency with and without the patch
on kunpeng920 with 96 cores (2 sockets, each socket has 2 dies, each die has
24 cores; cpu0-cpu47 belong to socket0, cpu48-cpu95 belong to socket1) by
sending IPIs to cpu0-cpu95 1000 times from a specified source CPU:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/timekeeping.h>

/* empty IPI payload: we only measure the cost of delivering the IPI */
static void ipi_latency_func(void *val)
{
}

static int __init ipi_latency_init(void)
{
        ktime_t stime, etime, delta;
        int cpu, i;
        /* insmod is pinned with taskset, so this is the source cpu */
        int start = raw_smp_processor_id();

        stime = ktime_get();
        for (i = 0; i < 1000; i++)
                for (cpu = 0; cpu < 96; cpu++)
                        smp_call_function_single(cpu, ipi_latency_func,
                                                 NULL, 1);
        etime = ktime_get();

        delta = ktime_sub(etime, stime);

        printk("%s ipi from cpu%d to cpu0-95 delta of 1000times:%lld\n",
                        __func__, start, delta);

        return 0;
}
module_init(ipi_latency_init);

static void __exit ipi_latency_exit(void)
{
}
module_exit(ipi_latency_exit);

MODULE_DESCRIPTION("IPI benchmark");
MODULE_LICENSE("GPL");

Then I did the following 10 times:
# taskset -c 0 insmod test.ko
# rmmod test

and the following 10 times:
# taskset -c 48 insmod test.ko
# rmmod test

With taskset -c, I can choose which source CPU sends the IPIs.
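
To make the ten runs per source CPU easier to repeat, the whole sequence can
be scripted roughly as below (just a sketch; it assumes the module is built
as test.ko in the current directory and that the deltas are read back from
dmesg afterwards):

for src in 0 48; do
        for i in $(seq 1 10); do
                # pin insmod so module_init() runs on cpu $src
                taskset -c $src insmod test.ko
                rmmod test
        done
done
# the ten deltas for each source cpu end up in the kernel log
dmesg | grep "ipi from cpu"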

The results are as below:

vanilla kernel:
[  103.391684] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122237009
[  103.537256] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121364329
[  103.681276] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121420160
[  103.826254] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122392403
[  103.970209] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122371262
[  104.113879] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122041254
[  104.257444] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121594453
[  104.402432] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122592556
[  104.561434] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121601214
[  104.705561] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121732767

[  124.592944] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147048939
[  124.779280] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147467842
[  124.958162] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146448676
[  125.129253] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:141537482
[  125.298848] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147161504
[  125.471531] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147833787
[  125.643133] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147438445
[  125.814530] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146806172
[  125.989677] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145971002
[  126.159497] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147780655

patched kernel:
[  428.828167] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122195849
[  428.970822] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122361042
[  429.111058] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122528494
[  429.257704] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121155045
[  429.410186] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121608565
[  429.570171] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121613673
[  429.718181] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121593737
[  429.862615] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121953875
[  430.002796] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122102741
[  430.142741] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122005473

[  516.642812] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145610926
[  516.817002] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145878266
[  517.004665] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145602966
[  517.188758] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145658672
[  517.372409] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:141329497
[  517.557313] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146323829
[  517.733107] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146015196
[  517.921491] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146439231
[  518.093129] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146106916
[  518.264162] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145097868

So there is not much difference between the vanilla and patched kernels.
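For a rough sense of scale, ~122ms for 1000 x 96 synchronous calls is about
1.27us per smp_call_function_single() from socket0, versus ~146ms, i.e. about
1.52us per call, from socket1.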

What really worries me about my hardware is that IPIs sent from the second socket
always show worse performance than those sent from the first socket. This seems
to be a problem worth investigating.

Thanks
Barry
