From: Peter Zijlstra <peterz@infradead.org>
To: Chen Yu <yu.c.chen@intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
Ingo Molnar <mingo@redhat.com>,
Juri Lelli <juri.lelli@redhat.com>,
Mel Gorman <mgorman@techsingularity.net>,
Tim Chen <tim.c.chen@intel.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
Abel Wu <wuyun.abel@bytedance.com>,
Yicong Yang <yangyicong@hisilicon.com>,
"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
Honglei Wang <wanghonglei@didichuxing.com>,
Len Brown <len.brown@intel.com>, Chen Yu <yu.chen.surf@gmail.com>,
Tianchen Ding <dtcccc@linux.alibaba.com>,
Joel Fernandes <joel@joelfernandes.org>,
Josh Don <joshdon@google.com>, Hillf Danton <hdanton@sina.com>,
kernel test robot <yujie.liu@intel.com>,
Arjan Van De Ven <arjan.van.de.ven@intel.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 2/2] sched/fair: Introduce SIS_SHORT to wake up short task on current CPU
Date: Wed, 15 Mar 2023 16:25:52 +0100 [thread overview]
Message-ID: <20230315152552.GF2006103@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <373e6886e274f198608fa1b5f1c254e32b43845d.1677069490.git.yu.c.chen@intel.com>
On Wed, Feb 22, 2023 at 10:09:55PM +0800, Chen Yu wrote:
> will-it-scale
> =============
> case             load        baseline  compare%
> context_switch1  224 groups  1.00      +946.68%
>
> There is a huge improvement in the fast context switch test case, especially
> when the number of groups equals the number of CPUs.
>
> netperf
> =======
> case    load         baseline(std%)  compare%( std%)
> TCP_RR  56-threads   1.00 (  1.12)    -0.05 (  0.97)
> TCP_RR  112-threads  1.00 (  0.50)    +0.31 (  0.35)
> TCP_RR  168-threads  1.00 (  3.46)    +5.50 (  2.08)
> TCP_RR  224-threads  1.00 (  2.52)  +665.38 (  3.38)
> TCP_RR  280-threads  1.00 ( 38.59)   +22.12 ( 11.36)
> TCP_RR  336-threads  1.00 ( 15.88)    -0.00 ( 19.96)
> TCP_RR  392-threads  1.00 ( 27.22)    +0.26 ( 24.26)
> TCP_RR  448-threads  1.00 ( 37.88)    +0.04 ( 27.87)
> UDP_RR  56-threads   1.00 (  2.39)    -0.36 (  8.33)
> UDP_RR  112-threads  1.00 ( 22.62)    -0.65 ( 24.66)
> UDP_RR  168-threads  1.00 ( 15.72)    +3.97 (  5.02)
> UDP_RR  224-threads  1.00 ( 15.90)  +134.98 ( 28.59)
> UDP_RR  280-threads  1.00 ( 32.43)    +0.26 ( 29.68)
> UDP_RR  336-threads  1.00 ( 39.21)    -0.05 ( 39.71)
> UDP_RR  392-threads  1.00 ( 31.76)    -0.22 ( 32.00)
> UDP_RR  448-threads  1.00 ( 44.90)    +0.06 ( 31.83)
>
> There is a significant 600+% improvement for TCP_RR and 100+% for UDP_RR
> when the number of threads equals the number of CPUs.
>
> tbench
> ======
> case      load         baseline(std%)  compare%( std%)
> loopback  56-threads   1.00 (  0.15)    +0.88 (  0.08)
> loopback  112-threads  1.00 (  0.06)    -0.41 (  0.52)
> loopback  168-threads  1.00 (  0.17)   +45.42 ( 39.54)
> loopback  224-threads  1.00 ( 36.93)   +24.10 (  0.06)
> loopback  280-threads  1.00 (  0.04)    -0.04 (  0.04)
> loopback  336-threads  1.00 (  0.06)    -0.16 (  0.14)
> loopback  392-threads  1.00 (  0.05)    +0.06 (  0.02)
> loopback  448-threads  1.00 (  0.07)    -0.02 (  0.07)
>
> There is no noticeable impact on tbench, although there is run-to-run
> variance in the 168/224-thread cases, with or without this patch applied.
So there is a very narrow, but significant, win at 4x overload.

What about 3x/5x overload? Those show only very marginal gains.

So these patches are brilliant if you run at exactly 4x overload, and
very meh otherwise.

Why do we care about 4x overload?
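For readers parsing the tables above: baseline throughput is normalized to
1.00, compare% is the mean relative change of the patched kernel, and std%
is the coefficient of variation across repeated runs. A minimal sketch of
that computation (the `summarize` helper is hypothetical, for illustration
only; it is not part of the patch or of the benchmark suites):

```python
import statistics

def summarize(baseline_runs, compare_runs):
    """Reduce two sets of per-run throughput samples to the table format:
    compare% = mean relative change vs. baseline, and std% = coefficient
    of variation (stdev / mean * 100) for each set of runs."""
    base_mean = statistics.mean(baseline_runs)
    cmp_mean = statistics.mean(compare_runs)
    compare_pct = (cmp_mean / base_mean - 1.0) * 100
    base_std_pct = statistics.stdev(baseline_runs) / base_mean * 100
    cmp_std_pct = statistics.stdev(compare_runs) / cmp_mean * 100
    return compare_pct, base_std_pct, cmp_std_pct
```

A large compare% is only meaningful when it dwarfs both std% columns, which
is why the +665.38% TCP_RR result at 3.38% variance stands out while the
high-variance tbench rows do not.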