From: David Laight <David.Laight@ACULAB.COM>
To: 'Steven Rostedt' <rostedt@goodmis.org>
Cc: 'Vincent Guittot' <vincent.guittot@linaro.org>,
Peter Zijlstra <peterz@infradead.org>,
Viresh Kumar <viresh.kumar@linaro.org>,
Ingo Molnar <mingo@redhat.com>,
Juri Lelli <juri.lelli@redhat.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: RE: sched/fair: scheduler not running high priority process on idle cpu
Date: Wed, 15 Jan 2020 17:07:29 +0000 [thread overview]
Message-ID: <ab54668ad13d48da8aa43f955631ef9e@AcuMS.aculab.com> (raw)
In-Reply-To: <20200115103049.06600f6e@gandalf.local.home>
From Steven Rostedt
> Sent: 15 January 2020 15:31
...
> > For this case an idle cpu doing a unlocked check for a processes that has
> > been waiting 'ages' to preempt the running process may not be too
> > expensive.
>
> How do you measure a process waiting for ages on another CPU? And then
> by the time you get the information to pull it, there's always the race
> that the process will get the chance to run. And if you think about it,
> by looking for a process waiting for a long time, it is likely it will
> start to run because "ages" means it's probably close to being released.
Without a CBU (Crystal Ball Unit) you can always be unlucky.
But once you get past the 'normal' delays for a system call you probably
get an exponential (or is it logarithmic?) distribution, and the additional
delay is likely to be at least some fraction of the time the process has already waited.
It's not entirely the same problem (and something I still need to look at
further), but this is a histogram of the time taken (in ns) to send on a raw IPv4 socket:
0k: 1874462617
96k: 260350
160k: 30771
224k: 14812
288k: 770
352k: 593
416k: 489
480k: 368
544k: 185
608k: 63
672k: 27
736k: 6
800k: 1
864k: 2
928k: 3
992k: 4
1056k: 1
1120k: 0
1184k: 1
1248k: 1
1312k: 2
1376k: 3
1440k: 1
1504k: 1
1568k: 1
1632k: 4
1696k: 0 (this and the next 4 buckets)
2016k: 1
2080k: 0
2144k: 1
total: 1874771078, average 32k
I've improved it no end by using per-thread sockets and setting
the socket write queue size large.
But there are still some places where it takes > 600us.
The top end is rather more linear than one might expect.
> > I presume the locks are in place for the migrate itself.
>
> Note, by grabbing locks on another CPU will incur overhead on that
> other CPU. I've seen huge latency caused by doing just this.
I'd have thought this would only be significant if the cache line
ends up being used by both cpus?
> > The only downside is that the process's data is likely to be in the wrong cache,
> > but unless the original cpu becomes available just after the migrate it is
> > probably still a win.
>
> If you are doing this with just tasks that are waiting for the CPU to
> be preemptable, then it is most likely not a win at all.
You'd need a good guess that the wait would be long.
> Now, the RT tasks do have an aggressive push / pull logic, that keeps
> track of which CPUs are running lower priority tasks and will work hard
> to keep all RT tasks running (and aggressively migrate them). But this
> logic still only takes place at preemption points (cond_resched(), etc).
I guess this only 'gives away' extra RT processes, rather than
'stealing' them - which is what I need.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)