From: "Paul E. McKenney" <paulmck@linux.ibm.com>
To: peterz@infradead.org
Cc: linux-kernel@vger.kernel.org, andrea.parri@amarulasolutions.com
Subject: Question about sched_setaffinity()
Date: Sat, 27 Apr 2019 11:02:46 -0700
Message-ID: <20190427180246.GA15502@linux.ibm.com>
Hello, Peter!
TL;DR: If a normal !PF_NO_SETAFFINITY kthread invokes sched_setaffinity(),
and sched_setaffinity() returns 0, is it expected behavior for that
kthread to be running on some CPU other than those specified by the
in_mask argument? All CPUs are online, and there is no CPU-hotplug
activity taking place.
Thanx, Paul
Details:
I have long shown the following "toy" synchronize_rcu() implementation:
        void synchronize_rcu(void)
        {
                int cpu;

                for_each_online_cpu(cpu)
                        run_on(cpu);
        }
I decided that if I was going to show it, I should test it. And it
occurred to me that run_on() can be a call to sched_setaffinity():
        void synchronize_rcu(void)
        {
                int cpu;

                for_each_online_cpu(cpu)
                        sched_setaffinity(current->pid, cpumask_of(cpu));
        }
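(For reference, the sched_setaffinity() called here is the kernel-internal
function rather than the system call; its prototype is roughly the
following, with in_mask being the argument mentioned in the TL;DR above:

        long sched_setaffinity(pid_t pid, const struct cpumask *in_mask);

The current->pid/cpumask_of(cpu) usage above assumes that form.)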
This implementation actually passes rcutorture. But, as Andrea noted, not klitmus.
After some investigation, it turned out that klitmus was creating kthreads
with PF_NO_SETAFFINITY, hence the failures. But that prompted me to
put checks into my code: After all, rcutorture can be fooled.
        void synchronize_rcu(void)
        {
                int cpu;

                for_each_online_cpu(cpu) {
                        sched_setaffinity(current->pid, cpumask_of(cpu));
                        WARN_ON_ONCE(raw_smp_processor_id() != cpu);
                }
        }
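As an aside, and not something the code above does: one way to avoid being
fooled by PF_NO_SETAFFINITY kthreads, as klitmus's were, would be to also
warn when that flag is set on the calling kthread. A minimal, untested
sketch of such a check, placed at the top of the function:

        /* Hypothetical extra check: this kthread's affinity cannot be changed. */
        WARN_ON_ONCE(current->flags & PF_NO_SETAFFINITY);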
The WARN_ON_ONCE(raw_smp_processor_id() != cpu) check triggers fairly
quickly, usually in less than a minute of rcutorture testing. Further
investigation showed that sched_setaffinity() always returned 0. So I
tried this hack:
        void synchronize_rcu(void)
        {
                int cpu;

                for_each_online_cpu(cpu) {
                        /* Retry until this thread is running on CPU "cpu". */
                        while (raw_smp_processor_id() != cpu)
                                sched_setaffinity(current->pid, cpumask_of(cpu));
                        WARN_ON_ONCE(raw_smp_processor_id() != cpu);
                }
        }
This never triggers, and rcutorture's grace-period rate is not significantly
affected.
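For the record, checking the return value directly looks something like
the following (a sketch, not necessarily the exact instrumentation used
to confirm the zero returns mentioned above):

        void synchronize_rcu(void)
        {
                int cpu;
                long ret;

                for_each_online_cpu(cpu) {
                        ret = sched_setaffinity(current->pid, cpumask_of(cpu));
                        /* Complain about a failed call or a missed migration. */
                        WARN_ONCE(ret, "sched_setaffinity() returned %ld\n", ret);
                        WARN_ON_ONCE(raw_smp_processor_id() != cpu);
                }
        }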
Is this expected behavior? Is there some configuration or setup that I
might be missing?
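In case a standalone reproducer helps frame the question, here is an
untested sketch of the sort of kthread-based test I have in mind. The
function and thread names are made up, and the kthread is created with
kthread_run(), so it does not have PF_NO_SETAFFINITY set:

        /* Untested sketch: exercise sched_setaffinity() from a fresh kthread. */
        static int affinity_q_fn(void *unused)
        {
                int cpu;
                long ret;

                for_each_online_cpu(cpu) {
                        ret = sched_setaffinity(current->pid, cpumask_of(cpu));
                        pr_info("CPU %d: ret = %ld, now running on CPU %d\n",
                                cpu, ret, raw_smp_processor_id());
                }
                return 0;
        }

        /* From module init or similar: kthread_run(affinity_q_fn, NULL, "affinity_q"); */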