public inbox for linux-kernel@vger.kernel.org
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: peterz@infradead.org, mingo@redhat.com,
	linux-kernel@vger.kernel.org, tglx@linutronix.de
Subject: Re: native_smp_send_reschedule() splat from rt_mutex_lock()?
Date: Wed, 20 Sep 2017 12:44:52 -0700
Message-ID: <20170920194452.GQ3521@linux.vnet.ibm.com>
In-Reply-To: <20170920162447.5j5kcs3t6kzbilql@linutronix.de>

On Wed, Sep 20, 2017 at 06:24:47PM +0200, Sebastian Andrzej Siewior wrote:
> On 2017-09-18 09:51:10 [-0700], Paul E. McKenney wrote:
> > Hello!
> Hi,
> 
> > [11072.586518] sched: Unexpected reschedule of offline CPU#6!
> > [11072.587578] ------------[ cut here ]------------
> > [11072.588563] WARNING: CPU: 0 PID: 59 at /home/paulmck/public_git/linux-rcu/arch/x86/kernel/smp.c:128 native_smp_send_reschedule+0x37/0x40
> > [11072.591543] Modules linked in:
> > [11072.591543] CPU: 0 PID: 59 Comm: rcub/10 Not tainted 4.14.0-rc1+ #1
> > [11072.610596] Call Trace:
> > [11072.611531]  resched_curr+0x61/0xd0
> > [11072.611531]  switched_to_rt+0x8f/0xa0
> > [11072.612647]  rt_mutex_setprio+0x25c/0x410
> > [11072.613591]  task_blocks_on_rt_mutex+0x1b3/0x1f0
> > [11072.614601]  rt_mutex_slowlock+0xa9/0x1e0
> > [11072.615567]  rt_mutex_lock+0x29/0x30
> > [11072.615567]  rcu_boost_kthread+0x127/0x3c0
> 
> > In theory, I could work around this by excluding CPU-hotplug operations
> > while doing RCU priority boosting, but in practice I am very much hoping
> > that there is a more reasonable solution out there...
> 
> so in CPUHP_TEARDOWN_CPU / take_cpu_down() / __cpu_disable() the CPU is
> marked as offline and interrupt handling is disabled. Later in
> CPUHP_AP_SCHED_STARTING / sched_cpu_dying() all tasks are migrated away.
> 
> Did this hit a random task during a CPU-hotplug operation which was not
> yet migrated away from the dying CPU? In theory a futex_unlock() of a RT
> task could also produce such a backtrace.

It could well have.  The rcutorture test suite does frequent random
CPU-hotplug operations, so if there is a window here, rcutorture is
likely to hit it sooner rather than later.

It also injects delays at the hypervisor level, with the tests running
as guest OSes, if that helps.

What should I do to diagnose this?  I could add a WARN_ON() in the
priority-boosting path, but as far as I can see, this would be a
probabilistic thing -- I don't see a way to guarantee it because
migration could happen at pretty much any time in the PREEMPT=y case
where this happens.

							Thanx, Paul


Thread overview: 5+ messages
2017-09-18 16:51 native_smp_send_reschedule() splat from rt_mutex_lock()? Paul E. McKenney
2017-09-20 16:24 ` Sebastian Andrzej Siewior
2017-09-20 19:44   ` Paul E. McKenney [this message]
2017-09-21 12:41   ` Peter Zijlstra
2017-09-21 13:28     ` Sebastian Andrzej Siewior
