From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 14 Jan 2016 17:42:32 -0800
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Thomas Gleixner
Cc: Sasha Levin, LKML, Peter Zijlstra
Subject: Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
Message-ID: <20160115014232.GQ3818@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <56955C0F.1090005@oracle.com> <20160113161608.GN3818@linux.vnet.ibm.com> <20160114181846.GZ3818@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jan 14, 2016 at 08:47:41PM +0100, Thomas Gleixner wrote:
> On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > On Thu, Jan 14, 2016 at 06:43:16PM +0100, Thomas Gleixner wrote:
> > > Right. We have the pattern
> > >
> > > 	rcu_read_lock();
> > > 	x = lookup();
> > > 	if (x)
> > > 		keep_hold(x);
> > > 	rcu_read_unlock();
> > > 	return x;
> > >
> > > all over the place. Now keep_hold() can be anything from a refcount
> > > to a spinlock, and I'm not sure that we can force everything that
> > > depends on the mechanism to be completely symmetric. So we are
> > > probably better off making that rcu unlock machinery more robust.
> >
> > OK.
> > If I read the lockdep reports correctly, the issue occurs
> > when rcu_read_unlock_special() finds that it needs to unboost,
> > which means doing an rt_mutex_unlock().  This is done outside of
> > rcu_read_unlock_special()'s irq-disabled region, but of course the
> > caller might have disabled irqs.
> >
> > If I remember correctly, disabling irqs across rt_mutex_unlock()
> > gets me lockdep splats.
>
> That shouldn't be the case. The splats come from this scenario:
>
> CPU0				CPU1
> rtmutex_lock(rcu)
>   raw_spin_lock(&rcu->lock)
> 				rcu_read_lock()
> Interrupt			spin_lock_irq(some->lock);
> 				rcu_read_unlock()
> 				  rcu_read_unlock_special()
> 				    rtmutex_unlock(rcu)
> 				      raw_spin_lock(&rcu->lock)
>   spin_lock(some->lock)
>
> Now we deadlock.
>
> > I could imagine having a per-CPU pointer to rt_mutex that
> > rcu_read_unlock() sets, and that is checked at every point that irqs
> > are enabled, with a call to rt_mutex_unlock() if that pointer is
> > non-NULL.
> >
> > But perhaps you had something else in mind?
>
> We can solve that issue by taking rtmutex->wait_lock with irqsave. So
> the above becomes:
>
> CPU0				CPU1
> rtmutex_lock(rcu)
>   raw_spin_lock_irq(&rcu->lock)
> 				rcu_read_lock()
> 				spin_lock_irq(some->lock);
> 				rcu_read_unlock()
> 				  rcu_read_unlock_special()
> 				    rtmutex_unlock(rcu)
> 				      raw_spin_lock_irqsave(&rcu->lock, flags)
>   raw_spin_unlock_irq(&rcu->lock)
> Interrupt			...
>   spin_lock(some->lock)		raw_spin_unlock_irqrestore(&rcu->lock, flags)
>   ...				...
>   spin_unlock_irq(some->lock);
>
> Untested patch below.

One small fix to make it build below.  Started rcutorture, somewhat
pointlessly given that the splat doesn't appear on my setup.
							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 18e8b41ff796..1fdf6470dfd1 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -645,7 +645,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 	rt_mutex_enqueue(lock, waiter);
 
 	/* [8] Release the task */
-	raw_spin_unlock(&task->pi_lock, flags);
+	raw_spin_unlock(&task->pi_lock);
 	put_task_struct(task);
 
 	/*