From: Waiman Long <waiman.long@hpe.com>
To: Ding Tianhong <dingtianhong@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Davidlohr Bueso <dave@stgolabs.net>,
Linus Torvalds <torvalds@linux-foundation.org>,
"Paul E. McKenney" <paulmck@us.ibm.com>,
Thomas Gleixner <tglx@linutronix.de>,
Will Deacon <Will.Deacon@arm.com>, Jason Low <jason.low2@hp.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Waiman Long <Waiman.Long@hp.com>
Subject: Re: [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL.
Date: Thu, 21 Jan 2016 18:02:34 -0500 [thread overview]
Message-ID: <56A1638A.7050202@hpe.com> (raw)
In-Reply-To: <56A0A4ED.3070308@huawei.com>
[-- Attachment #1: Type: text/plain, Size: 4432 bytes --]
On 01/21/2016 04:29 AM, Ding Tianhong wrote:
> I wrote a script that starts several processes, each calling an ioctl in a
> loop; the ioctl path ends up in a kernel function shaped like:
> xx_ioctl {
>         ...
>         rtnl_lock();
>         function();
>         rtnl_unlock();
>         ...
> }
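
For concreteness, a minimal userspace loop of that shape might look like
the sketch below. The report doesn't name the actual driver ioctl, so
SIOCGIFFLAGS/SIOCSIFFLAGS are used purely as hypothetical stand-ins; the
SIOCSIFFLAGS path also takes rtnl_lock() inside dev_ioctl(), matching
the stack trace below.

/* Reproducer sketch: run many instances of this in parallel
 * while another task keeps reconfiguring the interface. */
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

        for (;;) {
                if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0)
                        break;
                /* the set path takes rtnl_lock() in dev_ioctl() */
                ifr.ifr_flags ^= IFF_PROMISC;
                ioctl(fd, SIOCSIFFLAGS, &ifr);
        }
        close(fd);
        return 0;
}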
> The function may sleep for several milliseconds, but it never hangs. At the
> same time, another user service may call ifconfig to change the state of the
> Ethernet device, and after several hours the hung-task watchdog reported this
> problem:
>
> ========================================================================
> [149738.039038] INFO: task ifconfig:11890 blocked for more than 120 seconds.
> [149738.040597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [149738.042280] ifconfig D ffff88061ec13680 0 11890 11573 0x00000080
> [149738.042284] ffff88052449bd40 0000000000000082 ffff88053a33f300 ffff88052449bfd8
> [149738.042286] ffff88052449bfd8 ffff88052449bfd8 ffff88053a33f300 ffffffff819e6240
> [149738.042288] ffffffff819e6244 ffff88053a33f300 00000000ffffffff ffffffff819e6248
> [149738.042290] Call Trace:
> [149738.042300] [<ffffffff8160d219>] schedule_preempt_disabled+0x29/0x70
> [149738.042303] [<ffffffff8160af65>] __mutex_lock_slowpath+0xc5/0x1c0
> [149738.042305] [<ffffffff8160a3cf>] mutex_lock+0x1f/0x2f
> [149738.042309] [<ffffffff8150d945>] rtnl_lock+0x15/0x20
> [149738.042311] [<ffffffff81514e3a>] dev_ioctl+0xda/0x590
> [149738.042314] [<ffffffff816121cc>] ? __do_page_fault+0x21c/0x560
> [149738.042318] [<ffffffff814e42c5>] sock_do_ioctl+0x45/0x50
> [149738.042320] [<ffffffff814e49d0>] sock_ioctl+0x1f0/0x2c0
> [149738.042324] [<ffffffff811dc9b5>] do_vfs_ioctl+0x2e5/0x4c0
> [149738.042327] [<ffffffff811e6a00>] ? fget_light+0xa0/0xd0
>
> ================================ cut here ================================
>
> I got the vmcore and found that ifconfig had been on the rtnl_lock wait
> list for 120 seconds, while my processes could still take and release the
> rtnl_lock several times per second. In other words, my processes kept
> jumping the queue, and ifconfig could never get the rtnl lock. Looking at
> the mutex slow path, I found that a task may spin on the owner regardless
> of whether the wait list is empty, so a task already on the wait list can
> be cut in line indefinitely. Add a wait-list test to
> mutex_can_spin_on_owner() to avoid this problem.
>
> Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Davidlohr Bueso <dave@stgolabs.net>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Paul E. McKenney <paulmck@us.ibm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Jason Low <jason.low2@hp.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Waiman Long <Waiman.Long@hp.com>
> ---
> kernel/locking/mutex.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index 0551c21..596b341 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -256,7 +256,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
>  	struct task_struct *owner;
>  	int retval = 1;
>
> -	if (need_resched())
> +	if (need_resched() || atomic_read(&lock->count) == -1)
>  		return 0;
>
>  	rcu_read_lock();
> @@ -283,10 +283,11 @@ static inline bool mutex_try_to_acquire(struct mutex *lock)
>  /*
>   * Optimistic spinning.
>   *
> - * We try to spin for acquisition when we find that the lock owner
> - * is currently running on a (different) CPU and while we don't
> - * need to reschedule. The rationale is that if the lock owner is
> - * running, it is likely to release the lock soon.
> + * We try to spin for acquisition when we find that there are no
> + * pending waiters and the lock owner is currently running on a
> + * (different) CPU and while we don't need to reschedule. The
> + * rationale is that if the lock owner is running, it is likely
> + * to release the lock soon.
>   *
>   * Since this needs the lock owner, and this mutex implementation
>   * doesn't track the owner atomically in the lock field, we need to
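
For anyone following along: in this mutex implementation, lock->count
is 1 when unlocked, 0 when locked with no waiters, and negative (-1
here) when locked with waiters possibly queued on lock->wait_list;
that is what the added check keys off. A stand-alone C11 model of that
convention (an illustration only, not the kernel code) looks like this:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* model of the mutex->count convention: 1 = unlocked,
 * 0 = locked/no waiters, -1 = locked/waiters queued */
struct model_mutex { atomic_int count; };

static bool model_trylock(struct model_mutex *m)
{
        int unlocked = 1;

        /* fastpath: the 1 -> 0 transition wins the lock */
        return atomic_compare_exchange_strong(&m->count, &unlocked, 0);
}

static bool model_has_waiters(struct model_mutex *m)
{
        return atomic_load(&m->count) == -1;    /* the proposed test */
}

int main(void)
{
        struct model_mutex m = { ATOMIC_VAR_INIT(1) };

        printf("got lock: %d\n", model_trylock(&m));     /* 1 */
        atomic_store(&m.count, -1);  /* a contender queues itself */
        printf("waiters:  %d\n", model_has_waiters(&m)); /* 1 */
        return 0;
}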
This patch will largely defeat the performance benefit of optimistic
spinning. I have an alternative solution to this live-lock problem.
Would you mind trying out the attached patch to see if it can fix your
problem?
Cheers,
Longman
[-- Attachment #2: 0001-locking-mutex-Enable-optimistic-spinning-of-woken-ta.patch --]
[-- Type: text/plain, Size: 5460 bytes --]
From 1bbb5a4434d395f48163abc5435c5c720a15d327 Mon Sep 17 00:00:00 2001
From: Waiman Long <Waiman.Long@hpe.com>
Date: Thu, 21 Jan 2016 17:53:14 -0500
Subject: [PATCH] locking/mutex: Enable optimistic spinning of woken task in wait list
Ding Tianhong reported a live-lock situation where a constant stream
of incoming optimistic spinners blocked a task in the wait list from
getting the mutex.
This patch attempts to fix this live-lock condition by enabling a
woken task in the wait list to enter the optimistic spinning loop
itself, with precedence over the spinners in the OSQ. This should
prevent the live-lock condition from happening.
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
include/linux/mutex.h | 2 +
kernel/locking/mutex.c | 95 +++++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 95 insertions(+), 2 deletions(-)
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 2cb7531..2c55ecd 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -57,6 +57,8 @@ struct mutex {
 #endif
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 	struct optimistic_spin_queue osq; /* Spinner MCS lock */
+	/* Set if wait list head actively spinning */
+	int wlh_spinning;
 #endif
 #ifdef CONFIG_DEBUG_MUTEXES
 	void *magic;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0551c21..8b27b03 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -55,6 +55,7 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
 	mutex_clear_owner(lock);
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 	osq_lock_init(&lock->osq);
+	lock->wlh_spinning = false;
 #endif

 	debug_mutex_init(lock, name, key);
@@ -346,8 +347,12 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 		if (owner && !mutex_spin_on_owner(lock, owner))
 			break;

-		/* Try to acquire the mutex if it is unlocked. */
-		if (mutex_try_to_acquire(lock)) {
+		/*
+		 * Try to acquire the mutex if it is unlocked and the wait
+		 * list head isn't spinning on the lock.
+		 */
+		if (!READ_ONCE(lock->wlh_spinning) &&
+		    mutex_try_to_acquire(lock)) {
 			lock_acquired(&lock->dep_map, ip);

 			if (use_ww_ctx) {
@@ -398,12 +403,91 @@ done:
 	return false;
 }
+
+/*
+ * Wait list head optimistic spinning
+ *
+ * The wait list head, when woken up, will try to spin on the lock if the
+ * lock owner is active. It will also set the wlh_spinning flag to give
+ * itself a higher chance of getting the lock than the other optimistically
+ * spinning lockers in the OSQ.
+ */
+static bool mutex_wlh_opt_spin(struct mutex *lock,
+			       struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+{
+	struct task_struct *owner, *task = current;
+	int gotlock = false;
+
+	WRITE_ONCE(lock->wlh_spinning, true);
+	while (true) {
+		if (use_ww_ctx && ww_ctx->acquired > 0) {
+			struct ww_mutex *ww;
+
+			ww = container_of(lock, struct ww_mutex, base);
+			/*
+			 * If ww->ctx is set the contents are undefined, only
+			 * by acquiring wait_lock there is a guarantee that
+			 * they are not invalid when reading.
+			 *
+			 * As such, when deadlock detection needs to be
+			 * performed the optimistic spinning cannot be done.
+			 */
+			if (READ_ONCE(ww->ctx))
+				break;
+		}
+
+		/*
+		 * If there's an owner, wait for it to either
+		 * release the lock or go to sleep.
+		 */
+		owner = READ_ONCE(lock->owner);
+		if (owner && !mutex_spin_on_owner(lock, owner))
+			break;
+
+		/*
+		 * Try to acquire the mutex if it is unlocked. The mutex
+		 * value is set to -1 which will be changed to 0 later on
+		 * if the wait list becomes empty.
+		 */
+		if (!mutex_is_locked(lock) &&
+		    (atomic_cmpxchg_acquire(&lock->count, 1, -1) == 1)) {
+			gotlock = true;
+			break;
+		}
+
+		/*
+		 * When there's no owner, we might have preempted between the
+		 * owner acquiring the lock and setting the owner field. If
+		 * we're an RT task that will live-lock because we won't let
+		 * the owner complete.
+		 */
+		if (!owner && (need_resched() || rt_task(task)))
+			break;
+
+		/*
+		 * The cpu_relax() call is a compiler barrier which forces
+		 * everything in this loop to be re-loaded. We don't need
+		 * memory barriers as we'll eventually observe the right
+		 * values at the cost of a few extra spins.
+		 */
+		cpu_relax_lowlatency();
+	}
+	WRITE_ONCE(lock->wlh_spinning, false);
+	return gotlock;
+}
 #else
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
 	return false;
 }
+
+static bool mutex_wlh_opt_spin(struct mutex *lock,
+			       struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+{
+	return false;
+}
 #endif

 __visible __used noinline
@@ -543,6 +627,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	lock_contended(&lock->dep_map, ip);

 	for (;;) {
+		int gotlock;
+
 		/*
 		 * Lets try to take the lock again - this is needed even if
 		 * we get here for the first time (shortly after failing to
@@ -577,7 +663,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		/* didn't get the lock, go to sleep: */
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
+
+		/* optimistically spin on the mutex without the wait lock */
+		gotlock = mutex_wlh_opt_spin(lock, ww_ctx, use_ww_ctx);
 		spin_lock_mutex(&lock->wait_lock, flags);
+		if (gotlock)
+			break;
 	}

 	__set_task_state(task, TASK_RUNNING);
--
1.7.1
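
To make the intended interaction concrete, here is a stand-alone C11
sketch of the precedence rule the patch adds (an illustration under the
same lock->count convention as the mutex code, not the kernel
implementation itself): OSQ spinners back off while the woken wait-list
head advertises itself via wlh_spinning.

#include <stdatomic.h>
#include <stdbool.h>

struct model_lock {
        atomic_int  count;         /* 1 unlocked, 0/-1 locked */
        atomic_bool wlh_spinning;  /* wait-list head is spinning */
};

/* An OSQ spinner attempts the acquire only while the wait-list
 * head is not spinning; this back-off breaks the live-lock. */
static bool osq_spinner_try(struct model_lock *l)
{
        int unlocked = 1;

        if (atomic_load(&l->wlh_spinning))
                return false;
        return atomic_compare_exchange_strong(&l->count, &unlocked, 0);
}

/* The woken wait-list head advertises itself, then tries the
 * 1 -> -1 transition (leaving "waiters" set, to be cleared
 * later if the wait list drains). */
static bool wlh_try(struct model_lock *l)
{
        int unlocked = 1;
        bool got;

        atomic_store(&l->wlh_spinning, true);
        got = atomic_compare_exchange_strong(&l->count, &unlocked, -1);
        atomic_store(&l->wlh_spinning, false);
        return got;
}

int main(void)
{
        struct model_lock l = {
                ATOMIC_VAR_INIT(1), ATOMIC_VAR_INIT(false)
        };

        /* head wins the lock; a late OSQ spinner must fail */
        return !(wlh_try(&l) && !osq_spinner_try(&l));
}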
Thread overview: 37+ messages
2016-01-21 9:29 [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL Ding Tianhong
2016-01-21 21:23 ` Tim Chen
2016-01-22 2:41 ` Paul E. McKenney
2016-01-22 2:48 ` Davidlohr Bueso
2016-01-22 3:13 ` Paul E. McKenney
2016-01-21 23:02 ` Waiman Long [this message]
2016-01-22 6:09 ` Davidlohr Bueso
2016-01-22 13:38 ` Waiman Long
2016-01-22 16:46 ` Davidlohr Bueso
2016-01-25 2:23 ` [PATCH] locking/mutex: Allow next waiter lockless wakeup Davidlohr Bueso
2016-01-25 23:02 ` Waiman Long
2016-02-29 11:21 ` [tip:locking/core] " tip-bot for Davidlohr Bueso
2016-01-22 8:54 ` [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL Peter Zijlstra
2016-01-22 10:20 ` Jason Low
2016-01-22 10:53 ` Peter Zijlstra
2016-01-22 10:56 ` Peter Zijlstra
2016-01-22 11:06 ` Peter Zijlstra
2016-01-22 13:59 ` Waiman Long
2016-01-24 8:03 ` Ding Tianhong
2016-01-29 9:53 ` Peter Zijlstra
2016-01-30 1:18 ` Ding Tianhong
2016-02-01 3:29 ` huang ying
2016-02-01 3:35 ` Huang, Ying
2016-02-01 10:08 ` [PATCH] locking/mutex: Avoid spinner vs waiter starvation Peter Zijlstra
2016-02-02 21:19 ` Davidlohr Bueso
2016-02-03 7:10 ` Ding Tianhong
2016-02-03 19:24 ` Davidlohr Bueso
2016-02-04 1:20 ` Ding Tianhong
2016-02-12 18:33 ` Waiman Long
2016-02-03 22:07 ` Waiman Long
2016-02-04 1:35 ` Jason Low
2016-02-04 8:55 ` huang ying
2016-02-04 22:49 ` Jason Low
2016-01-22 13:41 ` [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL Waiman Long
-- strict thread matches above, loose matches on Subject: below --
2016-01-21 6:53 [PATCH RFC ] " Ding Tianhong
2016-01-21 7:29 ` Ingo Molnar
2016-01-21 9:04 ` Ding Tianhong