linux-serial.vger.kernel.org archive mirror
* [PATCH] RFC: sched: Prevent wakeup to enter critical section needlessly
@ 2012-09-24 13:06 Ivo Sieben
  2012-10-09 11:30 ` [REPOST] " Ivo Sieben
  0 siblings, 1 reply; 16+ messages in thread
From: Ivo Sieben @ 2012-09-24 13:06 UTC (permalink / raw)
  To: linux-kernel, Ingo Molnar, Peter Zijlstra, linux-serial, RT,
	Alan Cox, Greg KH
  Cc: Ivo Sieben

Check that the waitqueue task list is non-empty before entering the critical
section. This avoids taking the spin lock needlessly when the queue is empty,
and therefore also prevents scheduling overhead on a PREEMPT_RT system.

Signed-off-by: Ivo Sieben <meltedpianoman@gmail.com>
---

 Request for comments:
 - Does this make any sense?
 - I assume that I can safely use the list_empty_careful() function here, but is
   that correct?
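
 For reference, list_empty_careful() tests both the next and prev pointers, so
 a list whose links are being rewired by a concurrent update is reported as
 non-empty rather than empty. A minimal userspace sketch of that check (the
 kernel's list_head type is re-created here purely for illustration):

	#include <stddef.h>

	/* Userspace re-creation of the kernel's doubly linked list head. */
	struct list_head {
		struct list_head *next, *prev;
	};

	static void INIT_LIST_HEAD(struct list_head *h)
	{
		h->next = h;
		h->prev = h;
	}

	static void list_add(struct list_head *new, struct list_head *head)
	{
		new->next = head->next;
		new->prev = head;
		head->next->prev = new;
		head->next = new;
	}

	/*
	 * Like the kernel's list_empty_careful(): the list counts as empty
	 * only if both next and prev still point back at the head, so a
	 * half-done concurrent update is never misread as "empty".
	 */
	static int list_empty_careful(const struct list_head *head)
	{
		struct list_head *next = head->next;
		return (next == head) && (next == head->prev);
	}

 A freshly initialised head is empty, a head with one waiter queued is not,
 and a head whose next pointer has been rewired while prev has not (the
 half-pending case) is also treated as non-empty.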

 Background to this patch:
 Testing was done on a PREEMPT_RT system with TTY serial communication. Each
 time the TTY line discipline is dereferenced, the idle-handling wait queue is
 woken up (see the put_ldisc() function in drivers/tty/tty_ldisc.c).
 However, line discipline idle handling is rarely used, so the wait queue is
 empty most of the time. Even so, the wake_up() function enters the critical
 section guarded by the spin lock. This causes additional scheduling overhead
 when a lower-priority thread holds that same lock.

 kernel/sched/core.c |   16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 649c9f8..6436eb8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3631,9 +3631,19 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&q->lock, flags);
-	__wake_up_common(q, mode, nr_exclusive, 0, key);
-	spin_unlock_irqrestore(&q->lock, flags);
+	/*
+	 * We can check for list emptiness outside the lock by using the
+	 * "careful" check that verifies both the next and prev pointers, so
+	 * that there cannot be any half-pending updates in progress.
+	 *
+	 * This prevents the wakeup from entering the critical section
+	 * needlessly when the task list is empty.
+	 */
+	if (!list_empty_careful(&q->task_list)) {
+		spin_lock_irqsave(&q->lock, flags);
+		__wake_up_common(q, mode, nr_exclusive, 0, key);
+		spin_unlock_irqrestore(&q->lock, flags);
+	}
 }
 EXPORT_SYMBOL(__wake_up);
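
The same fast-path idea can be sketched outside the kernel with pthreads
(all names here are hypothetical, not kernel API): a lock-free "anybody
waiting?" indicator is checked before taking the mutex that guards the
wait queue, so an empty queue never pays for the lock.

	#include <pthread.h>
	#include <stdatomic.h>

	/* Hypothetical wait-queue wrapper illustrating only the fast path. */
	struct wq {
		pthread_mutex_t lock;
		pthread_cond_t cond;
		atomic_int nr_waiters;	/* lock-free "is anybody waiting?" hint */
	};

	/* Returns 0 if the fast path skipped the lock, 1 if a wakeup was sent. */
	int wq_wake_up(struct wq *q)
	{
		/*
		 * Fast path: if nobody is waiting, skip the lock entirely.
		 * This mirrors the list_empty_careful() check in the patch.
		 */
		if (atomic_load(&q->nr_waiters) == 0)
			return 0;

		pthread_mutex_lock(&q->lock);
		pthread_cond_broadcast(&q->cond);
		pthread_mutex_unlock(&q->lock);
		return 1;
	}

As in the patch, the trade-off is that a waiter which enqueues itself just
after the check can miss this particular wakeup, so the pattern is only safe
when the waiter rechecks its condition under the lock.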
 
-- 
1.7.9.5




Thread overview: 16+ messages
2012-09-24 13:06 [PATCH] RFC: sched: Prevent wakeup to enter critical section needlessly Ivo Sieben
2012-10-09 11:30 ` [REPOST] " Ivo Sieben
2012-10-09 13:37   ` Andi Kleen
2012-10-09 14:15     ` Peter Zijlstra
2012-10-09 15:17       ` Oleg Nesterov
2012-10-10 14:02         ` Andi Kleen
2012-10-18  8:30           ` [PATCH-v2] " Ivo Sieben
2012-10-25 10:12             ` [REPOST-v2] " Ivo Sieben
2012-11-19  7:30               ` Ivo Sieben
2012-11-19 10:20                 ` Preeti U Murthy
2012-11-19 15:10                 ` Oleg Nesterov
2012-11-19 15:34                   ` Ivo Sieben
2012-11-19 15:49                     ` Oleg Nesterov
2012-11-21 13:03                       ` Ivo Sieben
2012-11-21 13:47                         ` Alan Cox
2012-11-21 13:58                         ` Oleg Nesterov
