linux-rt-users.vger.kernel.org archive mirror
* [PATCH 1/3] RFC: Solved unnecessary schedule latency in the TTY layer (1/3)
@ 2012-05-03 12:37 Ivo Sieben
  2012-05-03 12:37 ` [PATCH 2/3] RFC: Solved unnecessary schedule latency in the TTY layer (2/3) Ivo Sieben
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Ivo Sieben @ 2012-05-03 12:37 UTC (permalink / raw)
  To: Greg KH, linux-serial, Alan Cox, RT; +Cc: Ivo Sieben

Solve unnecessary schedule latency in the TTY layer when used in a
realtime context:
In case of PREEMPT_RT, or when the low_latency flag is set by the serial
driver, the TTY receive flip buffer is copied to the line discipline
directly instead of via a background workqueue. Therefore we only have
to wait for the workqueue to finish when a workqueue is actually used
for copying data to the line discipline. On a PREEMPT system this
prevents unnecessary blocking on the workqueue spin lock.

Note: On a PREEMPT_RT system, "normal" spin locks behave like mutexes,
and interrupts (and therefore scheduling) are not disabled.

Signed-off-by: Ivo Sieben <meltedpianoman@gmail.com>
---
 drivers/tty/tty_buffer.c |   18 +++++++++++-------
 1 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
index 6c9b7cd..ed86359 100644
--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -317,12 +317,7 @@ EXPORT_SYMBOL(tty_insert_flip_string_flags);
 
 void tty_schedule_flip(struct tty_struct *tty)
 {
-	unsigned long flags;
-	spin_lock_irqsave(&tty->buf.lock, flags);
-	if (tty->buf.tail != NULL)
-		tty->buf.tail->commit = tty->buf.tail->used;
-	spin_unlock_irqrestore(&tty->buf.lock, flags);
-	schedule_work(&tty->buf.work);
+	tty_flip_buffer_push(tty);
 }
 EXPORT_SYMBOL(tty_schedule_flip);
 
@@ -469,7 +464,16 @@ static void flush_to_ldisc(struct work_struct *work)
  */
 void tty_flush_to_ldisc(struct tty_struct *tty)
 {
-	flush_work(&tty->buf.work);
+	/*
+	 * Only if a workqueue is actually used for copying data to the
+	 * line discipline do we have to wait for it to finish. In the
+	 * other cases, skipping the flush avoids unnecessary blocking
+	 * on the workqueue spin lock.
+	 */
+#ifndef CONFIG_PREEMPT_RT_FULL
+	if (!tty->low_latency)
+		flush_work(&tty->buf.work);
+#endif
 }
 
 /**
-- 
1.7.0.4




end of thread, other threads:[~2012-05-15 15:04 UTC | newest]

Thread overview: 11+ messages
2012-05-03 12:37 [PATCH 1/3] RFC: Solved unnecessary schedule latency in the TTY layer (1/3) Ivo Sieben
2012-05-03 12:37 ` [PATCH 2/3] RFC: Solved unnecessary schedule latency in the TTY layer (2/3) Ivo Sieben
2012-05-03 16:25   ` Greg KH
2012-05-07  7:45     ` Ivo Sieben
2012-05-03 12:37 ` [PATCH 3/3] RFC: Solved unnecessary schedule latency in the TTY layer (3/3) Ivo Sieben
2012-05-03 16:24   ` Greg KH
2012-05-10 15:28   ` Alan Cox
2012-05-07 14:10 ` [PATCH 1/3] RFC: Solved unnecessary schedule latency in the TTY layer (1/3) Ivo Sieben
2012-05-10 15:26 ` Alan Cox
2012-05-14 12:25   ` Ivo Sieben
2012-05-15 15:04     ` Steven Rostedt
