From: Ivo Sieben <meltedpianoman@gmail.com>
To: Greg KH <gregkh@linuxfoundation.org>,
<linux-serial@vger.kernel.org>, Alan Cox <alan@linux.intel.com>,
RT <linux-rt-users@vger.kernel.org>
Cc: Ivo Sieben <meltedpianoman@gmail.com>
Subject: [PATCH 1/3] RFC: Avoid unnecessary scheduling latency in the TTY layer (1/3)
Date: Thu, 3 May 2012 14:37:41 +0200
Message-ID: <1336048663-21882-1-git-send-email-meltedpianoman@gmail.com>

Avoid unnecessary scheduling latency in the TTY layer when used in a
realtime context:

In case of PREEMPT_RT, or when the low_latency flag is set by the serial
driver, the TTY receive flip buffer is copied to the line discipline
directly instead of via a background workqueue. Therefore we only have to
wait for the workqueue to finish when a workqueue is actually used to copy
data to the line discipline. On a PREEMPT system this avoids unnecessary
blocking on the workqueue spin lock.
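
(For context: whether the direct path is taken depends on the
tty->low_latency flag, which a serial driver typically derives from its
port flags. A sketch of the common pattern, where "port" stands for the
driver's own port structure:

	/* e.g. in the driver's open/startup path */
	tty->low_latency = (port->flags & ASYNC_LOW_LATENCY) ? 1 : 0;
)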
Note: On a PREEMPT_RT system, "normal" spin locks behave like mutexes, so
taking them disables neither interrupts nor scheduling.
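
For reference, the decision between the direct path and the workqueue
already lives in tty_flip_buffer_push(), which tty_schedule_flip() now
simply calls. Roughly (a sketch from this kernel version, not part of the
diff below; see drivers/tty/tty_buffer.c for the exact code):

	void tty_flip_buffer_push(struct tty_struct *tty)
	{
		unsigned long flags;

		spin_lock_irqsave(&tty->buf.lock, flags);
		if (tty->buf.tail != NULL)
			tty->buf.tail->commit = tty->buf.tail->used;
		spin_unlock_irqrestore(&tty->buf.lock, flags);

		if (tty->low_latency)
			flush_to_ldisc(&tty->buf.work); /* copy to ldisc directly */
		else
			schedule_work(&tty->buf.work);  /* defer to the workqueue */
	}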
Signed-off-by: Ivo Sieben <meltedpianoman@gmail.com>
---
drivers/tty/tty_buffer.c | 18 +++++++++++-------
1 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
index 6c9b7cd..ed86359 100644
--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -317,12 +317,7 @@ EXPORT_SYMBOL(tty_insert_flip_string_flags);
void tty_schedule_flip(struct tty_struct *tty)
{
- unsigned long flags;
- spin_lock_irqsave(&tty->buf.lock, flags);
- if (tty->buf.tail != NULL)
- tty->buf.tail->commit = tty->buf.tail->used;
- spin_unlock_irqrestore(&tty->buf.lock, flags);
- schedule_work(&tty->buf.work);
+ tty_flip_buffer_push(tty);
}
EXPORT_SYMBOL(tty_schedule_flip);
@@ -469,7 +464,16 @@ static void flush_to_ldisc(struct work_struct *work)
*/
void tty_flush_to_ldisc(struct tty_struct *tty)
{
- flush_work(&tty->buf.work);
+ /*
+	 * Only if a workqueue is actually used to copy data to the line
+	 * discipline do we have to wait for the workqueue to finish. In
+	 * the other cases this avoids unnecessary blocking on the
+	 * workqueue spin lock.
+ */
+#ifndef CONFIG_PREEMPT_RT_FULL
+ if (!tty->low_latency)
+ flush_work(&tty->buf.work);
+#endif
}
/**
--
1.7.0.4