From: Jeremy Kerr <jk@ozlabs.org>
To: linux-aspeed@lists.ozlabs.org
Subject: [PATCH 2/5] serial: expose buf_overrun count through proc interface
Date: Wed, 21 Mar 2018 10:52:38 +0800 [thread overview]
Message-ID: <20180321025241.19785-3-jk@ozlabs.org> (raw)
In-Reply-To: <20180321025241.19785-1-jk@ozlabs.org>
The buf_overrun count is only ever written, and is not exposed to
userspace anywhere. This means that characters dropped due to flip
buffer overruns are never visible to userspace.
The /proc/tty/driver/serial file exports a bunch of metrics (including
hardware overruns) already, so add the buf_overrun (as "bo:") to this
file.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
---
drivers/tty/serial/serial_core.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
index 8f3dfc8b5307..fc677534b510 100644
--- a/drivers/tty/serial/serial_core.c
+++ b/drivers/tty/serial/serial_core.c
@@ -1780,6 +1780,8 @@ static void uart_line_info(struct seq_file *m, struct uart_driver *drv, int i)
seq_printf(m, " brk:%d", uport->icount.brk);
if (uport->icount.overrun)
seq_printf(m, " oe:%d", uport->icount.overrun);
+ if (uport->icount.buf_overrun)
+ seq_printf(m, " bo:%d", uport->icount.buf_overrun);
#define INFOBIT(bit, str) \
if (uport->mctrl & (bit)) \
--
2.14.1
Thread overview: 27+ messages
2018-03-21 2:52 [PATCH 0/5] serial: implement flow control for ASPEED VUART driver Jeremy Kerr
2018-03-21 2:52 ` [PATCH 1/5] serial: Introduce UPSTAT_SYNC_FIFO for synchronised FIFOs Jeremy Kerr
2018-03-23 15:33 ` Greg Kroah-Hartman
2018-03-27 1:38 ` Jeremy Kerr
2018-03-23 15:50 ` Eddie James
2018-03-21 2:52 ` Jeremy Kerr [this message]
2018-03-23 15:29 ` [PATCH 2/5] serial: expose buf_overrun count through proc interface Greg Kroah-Hartman
2018-03-27 1:44 ` Jeremy Kerr
2018-03-23 15:50 ` Eddie James
2018-03-21 2:52 ` [PATCH 3/5] serial/8250: export serial8250_read_char Jeremy Kerr
2018-03-23 15:51 ` Eddie James
2018-03-21 2:52 ` [PATCH 4/5] serial/aspeed-vuart: Implement rx throttling Jeremy Kerr
2018-03-23 15:51 ` Eddie James
2018-03-21 2:52 ` [PATCH 5/5] serial/aspeed-vuart: Implement quick throttle mechanism Jeremy Kerr
2018-03-21 3:26 ` Joel Stanley
2018-03-21 3:32 ` Jeremy Kerr
2018-03-21 3:57 ` Jeremy Kerr
2018-03-21 4:36 ` [PATCH v2 " Jeremy Kerr
2018-03-23 15:51 ` [PATCH " Eddie James
2018-03-23 15:51 ` [PATCH 0/5] serial: implement flow control for ASPEED VUART driver Greg Kroah-Hartman
2018-03-27 3:38 ` Jeremy Kerr
2018-03-27 3:48 ` [PATCH v2 0/4] " Jeremy Kerr
2018-03-27 3:48 ` [PATCH v2 1/4] serial: Introduce UPSTAT_SYNC_FIFO for synchronised FIFOs Jeremy Kerr
2018-03-27 3:48 ` [PATCH v2 2/4] serial/8250: export serial8250_read_char Jeremy Kerr
2018-03-27 3:48 ` [PATCH v2 3/4] serial/aspeed-vuart: Implement rx throttling Jeremy Kerr
2018-03-27 3:48 ` [PATCH v2 4/4] serial/aspeed-vuart: Implement quick throttle mechanism Jeremy Kerr
2018-03-27 6:52 ` [PATCH v2 0/4] serial: implement flow control for ASPEED VUART driver Joel Stanley