From: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: sergey.senozhatsky.work@gmail.com, linux-kernel@vger.kernel.org,
mingo@redhat.com, peterz@infradead.org, pmladek@suse.com,
sergey.senozhatsky@gmail.com, rostedt@goodmis.org
Subject: Re: [PATCH] printk: Add best-effort printk() buffering.
Date: Tue, 9 May 2017 10:04:42 +0900 [thread overview]
Message-ID: <20170509010442.GA397@tigerII.localdomain> (raw)
In-Reply-To: <201705082205.DAE12471.FOHJSFOQFtOVML@I-love.SAKURA.ne.jp>
Hello,
On (05/08/17 22:05), Tetsuo Handa wrote:
> > On (04/30/17 22:54), Tetsuo Handa wrote:
> > > Sometimes we want to printk() multiple lines in a group without being
> > > disturbed by concurrent printk() from interrupts and/or other threads.
> > > For example, mixed printk() output of multiple thread's dump makes it
> > > hard to interpret.
> >
> > hm, it's very close to what printk-safe does [and printk-nmi, of course].
> > the difference is that buffered-printk does not disable local IRQs,
> > unlike printk-safe, which has to do it by design. so the question is,
> > can buffered-printk impose atomicity requirements? it seems that it can
> > (am I wrong?). and, if so, then can we use printk-safe instead? we can
> > add a new printk_buffered_begin/printk_buffered_end API, for example,
> > (or enter/exit) for that purpose, that would set a buffered-printk
> > `printk_context' bit so we can flush buffers in a "special way", not via IRQ
> > work, and may be avoid message loss (printk-safe buffers are bigger in size
> > than proposed PAGE_SIZE buffers).
>
> printk_buffered_begin()/printk_buffered_end() corresponds to
> get_printk_buffer()/put_printk_buffer().
> printk_context() distinguishes atomic contexts.
> flush_printk_buffer() flushes from non-NMI context.
>
> What do atomicity requirements mean?
what I meant was -- "can we sleep under printk_buffered_begin() or not".
printk-safe disables local IRQs. so what I propose is something like this
printk-safe-enter //disable local IRQs, use per-CPU buffer
backtrace
printk-safe-exit //flush per-CPU buffer, enable local IRQs
except that 'printk-safe-enter/exit' will have new names here, say
printk-buffered-begin/end, and, probably, handle flush differently.
> > hm, 16 is rather random, it's too much for UP and probably not enough for
> > a 240 CPUs system. for the time being there are 3 buffered-printk users
> > (as far as I can see), but who knows how more will be added in the future.
> > each CPU can have overlapping printks from process, IRQ and NMI contexts.
> > for NMI we use printk-nmi buffers, so it's out of the list; but, in general,
> > *it seems* that we better depend on the number of CPUs the system has.
> > which, once again, returns us back to printk-safe...
> >
> > thoughts?
>
> I can make 16 a CONFIG_ option.
but still, why use additional N buffers, when we already have per-CPU
buffers? what am I missing?
> Would you read 201705031521.EIJ39594.MFtOVOHSFLFOJQ@I-love.SAKURA.ne.jp ?
sure.
> But as long as actually writing to console devices is slow, message loss
> is inevitable no matter how big a buffer is used. Rather, I'd expect an API
> which allows printk() users in schedulable context (e.g. kmallocwd and/or
> warn_alloc() for reporting allocation stalls) to wait until the messages are
> written to console devices. That would be more likely to reduce message loss.
hm, from a schedulable context you can do *something* like
console_lock()
printk()
...
printk()
console_unlock()
you won't be able to console_lock() until all pending messages are
flushed. since you are in a schedulable context, you can sleep on
console_sem in console_lock(). well, just saying.
> > > + while (1) {
> > > + char *text = ptr->buf;
> > > + unsigned int text_len = ptr->used;
> > > + char *cp = memchr(text, '\n', text_len);
> > > + char c;
> >
> > what guarantees that there'll always be a terminating newline?
>
> Nothing guarantees. Why need such a guarantee?
: The memchr() and memrchr() functions return a pointer to the matching
: byte or NULL if the character does not occur in the given memory area.
so `cp' can be NULL here?
+ if (cp++)
+ text_len = cp - text;
+ else if (all)
+ cp = text + text_len;
+ else
+ break;
+ /* printk_get_level() depends on text '\0'-terminated. */
+ c = *cp;
+ *cp = '\0';
+ process_log(0, LOGLEVEL_DEFAULT, NULL, 0, text, text_len);
+ ptr->used -= text_len;
+ if (!ptr->used)
+ break;
+ *cp = c;
-ss
Thread overview: 11 messages
2017-04-30 13:54 [PATCH] printk: Add best-effort printk() buffering Tetsuo Handa
2017-04-30 16:11 ` Joe Perches
2017-05-03 6:21 ` Tetsuo Handa
2017-05-03 9:30 ` Joe Perches
2017-05-08 7:05 ` Sergey Senozhatsky
2017-05-08 13:05 ` Tetsuo Handa
2017-05-09 1:04 ` Sergey Senozhatsky [this message]
2017-05-09 11:41 ` Tetsuo Handa
2017-05-10 6:21 ` Sergey Senozhatsky
2017-05-10 11:27 ` Tetsuo Handa
2017-05-15 13:15 ` Petr Mladek