From: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Cc: cornelia.huck@de.ibm.com, borntraeger@de.ibm.com
Subject: Re: [Qemu-devel] [RFC 0/3] qemu-char: Add poll timeouts for character backends
Date: Fri, 24 Oct 2014 13:29:49 +0200 [thread overview]
Message-ID: <544A382D.7080708@linux.vnet.ibm.com> (raw)
In-Reply-To: <544A18C4.5080405@redhat.com>
On 24/10/14 11:15, Paolo Bonzini wrote:
> On 10/24/2014 10:13 AM, Heinz Graalfs wrote:
>> On s390 one can observe system hangs affecting console input when
>> using 'dataplane=on'.
>>
>> dataplane processing causes an inactive main thread and an active
>> dataplane thread.
>>
>> When a character backend descriptor disappears from the main thread's
>> poll() descriptor array (when can_read() returns 0), it may never
>> reappear in the poll() array, because nothing wakes up poll() again.
>>
>> The following patches fix observed hangs on s390 and provide a means
>> to avoid potential hangs in other backends/frontends.
>
> I think all you need is a simple
>
> qemu_notify_event();
>
> call when can_read can go from 0 to 1, for example just before
> get_console_data returns (for hw/char/sclpconsole-lm.c).
>
Definitely, just that simple!
> By the way, for hw/char/sclpconsole-lm.c I'm not sure what happens if
> scon->length == SIZE_CONSOLE_BUFFER. You cannot read, so you cannot
> generate an event, and you cannot reset scon->length because you cannot
> generate an event. I think something like this is needed:
>
> diff --git a/hw/char/sclpconsole-lm.c b/hw/char/sclpconsole-lm.c
> index 80dd0a9..c61b77b 100644
> --- a/hw/char/sclpconsole-lm.c
> +++ b/hw/char/sclpconsole-lm.c
> @@ -61,10 +61,9 @@ static int chr_can_read(void *opaque)
>
> if (scon->event.event_pending) {
> return 0;
> - } else if (SIZE_CONSOLE_BUFFER - scon->length) {
> + } else {
> return 1;
> }
> - return 0;
> }
>
> static void chr_read(void *opaque, const uint8_t *buf, int size)
> @@ -78,6 +77,10 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> sclp_service_interrupt(0);
> return;
> }
> + if (scon->length == SIZE_CONSOLE_BUFFER) {
> + /* Eat the character, but still process CR and LF. */
> + return;
> + }
Yes, thanks a lot.
> scon->buf[scon->length] = *buf;
> scon->length += 1;
> if (scon->echo) {
>
> Paolo
>
>
Thread overview: 6+ messages
2014-10-24 8:13 [Qemu-devel] [RFC 0/3] qemu-char: Add poll timeouts for character backends Heinz Graalfs
2014-10-24 8:13 ` [Qemu-devel] [RFC 1/3] char: Trigger timeouts on poll() when frontend is unready Heinz Graalfs
2014-10-24 8:13 ` [Qemu-devel] [RFC 2/3] s390x: Fix hanging SCLP line mode console Heinz Graalfs
2014-10-24 8:13 ` [Qemu-devel] [RFC 3/3] s390x: Avoid hanging SCLP ASCII console Heinz Graalfs
2014-10-24 9:15 ` [Qemu-devel] [RFC 0/3] qemu-char: Add poll timeouts for character backends Paolo Bonzini
2014-10-24 11:29 ` Heinz Graalfs [this message]