From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: "Gavin Shan" <gshan@redhat.com>, "Marc Zyngier" <maz@kernel.org>,
	"QEMU Developers" <qemu-devel@nongnu.org>,
	qemu-arm <qemu-arm@nongnu.org>,
	"Marc-André Lureau" <marcandre.lureau@gmail.com>,
	"Shan Gavin" <shan.gavin@gmail.com>
Subject: Re: [PATCH] hw/char/pl011: Output characters using best-effort mode
Date: Fri, 21 Feb 2020 14:09:26 +0100	[thread overview]
Message-ID: <fe7f3a60-5d90-ea3c-44d1-119f8b45b15c@redhat.com> (raw)
In-Reply-To: <CAFEAcA-bHCLQGkFucY5RAY-mw9wFdDeOqCkcv0xgSRg-EYh9ew@mail.gmail.com>

On 21/02/20 13:44, Peter Maydell wrote:
> On Fri, 21 Feb 2020 at 11:44, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>> On 21/02/20 11:21, Peter Maydell wrote:
>>> Before you do that, I would suggest investigating:
>>>  * is this a problem we've already had on x86 and that there is a
>>>    standard solution for
>> Disconnected sockets always lose data (see tcp_chr_write in
>> chardev/char-socket.c).
>>
>> For connected sockets, 8250 does at most 4 retries (each retry is
>> triggered by POLLOUT|POLLHUP).  After these four retries the output
>> chardev is considered broken, just like in Gavin's patch, and only a
>> reset will restart the output.
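
As a standalone approximation of that retry loop (the real one in
hw/char/serial.c waits via the chardev watch API with G_IO_OUT|G_IO_HUP
rather than calling poll(2) directly; the function below is illustrative
only):

    #include <errno.h>
    #include <poll.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define MAX_XMIT_RETRY 4        /* the limit the 8250 model uses */

    /* One initial attempt plus up to MAX_XMIT_RETRY retries, each
     * retry woken by POLLOUT (POLLHUP is reported by poll() as
     * well).  Once the retries are exhausted the byte is dropped
     * and the output is treated as broken until a reset. */
    static int xmit_with_retry(int fd, unsigned char ch)
    {
        for (int attempt = 0; ; attempt++) {
            ssize_t n = write(fd, &ch, 1);
            if (n == 1) {
                return 0;                        /* sent */
            }
            if (attempt == MAX_XMIT_RETRY) {
                break;                           /* retries exhausted */
            }
            if (n < 0 && errno != EAGAIN && errno != EINTR) {
                break;                           /* hard error */
            }
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            poll(&pfd, 1, -1);                   /* wait to retry */
        }
        return -1;                               /* byte dropped */
    }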
>>
>>>  * should this be applicable to more than just the socket chardev?
>>>    What's special about the socket chardev?
>>
>> For 8250 there's no difference between the socket chardev and
>> everything else.
> 
> Interesting, I didn't know our 8250 emulation had this
> retry-and-drop-data logic. Is it feasible to put it into
> the chardev layer instead, so that every serial device
> can get it without having to manually implement it?

Yes, it should be possible.  But I must say I'm not sure why it exists
at all; maybe it should be dropped instead.  What we should do is make
sure that after POLLHUP (i.e. once the socket is disconnected) data is
dropped; then retries triggered by repeated POLLOUT should not matter
very much.
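
Concretely, that chardev-layer policy could look something like the
sketch below (invented struct and function names, not existing QEMU
code -- just one reading of the idea):

    #include <poll.h>
    #include <stdbool.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Once POLLHUP has been observed the backend is marked hung up
     * and every later write "succeeds" immediately, dropping the
     * data.  Retries driven by repeated POLLOUT then cost nothing,
     * because they complete on the first attempt. */
    struct chr_backend {
        int fd;
        bool hung_up;
    };

    static ssize_t chr_backend_write(struct chr_backend *be,
                                     const void *buf, size_t len)
    {
        if (!be->hung_up) {
            struct pollfd pfd = { .fd = be->fd, .events = POLLOUT };
            if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLHUP)) {
                be->hung_up = true;              /* peer went away */
            }
        }
        if (be->hung_up) {
            return (ssize_t)len;                 /* drop silently */
        }
        return write(be->fd, buf, len);
    }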

Paolo




Thread overview: 16+ messages
2020-02-20  6:01 [PATCH] hw/char/pl011: Output characters using best-effort mode Gavin Shan
2020-02-20  8:47 ` Philippe Mathieu-Daudé
2020-02-20  9:07   ` Gavin Shan
2020-02-20  9:10 ` Marc Zyngier
2020-02-20 10:10   ` Peter Maydell
2020-02-21  4:24     ` Gavin Shan
2020-02-21  9:09       ` Marc Zyngier
2020-02-23 23:57         ` Gavin Shan
2020-02-21 10:21       ` Peter Maydell
2020-02-21 11:44         ` Paolo Bonzini
2020-02-21 12:44           ` Peter Maydell
2020-02-21 13:09             ` Paolo Bonzini [this message]
2020-02-21 13:14               ` Peter Maydell
2020-02-21 18:15                 ` Paolo Bonzini
2020-02-23 23:45                   ` Gavin Shan
2020-02-23 23:26             ` Gavin Shan
