From: Gerd Hoffmann <kraxel@redhat.com>
To: Paul Brook <paul@codesourcery.com>
Cc: Amit Shah <amit.shah@redhat.com>, qemu list <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH v3 2/4] char: Add ability to provide a callback when write won't return -EAGAIN
Date: Tue, 20 Apr 2010 20:59:29 +0200
Message-ID: <4BCDF991.9050002@redhat.com>
In-Reply-To: <201004201328.48953.paul@codesourcery.com>
On 04/20/10 14:28, Paul Brook wrote:
>> I sent out this series as a "feeler" to see if the approach was
>> acceptable.
>>
>> Paul didn't reply to my reply addressing his concern, so I take it that
>> he's OK with the approach as well :-)
>
> I'd probably have exposed this as an asynchronous write rather than a
> nonblocking operation. However both have their issues, and I guess for
> character devices your approach makes sense (cf. block devices, where we
> want concurrent transfers).
For chardevs, async operation introduces ordering issues; I think
supporting non-blocking writes is more useful here.
> It would be useful to have a debugging mode where the chardev layer
> deliberately returns spurious EAGAIN and short writes. Otherwise you've got a
> lot of very poorly tested device fallback code. I have low confidence in
> getting this right first time :-)
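Such a debug mode could probably be a thin fault-injection wrapper around
the existing write path.  A rough, completely untested sketch (the
chr_write_faulty name and the probabilities are invented; only
qemu_chr_write() is existing API, and <errno.h>/<stdlib.h> are assumed):

    /* Debug helper: randomly turn writes into spurious -EAGAIN or
     * short writes so device fallback code actually gets exercised. */
    static int chr_write_faulty(CharDriverState *s, const uint8_t *buf, int len)
    {
        if (rand() % 4 == 0) {
            return -EAGAIN;                    /* pretend the backend is full */
        }
        if (len > 1 && rand() % 4 == 0) {
            len = 1 + rand() % (len - 1);      /* deliberately write less */
        }
        return qemu_chr_write(s, buf, len);
    }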
It might be a good idea to have a separate interface for it, i.e.
qemu_chr_write() keeps the current behavior and we add a new
qemu_chr_write_nonblocking() for users that want (and can handle)
non-blocking behavior, i.e. short writes and -EAGAIN return values.
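From the caller's side the contract would be roughly this (illustrative
only, not the actual patch API):

    /* Illustrative caller: push as much as possible and report how much
     * went out; anything not written has to be queued and retried. */
    static int send_some(CharDriverState *chr, const uint8_t *buf, int len)
    {
        int ret = qemu_chr_write_nonblocking(chr, buf, len);

        if (ret == -EAGAIN) {
            return 0;            /* nothing written, keep everything queued */
        }
        if (ret < 0) {
            return ret;          /* hard error */
        }
        return ret;              /* may be < len: queue the rest, retry later */
    }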
We should also clearly define what qemu_chr_write_nonblocking() does in
case the underlying chardev backend doesn't support nonblocking
operation. Option one is to fail and expect the caller to handle the
situation. Option two is to fall back to blocking mode. I'd tend to pick
option two; falling back to blocking mode is what the caller would most
likely end up doing anyway.
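Option two would then be little more than this (again an untested sketch;
chr_write_nb is an invented backend callback name, not the actual patch
API):

    int qemu_chr_write_nonblocking(CharDriverState *s, const uint8_t *buf, int len)
    {
        if (!s->chr_write_nb) {
            /* Backend can't do nonblocking writes: behave exactly like
             * qemu_chr_write(), i.e. block until everything is out. */
            return qemu_chr_write(s, buf, len);
        }
        return s->chr_write_nb(s, buf, len);   /* -EAGAIN / short count possible */
    }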
cheers,
Gerd
Thread overview: 12+ messages
2010-04-15 8:16 [Qemu-devel] [PATCH v3 0/4] char: write callback, virtio-console: flow control Amit Shah
2010-04-15 8:16 ` [Qemu-devel] [PATCH v3 1/4] char: Let writers know how much data was written in case of errors Amit Shah
2010-04-15 8:16 ` [Qemu-devel] [PATCH v3 2/4] char: Add ability to provide a callback when write won't return -EAGAIN Amit Shah
2010-04-15 8:16 ` [Qemu-devel] [PATCH v3 3/4] virtio-console: Factor out common init between console and generic ports Amit Shah
2010-04-15 8:16 ` [Qemu-devel] [PATCH v3 4/4] virtio-console: Throttle virtio-serial-bus if we can't consume any more guest data Amit Shah
2010-04-20 11:32 ` [Qemu-devel] [PATCH v3 2/4] char: Add ability to provide a callback when write won't return -EAGAIN Gerd Hoffmann
2010-04-20 11:44 ` Amit Shah
2010-04-20 12:28 ` Paul Brook
2010-04-20 12:39 ` Amit Shah
2010-04-20 18:59 ` Gerd Hoffmann [this message]
2010-04-15 12:04 ` [Qemu-devel] [PATCH v3 0/4] char: write callback, virtio-console: flow control Paul Brook
2010-04-15 12:58 ` Amit Shah