From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Anthony Perard <anthony.perard@citrix.com>,
Greg Kurz <groug@kaod.org>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH 2/2] 9pfs: fix init_in_iov_from_pdu truncating size
Date: Thu, 14 May 2020 18:10:20 +0200
Message-ID: <2330066.V6eqdYP2KO@silver>
In-Reply-To: <alpine.DEB.2.21.2005140846460.26167@sstabellini-ThinkPad-T480s>
On Thursday, 14 May 2020 17:51:27 CEST Stefano Stabellini wrote:
> On Thu, 14 May 2020, Christian Schoenebeck wrote:
> > Looks like this issue will still take quite some time to be fixed with
> > Xen. If you don't mind I'll send out a patch to revert truncation on
> > virtio side, so that at least this bug is fixed with virtio ASAP.
>
> Let me answer to this quickly so that if you want to get the patch out
> today you can.
>
>
> Yes, I think it is OK to revert truncation in virtio now.
Good
> Only one
> thing: would there still be any value in doing for Xen:
>
> +    if (pdu->id + 1 == P9_RREAD) {
> +        /* size[4] Rread tag[2] count[4] data[count] */
> +        const size_t hdr_size = 11;
> +        /*
> +         * If current transport buffer size is smaller than actually
> +         * required for this Rreaddir response, then truncate the
> +         * response to the currently available transport buffer size,
> +         * however only if it would at least allow to return 1 payload
> +         * byte to client.
> +         */
> +        if (buf_size < hdr_size + 1) {
>
>
> like your patch here does? Although not a complete solution it looks
> like it would still be a good improvement over the current situation for
> Xen.
IMO in its current form, no. It would just move the problem from a clearly
visible 9pfs server termination with an error to silent data loss (without
any error) on the client side. Remember: this patch does not roll back the
filesystem driver's read position.
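
To illustrate the point outside of QEMU, here is a minimal standalone sketch
(plain POSIX; the reply_read() helper, the buffer sizes and the lseek()
rewind are purely illustrative assumptions, not the actual 9pfs code paths):
once the backend read has advanced the offset, every byte dropped by
truncation has to be rewound explicitly, otherwise the next request silently
skips it.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Read up to 'requested' bytes from the backend, but only forward as much
 * as the transport buffer ('buf_space') can carry.  The backend offset
 * advances by the full amount read, so any truncated bytes must be rewound,
 * or they are lost without the client ever seeing an error.
 */
static ssize_t reply_read(int fd, char *out, size_t requested,
                          size_t buf_space)
{
    char tmp[4096];
    size_t want = requested < sizeof(tmp) ? requested : sizeof(tmp);
    ssize_t n = read(fd, tmp, want);   /* offset advances by n here */
    if (n <= 0) {
        return n;
    }
    if ((size_t)n > buf_space) {
        size_t dropped = (size_t)n - buf_space;
        n = (ssize_t)buf_space;        /* truncate the reply ... */
        /* ... and rewind, otherwise 'dropped' bytes vanish silently */
        if (lseek(fd, -(off_t)dropped, SEEK_CUR) == (off_t)-1) {
            perror("lseek");
            return -1;
        }
    }
    memcpy(out, tmp, (size_t)n);
    return n;
}
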
Best regards,
Christian Schoenebeck
Thread overview: 19+ messages
2020-05-10 17:41 [PATCH 0/2] 9pfs: regression init_in_iov_from_pdu truncating size Christian Schoenebeck
2020-05-10 17:05 ` [PATCH 1/2] xen-9pfs: Fix log messages of reply errors Christian Schoenebeck
2020-05-11 22:09 ` Stefano Stabellini
2020-05-10 17:18 ` [PATCH 2/2] 9pfs: fix init_in_iov_from_pdu truncating size Christian Schoenebeck
2020-05-10 18:43 ` Christian Schoenebeck
2020-05-11 22:09 ` Stefano Stabellini
2020-05-12 11:29 ` Christian Schoenebeck
2020-05-12 23:24 ` Stefano Stabellini
2020-05-13 13:11 ` Christian Schoenebeck
2020-05-13 23:31 ` Stefano Stabellini
2020-05-14 14:24 ` Christian Schoenebeck
2020-05-14 15:51 ` Stefano Stabellini
2020-05-14 16:10 ` Christian Schoenebeck [this message]
2020-05-14 16:23 ` Stefano Stabellini
2020-05-14 16:24 ` Stefano Stabellini
2020-05-14 17:21 ` Christian Schoenebeck
2020-05-12 9:38 ` [PATCH 0/2] 9pfs: regression " Greg Kurz
2020-05-13 23:07 ` Greg Kurz
2020-05-13 23:33 ` Stefano Stabellini