From: Greg Kurz <groug@kaod.org>
To: Jan Dakinevich <jan.dakinevich@gmail.com>
Cc: qemu-devel@nongnu.org,
"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [PATCH] 9pfs: check the size of transport buffer before marshaling
Date: Fri, 15 Sep 2017 16:56:24 +0200 [thread overview]
Message-ID: <20170915165624.1e9b51c2@bahia.lan> (raw)
In-Reply-To: <1505406696-2260-1-git-send-email-jan.dakinevich@gmail.com>
On Thu, 14 Sep 2017 19:31:36 +0300
Jan Dakinevich <jan.dakinevich@gmail.com> wrote:
> v9fs_do_readdir_with_stat() and v9fs_do_readdir() store as much data in
> the buffer as can fit, unless a marshaling error occurs. However, after
> commit 23a006d the behavior of pdu_marshal() was changed, and on error the
> routine assumes that buffers are misconfigured and breaks communication.
>
I agree, and I could easily reproduce this with a Linux guest using 9p2000.u, i.e., the
v9fs_do_readdir_with_stat() case, but I couldn't find a way to break the transport
with 9p2000.L. Do you have a reproducer for the latter?
I ask because I would appreciate some more details in the changelog,
for the record.
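For the record, the failure mode is easy to sketch outside QEMU. The pre-check the patch adds bounds each entry by the smaller of the remaining transport buffer and the remaining client-requested count, so marshaling can never hit the error path. The following is a minimal, self-contained sketch of that pattern; `entry_size()` and `entries_that_fit()` are hypothetical stand-ins (not QEMU APIs) that mirror v9fs_readdir_data_size() and the readdir loop:

```c
#include <assert.h>
#include <stddef.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Hypothetical stand-in for the marshaled size of one readdir entry:
 * 13 bytes of qid + 8 (offset) + 1 (type) + 2 (string length prefix)
 * + the name itself, mirroring v9fs_readdir_data_size(). */
static size_t entry_size(size_t name_len)
{
    return 13 + 8 + 1 + 2 + name_len;
}

/* Count how many entries fit before either the transport buffer
 * (marshal_size) or the client-requested byte count (max_count) runs
 * out. Checking *before* marshaling means pdu_marshal() is never asked
 * to write past the buffer, so its error path is never taken. */
static int entries_that_fit(const size_t *name_lens, int n,
                            size_t marshal_size, size_t max_count)
{
    size_t offset = 11; /* 7 (start offset) + 4 (space for the count) */
    size_t count = 0;
    int fitted = 0;

    for (int i = 0; i < n; i++) {
        size_t need = entry_size(name_lens[i]);
        if (need > MIN(marshal_size - offset, max_count - count)) {
            break; /* would overflow: stop and return what we have */
        }
        offset += need;
        count += need;
        fitted++;
    }
    return fitted;
}
```

With 5-byte names each entry needs 29 bytes, so a 70-byte transport buffer holds two entries (11 + 29 + 29 = 69), and a max_count of 30 bytes holds only one, whichever limit is tighter wins.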
> Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
> ---
Anyway, the patch looks fine, but I'll wait for your answer before pushing it
to 9p-next.
Cheers,
--
Greg
> hw/9pfs/9p.c | 44 ++++++++++++++++++++++++++++++++------------
> 1 file changed, 32 insertions(+), 12 deletions(-)
>
> diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> index 1e38109..8e0b87e 100644
> --- a/hw/9pfs/9p.c
> +++ b/hw/9pfs/9p.c
> @@ -1679,6 +1679,16 @@ static void v9fs_init_qiov_from_pdu(QEMUIOVector *qiov, V9fsPDU *pdu,
> qemu_iovec_concat(qiov, &elem, skip, size);
> }
>
> +static size_t v9fs_marshal_size(V9fsPDU *pdu)
> +{
> + struct iovec *iov;
> + unsigned int niov;
> +
> + pdu->s->transport->init_in_iov_from_pdu(pdu, &iov, &niov, 0);
> +
> + return iov_size(iov, niov);
> +}
> +
> static int v9fs_xattr_read(V9fsState *s, V9fsPDU *pdu, V9fsFidState *fidp,
> uint64_t off, uint32_t max_count)
> {
> @@ -1725,6 +1735,10 @@ static int coroutine_fn v9fs_do_readdir_with_stat(V9fsPDU *pdu,
> off_t saved_dir_pos;
> struct dirent *dent;
>
> + /* 11 = 7 + 4 (7 = start offset, 4 = space for storing count) */
> + size_t offset = 11;
> + size_t marshal_size = v9fs_marshal_size(pdu);
> +
> /* save the directory position */
> saved_dir_pos = v9fs_co_telldir(pdu, fidp);
> if (saved_dir_pos < 0) {
> @@ -1752,18 +1766,23 @@ static int coroutine_fn v9fs_do_readdir_with_stat(V9fsPDU *pdu,
> if (err < 0) {
> break;
> }
> - /* 11 = 7 + 4 (7 = start offset, 4 = space for storing count) */
> - len = pdu_marshal(pdu, 11 + count, "S", &v9stat);
>
> - v9fs_readdir_unlock(&fidp->fs.dir);
> + if (v9stat.size + 2 > MIN(marshal_size - offset, max_count - count)) {
> + v9fs_readdir_unlock(&fidp->fs.dir);
>
> - if ((len != (v9stat.size + 2)) || ((count + len) > max_count)) {
> /* Ran out of buffer. Set dir back to old position and return */
> v9fs_co_seekdir(pdu, fidp, saved_dir_pos);
> v9fs_stat_free(&v9stat);
> v9fs_path_free(&path);
> return count;
> }
> +
> + len = pdu_marshal(pdu, offset, "S", &v9stat);
> + BUG_ON(len != v9stat.size + 2);
> +
> + v9fs_readdir_unlock(&fidp->fs.dir);
> +
> + offset += len;
> count += len;
> v9fs_stat_free(&v9stat);
> v9fs_path_free(&path);
> @@ -1884,6 +1903,10 @@ static int coroutine_fn v9fs_do_readdir(V9fsPDU *pdu, V9fsFidState *fidp,
> off_t saved_dir_pos;
> struct dirent *dent;
>
> + /* 11 = 7 + 4 (7 = start offset, 4 = space for storing count) */
> + size_t offset = 11;
> + size_t marshal_size = v9fs_marshal_size(pdu);
> +
> /* save the directory position */
> saved_dir_pos = v9fs_co_telldir(pdu, fidp);
> if (saved_dir_pos < 0) {
> @@ -1899,7 +1922,8 @@ static int coroutine_fn v9fs_do_readdir(V9fsPDU *pdu, V9fsFidState *fidp,
> }
> v9fs_string_init(&name);
> v9fs_string_sprintf(&name, "%s", dent->d_name);
> - if ((count + v9fs_readdir_data_size(&name)) > max_count) {
> + if (v9fs_readdir_data_size(&name) > MIN(marshal_size - offset,
> + max_count - count)) {
> v9fs_readdir_unlock(&fidp->fs.dir);
>
> /* Ran out of buffer. Set dir back to old position and return */
> @@ -1918,18 +1942,14 @@ static int coroutine_fn v9fs_do_readdir(V9fsPDU *pdu, V9fsFidState *fidp,
> qid.type = 0;
> qid.version = 0;
>
> - /* 11 = 7 + 4 (7 = start offset, 4 = space for storing count) */
> - len = pdu_marshal(pdu, 11 + count, "Qqbs",
> + len = pdu_marshal(pdu, offset, "Qqbs",
> &qid, dent->d_off,
> dent->d_type, &name);
> + BUG_ON(len != v9fs_readdir_data_size(&name));
>
> v9fs_readdir_unlock(&fidp->fs.dir);
>
> - if (len < 0) {
> - v9fs_co_seekdir(pdu, fidp, saved_dir_pos);
> - v9fs_string_free(&name);
> - return len;
> - }
> + offset += len;
> count += len;
> v9fs_string_free(&name);
> saved_dir_pos = dent->d_off;