qemu-devel.nongnu.org archive mirror
From: Jamie Iles <jamie@nuviainc.com>
To: "Philippe Mathieu-Daudé" <philmd@redhat.com>
Cc: Jamie Iles <jamie@nuviainc.com>,
	qemu-devel@nongnu.org, lmichel@kalray.eu
Subject: Re: [PATCH 2/2] hw/core/loader: workaround read() size limit.
Date: Thu, 11 Nov 2021 17:04:56 +0000	[thread overview]
Message-ID: <YY1NOIdBgzJLYEiv@hazel> (raw)
In-Reply-To: <7e490883-b723-1ff6-9191-6ef0c91ccd25@redhat.com>

On Thu, Nov 11, 2021 at 04:55:35PM +0100, Philippe Mathieu-Daudé wrote:
> On 11/11/21 16:43, Philippe Mathieu-Daudé wrote:
> > On 11/11/21 16:36, Jamie Iles wrote:
> >> Hi Philippe,
> >>
> >> On Thu, Nov 11, 2021 at 03:55:48PM +0100, Philippe Mathieu-Daudé wrote:
> >>> Hi Jamie,
> >>>
> >>> On 11/11/21 15:11, Jamie Iles wrote:
> >>>> On Linux, read() will only ever read a maximum of 0x7ffff000 bytes
> >>>> regardless of what is asked.  If the file is larger than 0x7ffff000
> >>>> bytes the read will need to be broken up into multiple chunks.
> >>>>
> >>>> Cc: Luc Michel <lmichel@kalray.eu>
> >>>> Signed-off-by: Jamie Iles <jamie@nuviainc.com>
> >>>> ---
> >>>>  hw/core/loader.c | 40 ++++++++++++++++++++++++++++++++++------
> >>>>  1 file changed, 34 insertions(+), 6 deletions(-)
> >>>>
> >>>> diff --git a/hw/core/loader.c b/hw/core/loader.c
> >>>> index 348bbf535bd9..16ca9b99cf0f 100644
> >>>> --- a/hw/core/loader.c
> >>>> +++ b/hw/core/loader.c
> >>>> @@ -80,6 +80,34 @@ int64_t get_image_size(const char *filename)
> >>>>      return size;
> >>>>  }
> >>>>  
> >>>> +static ssize_t read_large(int fd, void *dst, size_t len)
> >>>> +{
> >>>> +    /*
> >>>> +     * man 2 read says:
> >>>> +     *
> >>>> +     * On Linux, read() (and similar system calls) will transfer at most
> >>>> +     * 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes
> >>>
> >>> Could you mention MAX_RW_COUNT from linux/fs.h?
> >>>
> >>>> +     * actually transferred.  (This is true on both 32-bit and 64-bit
> >>>> +     * systems.)
> >>>
> >>> Maybe "This is true for both ILP32 and LP64 data models used by Linux"?
> >>> (because that would not be the case for the ILP64 model).
> >>>
> >>> Otherwise s/systems/Linux variants/?
> >>>
> >>>> +     *
> >>>> +     * So read in chunks no larger than 0x7ffff000 bytes.
> >>>> +     */
> >>>> +    size_t max_chunk_size = 0x7ffff000;
> >>>
> >>> We can declare it static const.
> >>
> >> Ack, can fix all of those up.
> >>
> >>>> +    size_t offset = 0;
> >>>> +
> >>>> +    while (offset < len) {
> >>>> +        size_t chunk_len = MIN(max_chunk_size, len - offset);
> >>>> +        ssize_t br = read(fd, dst + offset, chunk_len);
> >>>> +
> >>>> +        if (br < 0) {
> >>>> +            return br;
> >>>> +        }
> >>>> +        offset += br;
> >>>> +    }
> >>>> +
> >>>> +    return (ssize_t)len;
> >>>> +}
> >>>
> >>> I see other read()/pread() calls:
> >>>
> >>> hw/9pfs/9p-local.c:472:            tsize = read(fd, (void *)buf, bufsz);
> >>> hw/vfio/common.c:269:    if (pread(vbasedev->fd, &buf, size,
> >>> region->fd_offset + addr) != size) {
> >>> ...
> >>>
> >>> Maybe the read_large() belongs to "sysemu/os-xxx.h"?
> >>
> >> I think util/osdep.c would be a good fit for this.  To make sure we're 
> > 
> > Yes.
> > 
> >> on the same page though are you proposing converting all pread/read 
> >> calls to a qemu variant or auditing for ones that could potentially take 
> >> a larger size?
> > 
> > Yes, I took some time wondering what other issues you might
> > encounter besides loading a blob into guest memory. I couldn't find
> > many cases. Possibly hw/vfio/. I haven't audited much, only noticed
> > hw/9pfs/9p-local.c and qga/commands-*.c (not sure if relevant), but
> > since we want to fix this, I'd rather try to fix it globally.
> 
> Actually what you suggest is simpler: add qemu_read() / qemu_pread()
> in util/osdep.c and convert all uses, without needing any audit.

Okay, this hasn't worked out too badly. I'll do the same for 
write/pwrite too and then switch all of the callers over with a 
Coccinelle patch, so it'll be a fairly large diff, but a simple one.

We could elect to keep the unwrapped variants for any calls with a 
compile-time constant length, but I think that's probably more 
confusing in the long run.

Thanks,

Jamie


Thread overview: 13+ messages
2021-11-11 14:11 [PATCH 0/2] Fix integer overflows in loading of large images Jamie Iles
2021-11-11 14:11 ` [PATCH 1/2] hw/core/loader: return image sizes as ssize_t Jamie Iles
2021-11-11 14:20   ` Philippe Mathieu-Daudé
2021-11-12  8:25   ` Luc Michel
2021-11-15  4:24   ` Alistair Francis
2022-06-02  1:13   ` Alistair Francis
2021-11-11 14:11 ` [PATCH 2/2] hw/core/loader: workaround read() size limit Jamie Iles
2021-11-11 14:55   ` Philippe Mathieu-Daudé
2021-11-11 15:36     ` Jamie Iles
2021-11-11 15:43       ` Philippe Mathieu-Daudé
2021-11-11 15:55         ` Philippe Mathieu-Daudé
2021-11-11 17:04           ` Jamie Iles [this message]
2021-11-30 15:38             ` Jamie Iles
