qemu-devel.nongnu.org archive mirror
From: Hanna Czenczek <hreitz@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-block@nongnu.org, qemu-devel@nongnu.org,
	Kevin Wolf <kwolf@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>
Subject: Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
Date: Fri, 9 Jun 2023 09:45:22 +0200	[thread overview]
Message-ID: <b84ee45c-7120-3bdc-ae8a-11a363f7f6be@redhat.com> (raw)
In-Reply-To: <CAFEAcA_9pFYxttbp57QO-diMzqdmE8GdoyGMtMk_L4_a-TXyKA@mail.gmail.com>

On 08.06.23 11:52, Peter Maydell wrote:
> On Mon, 5 Jun 2023 at 16:48, Hanna Czenczek <hreitz@redhat.com> wrote:
>> When processing vectored guest requests that are not aligned to the
>> storage request alignment, we pad them by adding head and/or tail
>> buffers for a read-modify-write cycle.
> Hi; Coverity complains (CID 1512819) that the assert added
> in this commit is always true:
>
>> @@ -1573,26 +1701,34 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
>>   static int bdrv_pad_request(BlockDriverState *bs,
>>                               QEMUIOVector **qiov, size_t *qiov_offset,
>>                               int64_t *offset, int64_t *bytes,
>> +                            bool write,
>>                               BdrvRequestPadding *pad, bool *padded,
>>                               BdrvRequestFlags *flags)
>>   {
>>       int ret;
>> +    struct iovec *sliced_iov;
>> +    int sliced_niov;
>> +    size_t sliced_head, sliced_tail;
>>
>>       bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
>>
>> -    if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
>> +    if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
>>           if (padded) {
>>               *padded = false;
>>           }
>>           return 0;
>>       }
>>
>> -    ret = qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
>> -                                   *qiov, *qiov_offset, *bytes,
>> -                                   pad->buf + pad->buf_len - pad->tail,
>> -                                   pad->tail);
>> +    sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
>> +                                  &sliced_head, &sliced_tail,
>> +                                  &sliced_niov);
>> +
>> +    /* Guaranteed by bdrv_check_qiov_request() */
>> +    assert(*bytes <= SIZE_MAX);
> This one. (The Coverity complaint is because SIZE_MAX for it is
> UINT64_MAX and an int64_t can't possibly be bigger than that.)
>
> Is this because the assert() is deliberately handling the case
> of a 32-bit host where SIZE_MAX might not be UINT64_MAX ? Or was
> the bound supposed to be SSIZE_MAX or INT64_MAX ?

It’s supposed to be SIZE_MAX, because the subsequent 
bdrv_create_padded_qiov() call takes a size_t.  So yes, it is for the 
case where SIZE_MAX < UINT64_MAX, i.e. 32-bit hosts.

> Looking at bdrv_check_qiov_request(), it seems to check bytes
> against BDRV_MAX_LENGTH, which is defined as something somewhere
> near INT64_MAX. So on a 32-bit host I'm not sure that function
> does guarantee that the bytes count is going to be less than
> SIZE_MAX...

Ah, crap.  I was thinking of BDRV_REQUEST_MAX_BYTES, which is checked 
by bdrv_check_request32(); both callers of bdrv_pad_request() run that 
check before calling it.  So bdrv_pad_request() should use the same 
function.

I’ll send a patch to change the bdrv_check_qiov_request() call to 
bdrv_check_request32().  Because both callers (bdrv_co_preadv_part() 
and bdrv_co_pwritev_part()) already call it (the latter only for 
non-zero writes, but bdrv_pad_request() is likewise called only for 
non-zero writes), there should be no change in behavior: thanks to the 
callers’ checks, the asserted condition is in practice already always 
fulfilled.

Hanna



