qemu-devel.nongnu.org archive mirror
From: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>,
	qemu-devel@nongnu.org,
	"Richard Henderson" <richard.henderson@linaro.org>,
	"David Hildenbrand" <david@redhat.com>,
	"Peter Xu" <peterx@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [PATCH, v2] physmem: avoid bounce buffer too small
Date: Wed, 28 Feb 2024 20:07:47 +0100	[thread overview]
Message-ID: <9c64be5c-25b8-421d-966a-bdac03dfe37c@canonical.com> (raw)
In-Reply-To: <CAFEAcA_Bshua2BQTfOb3D1aF27ayELEt9TcQM8hkQdKaih3xHw@mail.gmail.com>

On 28.02.24 19:39, Peter Maydell wrote:
> On Wed, 28 Feb 2024 at 18:28, Heinrich Schuchardt
> <heinrich.schuchardt@canonical.com> wrote:
>>
>> On 28.02.24 16:06, Philippe Mathieu-Daudé wrote:
>>> Hi Heinrich,
>>>
>>> On 28/2/24 13:59, Heinrich Schuchardt wrote:
>>>> virtqueue_map_desc() is called with values of sz that may exceed
>>>> TARGET_PAGE_SIZE. sz = 0x2800 has been observed.
>>>>
>>>> We only support a single bounce buffer. We have to avoid
>>>> virtqueue_map_desc() calling address_space_map() multiple times.
>>>> Otherwise we see an error
>>>>
>>>>       qemu: virtio: bogus descriptor or out of resources
>>>>
>>>> Increase the minimum size of the bounce buffer to 0x10000 which matches
>>>> the largest value of TARGET_PAGE_SIZE for all architectures.
>>>>
>>>> Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
>>>> ---
>>>> v2:
>>>>      remove unrelated change
>>>> ---
>>>>    system/physmem.c | 8 ++++++--
>>>>    1 file changed, 6 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/system/physmem.c b/system/physmem.c
>>>> index e3ebc19eef..3c82da1c86 100644
>>>> --- a/system/physmem.c
>>>> +++ b/system/physmem.c
>>>> @@ -3151,8 +3151,12 @@ void *address_space_map(AddressSpace *as,
>>>>                *plen = 0;
>>>>                return NULL;
>>>>            }
>>>> -        /* Avoid unbounded allocations */
>>>> -        l = MIN(l, TARGET_PAGE_SIZE);
>>>> +        /*
>>>> +         * There is only one bounce buffer. The largest occurring value
>>>> +         * of parameter sz of virtqueue_map_desc() must fit into the
>>>> +         * bounce buffer.
>>>> +         */
>>>> +        l = MIN(l, 0x10000);
>>>
>>> Please define this magic value. Maybe ANY_TARGET_PAGE_SIZE or
>>> TARGETS_BIGGEST_PAGE_SIZE?
>>>
>>> Then along:
>>>     QEMU_BUILD_BUG_ON(TARGET_PAGE_SIZE > TARGETS_BIGGEST_PAGE_SIZE);
>>
>> Thank you Philippe for reviewing.
>>
>> TARGETS_BIGGEST_PAGE_SIZE does not fit as the value is not driven by the
>> page size.
>> How about MIN_BOUNCE_BUFFER_SIZE?
>> Is include/exec/memory.h the right include for the constant?
>>
>> I don't think that TARGET_PAGE_SIZE has any relevance for setting the
>> bounce buffer size. I only mentioned it to say that we are not
>> decreasing the value on any existing architecture.
>>
>> I don't know why TARGET_PAGE_SIZE ever got into this piece of code.
>> e3127ae0cdcd ("exec: reorganize address_space_map") does not provide a
>> reason for this choice. Maybe Paolo remembers.
> 
> The limitation to a page dates back to commit 6d16c2f88f2a in 2009,
> which was the first implementation of this function. I don't think
> there's a particular reason for that value beyond that it was
> probably a convenient value that was assumed to be likely "big enough".
> 
> I think the idea with this bounce-buffer has always been that this
> isn't really a code path we expected to end up in very often --
> it's supposed to be for when devices are doing DMA, which they
> will typically be doing to memory (backed by host RAM), not
> devices (backed by MMIO and needing a bounce buffer). So the
> whole mechanism is a bit "last fallback to stop things breaking
> entirely".
> 
> The address_space_map() API says that it's allowed to return
> a subset of the range you ask for, so if the virtio code doesn't
> cope with the minimum being set to TARGET_PAGE_SIZE then either
> we need to fix that virtio code or we need to change the API
> of this function. (But I think you will also get a reduced
> range if you try to use it across a boundary between normal
> host-memory-backed RAM and a device MemoryRegion.)

If we allow a bounce buffer only to be used once (via the in_use flag), 
why do we allow only a single bounce buffer?

Could address_space_map() allocate a new bounce buffer on every call and 
address_space_unmap() deallocate it?

Isn't the design with a single bounce buffer bound to fail with a 
multi-threaded client, since collisions are to be expected?

Best regards

Heinrich



Thread overview: 19+ messages
2024-02-28 12:59 [PATCH, v2] physmem: avoid bounce buffer too small Heinrich Schuchardt
2024-02-28 15:06 ` Philippe Mathieu-Daudé
2024-02-28 18:27   ` Heinrich Schuchardt
2024-02-28 18:39     ` Peter Maydell
2024-02-28 19:07       ` Heinrich Schuchardt [this message]
2024-02-29  1:11         ` Peter Xu
2024-02-29 10:22           ` Heinrich Schuchardt
2024-02-29 10:36             ` Mattias Nissler
2024-02-29 10:46             ` Jonathan Cameron via
2024-02-29  9:38         ` Peter Maydell
2024-02-29 10:59           ` Jonathan Cameron via
2024-02-29 11:11             ` Peter Maydell
2024-02-29 11:17               ` Heinrich Schuchardt
2024-02-29 12:34                 ` Peter Maydell
2024-02-29 12:52                   ` Mattias Nissler
2024-02-29 13:19                     ` Peter Maydell
2024-02-29 14:17                   ` Heinrich Schuchardt
2024-02-29 14:52                     ` Peter Maydell
2024-02-29 11:18               ` Mattias Nissler
