From: Peter Maydell <peter.maydell@linaro.org>
To: Peter Xu <peterx@redhat.com>
Cc: "Mattias Nissler" <mnissler@rivosinc.com>,
	qemu-devel@nongnu.org,
	"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
	stefanha@redhat.com, "Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"David Hildenbrand" <david@redhat.com>
Subject: Re: [PATCH] softmmu: Support concurrent bounce buffers
Date: Fri, 13 Sep 2024 17:47:08 +0100
Message-ID: <CAFEAcA9kSi1id2SnQWEPyM44GvSH=tPqf-Unhyk92xdy+xZkJg@mail.gmail.com>
In-Reply-To: <ZuRgV7lS75BpDUox@x1n>

On Fri, 13 Sept 2024 at 16:55, Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Sep 12, 2024 at 03:27:55PM +0100, Peter Maydell wrote:
> > Coverity is pretty unhappy about this trick, because it isn't able
> > to recognise that we can figure out the address of 'bounce'
> > from the address of 'bounce->buffer' and free it in the
> > address_space_unmap() code, so it thinks that every use
> > of address_space_map(), pci_dma_map(), etc, is a memory leak.
> > We can mark all those as false positives, of course, but it got
> > me wondering whether maybe we should have this function return
> > a struct that has all the information address_space_unmap()
> > needs rather than relying on it being able to figure it out
> > from the host memory pointer...
>
> Indeed that sounds like a viable option.  Looks like we don't have a lot of
> address_space_map() users.

There are quite a few wrappers of it too, so it's a little hard to count.
We might want to avoid the memory allocation in the common case
by having the caller pass in an ASMapInfo struct to be filled
in rather than having address_space_map() allocate-and-return one.

-- PMM
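
For illustration only, the following sketch spells out what the
"caller passes in an ASMapInfo struct" idea could look like. None of
these identifiers exist in QEMU today: ASMapInfo, its fields, the
*_full function names and the example caller are all hypothetical,
and the sketch is not taken from the patch under discussion.

/*
 * Hypothetical sketch of a caller-provided mapping descriptor.
 * The descriptor carries everything the unmap side needs, so it no
 * longer has to recover the bounce buffer from the host pointer,
 * which is the pattern Coverity cannot follow.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"

typedef struct ASMapInfo {
    void   *host;      /* host pointer the caller reads/writes through */
    hwaddr  len;       /* length actually mapped (may be < requested)  */
    bool    is_write;  /* direction the mapping was created for        */
    void   *bounce;    /* opaque bounce-buffer handle, NULL if direct  */
} ASMapInfo;

/*
 * Like address_space_map(), but fills in 'info' instead of returning
 * a bare host pointer; returns false if the mapping failed.
 */
bool address_space_map_full(AddressSpace *as, hwaddr addr, hwaddr *plen,
                            bool is_write, MemTxAttrs attrs,
                            ASMapInfo *info);

/*
 * Unmap using the descriptor.  The bounce-buffer handle is passed
 * back explicitly, so a static analyser can see it being released.
 */
void address_space_unmap_full(AddressSpace *as, ASMapInfo *info,
                              hwaddr access_len);

/* Hypothetical caller, mirroring the current map/unmap pattern. */
static void example_dma_read(AddressSpace *as, hwaddr addr,
                             void *dest, hwaddr size)
{
    ASMapInfo info;
    hwaddr len = size;

    if (address_space_map_full(as, addr, &len, false,
                               MEMTXATTRS_UNSPECIFIED, &info)) {
        memcpy(dest, info.host, len);
        address_space_unmap_full(as, &info, len);
    }
}

Because the struct lives on the caller's stack, this avoids the
allocation that an allocate-and-return variant would add to the
common (non-bounce) case, which is the point made above.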


Thread overview: 20+ messages
2024-08-19 13:54 [PATCH] softmmu: Support concurrent bounce buffers Mattias Nissler
2024-08-21 18:24 ` Peter Xu
2024-09-10 14:53 ` Michael S. Tsirkin
2024-09-10 15:44   ` Peter Maydell
2024-09-10 16:10     ` Mattias Nissler
2024-09-10 16:39       ` Michael S. Tsirkin
2024-09-10 21:36         ` Mattias Nissler
2024-09-11 10:24           ` Michael S. Tsirkin
2024-09-11 11:17             ` Mattias Nissler
2024-09-12 14:27 ` Peter Maydell
2024-09-13 15:55   ` Peter Xu
2024-09-13 16:47     ` Peter Maydell [this message]
2024-09-16  7:35       ` Mattias Nissler
2024-09-16  9:05         ` Peter Maydell
2024-09-16  9:29           ` Mattias Nissler
2024-09-25 10:03 ` Michael Tokarev
2024-09-25 10:23   ` Mattias Nissler
2024-09-26  7:58     ` Michael Tokarev
2024-09-26  8:12       ` Michael S. Tsirkin
2024-10-25  5:59         ` Michael Tokarev
