From: Jonathan Cameron via <qemu-devel@nongnu.org>
To: Mattias Nissler <mnissler@rivosinc.com>
Cc: stefanha@redhat.com, jag.raman@oracle.com, qemu-devel@nongnu.org,
peterx@redhat.com, john.levon@nutanix.com,
"David Hildenbrand" <david@redhat.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Elena Ufimtseva" <elena.ufimtseva@oracle.com>,
"Richard Henderson" <richard.henderson@linaro.org>
Subject: Re: [PATCH v6 0/5] Support message-based DMA in vfio-user server
Date: Fri, 9 Feb 2024 17:39:39 +0000
Message-ID: <20240209173939.0000538f@huawei.com>
In-Reply-To: <20231101131611.775299-1-mnissler@rivosinc.com>
On Wed, 1 Nov 2023 06:16:06 -0700
Mattias Nissler <mnissler@rivosinc.com> wrote:
> This series adds basic support for message-based DMA in qemu's vfio-user
> server. This is useful for cases where the client does not provide file
> descriptors for accessing system memory via memory mappings. My motivating use
> case is to hook up device models as PCIe endpoints to a hardware design. This
> works by bridging the PCIe transaction layer to vfio-user, and the endpoint
> does not access memory directly, but sends memory request TLPs to the hardware
> design in order to perform DMA.
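For readers who haven't followed the vfio-user work: my mental model of patch 4
is roughly the sketch below. Client memory that arrives without a file descriptor
gets registered as an I/O ("indirect") MemoryRegion, so every softmmu access to it
lands in read/write callbacks and can be forwarded to the client as a DMA request
message instead of going through a mapping. The helper names here
(send_dma_read_message(), send_dma_write_message(), register_message_dma_region())
are made up for illustration; the real code goes through libvfio-user.

  #include "qemu/osdep.h"
  #include "exec/memory.h"

  /* Hypothetical placeholders for the VFIO_USER_DMA_READ/WRITE message
   * exchange; the actual patch uses libvfio-user for this. */
  static uint64_t send_dma_read_message(void *ctx, hwaddr addr, unsigned size)
  {
      return 0; /* stub */
  }

  static void send_dma_write_message(void *ctx, hwaddr addr, uint64_t val,
                                     unsigned size)
  {
  }

  /* Every softmmu access to the region ends up in these callbacks ... */
  static uint64_t msg_dma_read(void *opaque, hwaddr addr, unsigned size)
  {
      /* ... and becomes one DMA read request to the vfio-user client. */
      return send_dma_read_message(opaque, addr, size);
  }

  static void msg_dma_write(void *opaque, hwaddr addr, uint64_t val,
                            unsigned size)
  {
      send_dma_write_message(opaque, addr, val, size);
  }

  static const MemoryRegionOps msg_dma_ops = {
      .read = msg_dma_read,
      .write = msg_dma_write,
      .endianness = DEVICE_NATIVE_ENDIAN, /* endianness glossed over here */
      .valid.max_access_size = 8, /* hence the 8-byte chunking noted below */
  };

  /* On a DMA map request without an fd, plug in an I/O region rather than
   * mmap()ed RAM, so accesses become message-based. */
  void register_message_dma_region(MemoryRegion *sysmem, void *ctx,
                                   hwaddr offset, uint64_t size)
  {
      MemoryRegion *mr = g_new0(MemoryRegion, 1);

      memory_region_init_io(mr, NULL, &msg_dma_ops, ctx, "msg-dma", size);
      memory_region_add_subregion(sysmem, offset, mr);
  }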
>
> Note that more work is needed to make message-based DMA work well: qemu
> currently breaks down DMA accesses into chunks of size 8 bytes at maximum, each
> of which will be handled in a separate vfio-user DMA request message. This is
> quite terrible for large DMA accesses, such as when nvme reads and writes
> page-sized blocks. Thus, I would like to improve qemu to be able to
> perform larger accesses, at least for indirect memory regions. I have something
> working locally, but since this will likely result in more involved surgery and
> discussion, I am leaving this to be addressed in a separate patch.
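To put a rough number on the current cost, using the 8-byte ceiling mentioned
above: a single page-sized transfer by the nvme model becomes

  4096 bytes / 8 bytes per DMA request message = 512 request/response round trips

on the vfio-user socket, so batching accesses to indirect regions should be a
substantial win.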
>
Hi Mattias,
I was wondering what the status of this patch set is - it seems no outstanding
issues have been raised?
I'd run into a similar problem with multiple DMA mappings needing the bounce buffer
when virtio-blk-pci accesses the emulated CXL memory.
In that particular case virtio-blk uses the "memory" address space, but
otherwise your first 2 patches work for me as well, so I'd definitely like
to see those get merged!
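For context, the path I'm hitting is the bounce-buffer fallback in
address_space_map(); roughly the following caller-side pattern (a minimal
sketch, not the actual virtio-blk/dma-helpers code, and dma_rw_example() is a
made-up name):

  #include "qemu/osdep.h"
  #include "exec/memory.h"

  static void dma_rw_example(AddressSpace *as, hwaddr addr, void *data,
                             hwaddr size, bool is_write)
  {
      hwaddr len = size;
      /* For memory that isn't directly accessible RAM (e.g. an indirect
       * region), this falls back to a bounce buffer. Without patch 2 only
       * one bounce-buffer mapping can be in flight at a time, so a second
       * concurrent mapping returns NULL. */
      void *buf = address_space_map(as, addr, &len, is_write,
                                    MEMTXATTRS_UNSPECIFIED);
      if (!buf) {
          /* Callers register a map-client callback and retry once a
           * buffer is released; patch 2 keeps those callbacks in place. */
          return;
      }
      if (is_write) {
          memcpy(buf, data, len);
      } else {
          memcpy(data, buf, len);
      }
      address_space_unmap(as, buf, len, is_write, len);
  }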
Thanks,
Jonathan
> Changes from v1:
>
> * Address Stefan's review comments. In particular, enforce an allocation limit
> and don't drop the map client callbacks given that map requests can fail when
> hitting size limits.
>
> * libvfio-user version bump now included in the series.
>
> * Tested as well on big-endian s390x. This uncovered another byte order issue
> in vfio-user server code that I've included a fix for.
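(Side note for anyone hitting this on big-endian hosts: PCI config space is
little-endian by definition, so values have to be converted explicitly rather
than copied as host-endian words. A minimal sketch of the pattern using QEMU's
bswap helpers; the wrapper names are made up and this is not the exact hunk
from patch 5:)

  #include "qemu/osdep.h"
  #include "qemu/bswap.h"

  /* Load a config-space dword (stored little-endian) as a host-endian value. */
  static uint32_t cfg_read_dword(const uint8_t *cfg, unsigned off)
  {
      return ldl_le_p(cfg + off);
  }

  /* Store a host-endian value as a little-endian config-space dword. */
  static void cfg_write_dword(uint8_t *cfg, unsigned off, uint32_t val)
  {
      stl_le_p(cfg + off, val);
  }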
>
> Changes from v2:
>
> * Add a preparatory patch to make bounce buffering an AddressSpace-specific
> concept.
>
> * The total buffer size limit parameter is now per AddressSpace and can be
> configured for PCIDevice via a property.
>
> * Store a magic value in the first bytes of the bounce buffer struct as a best-effort
> measure to detect invalid pointers in address_space_unmap.
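(As I read it, the magic value is simply a sentinel at the start of the
per-mapping struct so that address_space_unmap() can cheaply reject pointers
that never came from a bounce-buffer mapping. A rough sketch of the idea; the
struct and value below are illustrative, not the actual code in physmem.c:)

  #include "qemu/osdep.h"
  #include "exec/hwaddr.h"

  #define BOUNCE_BUFFER_MAGIC 0xb4b0b4b0b4b0b4b0ULL /* arbitrary sentinel */

  typedef struct BounceBufferSketch {
      uint64_t magic;   /* first field: checked before trusting the rest */
      hwaddr addr;
      hwaddr len;
      uint8_t data[];   /* host pointer handed back by address_space_map() */
  } BounceBufferSketch;

  /* Best-effort check in the unmap path: a pointer that was not produced by
   * a bounce-buffer mapping will (very likely) not carry the magic value. */
  static bool buffer_is_bounce_buffer(void *host_ptr)
  {
      BounceBufferSketch *b = (BounceBufferSketch *)
          ((char *)host_ptr - offsetof(BounceBufferSketch, data));

      return b->magic == BOUNCE_BUFFER_MAGIC;
  }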
>
> Changes from v3:
>
> * libvfio-user now supports twin-socket mode which uses separate sockets for
> client->server and server->client commands, respectively. This addresses the
> concurrent command bug triggered by server->client DMA access commands. See
> https://github.com/nutanix/libvfio-user/issues/279 for details.
>
> * Add missing teardown code in do_address_space_destroy.
>
> * Fix bounce buffer size bookkeeping race condition.
>
> * Generate unmap notification callbacks unconditionally.
>
> * Some cosmetic fixes.
>
> Changes from v4:
>
> * Fix an accidentally dropped memory_region_unref; control flow is restored to
> match the previous code to simplify review.
>
> * Some cosmetic fixes.
>
> Changes from v5:
>
> * Unregister indirect memory region in libvfio-user dma_unregister callback.
>
> I believe all patches in the series have been reviewed appropriately, so my
> hope is that this will be the final iteration. Stefan, Peter, Jag, thanks for
> your feedback, let me know if there's anything else needed from my side before
> this can get merged.
>
> Mattias Nissler (5):
> softmmu: Per-AddressSpace bounce buffering
> softmmu: Support concurrent bounce buffers
> Update subprojects/libvfio-user
> vfio-user: Message-based DMA support
> vfio-user: Fix config space access byte order
>
> hw/pci/pci.c                  |   8 ++
> hw/remote/trace-events        |   2 +
> hw/remote/vfio-user-obj.c     | 104 +++++++++++++++++++++----
> include/exec/cpu-common.h     |   2 -
> include/exec/memory.h         |  41 +++++++++-
> include/hw/pci/pci_device.h   |   3 +
> subprojects/libvfio-user.wrap |   2 +-
> system/dma-helpers.c          |   4 +-
> system/memory.c               |   8 ++
> system/physmem.c              | 141 ++++++++++++++++++----------------
> 10 files changed, 226 insertions(+), 89 deletions(-)
>