From: David Hildenbrand <david@redhat.com>
To: Mattias Nissler <mnissler@rivosinc.com>, qemu-devel@nongnu.org
Cc: "Elena Ufimtseva" <elena.ufimtseva@oracle.com>,
"Jagannathan Raman" <jag.raman@oracle.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
stefanha@redhat.com, john.levon@nutanix.com
Subject: Re: [PATCH 0/3] Support message-based DMA in vfio-user server
Date: Tue, 4 Jul 2023 10:20:50 +0200
Message-ID: <f4e6d2d0-d9e7-8779-8159-7b61546fd210@redhat.com>
In-Reply-To: <20230704080628.852525-1-mnissler@rivosinc.com>
On 04.07.23 10:06, Mattias Nissler wrote:
> This series adds basic support for message-based DMA in qemu's vfio-user
> server. This is useful for cases where the client does not provide file
> descriptors for accessing system memory via memory mappings. My motivating use
> case is to hook up device models as PCIe endpoints to a hardware design. This
> works by bridging the PCIe transaction layer to vfio-user, and the endpoint
> does not access memory directly, but sends memory request TLPs to the hardware
> design in order to perform DMA.
>
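Neat use case. Just to check my understanding of the bridging: is it roughly
like the standalone sketch below, where each memory request TLP coming out of
the endpoint turns into one message-based DMA request on the vfio-user socket,
instead of a load/store on mmap()ed memory? (All names in the sketch are made
up for illustration; this is not the actual bridge code.)

#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct mem_req_tlp {     /* simplified PCIe memory request TLP */
    uint64_t addr;       /* target bus address */
    size_t   len;        /* payload length in bytes */
    int      is_write;   /* 0 = memory read, 1 = memory write */
};

/* Stand-in for "send one VFIO_USER_DMA_READ/WRITE message over the socket
 * and wait for the reply carrying the data"; here it just zero-fills reads
 * so the sketch stays self-contained. */
static int dma_message_rw(uint64_t addr, void *buf, size_t len, int is_write)
{
    (void)addr;
    if (!is_write) {
        memset(buf, 0, len);
    }
    return 0;
}

/* One TLP in, one DMA request message out -- no memory mapping involved. */
static int bridge_tlp(const struct mem_req_tlp *tlp, void *payload)
{
    return dma_message_rw(tlp->addr, payload, tlp->len, tlp->is_write);
}

int main(void)
{
    uint8_t payload[64];
    struct mem_req_tlp tlp = { .addr = 0x1000, .len = sizeof(payload),
                               .is_write = 0 };
    return bridge_tlp(&tlp, payload);
}
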
> Note that in addition to the 3 commits included, we also need a
> subprojects/libvfio-user roll to bring in this bugfix:
> https://github.com/nutanix/libvfio-user/commit/bb308a2e8ee9486a4c8b53d8d773f7c8faaeba08
> Stefan, can I ask you to kindly update the
> https://gitlab.com/qemu-project/libvfio-user mirror? I'll be happy to include
> an update to subprojects/libvfio-user.wrap in this series.
>
> Finally, there is some more work required on top of this series to get
> message-based DMA to really work well:
>
> * libvfio-user has a long-standing issue where socket communication gets messed
>   up when messages are sent from both ends at the same time. See
>   https://github.com/nutanix/libvfio-user/issues/279 for more details. I've
>   been engaging there and plan to contribute a fix.
>
> * qemu currently breaks down DMA accesses into chunks of at most 8 bytes,
>   each of which is handled in a separate vfio-user DMA request message. This
>   is quite terrible for large DMA accesses, such as when nvme reads and
>   writes page-sized blocks. Thus, I would like to improve qemu to be able to
>   perform larger accesses, at least for indirect memory regions. I have
>   something working locally, but since this will likely result in more
>   involved surgery and discussion, I am leaving it to be addressed in a
>   separate patch.
>
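Regarding the 8-byte chunking point above: just to make the cost concrete,
here is a minimal standalone sketch (the chunking loop and helper name are
simplified assumptions for illustration, not the actual QEMU code) of why a
page-sized transfer currently turns into hundreds of request messages:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for one dispatch into an indirect memory region; each call
 * would end up as one vfio-user DMA request message on the wire. */
static size_t dispatch_chunk(uint64_t addr, size_t len, size_t max_access)
{
    (void)addr;
    return len < max_access ? len : max_access;
}

int main(void)
{
    uint64_t addr = 0x100000;
    size_t len = 4096;          /* one page-sized nvme block */
    size_t messages = 0;

    while (len > 0) {
        size_t l = dispatch_chunk(addr, len, 8); /* current 8-byte cap */
        addr += l;
        len -= l;
        messages++;
    }
    printf("%zu DMA request messages for a 4096-byte transfer\n", messages);
    return 0;
}

With larger accesses allowed on indirect memory regions, that loop would
collapse to one (or a few) messages per transfer, so improving this sounds
very welcome.
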
I remember asking Stefan in the past whether there would be a way to avoid
that mmap dance (and also handle uffd etc. more easily) for vhost-user
(especially virtiofsd) by making only QEMU access guest memory.

That could make memory-backend-ram support something like vhost-user,
avoiding shared memory and everything that comes with it (e.g., no KSM, no
shared zeropage).

So this series tackles vfio-user; does anybody know what it would take to
get something similar running for vhost-user?
--
Cheers,
David / dhildenb