* [PATCH 0/3] Support message-based DMA in vfio-user server
@ 2023-07-04  8:06 Mattias Nissler
From: Mattias Nissler @ 2023-07-04  8:06 UTC
  To: qemu-devel
  Cc: Elena Ufimtseva, Jagannathan Raman, Philippe Mathieu-Daudé,
	Paolo Bonzini, Peter Xu, stefanha, David Hildenbrand, john.levon,
	Mattias Nissler

This series adds basic support for message-based DMA in qemu's vfio-user
server. This is useful for cases where the client does not provide file
descriptors for accessing system memory via memory mappings. My motivating use
case is to hook up device models as PCIe endpoints to a hardware design. This
works by bridging the PCIe transaction layer to vfio-user: the endpoint does
not access memory directly, but sends memory request TLPs to the hardware
design in order to perform DMA.
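
To make this concrete, here is a minimal sketch of the two access paths in
the server. The helpers dma_region_is_mapped() and vfio_user_send_dma_read()
are made up for illustration; the actual series hooks into qemu's existing
memory access path instead:

/*
 * Illustrative sketch only: a device DMA read that uses the mmap fast
 * path when the client has shared an fd for the region, and falls back
 * to a vfio-user DMA request message (VFIO_USER_DMA_READ) otherwise.
 * Helper names are hypothetical.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* True if [addr, addr+len) is covered by a client-provided mapping. */
bool dma_region_is_mapped(uint64_t addr, size_t len, void **hva);

/* Send a VFIO_USER_DMA_READ message and block for the reply data. */
int vfio_user_send_dma_read(uint64_t addr, void *buf, size_t len);

int device_dma_read(uint64_t addr, void *buf, size_t len)
{
    void *hva;

    if (dma_region_is_mapped(addr, len, &hva)) {
        /* Fast path: access the shared mapping directly. */
        memcpy(buf, hva, len);
        return 0;
    }

    /* Slow path: no mapping available, so the DMA access becomes a
     * message exchange over the vfio-user socket. */
    return vfio_user_send_dma_read(addr, buf, len);
}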

Note that in addition to the 3 commits included, we also need a
subprojects/libvfio-user roll to bring in this bugfix:
https://github.com/nutanix/libvfio-user/commit/bb308a2e8ee9486a4c8b53d8d773f7c8faaeba08
Stefan, can I ask you to kindly update the
https://gitlab.com/qemu-project/libvfio-user mirror? I'll be happy to include
an update to subprojects/libvfio-user.wrap in this series.

Finally, there is some more work required on top of this series to get
message-based DMA to really work well:

* libvfio-user has a long-standing issue where socket communication gets messed
  up when messages are sent from both ends at the same time. See
  https://github.com/nutanix/libvfio-user/issues/279 for more details. I've
  been engaging there and plan to contribute a fix.

* qemu currently breaks down DMA accesses into chunks of at most 8 bytes,
  each of which is handled in a separate vfio-user DMA request message. This
  is quite terrible for large DMA accesses, such as when nvme reads and
  writes page-sized blocks; see the sketch below this list. I would thus like
  to improve qemu to perform larger accesses, at least for indirect memory
  regions. I have something working locally, but since this will likely
  result in more involved surgery and discussion, I am leaving it to be
  addressed in a separate patch.
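
For illustration, a simplified model of today's splitting behavior. This is
not qemu's actual memory-core code, just the shape of the problem:

/*
 * Simplified model: for indirect memory regions, qemu's memory core
 * splits an access into chunks bounded by the region's maximum access
 * size (at most 8 bytes), and each chunk becomes a separate
 * MemoryRegionOps call, i.e. one vfio-user DMA message per chunk.
 */
#include <stddef.h>
#include <stdint.h>

#define MAX_ACCESS_SIZE 8   /* current per-access cap for indirect regions */

/* Each call here corresponds to one vfio-user DMA request message. */
void indirect_region_read(uint64_t addr, void *buf, size_t len);

void dma_read_chunked(uint64_t addr, uint8_t *buf, size_t len)
{
    while (len > 0) {
        size_t chunk = len < MAX_ACCESS_SIZE ? len : MAX_ACCESS_SIZE;

        /* A page-sized (4 KiB) nvme transfer thus costs 512 round
         * trips with the current 8-byte cap. */
        indirect_region_read(addr, buf, chunk);

        addr += chunk;
        buf += chunk;
        len -= chunk;
    }
}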

Mattias Nissler (3):
  softmmu: Support concurrent bounce buffers
  softmmu: Remove DMA unmap notification callback
  vfio-user: Message-based DMA support

 hw/remote/vfio-user-obj.c |  62 ++++++++++++++++--
 softmmu/dma-helpers.c     |  28 --------
 softmmu/physmem.c         | 131 ++++++++------------------------------
 3 files changed, 83 insertions(+), 138 deletions(-)

-- 
2.34.1



