From: Alex Williamson <alex@shazbot.org>
To: Stephen Bates <sbates@raithlin.com>
Cc: qemu-devel@nongnu.org, mst@redhat.com,
marcel.apfelbaum@gmail.com, farosas@suse.de, lvivier@redhat.com,
pbonzini@redhat.com, shai@shai.pub, k.jensen@samsung.com
Subject: Re: [PATCH v1] hw/pci: Add PCI MMIO Bridge for device-to-device MMIO
Date: Wed, 3 Dec 2025 16:29:56 -0700
Message-ID: <20251203162956.52d07f7b.alex@shazbot.org>
In-Reply-To: <aTCpGWb7V8R2HVl8@snoc-thinkstation>
On Wed, 3 Dec 2025 14:18:17 -0700
Stephen Bates <sbates@raithlin.com> wrote:
> This patch introduces a PCI MMIO Bridge device that enables PCI devices
> to perform MMIO operations on other PCI devices via command packets. This
> provides software-defined PCIe peer-to-peer (P2P) communication without
> requiring specific hardware topology.
Who is supposed to use this, and why wouldn't they just use bounce
buffering through a guest kernel driver? Is rudimentary data movement
something we really want/need to push into the VMM? The device also seems
inherently insecure, since it lets one device direct MMIO at any other
device's registers without the IOMMU mediating.
> Configuration:
> qemu-system-x86_64 -machine q35 \
> -device pci-mmio-bridge,shadow-gpa=0x80000000,shadow-size=4096
>
> - shadow-gpa: Guest physical address (default: 0x80000000, 0=auto)
> - shadow-size: Buffer size in bytes (default: 4096, min: 4096)
> - poll-interval-ns: Polling interval (default: 1000000 = 1ms)
> - enabled: Enable/disable bridge (default: true)
Wouldn't it make more sense if the buffer were allocated by the guest
driver and programmed at runtime? Polling just adds yet more
questionable VMM overhead; why not ioeventfds?
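For illustration, a rough sketch of what the ioeventfd route could look
like, assuming a recent QEMU tree and a doorbell register at the start of
a BAR on the bridge; the type and helper names (PCIMMIOBridgeState,
pmb_*) and the doorbell offset are invented here, not taken from the
patch:

/*
 * Sketch only: replace the polling timer with an ioeventfd on an assumed
 * doorbell register at offset 0 of BAR0.  bar0 is expected to have been
 * set up with memory_region_init_io() and pci_register_bar() elsewhere.
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/event_notifier.h"
#include "qemu/main-loop.h"
#include "hw/pci/pci_device.h"

#define PMB_DOORBELL_OFFSET 0x0   /* assumed doorbell register in BAR0 */

typedef struct PCIMMIOBridgeState {
    PCIDevice parent_obj;
    MemoryRegion bar0;
    EventNotifier doorbell;
} PCIMMIOBridgeState;

static void pmb_doorbell_notify(EventNotifier *e)
{
    PCIMMIOBridgeState *s = container_of(e, PCIMMIOBridgeState, doorbell);

    event_notifier_test_and_clear(e);
    /* Drain the command queue here instead of in a periodic timer. */
    (void)s;
}

static void pmb_setup_doorbell(PCIMMIOBridgeState *s, Error **errp)
{
    if (event_notifier_init(&s->doorbell, 0) < 0) {
        error_setg(errp, "pci-mmio-bridge: failed to init doorbell notifier");
        return;
    }
    event_notifier_set_handler(&s->doorbell, pmb_doorbell_notify);

    /* With match_data=false, any guest write to the doorbell fires it. */
    memory_region_add_eventfd(&s->bar0, PMB_DOORBELL_OFFSET, 4,
                              false, 0, &s->doorbell);
}

The guest kicks the device with a single 32-bit write to the doorbell
after filling the shadow buffer, and there's nothing left to poll.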
> The bridge exposes shadow buffer information via a vendor-specific PCI config
> space:
>
> Offset 0x40: GPA bits [31:0]
> Offset 0x44: GPA bits [63:32]
> Offset 0x48: Buffer size
> Offset 0x4C: Queue depth
Arbitrary registers like these should be exposed via BARs, or at least in
a vendor-specific capability within config space.
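For comparison, a sketch of the capability route; the payload mirrors the
offsets quoted above, but the lengths, offsets within the capability and
the helper name are illustrative, and pci_add_capability()'s signature
has varied across QEMU releases (older trees also take an Error **):

/*
 * Sketch only: publish shadow-buffer info in a vendor-specific capability
 * (PCI_CAP_ID_VNDR, 0x09) instead of bare registers at fixed config
 * offsets.
 */
#include "qemu/osdep.h"
#include "hw/pci/pci_device.h"

#define PMB_VNDR_CAP_LEN  0x14   /* 4-byte header + 4 dwords (assumed) */

static void pmb_add_vendor_cap(PCIDevice *pdev, uint64_t shadow_gpa,
                               uint32_t shadow_size, uint32_t queue_depth)
{
    uint8_t cap = pci_add_capability(pdev, PCI_CAP_ID_VNDR, 0,
                                     PMB_VNDR_CAP_LEN);
    uint8_t *p = pdev->config + cap;

    p[2] = PMB_VNDR_CAP_LEN;                          /* capability length */
    pci_set_long(p + 0x04, shadow_gpa & 0xffffffff);  /* GPA[31:0]   */
    pci_set_long(p + 0x08, shadow_gpa >> 32);         /* GPA[63:32]  */
    pci_set_long(p + 0x0c, shadow_size);              /* buffer size */
    pci_set_long(p + 0x10, queue_depth);              /* queue depth */
}

The guest driver then finds this by walking the standard capability list
rather than hardcoding config offsets 0x40-0x4C.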
...
> +
> +VFIO can only map guest RAM not emulated PCI MMIO space. And, at the present
The "only guest RAM" part is false; the "not emulated PCI MMIO" part is true.
> +time, VFIO cannot map MMIO space into an IOVA mapping. Therefore the PCI MMIO
It absolutely can map other assigned devices' MMIO into an IOVA mapping.
The legacy type1 support has always had this, and IOMMUFD-based VFIO is
about to gain it via dma-buf sharing as well. Thanks,
Alex
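To make the type1 point concrete, a trimmed userspace sketch: the BAR
offset would really come from VFIO_DEVICE_GET_REGION_INFO, the fds and
the IOVA are placeholders, and most error handling is omitted:

#include <stdint.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/*
 * Map one assigned device's BAR into the container's IOVA space so a
 * second assigned device in the same container can do P2P to it.
 */
int map_peer_bar(int container_fd, int peer_device_fd,
                 off_t bar_offset, size_t bar_size, uint64_t iova)
{
    /* mmap the peer's BAR through its VFIO region. */
    void *bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                     peer_device_fd, bar_offset);
    if (bar == MAP_FAILED) {
        return -1;
    }

    /* Insert that VA into the type1 container at the chosen IOVA. */
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uint64_t)(uintptr_t)bar,
        .iova  = iova,
        .size  = bar_size,
    };
    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}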
Thread overview: 3+ messages
2025-12-03 21:18 [PATCH v1] hw/pci: Add PCI MMIO Bridge for device-to-device MMIO Stephen Bates
2025-12-03 21:29 ` Michael S. Tsirkin
2025-12-03 23:29 ` Alex Williamson [this message]