From: "Michael S. Tsirkin" <mst@redhat.com>
To: David Woodhouse <dwmw2@infradead.org>
Cc: "Zhu Lingshan" <lingshan.zhu@amd.com>,
	virtio-comment@lists.linux.dev, hch@infradead.org,
	"Claire Chang" <tientzu@chromium.org>,
	linux-devicetree <devicetree@vger.kernel.org>,
	"Rob Herring" <robh+dt@kernel.org>,
	"Jörg Roedel" <joro@8bytes.org>,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	graf@amazon.de
Subject: Re: [RFC PATCH 1/3] content: Add VIRTIO_F_SWIOTLB to negotiate use of SWIOTLB bounce buffers
Date: Thu, 3 Apr 2025 09:19:09 -0400	[thread overview]
Message-ID: <20250403091001-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <0261dfd09a5c548c1a0f56c89c7302e9701b630d.camel@infradead.org>

On Thu, Apr 03, 2025 at 09:22:57AM +0100, David Woodhouse wrote:
> On Thu, 2025-04-03 at 04:13 -0400, Michael S. Tsirkin wrote:
> > On Thu, Apr 03, 2025 at 08:54:45AM +0100, David Woodhouse wrote:
> > > On Thu, 2025-04-03 at 03:34 -0400, Michael S. Tsirkin wrote:
> > > > 
> > > > Indeed I personally do not exactly get why implement a virtual system
> > > > without an IOMMU when virtio-iommu is available.
> > > > 
> > > > I have a feeling it's about lack of windows drivers for virtio-iommu
> > > > at this point.
> > > 
> > > And a pKVM (etc.) implementation of virtio-iommu which would allow the
> > > *trusted* part of the hypervisor to know which guest memory should be
> > > shared with the VMM implementing the virtio device models?
> > 
> > Is there a blocker here?
> 
> Only the amount of complexity in what should be a minimal Trusted
> Compute Base. (And ideally subject to formal methods of proving its
> correctness too.)

Shrug. It does not have to be complex. It could be a "simple mode" for
virtio-iommu where it just accepts one buffer. No?
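
To make that concrete, here is a rough, purely illustrative sketch of what
such a trusted-side "simple mode" could reduce to: accept exactly one
mapping per endpoint and reject everything else. The MAP fields
(virt_start/virt_end/phys_start) mirror the virtio-iommu request, but the
handler, its policy and every name below are assumptions made for the sake
of the example, not spec text or existing code.

#include <stdbool.h>
#include <stdint.h>

struct simple_iommu_ep {
        bool     mapped;        /* has the single allowed mapping been installed? */
        uint64_t virt_start;
        uint64_t virt_end;
        uint64_t phys_start;
};

/*
 * Hypothetical handler for a virtio-iommu MAP request in "simple mode".
 * Returns 0 on success; a real implementation would translate the
 * failure into a virtio-iommu error status in the request tail.
 */
static int simple_mode_map(struct simple_iommu_ep *ep,
                           uint64_t virt_start, uint64_t virt_end,
                           uint64_t phys_start)
{
        if (ep->mapped)
                return -1;      /* only one buffer is ever accepted */

        ep->mapped     = true;
        ep->virt_start = virt_start;
        ep->virt_end   = virt_end;
        ep->phys_start = phys_start;

        /*
         * share_with_vmm(phys_start, virt_end - virt_start + 1);
         * -- hypothetical hook where the trusted hypervisor marks this
         *    range as shareable with the VMM.
         */
        return 0;
}

The idea is that the TCB then tracks a single region per endpoint and
nothing more, while still looking like virtio-iommu to the guest.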

> And frankly, if we were going to accept a virtio-iommu in the TCB why
> not just implement enough virtqueue knowledge to build something where
> the trusted part just snoops on the *actual* e.g. virtio-net device to
> know which buffers the VMM was *invited* to access, and facilitate
> that?

Because it's awful? Buffers are a datapath thing. Stay away from there.

> We looked at doing that. It's awful.

Indeed.

> > > You'd also end up in a situation where you have a virtio-iommu for some
> > > devices, and a real two-stage IOMMU (e.g. SMMU or AMD's vIOMMU) for
> > > other devices. Are guest operating systems going to cope well with
> > > that?
> > 
> > They should. In particular because systems with multiple IOMMUs already
> > exist.
> > 
> > > Do the available discovery mechanisms for all the relevant IOMMUs
> > > even *allow* for that to be expressed?
> > 
> > I think yes. But, it's been a while since I played with this, let me
> > check what works, what does not, and get back to you on this.
> 
> Even if it could work in theory, I'll be astonished if it actually
> works in practice across a wide set of operating systems, and if it
> *ever* works for Windows.

Well, it used to work. I won't have time to play with it until sometime
next week, if it's relevant. If I poke at my Windows system, I see

> Compared with the simple option of presenting a device which
> conceptually doesn't even *do* DMA, which is confined to its own
> modular device driver... 

I'm not (yet) nacking this hack, though I already heartily dislike the
fact that it is mostly a PV-only thing: it cannot be offloaded to
a real device efficiently *and* it requires copies to move data
between devices. But let's see if more issues surface.
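
To spell out the copy cost: with a device-side SWIOTLB window the driver
has to bounce every buffer through that window instead of handing the
device an address in guest memory. A minimal driver-side sketch follows;
the names and the bump allocator are hypothetical, not the proposed spec
text or any existing driver.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct swiotlb_window {
        void   *base;   /* driver mapping of the device-provided buffer region */
        size_t  size;
        size_t  next;   /* trivial bump allocator, enough for the sketch */
};

/*
 * Copy a buffer into the window and return the window-relative offset
 * that would go into the descriptor.  The memcpy() is the extra copy
 * that a device doing real DMA to guest memory would not need.
 */
static int64_t swiotlb_bounce_tx(struct swiotlb_window *w,
                                 const void *data, size_t len)
{
        size_t off = w->next;

        if (len > w->size - off)
                return -1;      /* window full: recycle completed slots first */

        memcpy((uint8_t *)w->base + off, data, len);
        w->next = off + len;
        return (int64_t)off;
}

The receive path pays the same copy in the other direction, which is why
this stays a PV-only scheme.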


-- 
MST


Thread overview: 77+ messages
2025-04-02 11:04 [RFC PATCH 0/3] Add Software IOTLB bounce buffer support David Woodhouse
2025-04-02 11:04 ` [RFC PATCH 1/3] content: Add VIRTIO_F_SWIOTLB to negotiate use of SWIOTLB bounce buffers David Woodhouse
2025-04-02 14:54   ` Michael S. Tsirkin
2025-04-02 15:12     ` David Woodhouse
2025-04-02 15:20       ` Michael S. Tsirkin
2025-04-02 15:47         ` David Woodhouse
2025-04-02 15:51           ` Michael S. Tsirkin
2025-04-02 16:16             ` David Woodhouse
2025-04-02 16:43               ` Michael S. Tsirkin
2025-04-02 17:10                 ` David Woodhouse
2025-04-03  7:29                   ` Christoph Hellwig
2025-04-03  7:37                     ` David Woodhouse
2025-04-03  7:39                       ` Christoph Hellwig
2025-04-03  7:43                         ` Michael S. Tsirkin
2025-04-03  7:44                           ` Christoph Hellwig
2025-04-03  8:10                         ` David Woodhouse
2025-04-04  6:29                           ` Christoph Hellwig
2025-04-04  6:39                             ` David Woodhouse
2025-04-04  6:44                               ` Christoph Hellwig
2025-04-04  6:45                                 ` Christoph Hellwig
2025-04-03  7:41                       ` Michael S. Tsirkin
2025-04-03  7:31                   ` Michael S. Tsirkin
2025-04-03  7:45                     ` David Woodhouse
2025-04-03  8:06                       ` Michael S. Tsirkin
2025-04-03  7:13   ` Zhu Lingshan
2025-04-03  7:24     ` David Woodhouse
2025-04-03  7:31       ` Zhu Lingshan
2025-04-04 10:27         ` David Woodhouse
2025-04-03  7:34     ` Michael S. Tsirkin
2025-04-03  7:54       ` David Woodhouse
2025-04-03  8:13         ` Michael S. Tsirkin
2025-04-03  8:22           ` David Woodhouse
2025-04-03  8:34             ` Zhu Lingshan
2025-04-03  8:57               ` David Woodhouse
2025-04-06  6:23                 ` Zhu Lingshan
2025-04-03 13:19             ` Michael S. Tsirkin [this message]
2025-04-03  7:24   ` Christoph Hellwig
2025-04-03  7:28     ` David Woodhouse
2025-04-03  7:35       ` Christoph Hellwig
2025-04-03  8:06         ` David Woodhouse
2025-04-04  6:35           ` Christoph Hellwig
2025-04-04  7:50             ` David Woodhouse
2025-04-04  8:09               ` Michael S. Tsirkin
2025-04-04  8:16                 ` David Woodhouse
2025-04-04  8:32                   ` Michael S. Tsirkin
2025-04-04  9:27                     ` David Woodhouse
2025-04-04 10:15                       ` David Woodhouse
2025-04-04 10:37                         ` Michael S. Tsirkin
2025-04-04 11:15                           ` David Woodhouse
2025-04-06 18:28                             ` Michael S. Tsirkin
2025-04-06 18:47                               ` David Woodhouse
2025-04-07  7:30                             ` Christoph Hellwig
2025-04-07  7:54                               ` David Woodhouse
2025-04-07  9:05                                 ` Christoph Hellwig
2025-04-07 10:09                                   ` David Woodhouse
2025-04-07 14:06                                     ` Christoph Hellwig
2025-04-07 14:59                                       ` David Woodhouse
2025-04-07 12:14                             ` Michael S. Tsirkin
2025-04-07 12:46                               ` David Woodhouse
2025-04-07  7:26                           ` Christoph Hellwig
2025-04-07  7:23                         ` Christoph Hellwig
2025-04-07  7:19                   ` Christoph Hellwig
2025-04-04  8:23               ` Christoph Hellwig
2025-04-04  9:39                 ` David Woodhouse
2025-04-07  7:34                   ` Christoph Hellwig
2025-04-07  9:40                     ` David Woodhouse
2025-04-02 11:04 ` [RFC PATCH 2/3] transport-mmio: Document restricted-dma-pool SWIOTLB bounce buffer David Woodhouse
2025-04-02 11:04 ` [RFC PATCH 3/3] transport-pci: Add SWIOTLB bounce buffer capability David Woodhouse
2025-04-02 14:58   ` Michael S. Tsirkin
2025-04-02 15:21     ` David Woodhouse
2025-04-03  7:27   ` Michael S. Tsirkin
2025-04-03  7:36     ` Zhu Lingshan
2025-04-03  7:37       ` Michael S. Tsirkin
2025-04-03  8:12         ` Zhu Lingshan
2025-04-03  8:16           ` Michael S. Tsirkin
2025-04-03  8:37             ` Zhu Lingshan
2025-04-03  8:44     ` David Woodhouse
