From: Alex Williamson <alex.williamson@redhat.com>
To: tugouxp <13824125580@163.com>
Cc: qemu-devel@nongnu.org
Subject: Re: About VFIO Device Pass-through on Qemu.
Date: Fri, 5 Jul 2024 08:12:37 -0600	[thread overview]
Message-ID: <20240705081237.41208949.alex.williamson@redhat.com> (raw)
In-Reply-To: <17672e.9050.190822743a6.Coremail.13824125580@163.com>

On Fri, 5 Jul 2024 17:08:49 +0800 (CST)
tugouxp <13824125580@163.com> wrote:

> Hi folks:
> 
> 
>     I have a question about VFIO device pass-through usage scenarios,
> PCI device pass-through for example. Must the GPA that host physical
> memory is mapped to for the guest vCPU through the MMU be identical
> to the IOVA that the same host physical memory is mapped to for the
> guest device through the IOMMU?

If I'm parsing the question correctly: without a vIOMMU the device
operates in the GPA address space, so IOVA == GPA.
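
As a rough sketch of what that identity looks like through the type1
VFIO API (the container fd, addresses, and sizes below are
illustrative assumptions, not values from this thread): when mapping a
guest RAM block for DMA, the VMM simply passes the block's GPA as the
IOVA.

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Map one guest RAM block so the device's IOVA equals the GPA. */
  static int map_guest_ram(int container_fd, void *hva,
                           uint64_t gpa, uint64_t size)
  {
          struct vfio_iommu_type1_dma_map map;

          memset(&map, 0, sizeof(map));
          map.argsz = sizeof(map);
          map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
          map.vaddr = (uintptr_t)hva; /* HVA backing this guest RAM */
          map.iova  = gpa;            /* no vIOMMU: IOVA == GPA     */
          map.size  = size;

          return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
  }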


> If so, that would be convenient for driver developers, because they
> could then share data physical addresses between the device and
> shared memory. But is this true? Is this how pass-through is meant
> to be used?

If you're asking about a shared DMA memory buffer between devices, yes,
without a vIOMMU the buffer GPA (IOVA) would be the same between
devices.  Also note that device MMIO is mapped into the device address
space, so depending on the underlying host support for peer-to-peer DMA
there might be a working "direct" path between devices (where "direct"
means bounced through the IOMMU).
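
To make the MMIO point concrete, here is a hedged sketch (bar_index,
bar_gpa, and the omitted error handling are illustrative): a VMM can
mmap() a BAR through the VFIO device fd and also install that mapping
in the container at the BAR's guest address, so that a peer device's
DMA to that GPA can reach the other device's MMIO, host IOMMU and
root-port support permitting.

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/vfio.h>

  static void *map_bar_for_p2p(int device_fd, int container_fd,
                               unsigned int bar_index, uint64_t bar_gpa)
  {
          struct vfio_region_info reg;
          struct vfio_iommu_type1_dma_map map;
          void *mmio;

          memset(&reg, 0, sizeof(reg));
          reg.argsz = sizeof(reg);
          reg.index = VFIO_PCI_BAR0_REGION_INDEX + bar_index;
          if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0 ||
              !(reg.flags & VFIO_REGION_INFO_FLAG_MMAP))
                  return NULL;

          /* Map the BAR into the VMM's address space... */
          mmio = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, device_fd, reg.offset);
          if (mmio == MAP_FAILED)
                  return NULL;

          /* ...and into the IOMMU at the BAR's GPA, so a peer's DMA
           * can reach it ("direct", i.e. bounced through the IOMMU). */
          memset(&map, 0, sizeof(map));
          map.argsz = sizeof(map);
          map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
          map.vaddr = (uintptr_t)mmio;
          map.iova  = bar_gpa;
          map.size  = reg.size;
          ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
          return mmio;
  }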

> My thought: it would be very convenient for driver developers if
> GPA == IOVA, because both are "physical" addresses in the guest and
> offer a consistent view of memory resources to the vCPU and the
> vDevice. But is this true?
> 
> 
> VCPU:
>   GVA --(MMU)--> GPA --(+offset)--> HVA --(MMU)--> HPA
> Device in Guest:
>   IOVA --(IOMMU)--> HPA

Yes.  In fact, the only way we can do transparent device assignment
without a paravirtualized DMA layer is to use the IOMMU to map the
device into the GPA address space.  Also, the fixed IOVA/GPA-to-HPA
mapping in the IOMMU is what necessitates memory pinning.  Thanks,

Alex
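
For completeness, a minimal sketch of the type1 container setup that
precedes maps like the ones above, adapted from the kernel's VFIO
documentation (the group number 26 is hypothetical; find the real one
under /sys/bus/pci/devices/<addr>/iommu_group):

  #include <fcntl.h>
  #include <assert.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  int container = open("/dev/vfio/vfio", O_RDWR);
  int group = open("/dev/vfio/26", O_RDWR);
  struct vfio_group_status status = { .argsz = sizeof(status) };

  assert(ioctl(container, VFIO_GET_API_VERSION) == VFIO_API_VERSION);
  assert(ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU));

  ioctl(group, VFIO_GROUP_GET_STATUS, &status);
  assert(status.flags & VFIO_GROUP_FLAGS_VIABLE);

  ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
  ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

  /* Every subsequent VFIO_IOMMU_MAP_DMA pins the mapped pages for the
   * life of the mapping -- the fixed IOVA/GPA-to-HPA translation is
   * why assigned-device guest memory cannot be swapped or migrated by
   * the host. */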


