public inbox for kvm@vger.kernel.org
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Suthikulpanit, Suravee" <suravee.suthikulpanit@amd.com>
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	kvm@vger.kernel.org, joro@8bytes.org, robin.murphy@arm.com,
	yi.l.liu@intel.com, alex.williamson@redhat.com,
	nicolinc@nvidia.com, baolu.lu@linux.intel.com,
	eric.auger@redhat.com, pandoh@google.com, kumaranand@google.com,
	jon.grimm@amd.com, santosh.shukla@amd.com, vasant.hegde@amd.com,
	jay.chen@amd.com, joseph.chung@amd.com
Subject: Re: [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table
Date: Fri, 23 Jun 2023 19:56:58 -0300	[thread overview]
Message-ID: <ZJYjOor3TKSeIo7F@nvidia.com> (raw)
In-Reply-To: <b3c85550-a7f6-a0d9-74a4-f98c8251b80e@amd.com>

On Fri, Jun 23, 2023 at 03:05:06PM -0700, Suthikulpanit, Suravee wrote:

> For example, an AMD IOMMU instance is normally listed as a PCI device (e.g.
> PCI ID 00:00.2). To set up the IOMMU PAS for this IOMMU instance, the IOMMU
> driver allocates an IOMMU v1 page table for this device, which contains the
> PAS mapping.

So it is just system dram?
 
> The IOMMU hardware uses the PAS for storing Guest IOMMU information such as
> Guest MMIOs, the DevID Mapping Table, the DomID Mapping Table, and the Guest
> Command/Event/PPR logs.

Why does it have to be in kernel memory?

Why not store the whole thing in user mapped memory and have the VMM
manipulate it directly?

Jason


Thread overview: 29+ messages
2023-06-21 23:54 [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 01/21] iommu/amd: Declare helper functions as extern Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 02/21] iommu/amd: Clean up spacing in amd_iommu_ops declaration Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 03/21] iommu/amd: Update PASID, GATS, and GLX feature related macros Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 04/21] iommu/amd: Modify domain_enable_v2() to add giov parameter Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 05/21] iommu/amd: Refactor set_dte_entry() helper function Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 06/21] iommu/amd: Modify set_dte_entry() to add gcr3 input parameter Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 07/21] iommu/amd: Modify set_dte_entry() to add user domain " Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 08/21] iommu/amd: Allow nested IOMMU page tables Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 09/21] iommu/amd: Add support for hw_info for iommu capability query Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 10/21] iommu/amd: Introduce vIOMMU-specific events and event info Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 11/21] iommu/amd: Introduce Reset vMMIO Command Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 12/21] iommu/amd: Introduce AMD vIOMMU-specific UAPI Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 13/21] iommu/amd: Introduce vIOMMU command-line option Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 14/21] iommu/amd: Initialize vIOMMU private address space regions Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 15/21] iommu/amd: Introduce vIOMMU vminit and vmdestroy ioctl Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 16/21] iommu/amd: Introduce vIOMMU ioctl for updating device mapping table Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 17/21] iommu/amd: Introduce vIOMMU ioctl for updating domain mapping Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 18/21] iommu/amd: Introduce vIOMMU ioctl for handling guest MMIO accesses Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 19/21] iommu/amd: Introduce vIOMMU ioctl for handling command buffer mapping Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 20/21] iommu/amd: Introduce vIOMMU ioctl for setting up guest CR3 Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 21/21] iommufd: Introduce AMD HW-vIOMMU IOCTL Suravee Suthikulpanit
2023-06-22 13:46 ` [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table Jason Gunthorpe
2023-06-23  1:15   ` Suthikulpanit, Suravee
2023-06-23 11:45     ` Jason Gunthorpe
2023-06-23 22:05       ` Suthikulpanit, Suravee
2023-06-23 22:56         ` Jason Gunthorpe [this message]
2023-06-24  2:08           ` Suthikulpanit, Suravee
2023-06-26 13:20             ` Jason Gunthorpe
