public inbox for kvm@vger.kernel.org
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Suthikulpanit, Suravee" <suravee.suthikulpanit@amd.com>
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	kvm@vger.kernel.org, joro@8bytes.org, robin.murphy@arm.com,
	yi.l.liu@intel.com, alex.williamson@redhat.com,
	nicolinc@nvidia.com, baolu.lu@linux.intel.com,
	eric.auger@redhat.com, pandoh@google.com, kumaranand@google.com,
	jon.grimm@amd.com, santosh.shukla@amd.com, vasant.hegde@amd.com,
	jay.chen@amd.com, joseph.chung@amd.com
Subject: Re: [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table
Date: Mon, 26 Jun 2023 10:20:40 -0300	[thread overview]
Message-ID: <ZJmQqLd5MVZpobrG@nvidia.com> (raw)
In-Reply-To: <ac4570c8-609b-03c3-c320-3dbe7432a8ed@amd.com>

On Fri, Jun 23, 2023 at 07:08:54PM -0700, Suthikulpanit, Suravee wrote:
> > > The IOMMU hardware uses the PAS for storing Guest IOMMU information such as
> > > Guest MMIOs, DevID Mapping Table, DomID Mapping Table, and Guest
> > > Command/Event/PPR logs.
> > 
> > Why does it have to be in kernel memory?
> > 
> > Why not store the whole thing in user mapped memory and have the VMM
> > manipulate it directly?
> 
> The Guest MMIO and CmdBuf Dirty Status are allocated per IOMMU instance. So,
> these data structures cannot be allocated by the VMM.

Yes, it is unfortunate that so much of this wasn't 4k aligned so it
could be mapped sensibly. It doesn't really make sense to have a
giant repeated register map that still has to be hypervisor trapped; a
command queue would have been more logical :(

> In this case, the IOMMUFD_CMD_MMIO_ACCESS might still be needed.

It seems this is unavoidable, but it needs a clearer name and purpose.

But more importantly we don't really have any object to hang this off
of - we don't have the notion of a "VM" in iommufd right now.

We had sort of been handwaving that maybe the entire FD is a "VM" and
maybe that works for some scenarios, but I don't think it works for
what you need, especially if you consider multi-instance.

So, it is good that you brought this series right now as I think it
needs harmonizing with what ARM needs to do, and this is the more
complex version of the two.

> The DomID and DevID mapping tables are allocated per-VM:
>   * DomID Mapping Table (512 KB contiguous memory)
>   * DevID Mapping Table (1 MB contiguous memory)

But these can be mapped into that IPA space at 4k granularity?
They just need contiguous IOVA? So the VMM could provide this memory
and we don't need calls to manipulate it?

> Let's say we can use IOMMU_SET_DEV_DATA to communicate the memory address of
> Dom/DevID Mapping tables to IOMMU driver to pin and map in the PAS IOMMU
> page table. Then, this might work. Does that go along the line of what you
> are thinking (mainly to try to avoid introducing additional ioctl)?

I think it makes more sense if memory that is logically part of the
VMM is mmap'd to the VMM. Since we have the general design of passing
user pointers and pinning them, it makes some sense. You could do the
same trick as your IPA space and use an IPA IOAS plus an access to set
this all up.

This has the same issue as above, it needs some formal VM object, as
fundamentally you are asking the driver to allocate a limited resource
on a specific IOMMU instance and then link that to other actions.

Jason


Thread overview: 29+ messages
2023-06-21 23:54 [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 01/21] iommu/amd: Declare helper functions as extern Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 02/21] iommu/amd: Clean up spacing in amd_iommu_ops declaration Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 03/21] iommu/amd: Update PASID, GATS, and GLX feature related macros Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 04/21] iommu/amd: Modify domain_enable_v2() to add giov parameter Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 05/21] iommu/amd: Refactor set_dte_entry() helper function Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 06/21] iommu/amd: Modify set_dte_entry() to add gcr3 input parameter Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 07/21] iommu/amd: Modify set_dte_entry() to add user domain " Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 08/21] iommu/amd: Allow nested IOMMU page tables Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 09/21] iommu/amd: Add support for hw_info for iommu capability query Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 10/21] iommu/amd: Introduce vIOMMU-specific events and event info Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 11/21] iommu/amd: Introduce Reset vMMIO Command Suravee Suthikulpanit
2023-06-21 23:54 ` [RFC PATCH 12/21] iommu/amd: Introduce AMD vIOMMU-specific UAPI Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 13/21] iommu/amd: Introduce vIOMMU command-line option Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 14/21] iommu/amd: Initialize vIOMMU private address space regions Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 15/21] iommu/amd: Introduce vIOMMU vminit and vmdestroy ioctl Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 16/21] iommu/amd: Introduce vIOMMU ioctl for updating device mapping table Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 17/21] iommu/amd: Introduce vIOMMU ioctl for updating domain mapping Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 18/21] iommu/amd: Introduce vIOMMU ioctl for handling guest MMIO accesses Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 19/21] iommu/amd: Introduce vIOMMU ioctl for handling command buffer mapping Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 20/21] iommu/amd: Introduce vIOMMU ioctl for setting up guest CR3 Suravee Suthikulpanit
2023-06-21 23:55 ` [RFC PATCH 21/21] iommufd: Introduce AMD HW-vIOMMU IOCTL Suravee Suthikulpanit
2023-06-22 13:46 ` [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table Jason Gunthorpe
2023-06-23  1:15   ` Suthikulpanit, Suravee
2023-06-23 11:45     ` Jason Gunthorpe
2023-06-23 22:05       ` Suthikulpanit, Suravee
2023-06-23 22:56         ` Jason Gunthorpe
2023-06-24  2:08           ` Suthikulpanit, Suravee
2023-06-26 13:20             ` Jason Gunthorpe [this message]
