qemu-devel.nongnu.org archive mirror
From: Peter Xu <peterx@redhat.com>
To: Yi Sun <yi.y.sun@linux.intel.com>
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, rth@twiddle.net,
	ehabkost@redhat.com, mst@redhat.com, marcel.apfelbaum@gmail.com,
	jasowang@redhat.com, kevin.tian@intel.com, yi.l.liu@intel.com,
	yi.y.sun@intel.com
Subject: Re: [Qemu-devel] [PATCH v1 0/3] intel_iommu: support scalable mode
Date: Tue, 5 Mar 2019 11:09:34 +0800	[thread overview]
Message-ID: <20190305030934.GH1657@xz-x1> (raw)
In-Reply-To: <1551753295-30167-1-git-send-email-yi.y.sun@linux.intel.com>

On Tue, Mar 05, 2019 at 10:34:52AM +0800, Yi Sun wrote:
> Intel vt-d rev3.0 [1] introduces a new translation mode called
> 'scalable mode', which enables PASID-granular translations for
> first level, second level, nested and pass-through modes. The
> vt-d scalable mode is the key ingredient to enable Scalable I/O
> Virtualization (Scalable IOV) [2] [3], which allows sharing a
> device in minimal possible granularity (ADI - Assignable Device
> Interface). As a result, previous Extended Context (ECS) mode
> is deprecated (no production hardware ever implemented ECS).
> 
> This patch set emulates a minimal capability set of VT-d scalable
> mode, equivalent to what is available in VT-d legacy mode today:
>     1. Scalable mode root entry, context entry and PASID table
>     2. Second-level translation under scalable mode
>     3. Queued invalidation (with 256 bits descriptor)
>     4. Pass-through mode
> 
> Corresponding intel-iommu driver support will be included in
> kernel 5.0:
>     https://www.spinics.net/lists/kernel/msg2985279.html
> 
> We will add emulation of full scalable mode capability along with
> guest iommu driver progress later, e.g.:
>     1. First level translation
>     2. Nested translation
>     3. Per-PASID invalidation descriptors
>     4. Page request services for handling recoverable faults
> 
> To verify the patches, the cases below were tested following Peter
> Xu's suggestions.
>     +---------+----------------------------------------------------------------+----------------------------------------------------------------+
>     |         |                      w/ Device Passthr                         |                     w/o Device Passthr                         |
>     |         +-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     |         | virtio-net-pci, vhost=on      | virtio-net-pci, vhost=off      | virtio-net-pci, vhost=on      | virtio-net-pci, vhost=off      |
>     |         +-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     |         | netperf | kernel bld | data cp| netperf | kernel bld | data cp | netperf | kernel bld | data cp| netperf | kernel bld | data cp |
>     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     | Legacy  | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    |
>     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     | Scalable| Pass    | Pass       | Pass   | Pass    | Pass       | Pass    | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    |
>     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
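
For reference, a guest exercising one column of the matrix above might be
launched roughly as follows. This is a sketch under assumptions, not the
tested command line: the property name for enabling scalable mode is
inferred from patch 3/3 ("add scalable-mode option") and may differ in
the merged code, and the machine/netdev flags are illustrative.

```shell
# Hypothetical invocation: intel-iommu with scalable mode enabled,
# virtio-net backed by vhost (flip vhost=off for the other column).
qemu-system-x86_64 \
    -machine q35,accel=kvm,kernel-irqchip=split \
    -device intel-iommu,caching-mode=on,x-scalable-mode=on \
    -netdev tap,id=net0,vhost=on \
    -device virtio-net-pci,netdev=net0,iommu_platform=on,ats=on
```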

Legacy vfio-pci?

I've reviewed the whole series. I assume the maintainer may still test
it a bit before a pull, but even before that I would really like to
double-confirm that this series won't break anything.

Thanks,

-- 
Peter Xu

  parent reply	other threads:[~2019-03-05  3:09 UTC|newest]

Thread overview: 14+ messages
2019-03-05  2:34 [Qemu-devel] [PATCH v1 0/3] intel_iommu: support scalable mode Yi Sun
2019-03-05  2:34 ` [Qemu-devel] [PATCH v1 1/3] intel_iommu: scalable mode emulation Yi Sun
2019-03-05  3:06   ` Peter Xu
2019-03-05  2:34 ` [Qemu-devel] [PATCH v1 2/3] intel_iommu: add 256 bits qi_desc support Yi Sun
2019-03-05  2:34 ` [Qemu-devel] [PATCH v1 3/3] intel_iommu: add scalable-mode option to make scalable mode work Yi Sun
2019-03-05  3:07   ` Peter Xu
2019-03-05  3:09 ` Peter Xu [this message]
2019-03-05  3:23   ` [Qemu-devel] [PATCH v1 0/3] intel_iommu: support scalable mode Michael S. Tsirkin
2019-03-05  3:24   ` Yi Sun
2019-03-05  4:48     ` Peter Xu
2019-03-05  5:15       ` Yi Sun
2019-03-05  5:36         ` Peter Xu
2019-03-05  6:27           ` Yi Sun
2019-03-05  6:39             ` Peter Xu

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20190305030934.GH1657@xz-x1 \
    --to=peterx@redhat.com \
    --cc=ehabkost@redhat.com \
    --cc=jasowang@redhat.com \
    --cc=kevin.tian@intel.com \
    --cc=marcel.apfelbaum@gmail.com \
    --cc=mst@redhat.com \
    --cc=pbonzini@redhat.com \
    --cc=qemu-devel@nongnu.org \
    --cc=rth@twiddle.net \
    --cc=yi.l.liu@intel.com \
    --cc=yi.y.sun@intel.com \
    --cc=yi.y.sun@linux.intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

This is a public inbox; see the mirroring instructions for how to clone
and mirror all data and code used for this inbox, as well as URLs for
NNTP newsgroup(s).