From: Peter Xu <peterx@redhat.com>
To: Yi Sun <yi.y.sun@linux.intel.com>
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, rth@twiddle.net,
	ehabkost@redhat.com, mst@redhat.com, marcel.apfelbaum@gmail.com,
	jasowang@redhat.com, kevin.tian@intel.com, yi.l.liu@intel.com,
	yi.y.sun@intel.com
Subject: Re: [Qemu-devel] [RFC v2 0/3] intel_iommu: support scalable mode
Date: Fri, 1 Mar 2019 15:07:34 +0800
Message-ID: <20190301070734.GD22229@xz-x1>
In-Reply-To: <1551361677-28933-1-git-send-email-yi.y.sun@linux.intel.com>

On Thu, Feb 28, 2019 at 09:47:54PM +0800, Yi Sun wrote:
> Intel VT-d rev3.0 [1] introduces a new translation mode called
> 'scalable mode', which enables PASID-granular translations for
> first level, second level, nested and pass-through modes. The
> VT-d scalable mode is the key ingredient for enabling Scalable I/O
> Virtualization (Scalable IOV) [2] [3], which allows sharing a
> device at the smallest possible granularity (ADI - Assignable Device
> Interface). As a result, the previous Extended Context (ECS) mode
> is deprecated (no product has ever implemented ECS).
> 
> This patch set emulates a minimal capability set of VT-d scalable
> mode, equivalent to what is available in VT-d legacy mode today:
>     1. Scalable mode root entry, context entry and PASID table
>     2. Second level translation under scalable mode
>     3. Queued invalidation (with 256 bits descriptor)
>     4. Pass-through mode
> 
> Corresponding intel-iommu driver support will be included in
> kernel 5.0:
>     https://www.spinics.net/lists/kernel/msg2985279.html
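> 
> As a rough illustration (not part of the patches), the emulation is
> expected to be exercised with a command line along the lines of the
> sketch below, assuming the "scalable-mode" option added in patch 3/3
> and a guest kernel booted with intel_iommu=on; the machine type and
> the other device options are placeholders:
> 
>     qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
>         -m 4G -smp 4 \
>         -device intel-iommu,scalable-mode=on \
>         -device virtio-net-pci,netdev=net0 \
>         -netdev user,id=net0 \
>         -drive file=guest.img,if=virtio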
> 
> We will add emulation of the full scalable mode capability later,
> following guest iommu driver progress, e.g.:
>     1. First level translation
>     2. Nested translation
>     3. Per-PASID invalidation descriptors
>     4. Page request services for handling recoverable faults
> 
> To verify the patches, the cases below were tested according to Peter
> Xu's suggestions.
>     +---------+----------------------------------------------------------------+----------------------------------------------------------------+
>     |         |                      w/ Device Passthr                         |                     w/o Device Passthr                         |
>     |         +-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     |         | virtio-net-pci, vhost=on      | virtio-net-pci, vhost=off      | virtio-net-pci, vhost=on      | virtio-net-pci, vhost=off      |
>     |         +-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     |         | netperf | kernel bld | data cp| netperf | kernel bld | data cp | netperf | kernel bld | data cp| netperf | kernel bld | data cp |
>     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     | Legacy  | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    |
>     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
>     | Scalable| Pass    | Pass       | Pass   | Pass    | Pass       | Pass    | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    |
>     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
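> 
> For reference, the three workload columns correspond roughly to the
> following kind of guest-side commands; the exact invocations are
> illustrative only:
> 
>     # netperf: TCP stream to an external netserver
>     netperf -H <server-ip> -t TCP_STREAM -l 60
> 
>     # kernel bld: parallel kernel build inside the guest
>     make defconfig && make -j$(nproc)
> 
>     # data cp: bulk data copy across the NIC under test, e.g.
>     scp big-file.img user@<server-ip>:/tmp/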

Hi, Yi,

Thanks very much for the thorough test matrix!

The last thing I'd like to confirm is whether you have tested device
assignment with v2.  Also note that when you test with virtio devices
you should not need caching-mode=on (though caching-mode=on should not
break anything either).
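
Concretely, for the virtio-only columns something like the sketch
below should be enough, while the assigned-device runs still want
caching-mode=on (the options here are only illustrative; scalable-mode
is the new option from patch 3/3 and the vfio-pci host address is a
placeholder):

    # virtio only: caching-mode can stay off
    -device intel-iommu,scalable-mode=on \
    -device virtio-net-pci,netdev=net0,iommu_platform=on

    # with device assignment: keep caching-mode=on
    -device intel-iommu,scalable-mode=on,caching-mode=on \
    -device vfio-pci,host=0000:01:00.0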

I still have some comments here and there, but overall it looks very
good, at least to me.

Thanks,

-- 
Peter Xu


Thread overview: 13+ messages
2019-02-28 13:47 [Qemu-devel] [RFC v2 0/3] intel_iommu: support scalable mode Yi Sun
2019-02-28 13:47 ` [Qemu-devel] [RFC v2 1/3] intel_iommu: scalable mode emulation Yi Sun
2019-03-01  6:52   ` Peter Xu
2019-03-01  7:51     ` Yi Sun
2019-02-28 13:47 ` [Qemu-devel] [RFC v2 2/3] intel_iommu: add 256 bits qi_desc support Yi Sun
2019-03-01  6:59   ` Peter Xu
2019-03-01  7:51     ` Yi Sun
2019-02-28 13:47 ` [Qemu-devel] [RFC v2 3/3] intel_iommu: add scalable-mode option to make scalable mode work Yi Sun
2019-03-01  7:04   ` Peter Xu
2019-03-01  7:54     ` Yi Sun
2019-03-01  7:07 ` Peter Xu [this message]
2019-03-01  7:13   ` [Qemu-devel] [RFC v2 0/3] intel_iommu: support scalable mode Yi Sun
2019-03-01  7:30     ` Tian, Kevin
