qemu-devel.nongnu.org archive mirror
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	qemu-devel@nongnu.org, Peter Xu <peterx@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH v3 1/2] intel-iommu: differentiate host address width from IOVA address width.
Date: Sat, 22 Dec 2018 00:09:44 +0800	[thread overview]
Message-ID: <20181221160944.65c5skjhkel3of7y@linux.intel.com> (raw)
In-Reply-To: <20181221151325.39b64733@redhat.com>

On Fri, Dec 21, 2018 at 03:13:25PM +0100, Igor Mammedov wrote:
> On Thu, 20 Dec 2018 19:18:01 -0200
> Eduardo Habkost <ehabkost@redhat.com> wrote:
> 
> > On Wed, Dec 19, 2018 at 11:40:37AM +0100, Igor Mammedov wrote:
> > > On Wed, 19 Dec 2018 10:57:17 +0800
> > > Yu Zhang <yu.c.zhang@linux.intel.com> wrote:
> > >   
> > > > On Tue, Dec 18, 2018 at 03:55:36PM +0100, Igor Mammedov wrote:  
> > > > > On Tue, 18 Dec 2018 17:27:23 +0800
> > > > > Yu Zhang <yu.c.zhang@linux.intel.com> wrote:
> > > > >     
> > > > > > On Mon, Dec 17, 2018 at 02:17:40PM +0100, Igor Mammedov wrote:    
> > > > > > > On Wed, 12 Dec 2018 21:05:38 +0800
> > > > > > > Yu Zhang <yu.c.zhang@linux.intel.com> wrote:
> > > > > > >     
> > > > > > > > Currently, vIOMMU is using the value of the IOVA address width, instead of
> > > > > > > > the host address width (HAW), to calculate the number of reserved bits in
> > > > > > > > data structures such as root entries, context entries, and entries of
> > > > > > > > DMA paging structures, etc.
> > > > > > > > 
> > > > > > > > However, the values of the IOVA address width and of the HAW may not be
> > > > > > > > equal. For example, a 48-bit IOVA can only be mapped to host addresses no
> > > > > > > > wider than 46 bits. Using 48, instead of 46, to calculate the reserved bits
> > > > > > > > may result in an invalid IOVA being accepted.
> > > > > > > > 
> > > > > > > > To fix this, a new field - haw_bits - is introduced in struct IntelIOMMUState,
> > > > > > > > whose value is initialized based on the maximum physical address width set
> > > > > > > > for the guest CPU.
> > > > > > >     
> > > > > > > > Also, definitions such as VTD_HOST_AW_39/48BIT etc. are renamed
> > > > > > > > for clarity.
> > > > > > > > 
> > > > > > > > Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > > > > > > > Reviewed-by: Peter Xu <peterx@redhat.com>
> > > > > > > > ---    
> > > > > > > [...]
> > > > > > >     
> > > > > > > > @@ -3100,6 +3104,8 @@ static void vtd_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
> > > > > > > >  static void vtd_init(IntelIOMMUState *s)
> > > > > > > >  {
> > > > > > > >      X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
> > > > > > > > +    CPUState *cs = first_cpu;
> > > > > > > > +    X86CPU *cpu = X86_CPU(cs);
> > > > > > > >  
> > > > > > > >      memset(s->csr, 0, DMAR_REG_SIZE);
> > > > > > > >      memset(s->wmask, 0, DMAR_REG_SIZE);
> > > > > > > > @@ -3119,23 +3125,24 @@ static void vtd_init(IntelIOMMUState *s)
> > > > > > > >      s->cap = VTD_CAP_FRO | VTD_CAP_NFR | VTD_CAP_ND |
> > > > > > > >               VTD_CAP_MAMV | VTD_CAP_PSI | VTD_CAP_SLLPS |
> > > > > > > >               VTD_CAP_SAGAW_39bit | VTD_CAP_MGAW(s->aw_bits);
> > > > > > > > -    if (s->aw_bits == VTD_HOST_AW_48BIT) {
> > > > > > > > +    if (s->aw_bits == VTD_AW_48BIT) {
> > > > > > > >          s->cap |= VTD_CAP_SAGAW_48bit;
> > > > > > > >      }
> > > > > > > >      s->ecap = VTD_ECAP_QI | VTD_ECAP_IRO;
> > > > > > > > +    s->haw_bits = cpu->phys_bits;    
> > > > > > > Is it possible to avoid accessing CPU fields directly or cpu altogether
> > > > > > > and set phys_bits when iommu is created?    
> > > > > > 
> > > > > > Thanks for your comments, Igor.
> > > > > > 
> > > > > > Well, I guess you prefer not to query the CPU capabilities while deciding
> > > > > > the vIOMMU features. But to me, they are quite related. :)
> > > > > > 
> > > > > > Here the host address width in VT-d and the one in CPUID.MAXPHYSADDR
> > > > > > refer to the same concept. In a VM, both are the maximum guest physical
> > > > > > address width. If we do not check the CPU field here, we will still have to
> > > > > > check it in other places such as build_dmar_q35(), and reset s->haw_bits
> > > > > > again.
> > > > > > 
> > > > > > Is this explanation convincing enough? :)    
> > > > > The current build_dmar_q35() doesn't do it; it's all new code in this series that
> > > > > contains an unacceptable direct access from one device (iommu) to another (cpu).
> > > > > The proper way would be for the owner of the iommu to fish the limits from
> > > > > somewhere and set the values during iommu creation.
> > > > 
> > > > Well, the current build_dmar_q35() doesn't do it because it is using the incorrect value. :)
> > > > According to the spec, the host address width is the maximum physical address width,
> > > > yet the current implementation uses the DMA address width. To me, this is not only
> > > > wrong, but also insecure. On this point, I think we all agree it needs to be fixed.
> > > > 
> > > > As to how to fix it - i.e., whether we should query the cpu fields - I still do not
> > > > understand why this is not acceptable. :)
> > > > 
> > > > I had thought of other approaches before, yet I did not choose them:
> > > >   
> > > > 1> Introduce a new parameter, say, "x-haw-bits", which is used by the iommu to limit its
> > > > physical address width (similar to the "x-aw-bits" for IOVA). But should we check
> > > > this parameter or not? And what if this parameter is set to something different
> > > > from "phys-bits"?
> > > >   
> > > > 2> Another choice I had thought of is to query the physical iommu. I abandoned this
> > > > idea because my understanding is that the vIOMMU is not a passthrough device; it is emulated.
> > >   
> > > > So Igor, may I ask why you think checking against the cpu fields is not acceptable? :)
> > > Because accessing private fields of one device from another random device is not robust
> > > and is subject to breaking in unpredictable ways when field meaning or initialization
> > > order changes. (Analogy to bare metal: one does not solder a wire to a CPU die to let
> > > some random device access a piece of its data.)
> > >   
> > 
> > With either the solution below or the one I proposed, we still
> > have an ordering problem: if we want "-cpu ...,phys-bits=..." to
> As Michael said, it's questionable whether the iommu should rely on the guest's
> phys-bits at all, but that aside, we should use proper interfaces
> and hierarchy to initialize devices; see below for why I dislike
> the simplistic pc_max_phys_bits().

Well, my understanding of the VT-d spec is that the address limitation in
DMAR refers to the same concept as CPUID.MAXPHYSADDR. I do not think
there's any difference in the native scenario. :)
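
To show that it really is the same number on the guest side, here is a tiny
sketch (for this discussion only, not part of the series) that reads MAXPHYADDR
from CPUID.80000008H:EAX[7:0], which, as far as I know, QEMU fills from the
vCPU's phys-bits:

    /* Guest-side sketch only: read MAXPHYADDR, the same limit the
     * DMAR HAW field is supposed to describe. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            printf("guest MAXPHYADDR: %u bits\n", eax & 0xff); /* EAX[7:0] */
        }
        return 0;
    }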

> 
> > affect the IOMMU device, we will need the CPU objects to be
> > created before IOMMU realize.
> > 
> > At least both proposals make the initialization ordering
> > explicitly a responsibility of the machine code.  In either case,
> > I don't think we will start creating all CPU objects after device
> > realize any time soon.
> > 
> > 
> > > I've looked at the intel-iommu code and how it's created, so here is a way to do
> > > what you need using proper interfaces:
> > > 
> > > 1. add x-haw_bits property
> > > 2. include in your series patch
> > >     '[Qemu-devel] [PATCH] qdev: let machine hotplug handler to override  bus hotplug handler'
> > > 3. add your iommu to pc_get_hotplug_handler() to redirect the plug flow to the
> > >    machine and let the _pre_plug handler check and set x-haw_bits at the machine level
> > 
> > Wow, that's a very complex way to pass a single integer from
> > machine code to device code.  If this is the only way to do that,
> > we really need to take a step back and rethink our API design.
> > 
> > What's wrong with having a simple
> >   uint32_t pc_max_phys_bits(PCMachineState*)
> > function?
> As suggested, it would only be an aesthetic change to accessing first_cpu from a
> random device at a random time. The IOMMU would still access the cpu instance directly
> no matter how many wrappers one used, so it's still the same hack.
> If phys_bits changed during the VM lifecycle and the iommu needed to use the
> updated value, then using pc_max_phys_bits() might be justified, as
> we don't have interfaces to handle that, but that's not the case here.
> 
> I suggested a typical (albeit a bit complex) way to handle device
> initialization in cases where the bus plug handler is not sufficient.
> It follows the proper hierarchy without any layer violations and can fail
> gracefully even if we start creating CPUs later using only '-device cpufoo',
> without needing to fix the iommu code to handle that (it would fail to create the
> iommu with a clear error that the CPU isn't available, and all the user has to
> do is fix the CLI to make sure that the CPU is created before the iommu).
> 
> So I'd prefer that we use the existing pattern for device initialization
> instead of hacks whenever possible.

Thanks, Igor. I think I understand your concern here. And I am wondering:
phys-bits should be a configuration of the VM, not just of the vCPU. So,
instead of trying to deduce this value from the first created vCPU, or
guaranteeing the order of vCPU & vIOMMU creation, is there any possibility
we could put a max-phys-bits in the MachineState, and derive the 'phys-bits'
of the vCPU and the 'haw-bits' of the vIOMMU from the MachineState later, in
their respective creation processes?
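
Just to make the idea concrete, here is a rough, self-contained sketch. The
stand-in structs and the max_phys_bits name are only for illustration; they are
not existing QEMU code or part of this series:

    /* Sketch: both the vCPU and the vIOMMU derive their limit from one
     * machine-level value during their own realize step, so neither has
     * to reach into the other device's private state. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t max_phys_bits; } MachineStateSketch;
    typedef struct { uint8_t phys_bits; } X86CPUSketch;
    typedef struct { uint8_t haw_bits; uint8_t aw_bits; } IntelIOMMUSketch;

    static void cpu_realize(X86CPUSketch *cpu, const MachineStateSketch *ms)
    {
        if (cpu->phys_bits == 0) {          /* not set on the command line */
            cpu->phys_bits = ms->max_phys_bits;
        }
    }

    static void iommu_realize(IntelIOMMUSketch *s, const MachineStateSketch *ms)
    {
        s->haw_bits = ms->max_phys_bits;    /* HAW follows the machine, not first_cpu */
    }

    int main(void)
    {
        MachineStateSketch ms = { .max_phys_bits = 46 };
        X86CPUSketch cpu = { 0 };
        IntelIOMMUSketch vtd = { .aw_bits = 48 };

        cpu_realize(&cpu, &ms);
        iommu_realize(&vtd, &ms);
        printf("phys_bits=%u haw_bits=%u\n",
               (unsigned)cpu.phys_bits, (unsigned)vtd.haw_bits);
        return 0;
    }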

> 
> > 
> > > 4. you can probably use the phys-bits/host-phys-bits properties to get the data you need;
> > >    also see ms->possible_cpus, which is how you can get access to the CPUs from the
> > >    machine layer.
> > >   
> > [...]
> > 
> PS:
> Another thing I'd like to draw your attention to (since you recently looked at
> phys-bits) is host/guest phys_bits, and whether it is safe, from a migration
> point of view, between hosts with different limits.
> 

Good point, and thanks for the reminder. Eduardo, Paolo, and I discussed this
before. And indeed, it is a bit tricky... :)

B.R.
Yu


Thread overview: 57+ messages
2018-12-12 13:05 [Qemu-devel] [PATCH v3 0/2] intel-iommu: add support for 5-level virtual IOMMU Yu Zhang
2018-12-12 13:05 ` [Qemu-devel] [PATCH v3 1/2] intel-iommu: differentiate host address width from IOVA address width Yu Zhang
2018-12-17 13:17   ` Igor Mammedov
2018-12-18  9:27     ` Yu Zhang
2018-12-18 14:23       ` Michael S. Tsirkin
2018-12-18 14:55       ` Igor Mammedov
2018-12-18 14:58         ` Michael S. Tsirkin
2018-12-19  3:03           ` Yu Zhang
2018-12-19  3:12             ` Michael S. Tsirkin
2018-12-19  6:28               ` Yu Zhang
2018-12-19 15:30                 ` Michael S. Tsirkin
2018-12-19  2:57         ` Yu Zhang
2018-12-19 10:40           ` Igor Mammedov
2018-12-19 16:47             ` Michael S. Tsirkin
2018-12-20  5:59               ` Yu Zhang
2018-12-20 21:18             ` Eduardo Habkost
2018-12-21 14:13               ` Igor Mammedov
2018-12-21 16:09                 ` Yu Zhang [this message]
2018-12-21 17:04                   ` Michael S. Tsirkin
2018-12-21 17:37                     ` Yu Zhang
2018-12-21 19:02                       ` Michael S. Tsirkin
2018-12-21 20:01                         ` Eduardo Habkost
2018-12-22  1:11                         ` Yu Zhang
2018-12-25 16:56                           ` Michael S. Tsirkin
2018-12-26  5:30                             ` Yu Zhang
2018-12-27 15:14                               ` Eduardo Habkost
2018-12-28  2:32                                 ` Yu Zhang
2018-12-29  1:29                                   ` Eduardo Habkost
2019-01-15  7:13                                     ` Yu Zhang
2019-01-18  7:10                                       ` Yu Zhang
2018-12-27 14:54                 ` Eduardo Habkost
2018-12-28 11:42                   ` Igor Mammedov
2018-12-20 20:58       ` Eduardo Habkost
2018-12-12 13:05 ` [Qemu-devel] [PATCH v3 2/2] intel-iommu: extend VTD emulation to allow 57-bit " Yu Zhang
2018-12-17 13:29   ` Igor Mammedov
2018-12-18  9:47     ` Yu Zhang
2018-12-18 10:01       ` Yu Zhang
2018-12-18 12:43         ` Michael S. Tsirkin
2018-12-18 13:45           ` Yu Zhang
2018-12-18 14:49             ` Michael S. Tsirkin
2018-12-19  3:40               ` Yu Zhang
2018-12-19  4:35                 ` Michael S. Tsirkin
2018-12-19  5:57                   ` Yu Zhang
2018-12-19 15:23                     ` Michael S. Tsirkin
2018-12-20  5:49                       ` Yu Zhang
2018-12-20 18:28                         ` Michael S. Tsirkin
2018-12-21 16:19                           ` Yu Zhang
2018-12-21 17:15                             ` Michael S. Tsirkin
2018-12-21 17:34                               ` Yu Zhang
2018-12-21 18:10                                 ` Michael S. Tsirkin
2018-12-22  0:41                                   ` Yu Zhang
2018-12-25 17:00                                     ` Michael S. Tsirkin
2018-12-26  5:58                                       ` Yu Zhang
2018-12-25  1:59                                 ` Tian, Kevin
2018-12-14  9:17 ` [Qemu-devel] [PATCH v3 0/2] intel-iommu: add support for 5-level virtual IOMMU Yu Zhang
2019-01-15  4:02 ` Michael S. Tsirkin
2019-01-15  7:27   ` Yu Zhang
