From: "Chen, Tiejun"
Subject: Re: [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps
Date: Thu, 13 Nov 2014 11:09:02 +0800
Message-ID: <546420CE.1080908@intel.com>
References: <1414136077-18599-1-git-send-email-tiejun.chen@intel.com>
 <545354500200007800043D94@mail.emea.novell.com> <5457174C.8020400@intel.com>
 <5457515102000078000443B0@mail.emea.novell.com> <54574D8F.8060407@intel.com>
 <54575E2D0200007800044443@mail.emea.novell.com> <545767C4.7070806@intel.com>
 <5457787002000078000445C7@mail.emea.novell.com> <54576DF7.8060408@intel.com>
 <545784830200007800044627@mail.emea.novell.com> <54585EAA.20904@intel.com>
 <545894610200007800044A5B@mail.emea.novell.com> <545992A2.8070309@intel.com>
 <545A57AD02000078000C1037@mail.emea.novell.com> <545B3F4A.5070808@intel.com>
 <545B562F02000078000453FB@mail.emea.novell.com> <545C9E97.4040800@intel.com>
 <545CB64E02000078000459CD@mail.emea.novell.com> <5461AD94.2070008@intel.com>
 <5461BF97.1070709@intel.com> <5461DED50200007800046520@mail.emea.novell.com>
 <5461DFAF020000780004652B@mail.emea.novell.com> <5461DA23.6020105@intel.com>
 <5462CE68.6010709@intel.com> <54632EA80200007800046AE5@mail.emea.novell.com>
In-Reply-To: <54632EA80200007800046AE5@mail.emea.novell.com>
To: Jan Beulich
Cc: yang.z.zhang@intel.com, kevin.tian@intel.com, tim@xen.org, xen-devel@lists.xen.org

On 2014/11/12 16:55, Jan Beulich wrote:
>>>> On 12.11.14 at 04:05, wrote:
>> I don't see any feedback on this point, so I think you still prefer we
>> should do all the checks in the callback function.
>
> As a draft this looks reasonable, but there are various bugs to be
> dealt with along with cosmetic issues (I'll point out the former, but
> I'm tired of pointing out the latter once again - please go back to
> earlier reviews of patches to refresh e.g. what types to use for
> loop variables).
>
>> I tried to address this but obviously we have to pass each 'bdf' to
>> the callback functions,
>
> Yes, but at the generic IOMMU layer this shouldn't be named "bdf",
> but something more neutral (maybe "id"). And you again lost the
> segment there.
>
>> @@ -36,9 +40,24 @@ static int get_reserved_device_memory(xen_pfn_t start,
>>      if ( rdm.start_pfn != start || rdm.nr_pages != nr )
>>          return -ERANGE;
>>
>> -    if ( __copy_to_compat_offset(grdm->map.buffer, grdm->used_entries,
>> -                                 &rdm, 1) )
>> -        return -EFAULT;
>> +    if ( d->arch.hvm_domain.pci_force )
>> +    {
>> +        if ( __copy_to_compat_offset(grdm->map.buffer, grdm->used_entries,
>> +                                     &rdm, 1) )
>> +            return -EFAULT;
>> +    }
>> +    else
>> +    {
>> +        for ( i = 0; i < d->arch.hvm_domain.num_pcidevs; i++ )
>> +        {
>> +            pt_bdf = PCI_BDF2(d->arch.hvm_domain.pcidevs[i].bus,
>> +                              d->arch.hvm_domain.pcidevs[i].devfn);
>> +            if ( pt_bdf == bdf )
>> +                if ( __copy_to_compat_offset(grdm->map.buffer, grdm->used_entries,
>> +                                             &rdm, 1) )
>> +                    return -EFAULT;
>
> I think d->arch.hvm_domain.pcidevs[] shouldn't contain duplicates,
> and hence there's no point continuing the loop if you found a match.
>

I took another look at this. It seems we shouldn't just check the bdf:
as you know, several different BDFs can reference the same entry, so we
have to filter the results so that each entry is kept only once.
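Just to make "filter the results" concrete, below is a rough standalone
sketch (not actual Xen code - the structures and names are made up for
illustration): the idea is simply to remember which ranges have already
been copied out and skip a range when a second BDF reports it again.

/*
 * Illustration only: report each reserved range at most once, even if
 * several assigned BDFs reference the same range.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct rdm_entry {
    uint64_t start_pfn;
    uint64_t nr_pages;
};

/* Ranges already handed back to the caller. */
static struct rdm_entry reported[16];
static unsigned int nr_reported;

static bool already_reported(uint64_t start, uint64_t nr)
{
    unsigned int i;

    for ( i = 0; i < nr_reported; i++ )
        if ( reported[i].start_pfn == start && reported[i].nr_pages == nr )
            return true;

    return false;
}

/* Stand-in for the real callback: copy the range out at most once. */
static int report_rdm(uint64_t start, uint64_t nr)
{
    if ( already_reported(start, nr) )
        return 0;                     /* same range, another BDF: skip */

    reported[nr_reported].start_pfn = start;
    reported[nr_reported].nr_pages = nr;
    nr_reported++;

    printf("reporting range %#llx + %llu pages\n",
           (unsigned long long)start, (unsigned long long)nr);
    return 0;
}

int main(void)
{
    /* Two different BDFs sharing one reserved range, plus a distinct one. */
    report_rdm(0xab000, 16);          /* e.g. 00:1a.0 */
    report_rdm(0xab000, 16);          /* e.g. 00:1d.0 - filtered out */
    report_rdm(0xcd000, 8);           /* e.g. 00:02.0 */

    return 0;
}

Of course in the real code the check would have to live wherever
grdm->map.buffer gets filled, but the filtering logic would be the same.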
Instead, I really hope we can decide what to expose before we issue the
hypercall. BTW, I have already pinged Yang locally to look into the
possibility of reordering the device assignment and the memory
population on the IOMMU side.

Thanks
Tiejun