From: "Chen, Tiejun"
Subject: Re: [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps
Date: Tue, 18 Nov 2014 11:08:29 +0800
Message-ID: <546AB82D.5080305@intel.com>
In-Reply-To: <5469EF5D0200007800048609@mail.emea.novell.com>
To: Jan Beulich
Cc: yang.z.zhang@intel.com, kevin.tian@intel.com, tim@xen.org, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 2014/11/17 19:51, Jan Beulich wrote:
>>>> On 17.11.14 at 12:32, wrote:
>> On 2014/11/17 19:17, Jan Beulich wrote:
>>>>>> On 17.11.14 at 12:08, wrote:
>>>
>>>> On 2014/11/17 18:05, Jan Beulich wrote:
>>>>>>>> On 17.11.14 at 08:57, wrote:
>>>>>> --- a/xen/common/memory.c
>>>>>> +++ b/xen/common/memory.c
>>>>>> @@ -698,10 +698,13 @@ struct get_reserved_device_memory {
>>>>>>      unsigned int used_entries;
>>>>>>  };
>>>>>>
>>>>>> -static int get_reserved_device_memory(xen_pfn_t start,
>>>>>> -                                      xen_ulong_t nr, void *ctxt)
>>>>>> +static int get_reserved_device_memory(xen_pfn_t start, xen_ulong_t nr, u16 seg,
>>>>>> +                                      u16 *ids, int cnt, void *ctxt)
>>>>>
>>>>> While the approach is a lot better than what you did previously, I still
>>>>> don't like you adding 3 new parameters when one would do (calling
>>>>> the callback for each SBDF individually): That way you avoid
>>>>
>>>> Do you mean I should do this?
>>>>
>>>>     for_each_rmrr_device ( rmrr, bdf, i )
>>>>     {
>>>>         sbdf = PCI_SBDF(seg, rmrr->scope.devices[i]);
>>>>         rc = func(PFN_DOWN(rmrr->base_address),
>>>>                   PFN_UP(rmrr->end_address) -
>>>>                   PFN_DOWN(rmrr->base_address),
>>>>                   sbdf,
>>>>                   ctxt);
>>>>
>>>> But different sbdfs may share one and the same rmrr entry, as I said
>>>> previously, so we would have to introduce more code in the callback to
>>>> filter them down to one identified entry.
>>>
>>> Not really - remember that part of what needs to be done is to make
>>> sure all devices associated with a given RMRR get assigned to the
>>> same guest? Or the callback function could use a special return value
>>
>> Yes, but I mean in the callback,
>>
>> get_reserved_device_memory()
>> {
>>     ...
>>     for ( each assigned pci dev: pt_sbdf )
>>         if ( sbdf == pt_sbdf )
>>             __copy_to_guest_offset(buffer, ...)
>>
>> Buffer may be copied to include multiple identical entries if we have two or
>> more assigned devices associated with one given RMRR entry.
>
> Which would be easily avoided by ...
>
>>> (e.g. 1) to signal that the iteration for the current RMRR can be
>>> terminated (or further entries skipped).
>
> ... the approach I outlined.
>

Here I tried to implement what you want. Note I just pick the two key
fragments since the others are no big deal.

#1:

@@ -898,14 +898,25 @@ int intel_iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt)
 {
     struct acpi_rmrr_unit *rmrr;
     int rc = 0;
+    unsigned int i;
+    u32 id;
+    u16 bdf;

     list_for_each_entry(rmrr, &acpi_rmrr_units, list)
     {
-        rc = func(PFN_DOWN(rmrr->base_address),
-                  PFN_UP(rmrr->end_address) - PFN_DOWN(rmrr->base_address),
-                  ctxt);
-        if ( rc )
-            break;
+        for (i = 0; (bdf = rmrr->scope.devices[i]) &&
+                    i < rmrr->scope.devices_cnt && !rc; i++)
+        {
+            id = PCI_SBDF(rmrr->segment, bdf);
+            rc = func(PFN_DOWN(rmrr->base_address),
+                      PFN_UP(rmrr->end_address) -
+                      PFN_DOWN(rmrr->base_address),
+                      id,
+                      ctxt);
+            if ( rc < 0 )
+                return rc;
+        }
+        rc = 0;
     }

     return rc;

and #2,

@@ -698,10 +698,13 @@ struct get_reserved_device_memory {
     unsigned int used_entries;
 };

-static int get_reserved_device_memory(xen_pfn_t start,
-                                      xen_ulong_t nr, void *ctxt)
+static int get_reserved_device_memory(xen_pfn_t start, xen_ulong_t nr,
+                                      u32 id, void *ctxt)
 {
     struct get_reserved_device_memory *grdm = ctxt;
+    struct domain *d = get_domain_by_id(grdm->map.domid);
+    unsigned int i;
+    u32 sbdf;

     if ( grdm->used_entries < grdm->map.nr_entries )
     {
@@ -709,13 +712,34 @@ static int get_reserved_device_memory(xen_pfn_t start,
             .start_pfn = start, .nr_pages = nr
         };

-        if ( __copy_to_guest_offset(grdm->map.buffer, grdm->used_entries,
-                                    &rdm, 1) )
-            return -EFAULT;
+        if ( d->arch.hvm_domain.pci_force )
+        {
+            if ( __copy_to_guest_offset(grdm->map.buffer, grdm->used_entries,
+                                        &rdm, 1) )
+                return -EFAULT;
+            ++grdm->used_entries;
+            return 1;
+        }
+        else
+        {
+            for ( i = 0; i < d->arch.hvm_domain.num_pcidevs; i++ )
+            {
+                sbdf = PCI_SBDF2(d->arch.hvm_domain.pcidevs[i].seg,
+                                 d->arch.hvm_domain.pcidevs[i].bus,
+                                 d->arch.hvm_domain.pcidevs[i].devfn);
+                if ( sbdf == id )
+                {
+                    if ( __copy_to_guest_offset(grdm->map.buffer,
+                                                grdm->used_entries,
+                                                &rdm, 1) )
+                        return -EFAULT;
+                    ++grdm->used_entries;
+                    return 1;
+                }
+            }
+        }
     }

-    ++grdm->used_entries;
-    return 0;
 }
 #endif

Thanks
Tiejun
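
For reference, here is a minimal self-contained sketch of the scheme the two fragments implement: the IOMMU side calls the callback once per (segment, BDF) of an RMRR, and the callback copies the range to the guest buffer only when that SBDF is assigned to the domain, returning 1 so the remaining devices of that RMRR are skipped. All names and types below (rmrr_unit, grdm_ctxt, report_if_assigned, and so on) are simplified placeholders for illustration, not the real Xen structures.

/* Standalone illustration of the per-SBDF callback scheme discussed above.
 * Every structure and name here is a simplified placeholder, not Xen's. */
#include <stdint.h>
#include <stdio.h>

#define SBDF(seg, bdf) (((uint32_t)(seg) << 16) | (bdf))

struct rmrr_unit {
    uint64_t base_pfn, nr_pages;
    uint16_t segment;
    uint16_t devices[4];        /* BDFs covered by this RMRR */
    unsigned int devices_cnt;
};

/* Callback convention: <0 on error, 1 when the current RMRR has been
 * recorded (skip its remaining devices), 0 to keep iterating. */
typedef int (*grdm_cb_t)(uint64_t start_pfn, uint64_t nr, uint32_t sbdf, void *ctxt);

struct grdm_ctxt {
    const uint32_t *assigned_sbdf;  /* SBDFs assigned to the domain */
    unsigned int nr_assigned;
};

static int report_if_assigned(uint64_t start_pfn, uint64_t nr, uint32_t sbdf, void *ctxt)
{
    const struct grdm_ctxt *c = ctxt;

    for ( unsigned int i = 0; i < c->nr_assigned; i++ )
        if ( c->assigned_sbdf[i] == sbdf )
        {
            printf("reserve pfn %#llx + %llu page(s) for sbdf %#x\n",
                   (unsigned long long)start_pfn, (unsigned long long)nr,
                   (unsigned int)sbdf);
            return 1;           /* recorded once; skip the rest of this RMRR */
        }

    return 0;
}

static int for_each_rmrr_sbdf(const struct rmrr_unit *rmrr, unsigned int nr_rmrr,
                              grdm_cb_t cb, void *ctxt)
{
    for ( unsigned int r = 0; r < nr_rmrr; r++ )
    {
        int rc = 0;

        /* Check the bound before reading the device slot. */
        for ( unsigned int i = 0; i < rmrr[r].devices_cnt && !rc; i++ )
        {
            rc = cb(rmrr[r].base_pfn, rmrr[r].nr_pages,
                    SBDF(rmrr[r].segment, rmrr[r].devices[i]), ctxt);
            if ( rc < 0 )
                return rc;      /* hard error: abort the whole walk */
        }
        /* rc == 1 only ends this RMRR's device loop; continue with the next. */
    }

    return 0;
}

int main(void)
{
    struct rmrr_unit rmrr[] = {
        { .base_pfn = 0xab000, .nr_pages = 16, .segment = 0,
          .devices = { 0x00f8, 0x0100 }, .devices_cnt = 2 },
    };
    uint32_t assigned[] = { SBDF(0, 0x0100) };
    struct grdm_ctxt c = { .assigned_sbdf = assigned, .nr_assigned = 1 };

    return for_each_rmrr_sbdf(rmrr, 1, report_if_assigned, &c) < 0;
}

Returning 1 from the callback only short-circuits the device loop of the current RMRR, while a negative value aborts the whole walk, matching the convention used in fragment #1 above.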