From: "Chen, Tiejun"
Subject: Re: [v4][PATCH 2/9] xen:x86: define a new hypercall to get RMRR mappings
Date: Tue, 26 Aug 2014 11:12:14 +0800
Message-ID: <53FBFB0E.8030305@intel.com>
References: <1408702186-24432-1-git-send-email-tiejun.chen@intel.com>
 <1408702186-24432-3-git-send-email-tiejun.chen@intel.com>
 <53F7212A.2030003@citrix.com> <53FB1C2E.5030905@intel.com>
 <53FB2715.8050303@citrix.com>
In-Reply-To: <53FB2715.8050303@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Andrew Cooper, JBeulich@suse.com, ian.jackson@eu.citrix.com,
 stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
 yang.z.zhang@intel.com, kevin.tian@intel.com
Cc: xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 2014/8/25 20:07, Andrew Cooper wrote:
> On 25/08/14 12:21, Chen, Tiejun wrote:
>> On 2014/8/22 18:53, Andrew Cooper wrote:
>>> On 22/08/14 11:09, Tiejun Chen wrote:
>>>> We need this new hypercall to get the RMRR mappings for a VM.
>>>>
>>>> Signed-off-by: Tiejun Chen
>>>> ---
>>>>   xen/arch/x86/mm.c           | 71 +++++++++++++++++++++++++++++++++++++++++++++
>>>>   xen/include/public/memory.h | 37 ++++++++++++++++++++++-
>>>>   2 files changed, 107 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>>> index d23cb3f..e0d6650 100644
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -123,6 +123,7 @@
>>>>   #include
>>>>   #include
>>>>   #include
>>>> +#include
>>>>
>>>>   /* Mapping of the fixmap space needed early. */
>>>>   l1_pgentry_t __attribute__ ((__section__ (".bss.page_aligned")))
>>>> @@ -4842,6 +4843,76 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>           return rc;
>>>>       }
>>>>
>>>> +    case XENMEM_reserved_device_memory_map:
>>>> +    {
>>>> +        struct xen_reserved_device_memory_map map;
>>>> +        XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
>>>> +        XEN_GUEST_HANDLE_PARAM(xen_reserved_device_memory_t) buffer_param;
>>>> +        unsigned int i = 0;
>>>> +        static unsigned int nr_entries = 0;
>>>> +        static struct xen_reserved_device_memory *rmrr_map;
>>>
>>> Absolutely not.  This hypercall can easily be run concurrently.
>>>
>>>> +        struct acpi_rmrr_unit *rmrr;
>>>> +
>>>> +        if ( copy_from_guest(&map, arg, 1) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        if ( !nr_entries )
>>>> +            /* Currently we just need to cover RMRR. */
>>>> +            list_for_each_entry( rmrr, &acpi_rmrr_units, list )
>>>> +                nr_entries++;
>>>
>>> Maintain a global count as entries are added/removed from this list.
>>> It is a waste of time recounting this list for each hypercall.
>>
>> Are you saying push this 'nr_entries' as a global count somewhere? I
>> guess I can set this when we first construct acpi_rmrr_units in the
>> ACPI code.
>
> Not named "nr_entries", but yes.  It is constant after boot.
>
>>>> +
>>>> +        if ( !nr_entries )
>>>> +            return -ENOENT;
>>>> +        else
>>>> +        {
>>>> +            if ( rmrr_map == NULL )
>>>> +            {
>>>> +                rmrr_map = xmalloc_array(xen_reserved_device_memory_t,
>>>> +                                         nr_entries);
>>>
>>> You can do all of this without any memory allocation.
>>
>> What is your way without any memory allocation?

Do you mean I should predefine a static array here? But then how do we
predetermine its size?
Or do you mean I should do something with a single rmrr_map on the
stack, like this:

struct xen_mem_reserved_device_memory rmrr_map;

list_for_each_entry( rmrr, &acpi_rmrr_units, list )
{
    rmrr_map.start_pfn = ...;
    rmrr_map.nr_pages = ...;
    if ( copy_to_guest_offset(buffer, i, &rmrr_map, 1) )
        return -EFAULT;
    i++;
}

>> I will check this.
>
> It is easy...
>
>>>> +                if ( rmrr_map == NULL )
>>>> +                {
>>>> +                    return -ENOMEM;
>>>> +                }
>>>> +
>>>> +                list_for_each_entry( rmrr, &acpi_rmrr_units, list )
>>>> +                {
>>>> +                    rmrr_map[i].pfn = rmrr->base_address >> PAGE_SHIFT;
>>>> +                    rmrr_map[i].count = PAGE_ALIGN(rmrr->end_address -
>>>> +                                                   rmrr->base_address) /
>>>> +                                        PAGE_SIZE;
>>>> +                    i++;
>
> In this loop, construct one on the stack and copy_to_guest, breaking if
> there is a fault or you exceed the guest's array.

It's not possible to exceed the guest's array, since the caller always
checks for that error, -ENOBUFS, and on seeing it can reallocate an
appropriately sized array.

Thanks
Tiejun

>
>>>> +                }
>>>> +            }
>>>> +        }
>>>> +
>>>> +        if ( map.nr_entries < nr_entries )
>>>> +        {
>>>> +            map.nr_entries = nr_entries;
>>>> +            if ( copy_to_guest(arg, &map, 1) )
>>>> +                return -EFAULT;
>>>> +            return -ENOBUFS;
>>>> +        }
>>>> +
>>>> +        map.nr_entries = nr_entries;
>>>> +        buffer_param = guest_handle_cast(map.buffer,
>>>> +                                         xen_reserved_device_memory_t);
>>>> +        buffer = guest_handle_from_param(buffer_param,
>>>> +                                         xen_reserved_device_memory_t);
>>>> +        if ( !guest_handle_okay(buffer, map.nr_entries) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        for ( i = 0; i < map.nr_entries; ++i )
>>>> +        {
>>>> +            if ( copy_to_guest_offset(buffer, i, rmrr_map + i, 1) )
>>>> +                return -EFAULT;
>>>> +        }
>>>> +
>>>> +        if ( copy_to_guest(arg, &map, 1) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        return 0;
>>>> +    }
>>>> +
>>>>       default:
>>>>           return subarch_memory_op(cmd, arg);
>>>>       }
>>>> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>>>> index 2c57aa0..8481843 100644
>>>> --- a/xen/include/public/memory.h
>>>> +++ b/xen/include/public/memory.h
>>>> @@ -523,7 +523,42 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>>>>
>>>>   #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>>
>>>> -/* Next available subop number is 26 */
>>>> +/*
>>>> + * Some devices may reserve some range.
>>>
>>> "range" is not a useful unit of measure.
>>
>> So what about this?
>>
>> The regions of memory used for some devices should be reserved.
>
> No - it is not a case of "should", it is a case of "must".
>
> "For legacy reasons, some devices must be configured with special memory
> regions to function correctly.  The guest must avoid using any of these
> regions."
>
>>>> + *
>>>> + * Currently we just have RMRR
>>>> + * - Reserved memory Region Reporting Structure,
>>>> + * So returns the RMRR memory map as it was when the domain
>>>> + * was started.
>>>> + */
>>>> +#define XENMEM_reserved_device_memory_map    26
>>>> +struct xen_reserved_device_memory {
>>>
>>> xen_mem_ to match the prevailing style
>>
>> Okay.
>>
>>>> +    /* PFN of the current mapping of the page. */
>>>> +    xen_pfn_t pfn;
>>>> +    /* Number of the current mapping pages. */
>>>> +    xen_ulong_t count;
>>>> +};
>>>
>>> This struct marks a range, but the fields don't make it clear.  I would
>>> suggest "start" and "nr_frames" as names.
>>
>> I'd prefer Jan's suggestion.
>
> As do I,
>
> ~Andrew
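
For reference, the allocation-free shape Andrew is describing could look
roughly like the sketch below: each entry is built on the stack inside the
list walk and copied straight into the guest's buffer, so no xmalloc and no
static state are needed. This is only an illustration against the names used
in the patch above, not the committed code; in particular, nr_rmrr_units
stands in for the hypothetical global count discussed earlier and is not part
of the posted patch.

    case XENMEM_reserved_device_memory_map:
    {
        struct xen_reserved_device_memory_map map;
        XEN_GUEST_HANDLE_PARAM(xen_reserved_device_memory_t) buffer_param;
        XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
        struct acpi_rmrr_unit *rmrr;
        unsigned int i = 0;

        if ( copy_from_guest(&map, arg, 1) )
            return -EFAULT;

        /* nr_rmrr_units: hypothetical global count maintained while the
         * ACPI code builds acpi_rmrr_units; constant after boot. */
        if ( !nr_rmrr_units )
            return -ENOENT;

        /* Buffer too small: report the required size so the caller can
         * reallocate and retry. */
        if ( map.nr_entries < nr_rmrr_units )
        {
            map.nr_entries = nr_rmrr_units;
            if ( copy_to_guest(arg, &map, 1) )
                return -EFAULT;
            return -ENOBUFS;
        }

        buffer_param = guest_handle_cast(map.buffer,
                                         xen_reserved_device_memory_t);
        buffer = guest_handle_from_param(buffer_param,
                                         xen_reserved_device_memory_t);
        if ( !guest_handle_okay(buffer, nr_rmrr_units) )
            return -EFAULT;

        list_for_each_entry( rmrr, &acpi_rmrr_units, list )
        {
            /* Build one entry on the stack and copy it out immediately. */
            xen_reserved_device_memory_t rdm = {
                .pfn   = rmrr->base_address >> PAGE_SHIFT,
                .count = PAGE_ALIGN(rmrr->end_address -
                                    rmrr->base_address) / PAGE_SIZE,
            };

            if ( copy_to_guest_offset(buffer, i, &rdm, 1) )
                return -EFAULT;
            ++i;
        }

        map.nr_entries = i;
        if ( copy_to_guest(arg, &map, 1) )
            return -EFAULT;

        return 0;
    }

Because the size check happens before anything is copied, a caller that gets
-ENOBUFS simply resizes its array to map.nr_entries and reissues the
hypercall, which is the retry pattern Tiejun refers to above.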