From: "Chen, Tiejun" <tiejun.chen@intel.com>
Subject: Re: [RFC][PATCH 10/13] tools: extend XENMEM_set_memory_map
Date: Fri, 15 May 2015 15:09:54 +0800
Message-ID: <55559BC2.6030104@intel.com>
In-Reply-To: <5555AB5F020000780007A57F@mail.emea.novell.com>
References: <1428657724-3498-1-git-send-email-tiejun.chen@intel.com>
 <1428657724-3498-11-git-send-email-tiejun.chen@intel.com>
 <5535206C0200007800073D05@mail.emea.novell.com>
 <5555608B.2060108@intel.com>
 <5555AB5F020000780007A57F@mail.emea.novell.com>
To: Jan Beulich
Cc: tim@xen.org, kevin.tian@intel.com, wei.liu2@citrix.com,
 ian.campbell@citrix.com, andrew.cooper3@citrix.com,
 Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org,
 stefano.stabellini@citrix.com, yang.z.zhang@intel.com
List-Id: xen-devel@lists.xenproject.org

On 2015/5/15 14:16, Jan Beulich wrote:
>>>> On 15.05.15 at 04:57, wrote:
>> On 2015/4/20 21:51, Jan Beulich wrote:
>>>>>> On 10.04.15 at 11:22, wrote:
>>>> --- a/tools/libxl/libxl_dom.c
>>>> +++ b/tools/libxl/libxl_dom.c
>>>> @@ -787,6 +787,70 @@ out:
>>>>      return rc;
>>>>  }
>>>>
>>>> +static int libxl__domain_construct_memmap(libxl_ctx *ctx,
>>>> +                                          libxl_domain_config *d_config,
>>>> +                                          uint32_t domid,
>>>> +                                          struct xc_hvm_build_args *args,
>>>> +                                          int num_pcidevs,
>>>> +                                          libxl_device_pci *pcidevs)
>>>> +{
>>>> +    unsigned int nr = 0, i;
>>>> +    /* We always own at least one lowmem entry. */
>>>> +    unsigned int e820_entries = 1;
>>>> +    uint64_t highmem_end = 0, highmem_size = args->mem_size - args->lowmem_size;
>>>> +    struct e820entry *e820 = NULL;
>>>> +
>>>> +    /* Add all rdm entries. */
>>>> +    e820_entries += d_config->num_rdms;
>>>> +
>>>> +    /* If we should have a highmem range. */
>>>> +    if (highmem_size)
>>>> +    {
>>>> +        highmem_end = (1ull<<32) + highmem_size;
>>>> +        e820_entries++;
>>>> +    }
>>>> +
>>>> +    e820 = malloc(sizeof(struct e820entry) * e820_entries);
>>>> +    if (!e820) {
>>>> +        return -1;
>>>> +    }
>>>> +
>>>> +    /* Low memory */
>>>> +    e820[nr].addr = 0x100000;
>>>> +    e820[nr].size = args->lowmem_size - 0x100000;
>>>> +    e820[nr].type = E820_RAM;
>>>
>>> If you really mean it to be this lax (not covering the low 1Mb), then
>>> you need to explain why in a comment (and the consuming side
>>> should also have a similar explanation then).
>>>
>>
>> Okay, we may need this here:
>>
>> /*
>>  * Low RAM starts at least from 1M to make sure all standard regions
>>  * of the PC memory map, like BIOS, VGA memory-mapped I/O and vgabios,
>>  * have enough space.
>>  */
>> #define GUEST_LOW_MEM_START_DEFAULT 0x100000
>
> But this only states a generic fact; it doesn't explain why you can
> lump together all the different things below 1Mb into a single E820
> entry.

I'm not sure I was clear: the various regions below 1MB are *not*
lumped into a single entry. Here we construct just these mappings:

#1. [1M, lowmem_end]
#2. [RDM]
#3. [4G, highmem_end]

The regions below 1MB are still described by multiple e820 entries
constructed by hvmloader; this patch doesn't change anything there.
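For illustration, here is a minimal standalone sketch of just those
three ranges being built (the sizes, the RDM range, LOW_MEM_START and
the cut-down struct e820entry are made-up stand-ins for this example,
not the libxl definitions):

/* Standalone sketch: build the three e820 ranges discussed above. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define E820_RAM      1
#define E820_RESERVED 2
#define LOW_MEM_START 0x100000ULL               /* 1MB */

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

int main(void)
{
    uint64_t lowmem_size  = 3ULL << 30;         /* example: 3GB below 4GB */
    uint64_t highmem_size = 1ULL << 30;         /* example: 1GB above 4GB */
    uint64_t rdm_start = 0xa0000000ULL;         /* example RDM range */
    uint64_t rdm_size  = 0x100000ULL;
    struct e820entry e820[3];
    unsigned int nr = 0, i;

    /* #1: [1MB, lowmem_end) as RAM; the low 1MB is left to hvmloader. */
    e820[nr].addr = LOW_MEM_START;
    e820[nr].size = lowmem_size - LOW_MEM_START;
    e820[nr].type = E820_RAM;
    nr++;

    /* #2: the RDM range, marked reserved. */
    e820[nr].addr = rdm_start;
    e820[nr].size = rdm_size;
    e820[nr].type = E820_RESERVED;
    nr++;

    /* #3: [4GB, highmem_end) as RAM, only when there is high memory. */
    if (highmem_size) {
        e820[nr].addr = 1ULL << 32;
        e820[nr].size = highmem_size;
        e820[nr].type = E820_RAM;
        nr++;
    }

    for (i = 0; i < nr; i++)
        printf("[%#" PRIx64 " - %#" PRIx64 ") type %" PRIu32 "\n",
               e820[i].addr, e820[i].addr + e820[i].size, e820[i].type);
    return 0;
}

Running it prints the three ranges in ascending order, matching the
#1/#2/#3 layout above.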
>
>>>> +    nr++;
>>>> +
>>>> +    /* RDM mapping */
>>>> +    for (i = 0; i < d_config->num_rdms; i++) {
>>>> +        /*
>>>> +         * We should drop this kind of rdm entry.
>>>> +         */
>>>> +        if (d_config->rdms[i].flag == LIBXL_RDM_RESERVE_FLAG_INVALID)
>>>> +            continue;
>>>> +
>>>> +        e820[nr].addr = d_config->rdms[i].start;
>>>> +        e820[nr].size = d_config->rdms[i].size;
>>>> +        e820[nr].type = E820_RESERVED;
>>>> +        nr++;
>>>> +    }
>>>
>>> Is this guaranteed not to produce overlapping entries?
>>>
>>
>> Right, I would add this at the beginning:
>>
>> if (e820_entries >= E820MAX) {
>>     LOG(ERROR, "Ooops! Too many entries in the memory map!\n");
>>     return -1;
>> }
>
> That would be a protection against too many entries, but not against
> overlapping ones.
>

Are you referring to these mappings?

#1. [1M, lowmem_end]
#2. [RDM]
#3. [4G, highmem_end]

Before we call this function we have already finished handling the RDMs
according to our policy, so there are no overlapping entries here.

Thanks
Tiejun
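As a footnote on the overlap question: the patch relies on the RDM
policy having resolved any conflicts before this function runs, but a
defensive check over the finished table would be cheap. A minimal
sketch (check_e820_overlap is a hypothetical helper, not libxl code,
and it assumes the table is sorted by ascending start address):

#include <stdint.h>
#include <stdio.h>

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Return 0 if no two adjacent entries overlap, -1 on the first overlap
 * found. Assumes entries are sorted by ascending start address. */
static int check_e820_overlap(const struct e820entry *e820, unsigned int nr)
{
    unsigned int i;

    for (i = 0; i + 1 < nr; i++) {
        if (e820[i].addr + e820[i].size > e820[i + 1].addr) {
            fprintf(stderr, "e820 entries %u and %u overlap\n", i, i + 1);
            return -1;
        }
    }
    return 0;
}

int main(void)
{
    /* The three ranges from the discussion, with example values. */
    const struct e820entry map[] = {
        { 0x100000ULL,    0xa0000000ULL - 0x100000ULL, 1 },  /* low RAM  */
        { 0xa0000000ULL,  0x100000ULL,                 2 },  /* RDM      */
        { 0x100000000ULL, 1ULL << 30,                  1 },  /* high RAM */
    };

    return check_e820_overlap(map, 3) ? 1 : 0;
}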