From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mukesh Rathor
Subject: Re: [PATCH] Boot PV guests with more than 128GB (v2) for 3.7
Date: Thu, 2 Aug 2012 16:04:03 -0700
Message-ID: <20120802160403.02de484e@mantra.us.oracle.com>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
 <20120801155040.GB15812@phenom.dumpdata.com>
 <501A5EF7020000780009219C@nat28.tlf.novell.com>
 <20120802141710.GF16749@phenom.dumpdata.com>
In-Reply-To: <20120802141710.GF16749@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini, Jan Beulich, xen-devel
List-Id: xen-devel@lists.xenproject.org

On Thu, 2 Aug 2012 10:17:10 -0400
Konrad Rzeszutek Wilk wrote:

> On Thu, Aug 02, 2012 at 10:05:27AM +0100, Jan Beulich wrote:
> > >>> On 01.08.12 at 17:50, Konrad Rzeszutek Wilk wrote:
> > > With these patches I've gotten it to boot up to 384GB. Around
> > > that area something weird happens - mainly the pagetables that
> > > the toolstack allocated seem to have missing data. I hadn't
> > > looked into it in detail, but this is what the domain builder tells me:
> > >
> > > xc_dom_alloc_segment:   ramdisk      : 0xffffffff82278000 -> 0xffffffff930b4000  (pfn 0x2278 + 0x10e3c pages)
> > > xc_dom_malloc            : 1621 kB
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x2278+0x10e3c at 0x7fb0853a2000
> > > xc_dom_do_gunzip: unzip ok, 0x4ba831c -> 0x10e3be10
> > > xc_dom_alloc_segment:   phys2mach    : 0xffffffff930b4000 -> 0xffffffffc30b4000  (pfn 0x130b4 + 0x30000 pages)
> > > xc_dom_malloc            : 4608 kB
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x130b4+0x30000 at 0x7fb0553a2000
> > > xc_dom_alloc_page   :   start info   : 0xffffffffc30b4000 (pfn 0x430b4)
> > > xc_dom_alloc_page   :   xenstore     : 0xffffffffc30b5000 (pfn 0x430b5)
> > > xc_dom_alloc_page   :   console      : 0xffffffffc30b6000 (pfn 0x430b6)
> > > nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
> > > nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
> > > nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffffffffff, 2 table(s)
> > > nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffffc33fffff, 538 table(s)
> > > xc_dom_alloc_segment:   page tables  : 0xffffffffc30b7000 -> 0xffffffffc32d5000  (pfn 0x430b7 + 0x21e pages)
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x430b7+0x21e at 0x7fb055184000
> > > xc_dom_alloc_page   :   boot stack   : 0xffffffffc32d5000 (pfn 0x432d5)
> > > xc_dom_build_image  : virt_alloc_end : 0xffffffffc32d6000
> > > xc_dom_build_image  : virt_pgtab_end : 0xffffffffc3400000
> > >
> > > Note it is 0xffffffffc30b4000 - so already past the level2_kernel_pgt
> > > (L3[510]) and in level2_fixmap_pgt territory (L3[511]).
> > >
> > > At that stage we are still operating using the Xen provided
> > > pagetable - which looks to have L4[511][511] empty! Which
> > > sounds to me like a Xen tool-stack problem? Jan, have you seen
> > > something similar to this?
> >
> > No we haven't, but I also don't think anyone tried to create as
> > big a DomU. I was, however, under the impression that DomU-s
> > this big had been created at Oracle before. Or was that only up
> > to 256Gb perhaps?
>
> Mukesh do you recall? Was it with OVM2.2.2 which was 3.4 based?
> It might be that we did not have the 1TB hardware at that time yet.

Yes, in OVM 2.x I debugged/booted up to a 500GB domU, so it looks like
something got broken since then. I can debug it later if it becomes hot.

thanks,
Mukesh
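
The arithmetic behind the L3[510]/L3[511] observation above is easy to
check. Below is a minimal standalone sketch - an illustration only, not
code from libxc or the kernel - assuming standard x86-64 4-level paging
with 4KB pages and 9 index bits per level. It prints the page-table
indices for the start-info address 0xffffffffc30b4000 from the log, and
re-derives the "538 table(s)" figure as the number of 2MB chunks between
the start of the kernel mapping and virt_pgtab_end (the same arithmetic,
not the toolstack's actual code):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Standard x86-64 4-level paging: 4KB pages, 9 index bits per level. */
#define PT_INDEX(va, shift)  (((uint64_t)(va) >> (shift)) & 0x1ffULL)

int main(void)
{
    /* Start-info page from the xc_dom log above. */
    uint64_t va = 0xffffffffc30b4000ULL;

    printf("va = 0x%016" PRIx64 "\n", va);
    printf("  L4 index (bits 47-39): %" PRIu64 "\n", PT_INDEX(va, 39)); /* 511 */
    printf("  L3 index (bits 38-30): %" PRIu64 "\n", PT_INDEX(va, 30)); /* 511 */
    printf("  L2 index (bits 29-21): %" PRIu64 "\n", PT_INDEX(va, 21));
    printf("  L1 index (bits 20-12): %" PRIu64 "\n", PT_INDEX(va, 12));

    /*
     * Rough re-derivation of the "538 table(s)" line: the number of
     * 2MB-aligned chunks (one L1 table each) from the start of the
     * kernel mapping up to virt_pgtab_end reported by the builder.
     */
    uint64_t map_start = 0xffffffff80000000ULL;
    uint64_t map_end   = 0xffffffffc3400000ULL;   /* virt_pgtab_end */
    printf("  L1 tables for the mapped range: %" PRIu64 "\n",
           (map_end - map_start) >> 21);          /* 538 */
    return 0;
}

With those assumptions it should print 511 for both the L4 and L3
indices and 538 for the table count, matching the log: everything the
builder places at or above 0xffffffffc0000000 sits in L4[511]/L3[511]
(level2_fixmap_pgt territory), one slot past level2_kernel_pgt at
L3[510], which is why an empty L4[511][511] in the boot pagetables is
fatal for a domU this large.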