* Populate-on-demand memory problem
From: Dietmar Hahn @ 2010-07-27 7:48 UTC
To: xen-devel; +Cc: George Dunlap
Hi list,
we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
We have an HVM guest and were already using target_mem < max_mem at guest
startup.
With the new xen version we get
(XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
I went through the code and the PoD patches
(http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
to understand the behavior. We use the following configuration:
maxmem = 4096
memory = 3096
What I see is:
- our guest boots with an e820 map showing maxmem.
- reading memory/target from xenstore returns '3170304' (KiB), i.e. 3096MB or 792576 pages.
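(For reference, memory/target is reported in KiB and a page is 4KiB, so the
numbers work out as:

    3170304 KiB / 1024 = 3096 MB
    3170304 KiB / 4    = 792576 pages
    maxmem: 4096 MB * 256 pages/MB = 1048576 pages)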
The guest then uses the target memory and returns 1000MB to the hypervisor
via the XENMEM_decrease_reservation hypercall.
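As a reference point, a guest balloon driver returns frames with a hypercall
of roughly this shape (a minimal C sketch in the style of the Linux driver;
the helper name and frame list are made up, error handling omitted):

    #include <xen/memory.h>

    /* Hand nr single (4KiB) frames, listed in gmfns, back to Xen.
     * Returns the number of extents actually released. */
    static long give_back_pages(xen_pfn_t *gmfns, unsigned long nr)
    {
        struct xen_memory_reservation reservation = {
            .nr_extents   = nr,
            .extent_order = 0,          /* order 0: 4KiB extents */
            .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, gmfns);
        return HYPERVISOR_memory_op(XENMEM_decrease_reservation,
                                    &reservation);
    }

Each frame released this way should remove one PoD entry without consuming
a page from the PoD cache.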
Later I try to map the complete domU memory into dom0 kernel space, and at
this point I get the 'Out of populate-on-demand memory' crash.
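For reference, mapping guest frames from dom0 goes through libxc roughly as
below (a sketch against xen-4.0's interface; the wrapper name is made up).
Mapping a not-yet-populated PoD frame this way is what forces Xen to back it
via p2m_pod_demand_populate(), which is where the crash fires:

    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Map one frame of the domU into this dom0 process. */
    static void *map_one_guest_page(int xc_handle, uint32_t domid,
                                    unsigned long gmfn)
    {
        return xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
                                    PROT_READ, gmfn);
    }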
As far as I understand (ignoring p2m_pod_emergency_sweep):
- on populating a page
- the page is taken from the pod cache
- p2md->pod.count--
- p2md->pod.entry_count--
- page gets type p2m_ram_rw
- decreasing a page
- p2md->pod.entry_count--
- page gets type p2m_invalid
So once the guest has touched all of the target memory and has given back
all of the (maxmem - target) memory, both p2md->pod.count and
p2md->pod.entry_count should be zero.
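To make the expected arithmetic explicit, here is a simplified model of that
bookkeeping (not the real Xen code; an invented struct, following the two
cases above):

    /* entry_count: PoD p2m entries not yet backed by a real page.
     * count:       zeroed pages held in the PoD cache to back them. */
    struct pod_state { long entry_count, count; };

    /* Guest touches a PoD entry: back it from the cache. */
    void on_demand_populate(struct pod_state *pod)
    {
        pod->count--;          /* cache page handed to the guest */
        pod->entry_count--;    /* one fewer unbacked entry */
        /* the p2m entry becomes p2m_ram_rw */
    }

    /* Guest balloons an entry away before ever touching it. */
    void on_decrease_reservation(struct pod_state *pod)
    {
        pod->entry_count--;    /* the entry simply disappears */
        /* the p2m entry becomes p2m_invalid */
    }

Starting from entry_count = maxmem pages and count = target pages, touching
target pages and ballooning away the remaining (maxmem - target) entries
drives both counters to zero.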
I added some tracing in the hypervisor and see on start of the guest:
p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
This pod.count is lower than the target seen in the guest!
On the first call of p2m_pod_demand_populate() I can see
p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
So pod.entry_count=1048064 (~4096MB) matches maxmem, but
pod.count=791264 is lower than the target memory in xenstore.
Any help is welcome!
Thanks.
Dietmar.
--
Company details: http://ts.fujitsu.com/imprint.html
* Re: Populate-on-demand memory problem
From: George Dunlap @ 2010-07-27 13:10 UTC
To: Dietmar Hahn; +Cc: xen-devel
Hmm, looks like I neglected to push a fix upstream. Can you test it
with the attached patch, and tell me if that fixes your problem?
-George
On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
<dietmar.hahn@ts.fujitsu.com> wrote:
> Hi list,
>
> we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
> xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
> We have an HVM guest and were already using target_mem < max_mem at guest
> startup.
> With the new xen version we get
> (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
> I went through the code and the PoD patches
> (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
> to understand the behavior. We use the following configuration:
> maxmem = 4096
> memory = 3096
> What I see is:
> - our guest boots with an e820 map showing maxmem.
> - reading memory/target from xenstore returns '3170304' (KiB), i.e. 3096MB or 792576 pages.
> The guest then uses the target memory and returns 1000MB to the hypervisor
> via the XENMEM_decrease_reservation hypercall.
>
> Later I try to map the complete domU memory into dom0 kernel space, and at
> this point I get the 'Out of populate-on-demand memory' crash.
>
> As far as I understand (ignoring p2m_pod_emergency_sweep):
> - on populating a page
> - the page is taken from the pod cache
> - p2md->pod.count--
> - p2md->pod.entry_count--
> - page gets type p2m_ram_rw
> - decreasing a page
> - p2md->pod.entry_count--
> - page gets type p2m_invalid
>
> So once the guest has touched all of the target memory and has given back
> all of the (maxmem - target) memory, both p2md->pod.count and
> p2md->pod.entry_count should be zero.
> I added some tracing in the hypervisor and see on start of the guest:
> p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
> This pod.count is lower than the target seen in the guest!
> On the first call of p2m_pod_demand_populate() I can see
> p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
> So pod.entry_count=1048064 (~4096MB) matches maxmem, but
> pod.count=791264 is lower than the target memory in xenstore.
>
> Any help is welcome!
> Thanks.
> Dietmar.
>
> --
> Company details: http://ts.fujitsu.com/imprint.html
>
[-- Attachment #2: 20091111-pod-domain-build-math-error.diff --]
[-- Type: text/x-diff, Size: 1735 bytes --]
diff -r 96b7350b1490 tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c Wed Nov 11 14:11:24 2009 +0000
+++ b/tools/libxc/xc_hvm_build.c Thu Nov 12 16:49:53 2009 +0000
@@ -107,7 +107,6 @@
xen_pfn_t *page_array = NULL;
unsigned long i, nr_pages = (unsigned long)memsize << (20 - PAGE_SHIFT);
unsigned long target_pages = (unsigned long)target << (20 - PAGE_SHIFT);
- unsigned long pod_pages = 0;
unsigned long entry_eip, cur_pages;
struct xen_add_to_physmap xatp;
struct shared_info *shared_info;
@@ -208,11 +207,6 @@
if ( done > 0 )
{
done <<= SUPERPAGE_PFN_SHIFT;
- if ( pod_mode && target_pages > cur_pages )
- {
- int d = target_pages - cur_pages;
- pod_pages += ( done < d ) ? done : d;
- }
cur_pages += done;
count -= done;
}
@@ -224,15 +218,16 @@
rc = xc_domain_memory_populate_physmap(
xc_handle, dom, count, 0, 0, &page_array[cur_pages]);
cur_pages += count;
- if ( pod_mode )
- pod_pages -= count;
}
}
+ /* Subtract 0x20 from target_pages for the VGA "hole". Xen will
+ * adjust the PoD cache size so that domain tot_pages will be
+ * target_pages - 0x20 after this call. */
if ( pod_mode )
rc = xc_domain_memory_set_pod_target(xc_handle,
dom,
- pod_pages,
+ target_pages - 0x20,
NULL, NULL, NULL);
if ( rc != 0 )
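For context: the dropped pod_pages variable ran a separate incremental count
while superpages were being allocated, and that running total drifted from
the real cache requirement (hence the "math-error" in the patch filename);
the fix derives the PoD target directly from target_pages instead. Checked
against the configuration from this thread (0x20 = 32 pages = 128KiB,
matching the VGA hole at 0xA0000-0xC0000):

    target_pages            = 3096 MB * 256 = 792576 pages
    set_pod_target argument = 792576 - 0x20 = 792544 pages

The prototype being called is, from xen-4.0's tools/libxc/xenctrl.h as best
I recall (check your tree):

    int xc_domain_memory_set_pod_target(int xc_handle, uint32_t domid,
                                        uint64_t target_pages,
                                        uint64_t *tot_pages,
                                        uint64_t *pod_cache_pages,
                                        uint64_t *pod_entries);

The three trailing pointers, left NULL in the patch, optionally return the
domain's tot_pages, the PoD cache size, and the PoD entry count.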
* Re: Populate-on-demand memory problem
From: Dietmar Hahn @ 2010-07-28 8:05 UTC
To: George Dunlap; +Cc: xen-devel
On 27.07.2010 George Dunlap wrote:
> Hmm, looks like I neglected to push a fix upstream. Can you test it
> with the attached patch, and tell me if that fixes your problem?
With this patch everything works fine again and the counters look right.
Please add it to xen-unstable.
Many thanks.
Dietmar.
>
> -George
>
> On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
> <dietmar.hahn@ts.fujitsu.com> wrote:
> > Hi list,
> >
> > we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
> > xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
> > We have an HVM guest and were already using target_mem < max_mem at guest
> > startup.
> > With the new xen version we get
> > (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
> > I went through the code and the PoD patches
> > (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
> > to understand the behavior. We use the following configuration:
> > maxmem = 4096
> > memory = 3096
> > What I see is:
> > - our guest boots with an e820 map showing maxmem.
> > - reading memory/target from xenstore returns '3170304' (KiB), i.e. 3096MB or 792576 pages.
> > The guest then uses the target memory and returns 1000MB to the hypervisor
> > via the XENMEM_decrease_reservation hypercall.
> >
> > Later I try to map the complete domU memory into dom0 kernel space, and at
> > this point I get the 'Out of populate-on-demand memory' crash.
> >
> > As far as I understand (ignoring p2m_pod_emergency_sweep):
> > - on populating a page
> > - the page is taken from the pod cache
> > - p2md->pod.count--
> > - p2md->pod.entry_count--
> > - page gets type p2m_ram_rw
> > - decreasing a page
> > - p2md->pod.entry_count--
> > - page gets type p2m_invalid
> >
> > So once the guest has touched all of the target memory and has given back
> > all of the (maxmem - target) memory, both p2md->pod.count and
> > p2md->pod.entry_count should be zero.
> > I added some tracing in the hypervisor and see on start of the guest:
> > p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
> > This pod.count is lower than the target seen in the guest!
> > On the first call of p2m_pod_demand_populate() I can see
> > p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
> > So pod.entry_count=1048064 (~4096MB) matches maxmem, but
> > pod.count=791264 is lower than the target memory in xenstore.
> >
> > Any help is welcome!
> > Thanks.
> > Dietmar.
> >
--
Company details: http://ts.fujitsu.com/imprint.html
* Re: Populate-on-demand memory problem
From: Jan Beulich @ 2010-08-09 8:48 UTC
To: Keir Fraser; +Cc: George Dunlap, xen-devel, Dietmar Hahn
Keir,
with Dietmar having tested this successfully, is there anything that
keeps this from being applied to -unstable (and perhaps also 4.0.1)?
Jan
>>> On 27.07.10 at 15:10, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> Hmm, looks like I neglected to push a fix upstream. Can you test it
> with the attached patch, and tell me if that fixes your problem?
>
> -George
>
> On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
> <dietmar.hahn@ts.fujitsu.com> wrote:
>> Hi list,
>>
>> we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
>> xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
>> We have an HVM guest and were already using target_mem < max_mem at guest
>> startup.
>> With the new xen version we get
>> (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
>> I went through the code and the PoD patches
>> (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
>> to understand the behavior. We use the following configuration:
>> maxmem = 4096
>> memory = 3096
>> What I see is:
>> - our guest boots with an e820 map showing maxmem.
>> - reading memory/target from xenstore returns '3170304' (KiB), i.e. 3096MB or 792576 pages.
>> The guest then uses the target memory and returns 1000MB to the hypervisor
>> via the XENMEM_decrease_reservation hypercall.
>>
>> Later I try to map the complete domU memory into dom0 kernel space, and at
>> this point I get the 'Out of populate-on-demand memory' crash.
>>
>> As far as I understand (ignoring p2m_pod_emergency_sweep):
>> - on populating a page
>> - the page is taken from the pod cache
>> - p2md->pod.count--
>> - p2md->pod.entry_count--
>> - page gets type p2m_ram_rw
>> - decreasing a page
>> - p2md->pod.entry_count--
>> - page gets type p2m_invalid
>>
>> So once the guest has touched all of the target memory and has given back
>> all of the (maxmem - target) memory, both p2md->pod.count and
>> p2md->pod.entry_count should be zero.
>> I added some tracing in the hypervisor and see on start of the guest:
>> p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
>> This pod.count is lower than the target seen in the guest!
>> On the first call of p2m_pod_demand_populate() I can see
>> p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
>> So pod.entry_count=1048064 (~4096MB) matches maxmem, but
>> pod.count=791264 is lower than the target memory in xenstore.
>>
>> Any help is welcome!
>> Thanks.
>> Dietmar.
>>
>> --
>> Company details: http://ts.fujitsu.com/imprint.html
>>
* Re: Populate-on-demand memory problem
From: Keir Fraser @ 2010-08-09 9:29 UTC
To: Jan Beulich; +Cc: George Dunlap, xen-devel@lists.xensource.com, Dietmar Hahn
On 09/08/2010 09:48, "Jan Beulich" <JBeulich@novell.com> wrote:
> Keir,
>
> with Dietmar having tested this successfully, is there anything that
> keeps this from being applied to -unstable (and perhaps also 4.0.1)?
George needs to resubmit it for inclusion, with a proper changeset comment
and a Signed-off-by line.
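(For readers outside the project: a changeset description of roughly the
following shape is what is being asked for; the text below is illustrative
only, not the actual commit message:

    tools: fix PoD target calculation in the HVM domain builder

    Drop the incremental pod_pages accounting, which could drift when
    superpages were allocated, and instead set the PoD target directly
    from target_pages, minus 0x20 pages for the VGA hole.

    Signed-off-by: Your Name <you@example.com>
)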
-- Keir
> Jan
>
>>>> On 27.07.10 at 15:10, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> Hmm, looks like I neglected to push a fix upstream. Can you test it
>> with the attached patch, and tell me if that fixes your problem?
>>
>> -George
>>
>> On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
>> <dietmar.hahn@ts.fujitsu.com> wrote:
>>> Hi list,
>>>
>>> we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
>>> xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
>>> We have an HVM guest and were already using target_mem < max_mem at guest
>>> startup.
>>> With the new xen version we get
>>> (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
>>> I went through the code and the PoD patches
>>> (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
>>> to understand the behavior. We use the following configuration:
>>> maxmem = 4096
>>> memory = 3096
>>> What I see is:
>>> - our guest boots with an e820 map showing maxmem.
>>> - reading memory/target from xenstore returns '3170304' (KiB), i.e. 3096MB or 792576 pages.
>>> The guest then uses the target memory and returns 1000MB to the hypervisor
>>> via the XENMEM_decrease_reservation hypercall.
>>>
>>> Later I try to map the complete domU memory into dom0 kernel space, and at
>>> this point I get the 'Out of populate-on-demand memory' crash.
>>>
>>> As far as I understand (ignoring p2m_pod_emergency_sweep):
>>> - on populating a page
>>> - the page is taken from the pod cache
>>> - p2md->pod.count--
>>> - p2md->pod.entry_count--
>>> - page gets type p2m_ram_rw
>>> - decreasing a page
>>> - p2md->pod.entry_count--
>>> - page gets type p2m_invalid
>>>
>>> So once the guest has touched all of the target memory and has given back
>>> all of the (maxmem - target) memory, both p2md->pod.count and
>>> p2md->pod.entry_count should be zero.
>>> I added some tracing in the hypervisor and see on start of the guest:
>>> p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
>>> This pod.count is lower than the target seen in the guest!
>>> On the first call of p2m_pod_demand_populate() I can see
>>> p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
>>> So pod.entry_count=1048064 (~4096MB) matches maxmem, but
>>> pod.count=791264 is lower than the target memory in xenstore.
>>>
>>> Any help is welcome!
>>> Thanks.
>>> Dietmar.
>>>
>>> --
>>> Company details: http://ts.fujitsu.com/imprint.html
>>>
* Re: Populate-on-demand memory problem
From: George Dunlap @ 2010-08-09 9:54 UTC
To: Keir Fraser; +Cc: xen-devel@lists.xensource.com, Dietmar Hahn, Jan Beulich
Sorry, I've been trying to test all of the p2m/PoD patches on a machine
with HAP (since some patches, like the one that enables replacing 4k
pages with a superpage, can only be tested with HAP), and have been
running into a bunch of problems.
But this patch can clearly stand on its own, so I'll post it later today.
-George
On 09/08/10 10:29, Keir Fraser wrote:
> On 09/08/2010 09:48, "Jan Beulich" <JBeulich@novell.com> wrote:
>
>> Keir,
>>
>> with Dietmar having tested this successfully, is there anything that
>> keeps this from being applied to -unstable (and perhaps also 4.0.1)?
>
> George needs to resubmit it for inclusion, with a proper changeset comment
> and a signed-off-by line.
>
> -- Keir
>
>> Jan
>>
>>>>> On 27.07.10 at 15:10, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>> Hmm, looks like I neglected to push a fix upstream. Can you test it
>>> with the attached patch, and tell me if that fixes your problem?
>>>
>>> -George
>>>
>>> On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
>>> <dietmar.hahn@ts.fujitsu.com> wrote:
>>>> Hi list,
>>>>
>>>> we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
>>>> xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
>>>> We have an HVM guest and were already using target_mem < max_mem at guest
>>>> startup.
>>>> With the new xen version we get
>>>> (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
>>>> I went through the code and the PoD patches
>>>> (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
>>>> to understand the behavior. We use the following configuration:
>>>> maxmem = 4096
>>>> memory = 3096
>>>> What I see is:
>>>> - our guest boots with an e820 map showing maxmem.
>>>> - reading memory/target from xenstore returns '3170304' (KiB), i.e. 3096MB or 792576 pages.
>>>> The guest then uses the target memory and returns 1000MB to the hypervisor
>>>> via the XENMEM_decrease_reservation hypercall.
>>>>
>>>> Later I try to map the complete domU memory into dom0 kernel space, and at
>>>> this point I get the 'Out of populate-on-demand memory' crash.
>>>>
>>>> As far as I understand (ignoring p2m_pod_emergency_sweep):
>>>> - on populating a page
>>>> - the page is taken from the pod cache
>>>> - p2md->pod.count--
>>>> - p2md->pod.entry_count--
>>>> - page gets type p2m_ram_rw
>>>> - decreasing a page
>>>> - p2md->pod.entry_count--
>>>> - page gets type p2m_invalid
>>>>
>>>> So once the guest has touched all of the target memory and has given back
>>>> all of the (maxmem - target) memory, both p2md->pod.count and
>>>> p2md->pod.entry_count should be zero.
>>>> I added some tracing in the hypervisor and see on start of the guest:
>>>> p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
>>>> This pod.count is lower than the target seen in the guest!
>>>> On the first call of p2m_pod_demand_populate() I can see
>>>> p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
>>>> So pod.entry_count=1048064 (~4096MB) matches maxmem, but
>>>> pod.count=791264 is lower than the target memory in xenstore.
>>>>
>>>> Any help is welcome!
>>>> Thanks.
>>>> Dietmar.
>>>>
>>>> --
>>>> Company details: http://ts.fujitsu.com/imprint.html
>>>>