From: George Dunlap <george.dunlap@eu.citrix.com>
To: Tim Deegan <tim@xen.org>
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH 1 of 2 RFC] xen, pod: Zero-check recently populated pages (checklast)
Date: Thu, 14 Jun 2012 15:24:46 +0100
Message-ID: <4FD9F42E.2060707@eu.citrix.com>
In-Reply-To: <20120614090725.GC82539@ocelot.phlegethon.org>

On 14/06/12 10:07, Tim Deegan wrote:
>
>
> At 13:02 +0100 on 08 Jun (1339160536), Jan Beulich wrote:
>>>>> On 08.06.12 at 13:45, George Dunlap<george.dunlap@eu.citrix.com>  wrote:
>>> --- a/xen/include/asm-x86/p2m.h
>>> +++ b/xen/include/asm-x86/p2m.h
>>> @@ -287,6 +287,9 @@ struct p2m_domain {
>>>           unsigned         reclaim_super; /* Last gpfn of a scan */
>>>           unsigned         reclaim_single; /* Last gpfn of a scan */
>>>           unsigned         max_guest;    /* gpfn of max guest demand-populate */
>>> +#define POD_HISTORY_MAX 128
>>> +        unsigned         last_populated[POD_HISTORY_MAX]; /* gpfn of last guest page demand-populated */
> This is the gpfns of the last 128 order-9 superpages populated, right?
Ah, yes -- just order 9.
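
For reference, the idea in rough, standalone form (stubbed helper and
illustrative names, not the actual patch code): record the gpfn of each
order-9 demand-populate in a small ring, and when a sweep is needed,
zero-check the most recent entries first, since they're the ones most
likely to still be untouched:

    #define POD_HISTORY_MAX 128
    #define SUPERPAGE_ORDER 9
    #define SUPERPAGE_PAGES (1UL << SUPERPAGE_ORDER)

    /* Stand-in for the real superpage zero-check/reclaim. */
    extern void zero_check_superpage(unsigned long gpfn);

    static unsigned long last_populated[POD_HISTORY_MAX]; /* recent 2MiB populates */
    static unsigned int last_populated_index;             /* next slot to overwrite */

    /* Called from the demand-populate path for order-9 allocations. */
    static void record_populate(unsigned long gpfn)
    {
        last_populated[last_populated_index] = gpfn & ~(SUPERPAGE_PAGES - 1);
        last_populated_index = (last_populated_index + 1) % POD_HISTORY_MAX;
    }

    /* Called before falling back to a full sweep: walk the ring from the
     * most recent entry backwards, since those are the most likely to
     * still be zeroed and reclaimable. */
    static void check_last_populated(void)
    {
        unsigned int i;

        for ( i = 0; i < POD_HISTORY_MAX; i++ )
        {
            /* Unsigned wraparound is harmless: 2^32 is a multiple of 128. */
            unsigned int slot = (last_populated_index - 1 - i) % POD_HISTORY_MAX;

            /* Treat 0 as an empty slot; good enough for a sketch. */
            if ( last_populated[slot] != 0 )
                zero_check_superpage(last_populated[slot]);
        }
    }
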
> Also, this line is >80 columns - I think I saw a few others in this series.
I'll go through and check, thanks.
>
>> unsigned long?
>>
>> Also, wouldn't it be better to allocate this table dynamically, at
>> once allowing its size to scale with the number of vCPU-s in the
>> guest?
> You could even make it a small per-vcpu array, assuming that the parallel
> scrubbing will be symmetric across vcpus.
I can't remember exactly what I found here (I ran these tests last 
summer); it may be that Windows creates a bunch of tasks which then 
migrate across various cpus.  If that's the case, a global list would 
be better than per-vcpu lists.

The problem with dynamically scaling the list is that I don't have a 
heuristic to hand for how to scale it.
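
To make Jan's option concrete, the allocation might look roughly like
the sketch below; the sizing constants and field names are placeholders
rather than anything I've benchmarked:

    /* Hypothetical sizing only: a few entries per vCPU, with a cap.
     * Neither constant is benchmarked. */
    #define POD_HISTORY_PER_VCPU 8u
    #define POD_HISTORY_CAP      1024u

    struct pod_history {
        unsigned long *gpfn;   /* dynamically sized ring of recent gpfns */
        unsigned int   count;  /* number of ring entries */
        unsigned int   idx;    /* next slot to overwrite */
    };

    static int pod_history_alloc(struct pod_history *h, unsigned int max_vcpus)
    {
        unsigned int entries = max_vcpus * POD_HISTORY_PER_VCPU;

        if ( entries > POD_HISTORY_CAP )
            entries = POD_HISTORY_CAP;

        /* xzalloc_array() as in Xen's xmalloc.h; zeroed, so all slots
         * start out "empty". */
        h->gpfn = xzalloc_array(unsigned long, entries);
        if ( h->gpfn == NULL )
            return -ENOMEM;

        h->count = entries;
        h->idx = 0;
        return 0;
    }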

In both cases, it's quite possible that making a change without testing 
would significantly reduce the effectiveness of the patch.  Would you 
rather hold off until I can get a chance to run my benchmarks again 
(which may miss the 4.2 cycle), or accept a tidied-up version of this 
patch now, with the aim of getting a revised method (using dynamic 
scaling or per-vcpu arrays) in before 4.2, and certainly by 4.3?

  -George


Thread overview: 12+ messages
2012-06-08 11:45 [PATCH 0 of 2 RFC] Rework populate-on-demand sweeping George Dunlap
2012-06-08 11:45 ` [PATCH 1 of 2 RFC] xen, pod: Zero-check recently populated pages (checklast) George Dunlap
2012-06-08 12:02   ` Jan Beulich
2012-06-14  9:07     ` Tim Deegan
2012-06-14 14:24       ` George Dunlap [this message]
2012-06-14 15:36         ` Tim Deegan
2012-06-08 11:45 ` [PATCH 2 of 2 RFC] xen, pod: Only sweep in an emergency, and only for 4k pages George Dunlap
2012-06-14  9:11   ` Tim Deegan
2012-06-14 12:42     ` George Dunlap
2012-06-14 13:13       ` Tim Deegan
2012-06-14 13:32         ` George Dunlap
2012-06-14  9:12 ` [PATCH 0 of 2 RFC] Rework populate-on-demand sweeping Tim Deegan
