From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, wei.liu2@citrix.com,
George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
ian.jackson@eu.citrix.com, tim@xen.org, jbeulich@suse.com,
Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [PATCH v3 5/9] mm: Do not discard already-scrubbed pages if softirqs are pending
Date: Fri, 14 Apr 2017 11:37:34 -0400 [thread overview]
Message-ID: <1492184258-3277-6-git-send-email-boris.ostrovsky@oracle.com> (raw)
In-Reply-To: <1492184258-3277-1-git-send-email-boris.ostrovsky@oracle.com>
While scrubbing from the idle loop, check for softirqs every 256 pages.
If a softirq is pending, stop scrubbing and merge the
partially-scrubbed buddy back into the heap by breaking the clean portion
into smaller power-of-2 chunks. Then repeat the same process for the
dirty part.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
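The chunk decomposition the two merge loops perform can be sketched in
isolation. This is a hedged illustration, not part of the patch: the
`flsl_()`/`ffsl_()` helpers below are stand-ins for Xen's `flsl()`/`ffsl()`
bit operations, and `clean_chunks()`/`dirty_chunks()` are hypothetical
names introduced here only to show the math. The clean part `[0, end)` is
split into the largest power-of-2 chunk that fits at each start offset;
the dirty tail `[end, 1 << order)` is split into the largest chunk at each
offset that does not cross that offset's alignment boundary.

```c
#include <assert.h>

/* Stand-ins for Xen's flsl()/ffsl() (find last/first set bit, 1-based). */
static unsigned int flsl_(unsigned long x)
{
    return 64 - (unsigned int)__builtin_clzl(x); /* x must be non-zero */
}
static unsigned int ffsl_(unsigned long x)
{
    return (unsigned int)__builtin_ctzl(x) + 1;  /* x must be non-zero */
}

/*
 * Split the clean range [start, end) the way the clean-merge loop does:
 * largest power-of-2 chunk starting at @start, not greater than @end.
 * Returns the number of chunks; their sizes are stored in sizes[].
 */
static unsigned int clean_chunks(unsigned long start, unsigned long end,
                                 unsigned long *sizes)
{
    unsigned int n = 0;

    while ( start < end )
    {
        unsigned int chunk_order = flsl_(end - start) - 1;

        sizes[n++] = 1UL << chunk_order;
        start += 1UL << chunk_order;
    }
    return n;
}

/*
 * Split the dirty tail [end, 1 << order): largest power-of-2 chunk
 * starting at @end that does not cross the next power-of-2 boundary,
 * i.e. sized by @end's lowest set bit (requires end != 0).
 */
static unsigned int dirty_chunks(unsigned long end, unsigned int order,
                                 unsigned long *sizes)
{
    unsigned int n = 0;

    while ( end < (1UL << order) )
    {
        unsigned int chunk_order = ffsl_(end) - 1;

        sizes[n++] = 1UL << chunk_order;
        end += 1UL << chunk_order;
    }
    return n;
}
```

For example, an order-10 buddy (1024 pages) preempted after 768 scrubbed
pages yields clean chunks of 512 and 256 pages, and a single dirty chunk
of 256 pages; every chunk is naturally aligned, so each can be freed as a
valid buddy of its own order.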
xen/common/page_alloc.c | 78 ++++++++++++++++++++++++++++++++++++++--------
1 files changed, 64 insertions(+), 14 deletions(-)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index fcd7308..0b2dff1 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -996,7 +996,7 @@ static int reserve_offlined_page(struct page_info *head)
static struct page_info *
merge_and_free_buddy(struct page_info *pg, unsigned int node,
unsigned int zone, unsigned int order,
- bool need_scrub)
+ bool need_scrub, bool is_frag)
{
ASSERT(spin_is_locked(&heap_lock));
@@ -1011,7 +1011,15 @@ merge_and_free_buddy(struct page_info *pg, unsigned int node,
if ( (page_to_mfn(pg) & mask) )
buddy = pg - mask; /* Merge with predecessor block. */
else
- buddy = pg + mask; /* Merge with successor block. */
+ {
+ /*
+ * Merge with successor block.
+ * Only un-fragmented buddy can be merged forward.
+ */
+ if ( is_frag )
+ break;
+ buddy = pg + mask;
+ }
if ( !mfn_valid(_mfn(page_to_mfn(buddy))) ||
!page_state_is(buddy, free) ||
@@ -1093,12 +1101,15 @@ static unsigned int node_to_scrub(bool get_node)
bool scrub_free_pages(void)
{
struct page_info *pg;
- unsigned int zone, order;
- unsigned long i;
+ unsigned int zone, order, scrub_order;
+ unsigned long i, num_processed, start, end;
unsigned int cpu = smp_processor_id();
- bool preempt = false;
+ bool preempt = false, is_frag;
nodeid_t node;
+ /* Scrubbing granularity. */
+#define SCRUB_CHUNK_ORDER 8
+
/*
* Don't scrub while dom0 is being constructed since we may
* fail trying to call map_domain_page() from scrub_one_page().
@@ -1123,25 +1134,64 @@ bool scrub_free_pages(void)
if ( !pg->u.free.dirty_head )
break;
- for ( i = 0; i < (1UL << order); i++)
+ scrub_order = MIN(order, SCRUB_CHUNK_ORDER);
+ num_processed = 0;
+ is_frag = false;
+ while ( num_processed < (1UL << order) )
{
- if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
+ for ( i = num_processed;
+ i < num_processed + (1UL << scrub_order); i++ )
{
- scrub_one_page(&pg[i]);
- pg[i].count_info &= ~PGC_need_scrub;
- node_need_scrub[node]--;
+ if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
+ {
+ scrub_one_page(&pg[i]);
+ pg[i].count_info &= ~PGC_need_scrub;
+ node_need_scrub[node]--;
+ }
}
+
+ num_processed += (1UL << scrub_order);
if ( softirq_pending(cpu) )
{
preempt = true;
+ is_frag = (num_processed < (1UL << order));
break;
}
}
- if ( i == (1UL << order) )
+ start = 0;
+ end = num_processed;
+
+ page_list_del(pg, &heap(node, zone, order));
+
+ /* Merge clean pages */
+ while ( start < end )
+ {
+ /*
+ * Largest power-of-two chunk starting @start,
+ * not greater than @end
+ */
+ unsigned chunk_order = flsl(end - start) - 1;
+ struct page_info *ppg = &pg[start];
+
+ PFN_ORDER(ppg) = chunk_order;
+ merge_and_free_buddy(ppg, node, zone, chunk_order, false, is_frag);
+ start += (1UL << chunk_order);
+ }
+
+ /* Merge unscrubbed pages */
+ while ( end < (1UL << order) )
{
- page_list_del(pg, &heap(node, zone, order));
- merge_and_free_buddy(pg, node, zone, order, false);
+ /*
+ * Largest power-of-two chunk starting @end, not crossing
+ * next power-of-two boundary
+ */
+ unsigned chunk_order = ffsl(end) - 1;
+ struct page_info *ppg = &pg[end];
+
+ PFN_ORDER(ppg) = chunk_order;
+ merge_and_free_buddy(ppg, node, zone, chunk_order, true, true);
+ end += (1UL << chunk_order);
}
if ( preempt || (node_need_scrub[node] == 0) )
@@ -1215,7 +1265,7 @@ static void free_heap_pages(
node_need_scrub[node] += (1UL << order);
}
- pg = merge_and_free_buddy(pg, node, zone, order, need_scrub);
+ pg = merge_and_free_buddy(pg, node, zone, order, need_scrub, false);
if ( tainted )
reserve_offlined_page(pg);
--
1.7.1
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel