From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Jörn Engel" <joern@logfs.org>,
"Michel Lespinasse" <walken@google.com>,
"Hugh Dickins" <hughd@google.com>,
"Rik van Riel" <riel@redhat.com>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Mel Gorman" <mgorman@suse.de>, "Michal Hocko" <mhocko@suse.cz>,
linux-mm@kvack.org, "Vlastimil Babka" <vbabka@suse.cz>
Subject: [PATCH v2 7/7] mm: munlock: manual pte walk in fast path instead of follow_page_mask()
Date: Mon, 19 Aug 2013 14:23:42 +0200
Message-ID: <1376915022-12741-8-git-send-email-vbabka@suse.cz>
In-Reply-To: <1376915022-12741-1-git-send-email-vbabka@suse.cz>

Currently, munlock_vma_pages_range() calls follow_page_mask() to obtain each
struct page. This entails a repeated full page table translation and the page
table lock being taken for each page separately.
This patch avoids the costly follow_page_mask() where possible, by iterating
over ptes within a single pmd under a single page table lock. The first pte
is obtained by get_locked_pte() for the non-THP page acquired by the initial
follow_page_mask() call. The latter function is also used as a fallback in
case the simple pte_present() and vm_normal_page() checks are not sufficient
to obtain the struct page.
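
In outline, the reworked loop looks like this (a simplified sketch only: the
pagevec batching, zone tracking, THP and error handling, and the actual
munlock/put_page are all elided here; the complete version is in the diff
below):

	pte_t *pte = NULL;
	spinlock_t *ptl;
	unsigned long pmd_end = 0;

	while (start < end) {
		struct page *page = NULL;
		unsigned int page_mask = 0;

		/* Fast path: step to the next pte under the ptl we hold */
		if (pte && start < pmd_end) {
			pte++;
			if (pte_present(*pte))
				page = vm_normal_page(vma, start, *pte);
			if (page)
				get_page(page);
		}

		/* Slow path: one full translation via follow_page_mask() */
		if (!page) {
			if (pte) {
				pte_unmap_unlock(pte, ptl);
				pte = NULL;
			}
			page = follow_page_mask(vma, start,
					FOLL_GET | FOLL_DUMP, &page_mask);
			/* the pte walk may continue up to this boundary */
			pmd_end = pmd_addr_end(start, end);
		}

		if (page && !IS_ERR(page) && !PageTransHuge(page)) {
			/* begin the pte walk for the following iterations */
			if (!pte)
				pte = get_locked_pte(vma->vm_mm, start, &ptl);
			/* ... isolate and munlock the page (see diff) ... */
		}

		start += (1 + (~(start >> PAGE_SHIFT) & page_mask)) * PAGE_SIZE;
	}
	if (pte)
		pte_unmap_unlock(pte, ptl);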
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/mlock.c | 79 +++++++++++++++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 68 insertions(+), 11 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index 77ddd6a..f9f21f4 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -377,33 +377,73 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 {
 	struct pagevec pvec;
 	struct zone *zone = NULL;
+	pte_t *pte = NULL;
+	spinlock_t *ptl;
+	unsigned long pmd_end;
 
 	pagevec_init(&pvec, 0);
 	vma->vm_flags &= ~VM_LOCKED;
 
 	while (start < end) {
-		struct page *page;
+		struct page *page = NULL;
 		unsigned int page_mask, page_increm;
 		struct zone *pagezone;
 
+		/* If we can, try pte walk instead of follow_page_mask() */
+		if (pte && start < pmd_end) {
+			pte++;
+			if (pte_present(*pte))
+				page = vm_normal_page(vma, start, *pte);
+			if (page) {
+				get_page(page);
+				page_mask = 0;
+			}
+		}
+
 		/*
-		 * Although FOLL_DUMP is intended for get_dump_page(),
-		 * it just so happens that its special treatment of the
-		 * ZERO_PAGE (returning an error instead of doing get_page)
-		 * suits munlock very well (and if somehow an abnormal page
-		 * has sneaked into the range, we won't oops here: great).
+		 * If the pte walk step was successful, use that page.
+		 * Otherwise (NULL pte, !pte_present or vm_normal_page failed
+		 * due to e.g. the zero page), fall back to
+		 * follow_page_mask() which handles all exceptions.
 		 */
-		page = follow_page_mask(vma, start, FOLL_GET | FOLL_DUMP,
-				&page_mask);
+		if (!page) {
+			if (pte) {
+				pte_unmap_unlock(pte, ptl);
+				pte = NULL;
+			}
+
+			/*
+			 * Although FOLL_DUMP is intended for get_dump_page(),
+			 * it just so happens that its special treatment of the
+			 * ZERO_PAGE (returning an error instead of doing
+			 * get_page) suits munlock very well (and if somehow an
+			 * abnormal page has sneaked into the range, we won't
+			 * oops here: great).
+			 */
+			page = follow_page_mask(vma, start,
+					FOLL_GET | FOLL_DUMP, &page_mask);
+			pmd_end = pmd_addr_end(start, end);
+		}
+
 		if (page && !IS_ERR(page)) {
 			pagezone = page_zone(page);
 			/* The whole pagevec must be in the same zone */
 			if (pagezone != zone) {
-				if (pagevec_count(&pvec))
+				if (pagevec_count(&pvec)) {
+					if (pte) {
+						pte_unmap_unlock(pte, ptl);
+						pte = NULL;
+					}
 					__munlock_pagevec(&pvec, zone);
+				}
 				zone = pagezone;
 			}
 			if (PageTransHuge(page)) {
+				/*
+				 * We could not have stumbled upon a THP page
+				 * using the pte walk.
+				 */
+				VM_BUG_ON(pte);
 				/*
 				 * THP pages are not handled by pagevec due
 				 * to their possible split (see below).
@@ -422,19 +462,36 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 				put_page(page); /* follow_page_mask() */
 			} else {
 				/*
+				 * Initialize pte walk for further pages. We
+				 * can do this here since we know the current
+				 * page is not THP.
+				 */
+				if (!pte)
+					pte = get_locked_pte(vma->vm_mm, start,
+							&ptl);
+				/*
 				 * Non-huge pages are handled in batches
 				 * via pagevec. The pin from
 				 * follow_page_mask() prevents them from
 				 * collapsing by THP.
 				 */
-				if (pagevec_add(&pvec, page) == 0)
+				if (pagevec_add(&pvec, page) == 0) {
+					if (pte) {
+						pte_unmap_unlock(pte, ptl);
+						pte = NULL;
+					}
 					__munlock_pagevec(&pvec, zone);
+				}
 			}
 		}
 		page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);
 		start += page_increm * PAGE_SIZE;
-		cond_resched();
+		/* Don't resched while ptl is held */
+		if (!pte)
+			cond_resched();
 	}
+	if (pte)
+		pte_unmap_unlock(pte, ptl);
 	if (pagevec_count(&pvec))
 		__munlock_pagevec(&pvec, zone);
 }
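
(For reference: pmd_addr_end() above clamps the pte walk to the current pmd,
so the fast path never crosses a page table page. The generic version, which
architectures may override, is defined in include/asm-generic/pgtable.h along
these lines:

	#define pmd_addr_end(addr, end)						\
	({	unsigned long __boundary = ((addr) + PMD_SIZE) & PMD_MASK;	\
		(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
	})

i.e. it returns the next pmd boundary after addr, or end, whichever comes
first.)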
--
1.8.1.4