* [PATCH 0/8] mm: hugetlbfs: fix hugetlbfs optimization v3
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
Changes since v2:
1) optimize away a few more locked ops in the get_page/put_page
hugetlbfs and slab paths (see 3/8 and 4/8).
3/8 is the least trivial addition to the series, as we now run
PageSlab and PageHeadHuge on a random page structure without
holding any reference count on it. An smp_rmb(), issued if either
of the two checks succeeds, is what is supposed to make this safe,
and it is lighter weight than get_page_unless_zero (hence the
intended optimization). 3/8 makes no difference whatsoever to the
speed of the THP case. It's unclear whether 3/8 is worth it, but
every bit seems to affect performance for direct I/O over hugetlbfs
with >8GB/sec storage devices, so I thought it was worth trying.
4/8 is more self-explanatory: it removes an smp_rmb() that is not
needed with the current layout of struct page.
2) two nice cleanups from Andrew
3) Removed the PageHeadHuge export as it's not needed right now
Andrea Arcangeli (6):
mm: hugetlbfs: fix hugetlbfs optimization
mm: hugetlb: use get_page_foll in follow_hugetlb_page
mm: hugetlbfs: move the put/get_page slab and hugetlbfs optimization
in a faster path
mm: thp: optimize compound_trans_huge
mm: tail page refcounting optimization for slab and hugetlbfs
mm/hugetlb.c: defer PageHeadHuge() symbol export
Andrew Morton (2):
mm/hugetlb.c: simplify PageHeadHuge() and PageHuge()
mm/swap.c: reorganize put_compound_page()
include/linux/huge_mm.h | 23 ++++
include/linux/mm.h | 32 +++++-
mm/hugetlb.c | 20 +++-
mm/internal.h | 3 +-
mm/swap.c | 284 +++++++++++++++++++++++++++++-------------------
5 files changed, 240 insertions(+), 122 deletions(-)
* [PATCH 1/8] mm: hugetlbfs: fix hugetlbfs optimization
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
The patch from commit 7cb2ef56e6a8b7b368b2e883a0a47d02fed66911 can
cause a dereference of a dangling pointer if split_huge_page runs
during PageHuge() while there are updates to the tail_page->private
field.
It also runs compound_head twice for hugetlbfs, and runs
compound_head+compound_trans_head for THP, when a single call is
enough in both cases.
The new code within the PageSlab() check doesn't need to verify that
the THP page size is never bigger than the smallest hugetlbfs page
size, to avoid memory corruption.
A longstanding theoretical race condition was found while fixing the
above (see the change right after the skip_lock label; it is relevant
for the compound_lock path too).
By re-establishing the _mapcount tail refcounting for all compound
pages, this also fixes the problem below:
echo 0 >/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
BUG: Bad page state in process bash pfn:59a01
page:ffffea000139b038 count:0 mapcount:10 mapping: (null) index:0x0
page flags: 0x1c00000000008000(tail)
Modules linked in:
CPU: 6 PID: 2018 Comm: bash Not tainted 3.12.0+ #25
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
0000000000000009 ffff880079cb5cc8 ffffffff81640e8b 0000000000000006
ffffea000139b038 ffff880079cb5ce8 ffffffff8115bb15 00000000000002c1
ffffea000139b038 ffff880079cb5d48 ffffffff8115bd83 ffff880079cb5de8
Call Trace:
[<ffffffff81640e8b>] dump_stack+0x55/0x76
[<ffffffff8115bb15>] bad_page+0xd5/0x130
[<ffffffff8115bd83>] free_pages_prepare+0x213/0x280
[<ffffffff8115df16>] __free_pages+0x36/0x80
[<ffffffff8119b011>] update_and_free_page+0xc1/0xd0
[<ffffffff8119b512>] free_pool_huge_page+0xc2/0xe0
[<ffffffff8119b8cc>] set_max_huge_pages.part.58+0x14c/0x220
[<ffffffff81308a8c>] ? _kstrtoull+0x2c/0x90
[<ffffffff8119ba70>] nr_hugepages_store_common.isra.60+0xd0/0xf0
[<ffffffff8119bac3>] nr_hugepages_store+0x13/0x20
[<ffffffff812f763f>] kobj_attr_store+0xf/0x20
[<ffffffff812354e9>] sysfs_write_file+0x189/0x1e0
[<ffffffff811baff5>] vfs_write+0xc5/0x1f0
[<ffffffff811bb505>] SyS_write+0x55/0xb0
[<ffffffff81651712>] system_call_fastpath+0x16/0x1b
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
include/linux/hugetlb.h | 6 ++
mm/hugetlb.c | 17 ++++++
mm/swap.c | 143 ++++++++++++++++++++++++++++--------------------
3 files changed, 106 insertions(+), 60 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index acd2010..d4f3dbf 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -31,6 +31,7 @@ struct hugepage_subpool *hugepage_new_subpool(long nr_blocks);
void hugepage_put_subpool(struct hugepage_subpool *spool);
int PageHuge(struct page *page);
+int PageHeadHuge(struct page *page_head);
void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
@@ -104,6 +105,11 @@ static inline int PageHuge(struct page *page)
return 0;
}
+static inline int PageHeadHuge(struct page *page_head)
+{
+ return 0;
+}
+
static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
{
}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7d57af2..14737f8e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -736,6 +736,23 @@ int PageHuge(struct page *page)
}
EXPORT_SYMBOL_GPL(PageHuge);
+/*
+ * PageHeadHuge() only returns true for hugetlbfs head page, but not for
+ * normal or transparent huge pages.
+ */
+int PageHeadHuge(struct page *page_head)
+{
+ compound_page_dtor *dtor;
+
+ if (!PageHead(page_head))
+ return 0;
+
+ dtor = get_compound_page_dtor(page_head);
+
+ return dtor == free_huge_page;
+}
+EXPORT_SYMBOL_GPL(PageHeadHuge);
+
pgoff_t __basepage_index(struct page *page)
{
struct page *page_head = compound_head(page);
diff --git a/mm/swap.c b/mm/swap.c
index 7a9f80d..84b26aa 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -82,19 +82,6 @@ static void __put_compound_page(struct page *page)
static void put_compound_page(struct page *page)
{
- /*
- * hugetlbfs pages cannot be split from under us. If this is a
- * hugetlbfs page, check refcount on head page and release the page if
- * the refcount becomes zero.
- */
- if (PageHuge(page)) {
- page = compound_head(page);
- if (put_page_testzero(page))
- __put_compound_page(page);
-
- return;
- }
-
if (unlikely(PageTail(page))) {
/* __split_huge_page_refcount can run under us */
struct page *page_head = compound_trans_head(page);
@@ -111,14 +98,31 @@ static void put_compound_page(struct page *page)
* still hot on arches that do not support
* this_cpu_cmpxchg_double().
*/
- if (PageSlab(page_head)) {
- if (PageTail(page)) {
+ if (PageSlab(page_head) || PageHeadHuge(page_head)) {
+ if (likely(PageTail(page))) {
+ /*
+ * __split_huge_page_refcount
+ * cannot race here.
+ */
+ VM_BUG_ON(!PageHead(page_head));
+ atomic_dec(&page->_mapcount);
if (put_page_testzero(page_head))
VM_BUG_ON(1);
-
- atomic_dec(&page->_mapcount);
- goto skip_lock_tail;
+ if (put_page_testzero(page_head))
+ __put_compound_page(page_head);
+ return;
} else
+ /*
+ * __split_huge_page_refcount
+ * run before us, "page" was a
+ * THP tail. The split
+ * page_head has been freed
+ * and reallocated as slab or
+ * hugetlbfs page of smaller
+ * order (only possible if
+ * reallocated as slab on
+ * x86).
+ */
goto skip_lock;
}
/*
@@ -132,8 +136,27 @@ static void put_compound_page(struct page *page)
/* __split_huge_page_refcount run before us */
compound_unlock_irqrestore(page_head, flags);
skip_lock:
- if (put_page_testzero(page_head))
- __put_single_page(page_head);
+ if (put_page_testzero(page_head)) {
+ /*
+ * The head page may have been
+ * freed and reallocated as a
+ * compound page of smaller
+ * order and then freed again.
+ * All we know is that it
+ * cannot have become: a THP
+ * page, a compound page of
+ * higher order, a tail page.
+ * That is because we still
+ * hold the refcount of the
+ * split THP tail and
+ * page_head was the THP head
+ * before the split.
+ */
+ if (PageHead(page_head))
+ __put_compound_page(page_head);
+ else
+ __put_single_page(page_head);
+ }
out_put_single:
if (put_page_testzero(page))
__put_single_page(page);
@@ -155,7 +178,6 @@ out_put_single:
VM_BUG_ON(atomic_read(&page->_count) != 0);
compound_unlock_irqrestore(page_head, flags);
-skip_lock_tail:
if (put_page_testzero(page_head)) {
if (PageHead(page_head))
__put_compound_page(page_head);
@@ -198,51 +220,52 @@ bool __get_page_tail(struct page *page)
* proper PT lock that already serializes against
* split_huge_page().
*/
+ unsigned long flags;
bool got = false;
- struct page *page_head;
-
- /*
- * If this is a hugetlbfs page it cannot be split under us. Simply
- * increment refcount for the head page.
- */
- if (PageHuge(page)) {
- page_head = compound_head(page);
- atomic_inc(&page_head->_count);
- got = true;
- } else {
- unsigned long flags;
+ struct page *page_head = compound_trans_head(page);
- page_head = compound_trans_head(page);
- if (likely(page != page_head &&
- get_page_unless_zero(page_head))) {
-
- /* Ref to put_compound_page() comment. */
- if (PageSlab(page_head)) {
- if (likely(PageTail(page))) {
- __get_page_tail_foll(page, false);
- return true;
- } else {
- put_page(page_head);
- return false;
- }
- }
-
- /*
- * page_head wasn't a dangling pointer but it
- * may not be a head page anymore by the time
- * we obtain the lock. That is ok as long as it
- * can't be freed from under us.
- */
- flags = compound_lock_irqsave(page_head);
- /* here __split_huge_page_refcount won't run anymore */
+ if (likely(page != page_head && get_page_unless_zero(page_head))) {
+ /* Ref to put_compound_page() comment. */
+ if (PageSlab(page_head) || PageHeadHuge(page_head)) {
if (likely(PageTail(page))) {
+ /*
+ * This is a hugetlbfs page or a slab
+ * page. __split_huge_page_refcount
+ * cannot race here.
+ */
+ VM_BUG_ON(!PageHead(page_head));
__get_page_tail_foll(page, false);
- got = true;
- }
- compound_unlock_irqrestore(page_head, flags);
- if (unlikely(!got))
+ return true;
+ } else {
+ /*
+ * __split_huge_page_refcount run
+ * before us, "page" was a THP
+ * tail. The split page_head has been
+ * freed and reallocated as slab or
+ * hugetlbfs page of smaller order
+ * (only possible if reallocated as
+ * slab on x86).
+ */
put_page(page_head);
+ return false;
+ }
+ }
+
+ /*
+ * page_head wasn't a dangling pointer but it
+ * may not be a head page anymore by the time
+ * we obtain the lock. That is ok as long as it
+ * can't be freed from under us.
+ */
+ flags = compound_lock_irqsave(page_head);
+ /* here __split_huge_page_refcount won't run anymore */
+ if (likely(PageTail(page))) {
+ __get_page_tail_foll(page, false);
+ got = true;
}
+ compound_unlock_irqrestore(page_head, flags);
+ if (unlikely(!got))
+ put_page(page_head);
}
return got;
}
* [PATCH 2/8] mm: hugetlb: use get_page_foll in follow_hugetlb_page
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
get_page_foll is cheaper than get_page and is always safe to use
under the PT lock. Even more so for hugetlbfs, where there's no risk
of racing with split_huge_page regardless of the PT lock.
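For reference, this is roughly what get_page_foll() looks like in
mm/internal.h in this timeframe (simplified sketch, not verbatim):
for a tail page it goes straight to __get_page_tail_foll(), instead
of the compound_trans_head()/get_page_unless_zero()/compound_lock
dance that get_page() -> __get_page_tail() must do.

	static inline void get_page_foll(struct page *page)
	{
		if (unlikely(PageTail(page)))
			/*
			 * Safe because __split_huge_page_refcount()
			 * cannot run under get_page_foll(): we hold
			 * the proper PT lock.
			 */
			__get_page_tail_foll(page, true);
		else {
			/*
			 * Getting a normal page or the head of a
			 * compound page requires an already elevated
			 * page->_count.
			 */
			VM_BUG_ON(atomic_read(&page->_count) <= 0);
			atomic_inc(&page->_count);
		}
	}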
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 14737f8e..f03e068 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3113,7 +3113,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
same_page:
if (pages) {
pages[i] = mem_map_offset(page, pfn_offset);
- get_page(pages[i]);
+ get_page_foll(pages[i]);
}
if (vmas)
* [PATCH 3/8] mm: hugetlbfs: move the put/get_page slab and hugetlbfs optimization in a faster path
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
We don't actually need a reference on the head page in the slab and
hugetlbfs paths, as long as we add an smp_rmb(), which should be
faster than get_page_unless_zero.
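Schematically, the lockless check added below works like this
(condensed from the hunks that follow, with the ordering assumption
spelled out in the comments):

	struct page *page_head = compound_trans_head(page);

	/*
	 * No reference is held on page_head here.  If the checks below
	 * succeed, either "page" really is a slab/hugetlbfs tail (and
	 * then the pin we hold on the compound page keeps page_head
	 * stable), or "page" was a THP tail whose head has been split,
	 * freed and reallocated as slab/hugetlbfs.  Re-reading
	 * PageTail() after smp_rmb() tells the two cases apart.
	 */
	if (PageSlab(page_head) || PageHeadHuge(page_head)) {
		/*
		 * Order the head-flag reads before the PageTail()
		 * re-read; pairs with the write barriers that
		 * __split_huge_page_refcount() enforces between
		 * clearing PageTail and letting the old head be freed
		 * and reused.  smp_rmb() is essentially free on x86,
		 * hence cheaper than the locked atomic op inside
		 * get_page_unless_zero().
		 */
		smp_rmb();
		if (likely(PageTail(page))) {
			/* genuine slab/hugetlbfs tail: no compound_lock */
			atomic_dec(&page->_mapcount);
			if (put_page_testzero(page_head))
				__put_compound_page(page_head);
			return;
		}
		/* THP was split under us: treat "page" as an order-0 page */
	}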
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
mm/swap.c | 140 ++++++++++++++++++++++++++++++++++----------------------------
1 file changed, 78 insertions(+), 62 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 84b26aa..dbf5427 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,46 +86,62 @@ static void put_compound_page(struct page *page)
/* __split_huge_page_refcount can run under us */
struct page *page_head = compound_trans_head(page);
+ /*
+ * THP can not break up slab pages so avoid taking
+ * compound_lock(). Slab performs non-atomic bit ops
+ * on page->flags for better performance. In
+ * particular slab_unlock() in slub used to be a hot
+ * path. It is still hot on arches that do not support
+ * this_cpu_cmpxchg_double().
+ *
+ * If "page" is part of a slab or hugetlbfs page it
+ * cannot be splitted and the head page cannot change
+ * from under us. And if "page" is part of a THP page
+ * under splitting, if the head page pointed by the
+ * THP tail isn't a THP head anymore, we'll find
+ * PageTail clear after smp_rmb() and we'll threat it
+ * as a single page.
+ */
+ if (PageSlab(page_head) || PageHeadHuge(page_head)) {
+ /*
+ * If "page" is a THP tail, we must read the tail page
+ * flags after the head page flags. The
+ * split_huge_page side enforces write memory
+ * barriers between clearing PageTail and before the
+ * head page can be freed and reallocated.
+ */
+ smp_rmb();
+ if (likely(PageTail(page))) {
+ /*
+ * __split_huge_page_refcount
+ * cannot race here.
+ */
+ VM_BUG_ON(!PageHead(page_head));
+ VM_BUG_ON(page_mapcount(page) <= 0);
+ atomic_dec(&page->_mapcount);
+ if (put_page_testzero(page_head))
+ __put_compound_page(page_head);
+ return;
+ } else
+ /*
+ * __split_huge_page_refcount
+ * run before us, "page" was a
+ * THP tail. The split
+ * page_head has been freed
+ * and reallocated as slab or
+ * hugetlbfs page of smaller
+ * order (only possible if
+ * reallocated as slab on
+ * x86).
+ */
+ goto out_put_single;
+ }
+
if (likely(page != page_head &&
get_page_unless_zero(page_head))) {
unsigned long flags;
/*
- * THP can not break up slab pages so avoid taking
- * compound_lock(). Slab performs non-atomic bit ops
- * on page->flags for better performance. In particular
- * slab_unlock() in slub used to be a hot path. It is
- * still hot on arches that do not support
- * this_cpu_cmpxchg_double().
- */
- if (PageSlab(page_head) || PageHeadHuge(page_head)) {
- if (likely(PageTail(page))) {
- /*
- * __split_huge_page_refcount
- * cannot race here.
- */
- VM_BUG_ON(!PageHead(page_head));
- atomic_dec(&page->_mapcount);
- if (put_page_testzero(page_head))
- VM_BUG_ON(1);
- if (put_page_testzero(page_head))
- __put_compound_page(page_head);
- return;
- } else
- /*
- * __split_huge_page_refcount
- * run before us, "page" was a
- * THP tail. The split
- * page_head has been freed
- * and reallocated as slab or
- * hugetlbfs page of smaller
- * order (only possible if
- * reallocated as slab on
- * x86).
- */
- goto skip_lock;
- }
- /*
* page_head wasn't a dangling pointer but it
* may not be a head page anymore by the time
* we obtain the lock. That is ok as long as it
@@ -135,7 +151,6 @@ static void put_compound_page(struct page *page)
if (unlikely(!PageTail(page))) {
/* __split_huge_page_refcount run before us */
compound_unlock_irqrestore(page_head, flags);
-skip_lock:
if (put_page_testzero(page_head)) {
/*
* The head page may have been
@@ -221,36 +236,37 @@ bool __get_page_tail(struct page *page)
* split_huge_page().
*/
unsigned long flags;
- bool got = false;
+ bool got;
struct page *page_head = compound_trans_head(page);
- if (likely(page != page_head && get_page_unless_zero(page_head))) {
- /* Ref to put_compound_page() comment. */
- if (PageSlab(page_head) || PageHeadHuge(page_head)) {
- if (likely(PageTail(page))) {
- /*
- * This is a hugetlbfs page or a slab
- * page. __split_huge_page_refcount
- * cannot race here.
- */
- VM_BUG_ON(!PageHead(page_head));
- __get_page_tail_foll(page, false);
- return true;
- } else {
- /*
- * __split_huge_page_refcount run
- * before us, "page" was a THP
- * tail. The split page_head has been
- * freed and reallocated as slab or
- * hugetlbfs page of smaller order
- * (only possible if reallocated as
- * slab on x86).
- */
- put_page(page_head);
- return false;
- }
+ /* Ref to put_compound_page() comment. */
+ if (PageSlab(page_head) || PageHeadHuge(page_head)) {
+ smp_rmb();
+ if (likely(PageTail(page))) {
+ /*
+ * This is a hugetlbfs page or a slab
+ * page. __split_huge_page_refcount
+ * cannot race here.
+ */
+ VM_BUG_ON(!PageHead(page_head));
+ __get_page_tail_foll(page, true);
+ return true;
+ } else {
+ /*
+ * __split_huge_page_refcount run
+ * before us, "page" was a THP
+ * tail. The split page_head has been
+ * freed and reallocated as slab or
+ * hugetlbfs page of smaller order
+ * (only possible if reallocated as
+ * slab on x86).
+ */
+ return false;
}
+ }
+ got = false;
+ if (likely(page != page_head && get_page_unless_zero(page_head))) {
/*
* page_head wasn't a dangling pointer but it
* may not be a head page anymore by the time
* [PATCH 4/8] mm: thp: optimize compound_trans_huge
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
Currently we don't clobber page_tail->first_page during
split_huge_page, so compound_trans_head can be defined as
compound_head without adverse effects, and this mostly optimizes away
an smp_rmb.
It looks worthwhile to keep around the implementation that doesn't
rely on page_tail->first_page being left intact, because it would
become necessary if we ever decide to enforce that page->private is
zero whenever PG_private is not set, for anonymous pages too. Today
enforcing such an invariant doesn't matter, as anonymous pages don't
use page->private, so we can get away with this micro-optimization.
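To make the layout argument concrete, the relevant piece of struct
page in this timeframe looks roughly like this (heavily abbreviated
sketch, not the exact field list of include/linux/mm_types.h):

	struct page {
		unsigned long flags;
		/* ... */
		union {
			unsigned long private;	/* meaningful when PG_private is set */
			/* ... other union members omitted ... */
			struct page *first_page;	/* tail pages: pointer to the head */
		};
		/* ... remaining fields omitted ... */
	};

Because split_huge_page leaves first_page untouched, an ex-THP tail
still points at its former head through this union, which is what lets
compound_trans_head() collapse into compound_head(); the cost, as
noted above, is that the same word is left dirty when read back as
page->private.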
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
include/linux/huge_mm.h | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 91672e2..db51201 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -157,6 +157,26 @@ static inline int hpage_nr_pages(struct page *page)
return HPAGE_PMD_NR;
return 1;
}
+/*
+ * compound_trans_head() should be used instead of compound_head(),
+ * whenever the "page" passed as parameter could be the tail of a
+ * transparent hugepage that could be undergoing a
+ * __split_huge_page_refcount(). The page structure layout often
+ * changes across releases and it makes extensive use of unions. So if
+ * the page structure layout will change in a way that
+ * page->first_page gets clobbered by __split_huge_page_refcount, the
+ * implementation making use of smp_rmb() will be required.
+ *
+ * Currently we define compound_trans_head as compound_head, because
+ * page->private is in the same union with page->first_page, and
+ * page->private isn't clobbered. However this also means we're
+ * currently leaving dirt into the page->private field of anonymous
+ * pages resulting from a THP split, instead of setting page->private
+ * to zero like for every other page that has PG_private not set. But
+ * anonymous pages don't use page->private so this is not a problem.
+ */
+#if 0
+/* This will be needed if page->private will be clobbered in split_huge_page */
static inline struct page *compound_trans_head(struct page *page)
{
if (PageTail(page)) {
@@ -174,6 +194,9 @@ static inline struct page *compound_trans_head(struct page *page)
}
return page;
}
+#else
+#define compound_trans_head(page) compound_head(page)
+#endif
extern int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pmd_t pmd, pmd_t *pmdp);
* [PATCH 5/8] mm: tail page refcounting optimization for slab and hugetlbfs
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
This skips the _mapcount mangling for slab and hugetlbfs pages.
The main difficulty in doing this is guaranteeing that PageSlab and
PageHeadHuge remain constant for all get_page/put_page calls run on
the tail of slab or hugetlbfs compound pages. Otherwise, if they're
set during get_page but not during put_page, the _mapcount of the
tail page would underflow.
PageHeadHuge will remain true until the compound page is released and
enters the buddy allocator, so it is in no danger of changing even if
the tail pin is the last reference left on the page.
PG_slab instead is cleared before the slab frees the head page with
put_page, so if a tail pin were released after the slab freed the
page, we would have a problem. But in the slab case the tail pin
cannot be the last reference left on the page: the slab code is free
to reuse the compound page after a kfree/kmem_cache_free without
having to check whether any tail pin is left. In turn, all tail pins
must always be released while the head is still pinned by the slab
code, and so we know PG_slab will still be set too.
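A purely illustrative sketch of the invariant (tail_pin/tail_unpin
are made-up helper names, not kernel functions): the get side and the
put side of a tail pin must see the same compound_tail_refcounted()
answer for the head page, otherwise the accounting goes out of sync.

	/*
	 * compound_tail_refcounted(head) is the predicate added by this
	 * patch (false for slab and hugetlbfs heads).  If it were true
	 * at pin time but false at unpin time the increment would leak;
	 * if it were false at pin time but true at unpin time _mapcount
	 * would underflow.
	 */
	static void tail_pin(struct page *tail, struct page *head)
	{
		atomic_inc(&head->_count);
		if (compound_tail_refcounted(head))
			atomic_inc(&tail->_mapcount);
	}

	static void tail_unpin(struct page *tail, struct page *head)
	{
		if (compound_tail_refcounted(head))
			atomic_dec(&tail->_mapcount);
		if (put_page_testzero(head))
			__put_compound_page(head);
	}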
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
include/linux/hugetlb.h | 6 ------
include/linux/mm.h | 32 +++++++++++++++++++++++++++++++-
mm/internal.h | 3 ++-
mm/swap.c | 33 +++++++++++++++++++++++++++------
4 files changed, 60 insertions(+), 14 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d4f3dbf..acd2010 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -31,7 +31,6 @@ struct hugepage_subpool *hugepage_new_subpool(long nr_blocks);
void hugepage_put_subpool(struct hugepage_subpool *spool);
int PageHuge(struct page *page);
-int PageHeadHuge(struct page *page_head);
void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
@@ -105,11 +104,6 @@ static inline int PageHuge(struct page *page)
return 0;
}
-static inline int PageHeadHuge(struct page *page_head)
-{
- return 0;
-}
-
static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
{
}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0548eb2..6b20b34 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -414,15 +414,45 @@ static inline int page_count(struct page *page)
return atomic_read(&compound_head(page)->_count);
}
+#ifdef CONFIG_HUGETLB_PAGE
+extern int PageHeadHuge(struct page *page_head);
+#else /* CONFIG_HUGETLB_PAGE */
+static inline int PageHeadHuge(struct page *page_head)
+{
+ return 0;
+}
+#endif /* CONFIG_HUGETLB_PAGE */
+
+static inline bool __compound_tail_refcounted(struct page *page)
+{
+ return !PageSlab(page) && !PageHeadHuge(page);
+}
+
+/*
+ * This takes a head page as parameter and tells if the
+ * tail page reference counting can be skipped.
+ *
+ * For this to be safe, PageSlab and PageHeadHuge must remain true on
+ * any given page where they return true here, until all tail pins
+ * have been released.
+ */
+static inline bool compound_tail_refcounted(struct page *page)
+{
+ VM_BUG_ON(!PageHead(page));
+ return __compound_tail_refcounted(page);
+}
+
static inline void get_huge_page_tail(struct page *page)
{
/*
* __split_huge_page_refcount() cannot run
* from under us.
+ * In turn no need of compound_trans_head here.
*/
VM_BUG_ON(page_mapcount(page) < 0);
VM_BUG_ON(atomic_read(&page->_count) != 0);
- atomic_inc(&page->_mapcount);
+ if (compound_tail_refcounted(compound_head(page)))
+ atomic_inc(&page->_mapcount);
}
extern bool __get_page_tail(struct page *page);
diff --git a/mm/internal.h b/mm/internal.h
index 684f7aa..a85a3ab 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -51,7 +51,8 @@ static inline void __get_page_tail_foll(struct page *page,
VM_BUG_ON(page_mapcount(page) < 0);
if (get_page_head)
atomic_inc(&page->first_page->_count);
- atomic_inc(&page->_mapcount);
+ if (compound_tail_refcounted(page->first_page))
+ atomic_inc(&page->_mapcount);
}
/*
diff --git a/mm/swap.c b/mm/swap.c
index dbf5427..b4c49bf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -88,8 +88,9 @@ static void put_compound_page(struct page *page)
/*
* THP can not break up slab pages so avoid taking
- * compound_lock(). Slab performs non-atomic bit ops
- * on page->flags for better performance. In
+ * compound_lock() and skip the tail page refcounting
+ * (in _mapcount) too. Slab performs non-atomic bit
+ * ops on page->flags for better performance. In
* particular slab_unlock() in slub used to be a hot
* path. It is still hot on arches that do not support
* this_cpu_cmpxchg_double().
@@ -102,7 +103,7 @@ static void put_compound_page(struct page *page)
* PageTail clear after smp_rmb() and we'll threat it
* as a single page.
*/
- if (PageSlab(page_head) || PageHeadHuge(page_head)) {
+ if (!__compound_tail_refcounted(page_head)) {
/*
* If "page" is a THP tail, we must read the tail page
* flags after the head page flags. The
@@ -117,10 +118,30 @@ static void put_compound_page(struct page *page)
* cannot race here.
*/
VM_BUG_ON(!PageHead(page_head));
- VM_BUG_ON(page_mapcount(page) <= 0);
- atomic_dec(&page->_mapcount);
- if (put_page_testzero(page_head))
+ VM_BUG_ON(page_mapcount(page) != 0);
+ if (put_page_testzero(page_head)) {
+ /*
+ * If this is the tail of a
+ * slab compound page, the
+ * tail pin must not be the
+ * last reference held on the
+ * page, because the PG_slab
+ * cannot be cleared before
+ * all tail pins (which skips
+ * the _mapcount tail
+ * refcounting) have been
+ * released. For hugetlbfs the
+ * tail pin may be the last
+ * reference on the page
+ * instead, because
+ * PageHeadHuge will not go
+ * away until the compound
+ * page enters the buddy
+ * allocator.
+ */
+ VM_BUG_ON(PageSlab(page_head));
__put_compound_page(page_head);
+ }
return;
} else
/*
* [PATCH 6/8] mm/hugetlb.c: simplify PageHeadHuge() and PageHuge()
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
From: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
mm/hugetlb.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f03e068..9b8a14b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -724,15 +724,11 @@ static void prep_compound_gigantic_page(struct page *page, unsigned long order)
*/
int PageHuge(struct page *page)
{
- compound_page_dtor *dtor;
-
if (!PageCompound(page))
return 0;
page = compound_head(page);
- dtor = get_compound_page_dtor(page);
-
- return dtor == free_huge_page;
+ return get_compound_page_dtor(page) == free_huge_page;
}
EXPORT_SYMBOL_GPL(PageHuge);
@@ -742,14 +738,10 @@ EXPORT_SYMBOL_GPL(PageHuge);
*/
int PageHeadHuge(struct page *page_head)
{
- compound_page_dtor *dtor;
-
if (!PageHead(page_head))
return 0;
- dtor = get_compound_page_dtor(page_head);
-
- return dtor == free_huge_page;
+ return get_compound_page_dtor(page_head) == free_huge_page;
}
EXPORT_SYMBOL_GPL(PageHeadHuge);
* [PATCH 7/8] mm/swap.c: reorganize put_compound_page()
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
From: Andrew Morton <akpm@linux-foundation.org>
Tweak it to save a tab stop and make the code layout slightly less nutty.
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
mm/swap.c | 254 +++++++++++++++++++++++++++++++-------------------------------
1 file changed, 125 insertions(+), 129 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index b4c49bf..ddb470d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -82,154 +82,150 @@ static void __put_compound_page(struct page *page)
static void put_compound_page(struct page *page)
{
- if (unlikely(PageTail(page))) {
- /* __split_huge_page_refcount can run under us */
- struct page *page_head = compound_trans_head(page);
+ struct page *page_head;
- /*
- * THP can not break up slab pages so avoid taking
- * compound_lock() and skip the tail page refcounting
- * (in _mapcount) too. Slab performs non-atomic bit
- * ops on page->flags for better performance. In
- * particular slab_unlock() in slub used to be a hot
- * path. It is still hot on arches that do not support
- * this_cpu_cmpxchg_double().
- *
- * If "page" is part of a slab or hugetlbfs page it
- * cannot be splitted and the head page cannot change
- * from under us. And if "page" is part of a THP page
- * under splitting, if the head page pointed by the
- * THP tail isn't a THP head anymore, we'll find
- * PageTail clear after smp_rmb() and we'll threat it
- * as a single page.
- */
- if (!__compound_tail_refcounted(page_head)) {
+ if (likely(!PageTail(page))) {
+ if (put_page_testzero(page)) {
/*
- * If "page" is a THP tail, we must read the tail page
- * flags after the head page flags. The
- * split_huge_page side enforces write memory
- * barriers between clearing PageTail and before the
- * head page can be freed and reallocated.
+ * By the time all refcounts have been released
+ * split_huge_page cannot run anymore from under us.
*/
- smp_rmb();
- if (likely(PageTail(page))) {
- /*
- * __split_huge_page_refcount
- * cannot race here.
- */
- VM_BUG_ON(!PageHead(page_head));
- VM_BUG_ON(page_mapcount(page) != 0);
- if (put_page_testzero(page_head)) {
- /*
- * If this is the tail of a
- * slab compound page, the
- * tail pin must not be the
- * last reference held on the
- * page, because the PG_slab
- * cannot be cleared before
- * all tail pins (which skips
- * the _mapcount tail
- * refcounting) have been
- * released. For hugetlbfs the
- * tail pin may be the last
- * reference on the page
- * instead, because
- * PageHeadHuge will not go
- * away until the compound
- * page enters the buddy
- * allocator.
- */
- VM_BUG_ON(PageSlab(page_head));
- __put_compound_page(page_head);
- }
- return;
- } else
- /*
- * __split_huge_page_refcount
- * run before us, "page" was a
- * THP tail. The split
- * page_head has been freed
- * and reallocated as slab or
- * hugetlbfs page of smaller
- * order (only possible if
- * reallocated as slab on
- * x86).
- */
- goto out_put_single;
+ if (PageHead(page))
+ __put_compound_page(page);
+ else
+ __put_single_page(page);
}
+ return;
+ }
- if (likely(page != page_head &&
- get_page_unless_zero(page_head))) {
- unsigned long flags;
+ /* __split_huge_page_refcount can run under us */
+ page_head = compound_trans_head(page);
+ /*
+ * THP can not break up slab pages so avoid taking
+ * compound_lock() and skip the tail page refcounting (in
+ * _mapcount) too. Slab performs non-atomic bit ops on
+ * page->flags for better performance. In particular
+ * slab_unlock() in slub used to be a hot path. It is still
+ * hot on arches that do not support
+ * this_cpu_cmpxchg_double().
+ *
+ * If "page" is part of a slab or hugetlbfs page it cannot be
+ * splitted and the head page cannot change from under us. And
+ * if "page" is part of a THP page under splitting, if the
+ * head page pointed by the THP tail isn't a THP head anymore,
+ * we'll find PageTail clear after smp_rmb() and we'll threat
+ * it as a single page.
+ */
+ if (!__compound_tail_refcounted(page_head)) {
+ /*
+ * If "page" is a THP tail, we must read the tail page
+ * flags after the head page flags. The
+ * split_huge_page side enforces write memory barriers
+ * between clearing PageTail and before the head page
+ * can be freed and reallocated.
+ */
+ smp_rmb();
+ if (likely(PageTail(page))) {
/*
- * page_head wasn't a dangling pointer but it
- * may not be a head page anymore by the time
- * we obtain the lock. That is ok as long as it
- * can't be freed from under us.
+ * __split_huge_page_refcount cannot race
+ * here.
*/
- flags = compound_lock_irqsave(page_head);
- if (unlikely(!PageTail(page))) {
- /* __split_huge_page_refcount run before us */
- compound_unlock_irqrestore(page_head, flags);
- if (put_page_testzero(page_head)) {
- /*
- * The head page may have been
- * freed and reallocated as a
- * compound page of smaller
- * order and then freed again.
- * All we know is that it
- * cannot have become: a THP
- * page, a compound page of
- * higher order, a tail page.
- * That is because we still
- * hold the refcount of the
- * split THP tail and
- * page_head was the THP head
- * before the split.
- */
- if (PageHead(page_head))
- __put_compound_page(page_head);
- else
- __put_single_page(page_head);
- }
-out_put_single:
- if (put_page_testzero(page))
- __put_single_page(page);
- return;
+ VM_BUG_ON(!PageHead(page_head));
+ VM_BUG_ON(page_mapcount(page) != 0);
+ if (put_page_testzero(page_head)) {
+ /*
+ * If this is the tail of a slab
+ * compound page, the tail pin must
+ * not be the last reference held on
+ * the page, because the PG_slab
+ * cannot be cleared before all tail
+ * pins (which skips the _mapcount
+ * tail refcounting) have been
+ * released. For hugetlbfs the tail
+ * pin may be the last reference on
+ * the page instead, because
+ * PageHeadHuge will not go away until
+ * the compound page enters the buddy
+ * allocator.
+ */
+ VM_BUG_ON(PageSlab(page_head));
+ __put_compound_page(page_head);
}
- VM_BUG_ON(page_head != page->first_page);
+ return;
+ } else
/*
- * We can release the refcount taken by
- * get_page_unless_zero() now that
- * __split_huge_page_refcount() is blocked on
- * the compound_lock.
+ * __split_huge_page_refcount run before us,
+ * "page" was a THP tail. The split page_head
+ * has been freed and reallocated as slab or
+ * hugetlbfs page of smaller order (only
+ * possible if reallocated as slab on x86).
*/
- if (put_page_testzero(page_head))
- VM_BUG_ON(1);
- /* __split_huge_page_refcount will wait now */
- VM_BUG_ON(page_mapcount(page) <= 0);
- atomic_dec(&page->_mapcount);
- VM_BUG_ON(atomic_read(&page_head->_count) <= 0);
- VM_BUG_ON(atomic_read(&page->_count) != 0);
- compound_unlock_irqrestore(page_head, flags);
+ goto out_put_single;
+ }
+
+ if (likely(page != page_head && get_page_unless_zero(page_head))) {
+ unsigned long flags;
+ /*
+ * page_head wasn't a dangling pointer but it may not
+ * be a head page anymore by the time we obtain the
+ * lock. That is ok as long as it can't be freed from
+ * under us.
+ */
+ flags = compound_lock_irqsave(page_head);
+ if (unlikely(!PageTail(page))) {
+ /* __split_huge_page_refcount run before us */
+ compound_unlock_irqrestore(page_head, flags);
if (put_page_testzero(page_head)) {
+ /*
+ * The head page may have been freed
+ * and reallocated as a compound page
+ * of smaller order and then freed
+ * again. All we know is that it
+ * cannot have become: a THP page, a
+ * compound page of higher order, a
+ * tail page. That is because we
+ * still hold the refcount of the
+ * split THP tail and page_head was
+ * the THP head before the split.
+ */
if (PageHead(page_head))
__put_compound_page(page_head);
else
__put_single_page(page_head);
}
- } else {
- /* page_head is a dangling pointer */
- VM_BUG_ON(PageTail(page));
- goto out_put_single;
+out_put_single:
+ if (put_page_testzero(page))
+ __put_single_page(page);
+ return;
}
- } else if (put_page_testzero(page)) {
- if (PageHead(page))
- __put_compound_page(page);
- else
- __put_single_page(page);
+ VM_BUG_ON(page_head != page->first_page);
+ /*
+ * We can release the refcount taken by
+ * get_page_unless_zero() now that
+ * __split_huge_page_refcount() is blocked on the
+ * compound_lock.
+ */
+ if (put_page_testzero(page_head))
+ VM_BUG_ON(1);
+ /* __split_huge_page_refcount will wait now */
+ VM_BUG_ON(page_mapcount(page) <= 0);
+ atomic_dec(&page->_mapcount);
+ VM_BUG_ON(atomic_read(&page_head->_count) <= 0);
+ VM_BUG_ON(atomic_read(&page->_count) != 0);
+ compound_unlock_irqrestore(page_head, flags);
+
+ if (put_page_testzero(page_head)) {
+ if (PageHead(page_head))
+ __put_compound_page(page_head);
+ else
+ __put_single_page(page_head);
+ }
+ } else {
+ /* page_head is a dangling pointer */
+ VM_BUG_ON(PageTail(page));
+ goto out_put_single;
}
}
* [PATCH 8/8] mm/hugetlb.c: defer PageHeadHuge() symbol export
From: Andrea Arcangeli @ 2013-11-20 17:51 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Khalid Aziz, Pravin Shelar,
Greg Kroah-Hartman, Ben Hutchings, Christoph Lameter,
Johannes Weiner, Mel Gorman, Rik van Riel, Andi Kleen,
Minchan Kim, Linus Torvalds
There is no actual need for it, so keep it internal.
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
mm/hugetlb.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9b8a14b..133ea72 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -743,7 +743,6 @@ int PageHeadHuge(struct page *page_head)
return get_compound_page_dtor(page_head) == free_huge_page;
}
-EXPORT_SYMBOL_GPL(PageHeadHuge);
pgoff_t __basepage_index(struct page *page)
{