* [PATCH 1/4] mm: remove debug_pagealloc_enabled
From: Stanislaw Gruszka @ 2011-11-11 12:36 UTC
To: linux-mm
Cc: linux-kernel, Mel Gorman, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter, Stanislaw Gruszka
After (no)bootmem finishes, pages are passed to the buddy allocator.
Since debug_pagealloc_enabled is not yet set at that point, the pages
are not protected, which is not what we want with
CONFIG_DEBUG_PAGEALLOC=y. That could be fixed by calling
enable_debug_pagealloc() before free_all_bootmem(), but I do not see
any reason why we need that global variable at all. Hence this patch
removes it.
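The ordering problem, roughly (a simplified sketch; the exact call
paths vary by architecture and kernel version):

	start_kernel()
	    mm_init()
	        mem_init()           /* free_all_bootmem(): pages reach the
	                                buddy allocator, but kernel_map_pages()
	                                returns early because
	                                debug_pagealloc_enabled is still 0 */
	    ...
	    enable_debug_pagealloc() /* too late for the bootmem-freed pages */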
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
---
arch/x86/mm/pageattr.c | 6 ------
include/linux/mm.h | 10 ----------
init/main.c | 5 -----
mm/debug-pagealloc.c | 3 ---
4 files changed, 0 insertions(+), 24 deletions(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index f9e5267..5031eef 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1334,12 +1334,6 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
}
/*
- * If page allocator is not up yet then do not call c_p_a():
- */
- if (!debug_pagealloc_enabled)
- return;
-
- /*
* The return value is ignored as the calls cannot fail.
* Large pages for identity mappings are not used at boot time
* and hence no memory allocations during large page split.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3dc3a8c..0a22db1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1537,23 +1537,13 @@ static inline void vm_stat_account(struct mm_struct *mm,
#endif /* CONFIG_PROC_FS */
#ifdef CONFIG_DEBUG_PAGEALLOC
-extern int debug_pagealloc_enabled;
-
extern void kernel_map_pages(struct page *page, int numpages, int enable);
-
-static inline void enable_debug_pagealloc(void)
-{
- debug_pagealloc_enabled = 1;
-}
#ifdef CONFIG_HIBERNATION
extern bool kernel_page_present(struct page *page);
#endif /* CONFIG_HIBERNATION */
#else
static inline void
kernel_map_pages(struct page *page, int numpages, int enable) {}
-static inline void enable_debug_pagealloc(void)
-{
-}
#ifdef CONFIG_HIBERNATION
static inline bool kernel_page_present(struct page *page) { return true; }
#endif /* CONFIG_HIBERNATION */
diff --git a/init/main.c b/init/main.c
index 217ed23..99c4ba3 100644
--- a/init/main.c
+++ b/init/main.c
@@ -282,10 +282,6 @@ static int __init unknown_bootoption(char *param, char *val)
return 0;
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
-int __read_mostly debug_pagealloc_enabled = 0;
-#endif
-
static int __init init_setup(char *str)
{
unsigned int i;
@@ -597,7 +593,6 @@ asmlinkage void __init start_kernel(void)
}
#endif
page_cgroup_init();
- enable_debug_pagealloc();
debug_objects_mem_init();
kmemleak_init();
setup_per_cpu_pageset();
diff --git a/mm/debug-pagealloc.c b/mm/debug-pagealloc.c
index 7cea557..789ff70 100644
--- a/mm/debug-pagealloc.c
+++ b/mm/debug-pagealloc.c
@@ -95,9 +95,6 @@ static void unpoison_pages(struct page *page, int n)
void kernel_map_pages(struct page *page, int numpages, int enable)
{
- if (!debug_pagealloc_enabled)
- return;
-
if (enable)
unpoison_pages(page, numpages);
else
--
1.7.1
* [PATCH 2/4] mm: more intensive memory corruption debug
From: Stanislaw Gruszka @ 2011-11-11 12:36 UTC
To: linux-mm
Cc: linux-kernel, Mel Gorman, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter, Stanislaw Gruszka
With CONFIG_DEBUG_PAGEALLOC configured, the CPU generates an exception
on any access (read or write) to an unallocated page, which allows us
to catch code that corrupts memory. However, the kernel tries to
maximize memory usage, so there are usually few free pages in the
system and buggy code usually corrupts some crucial data instead.

This patch changes the buddy allocator to keep more free/protected
pages, and to interlace free/protected and allocated pages, to
increase the probability of catching a corruption.

When the kernel is compiled with CONFIG_DEBUG_PAGEALLOC, the
corrupt_dbg parameter is available to specify the page order that
should be kept free.

For example:

* corrupt_dbg=1:
- an order=0 allocation results in 1 allocated page and 1 consecutive
page protected
- order > 0 allocations are not affected
* corrupt_dbg=2:
- an order=0 allocation results in 1 allocated page and 3 consecutive
pages protected
- an order=1 allocation results in 2 allocated pages and 2 consecutive
pages protected
- order > 1 allocations are not affected
* and so on

The only practical setting is probably corrupt_dbg=1, unless someone
is really desperate about a memory corruption bug and has a huge
amount of RAM.
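The arithmetic behind the examples above can be written down as a small
helper (an illustration only, not part of the patch; it assumes the
request is served by splitting an order=corrupt_dbg block, as in the
examples):

	/* Number of pages kept protected when an order `req' request is
	 * served with corrupt_dbg=q; mirrors the split loop in expand(). */
	static unsigned int protected_pages(unsigned int req, unsigned int q)
	{
		unsigned int high, n = 0;

		if (req >= q)
			return 0;		/* such allocations are unaffected */

		for (high = q; high > req; ) {
			high--;
			n += 1u << high;	/* split-off buddy stays protected */
		}
		return n;	/* q=1,req=0 -> 1; q=2,req=0 -> 3; q=2,req=1 -> 2 */
	}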
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
---
Documentation/kernel-parameters.txt | 9 ++++
include/linux/mm.h | 11 +++++
include/linux/page-debug-flags.h | 4 +-
mm/Kconfig.debug | 1 +
mm/page_alloc.c | 74 +++++++++++++++++++++++++++++++----
5 files changed, 90 insertions(+), 9 deletions(-)
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index a0c5c5f..cbfa533 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -567,6 +567,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
/proc/<pid>/coredump_filter.
See also Documentation/filesystems/proc.txt.
+ corrupt_dbg= [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this
+ parameter controls the order of pages that will be
+ intentionally kept free (and hence protected) by the
+ buddy allocator. Bigger values increase the
+ probability of catching random memory corruption, but
+ reduce the amount of memory for normal system use.
+ Setting this parameter to 1 or 2 should be enough to
+ identify most random memory corruption problems.
+
cpuidle.off=1 [CPU_IDLE]
disable the cpuidle sub-system
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0a22db1..4de55df 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1617,5 +1617,16 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
unsigned int pages_per_huge_page);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
+#ifdef CONFIG_DEBUG_PAGEALLOC
+extern unsigned int _corrupt_dbg;
+
+static inline unsigned int corrupt_dbg(void)
+{
+ return _corrupt_dbg;
+}
+#else
+static inline unsigned int corrupt_dbg(void) { return 0; }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
+
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
diff --git a/include/linux/page-debug-flags.h b/include/linux/page-debug-flags.h
index b0638fd..f63c905 100644
--- a/include/linux/page-debug-flags.h
+++ b/include/linux/page-debug-flags.h
@@ -13,6 +13,7 @@
enum page_debug_flags {
PAGE_DEBUG_FLAG_POISON, /* Page is poisoned */
+ PAGE_DEBUG_FLAG_CORRUPT,
};
/*
@@ -21,7 +22,8 @@ enum page_debug_flags {
*/
#ifdef CONFIG_WANT_PAGE_DEBUG_FLAGS
-#if !defined(CONFIG_PAGE_POISONING) \
+#if !defined(CONFIG_PAGE_POISONING) && \
+ !defined(CONFIG_DEBUG_PAGEALLOC) \
/* && !defined(CONFIG_PAGE_DEBUG_SOMETHING_ELSE) && ... */
#error WANT_PAGE_DEBUG_FLAGS is turned on with no debug features!
#endif
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 8b1a477..3c554f0 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -4,6 +4,7 @@ config DEBUG_PAGEALLOC
depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
depends on !KMEMCHECK
select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
+ select WANT_PAGE_DEBUG_FLAGS
---help---
Unmap pages from the kernel linear mapping after free_pages().
This results in a large slowdown, but helps to find certain types
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9dd443d..de25c82 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -57,6 +57,7 @@
#include <linux/ftrace_event.h>
#include <linux/memcontrol.h>
#include <linux/prefetch.h>
+#include <linux/page-debug-flags.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -403,6 +404,44 @@ static inline void prep_zero_page(struct page *page, int order, gfp_t gfp_flags)
clear_highpage(page + i);
}
+#ifdef CONFIG_DEBUG_PAGEALLOC
+unsigned int _corrupt_dbg;
+
+static int __init corrupt_dbg_setup(char *buf)
+{
+ unsigned long res;
+
+ if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
+ printk(KERN_ERR "Bad corrupt_dbg value\n");
+ return 0;
+ }
+ _corrupt_dbg = res;
+ printk(KERN_INFO "Setting corrupt debug order to %d\n", _corrupt_dbg);
+ return 0;
+}
+__setup("corrupt_dbg=", corrupt_dbg_setup);
+
+static inline void set_page_corrupt_dbg(struct page *page)
+{
+ __set_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
+}
+
+static inline void clear_page_corrupt_dbg(struct page *page)
+{
+ __clear_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
+}
+
+static inline bool page_is_corrupt_dbg(struct page *page)
+{
+ return test_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
+}
+
+#else
+static inline void set_page_corrupt_dbg(struct page *page) { }
+static inline void clear_page_corrupt_dbg(struct page *page) { }
+static inline bool page_is_corrupt_dbg(struct page *page) { return false; }
+#endif
+
static inline void set_page_order(struct page *page, int order)
{
set_page_private(page, order);
@@ -460,6 +499,11 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;
+ if (page_is_corrupt_dbg(buddy) && page_order(buddy) == order) {
+ VM_BUG_ON(page_count(buddy) != 0);
+ return 1;
+ }
+
if (PageBuddy(buddy) && page_order(buddy) == order) {
VM_BUG_ON(page_count(buddy) != 0);
return 1;
@@ -518,9 +562,15 @@ static inline void __free_one_page(struct page *page,
break;
/* Our buddy is free, merge with it and move up one order. */
- list_del(&buddy->lru);
- zone->free_area[order].nr_free--;
- rmv_page_order(buddy);
+ if (page_is_corrupt_dbg(buddy)) {
+ clear_page_corrupt_dbg(buddy);
+ set_page_private(page, 0);
+ __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
+ } else {
+ list_del(&buddy->lru);
+ zone->free_area[order].nr_free--;
+ rmv_page_order(buddy);
+ }
combined_idx = buddy_idx & page_idx;
page = page + (combined_idx - page_idx);
page_idx = combined_idx;
@@ -736,7 +786,7 @@ void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
* -- wli
*/
static inline void expand(struct zone *zone, struct page *page,
- int low, int high, struct free_area *area,
+ unsigned int low, unsigned int high, struct free_area *area,
int migratetype)
{
unsigned long size = 1 << high;
@@ -746,9 +796,16 @@ static inline void expand(struct zone *zone, struct page *page,
high--;
size >>= 1;
VM_BUG_ON(bad_range(zone, &page[size]));
- list_add(&page[size].lru, &area->free_list[migratetype]);
- area->nr_free++;
- set_page_order(&page[size], high);
+ if (high < corrupt_dbg()) {
+ INIT_LIST_HEAD(&page[size].lru);
+ set_page_corrupt_dbg(&page[size]);
+ set_page_private(&page[size], high);
+ __mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << high));
+ } else {
+ set_page_order(&page[size], high);
+ list_add(&page[size].lru, &area->free_list[migratetype]);
+ area->nr_free++;
+ }
}
}
@@ -1756,7 +1813,8 @@ void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...)
{
unsigned int filter = SHOW_MEM_FILTER_NODES;
- if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
+ if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs) ||
+ corrupt_dbg() > 0)
return;
/*
--
1.7.1
* [PATCH 3/4] PM / Hibernate : do not count debug pages as savable
From: Stanislaw Gruszka @ 2011-11-11 12:36 UTC
To: linux-mm
Cc: linux-kernel, Mel Gorman, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter, Stanislaw Gruszka
When debugging memory corruption with CONFIG_DEBUG_PAGEALLOC and
corrupt_dbg > 0, we have a lot of free pages that are not marked as
such. The snapshot code accounts them as savable, which causes
hibernate memory preallocation to fail.

It is pretty hard to make hibernate allocation succeed with
corrupt_dbg=1. This change at least makes it possible on systems with
a relatively large amount of RAM.
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
---
include/linux/mm.h | 6 ++++++
kernel/power/snapshot.c | 6 ++++++
mm/page_alloc.c | 6 ------
3 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4de55df..6c9268d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1624,8 +1624,14 @@ static inline unsigned int corrupt_dbg(void)
{
return _corrupt_dbg;
}
+
+static inline bool page_is_corrupt_dbg(struct page *page)
+{
+ return test_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
+}
#else
static inline unsigned int corrupt_dbg(void) { return 0; }
+static inline bool page_is_corrupt_dbg(struct page *page) { return false; }
#endif /* CONFIG_DEBUG_PAGEALLOC */
#endif /* __KERNEL__ */
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index cbe2c14..d738e4b 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -858,6 +858,9 @@ static struct page *saveable_highmem_page(struct zone *zone, unsigned long pfn)
PageReserved(page))
return NULL;
+ if (page_is_corrupt_dbg(page))
+ return NULL;
+
return page;
}
@@ -920,6 +923,9 @@ static struct page *saveable_page(struct zone *zone, unsigned long pfn)
&& (!kernel_page_present(page) || pfn_is_nosave(pfn)))
return NULL;
+ if (page_is_corrupt_dbg(page))
+ return NULL;
+
return page;
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index de25c82..0dc080d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -431,15 +431,9 @@ static inline void clear_page_corrupt_dbg(struct page *page)
__clear_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
}
-static inline bool page_is_corrupt_dbg(struct page *page)
-{
- return test_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
-}
-
#else
static inline void set_page_corrupt_dbg(struct page *page) { }
static inline void clear_page_corrupt_dbg(struct page *page) { }
-static inline bool page_is_corrupt_dbg(struct page *page) { return false; }
#endif
static inline void set_page_order(struct page *page, int order)
--
1.7.1
* [PATCH 4/4] slub: min order when corrupt_dbg
From: Stanislaw Gruszka @ 2011-11-11 12:36 UTC
To: linux-mm
Cc: linux-kernel, Mel Gorman, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter, Stanislaw Gruszka
Disable SLUB debug facilities and allocate slabs at minimal order when
corrupt_dbg > 0, to increase the probability of catching random memory
corruption via a CPU exception.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
---
mm/slub.c | 10 ++++++++--
1 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 7d2a996..b0e4318 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2844,7 +2844,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
unsigned long flags = s->flags;
unsigned long size = s->objsize;
unsigned long align = s->align;
- int order;
+ int order, min_order;
/*
* Round up object size to the next word boundary. We can only
@@ -2929,8 +2929,11 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
*/
size = ALIGN(size, align);
s->size = size;
+ min_order = get_order(size);
if (forced_order >= 0)
order = forced_order;
+ else if (corrupt_dbg())
+ order = min_order;
else
order = calculate_order(size, s->reserved);
@@ -2951,7 +2954,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
* Determine the number of objects per slab
*/
s->oo = oo_make(order, size, s->reserved);
- s->min = oo_make(get_order(size), size, s->reserved);
+ s->min = oo_make(min_order, size, s->reserved);
if (oo_objects(s->oo) > oo_objects(s->max))
s->max = s->oo;
@@ -3645,6 +3648,9 @@ void __init kmem_cache_init(void)
struct kmem_cache *temp_kmem_cache_node;
unsigned long kmalloc_size;
+ if (corrupt_dbg())
+ slub_debug = 0;
+
kmem_size = offsetof(struct kmem_cache, node) +
nr_node_ids * sizeof(struct kmem_cache_node *);
--
1.7.1
* Re: [PATCH 1/4] mm: remove debug_pagealloc_enabled
From: Mel Gorman @ 2011-11-11 14:12 UTC
To: Stanislaw Gruszka
Cc: linux-mm, linux-kernel, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Ingo Molnar, Christoph Lameter
On Fri, Nov 11, 2011 at 01:36:31PM +0100, Stanislaw Gruszka wrote:
> After (no)bootmem finishes, pages are passed to the buddy allocator.
> Since debug_pagealloc_enabled is not yet set at that point, the pages
> are not protected, which is not what we want with
> CONFIG_DEBUG_PAGEALLOC=y. That could be fixed by calling
> enable_debug_pagealloc() before free_all_bootmem(), but I do not see
> any reason why we need that global variable at all. Hence this patch
> removes it.
>
> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
> ---
> arch/x86/mm/pageattr.c | 6 ------
> include/linux/mm.h | 10 ----------
> init/main.c | 5 -----
> mm/debug-pagealloc.c | 3 ---
> 4 files changed, 0 insertions(+), 24 deletions(-)
>
> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index f9e5267..5031eef 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -1334,12 +1334,6 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
> }
>
> /*
> - * If page allocator is not up yet then do not call c_p_a():
> - */
> - if (!debug_pagealloc_enabled)
> - return;
> -
> - /*
According to commit [12d6f21e: x86: do not PSE on
CONFIG_DEBUG_PAGEALLOC=y], the intention of debug_pagealloc_enabled
was to force additional testing of large-page splitting by cpa.
Presumably this was because, when bootmem was retired, all the pages
would be mapped, forcing the protection to be applied later while the
system was running, when races would be more interesting.

This patch trades that additional CPA testing for better detection of
memory corruption with DEBUG_PAGEALLOC. I see no issue with this per
se, but I'm cc'ing Ingo for comment as it was his patch and this is
something that should go by the x86 maintainers.
--
Mel Gorman
SUSE Labs
* Re: [PATCH 2/4] mm: more intensive memory corruption debug
From: Mel Gorman @ 2011-11-11 14:29 UTC
To: Stanislaw Gruszka
Cc: linux-mm, linux-kernel, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter
On Fri, Nov 11, 2011 at 01:36:32PM +0100, Stanislaw Gruszka wrote:
> With CONFIG_DEBUG_PAGEALLOC configured, the CPU generates an exception
> on any access (read or write) to an unallocated page, which allows us
> to catch code that corrupts memory. However, the kernel tries to
> maximize memory usage, so there are usually few free pages in the
> system and buggy code usually corrupts some crucial data instead.
>
> This patch changes the buddy allocator to keep more free/protected
> pages, and to interlace free/protected and allocated pages, to
> increase the probability of catching a corruption.
>
> When the kernel is compiled with CONFIG_DEBUG_PAGEALLOC, the
> corrupt_dbg parameter is available to specify the page order that
> should be kept free.
>
> For example:
>
> * corrupt_dbg=1:
> - an order=0 allocation results in 1 allocated page and 1 consecutive
> page protected
It is common to call this a guard page so I would suggest using a
similar name. The meaning of "guard page" will be obvious without
looking at the documentation.
> - order > 0 allocations are not affected
> * corrupt_dbg=2:
> - an order=0 allocation results in 1 allocated page and 3 consecutive
> pages protected
> - an order=1 allocation results in 2 allocated pages and 2 consecutive
> pages protected
> - order > 1 allocations are not affected
That's a bit confusing to read, and the name corrupt_dbg does not
give any hints. It would be easier to understand if it were called
debug_guardpage_minorder=n, where 1<<n is the minimum allocation
size used by the page allocator.
"When kernel is compiled with CONFIG_DEBUG_PAGEALLOC,
debug_guardpage_minorder defines the minimum order used by the page
allocator to grant a request. The requested size will be returned with
the remaining pages used as guard pages."
or similar.
> * and so on
>
> The only practical setting is probably corrupt_dbg=1, unless someone
> is really desperate about a memory corruption bug and has a huge
> amount of RAM.
>
> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
> ---
> Documentation/kernel-parameters.txt | 9 ++++
> include/linux/mm.h | 11 +++++
> include/linux/page-debug-flags.h | 4 +-
> mm/Kconfig.debug | 1 +
> mm/page_alloc.c | 74 +++++++++++++++++++++++++++++++----
> 5 files changed, 90 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
> index a0c5c5f..cbfa533 100644
> --- a/Documentation/kernel-parameters.txt
> +++ b/Documentation/kernel-parameters.txt
> @@ -567,6 +567,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
> /proc/<pid>/coredump_filter.
> See also Documentation/filesystems/proc.txt.
>
> + corrupt_dbg= [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this
> + parameter controls the order of pages that will be
> + intentionally kept free (and hence protected) by the
> + buddy allocator. Bigger values increase the
> + probability of catching random memory corruption, but
> + reduce the amount of memory for normal system use.
> + Setting this parameter to 1 or 2 should be enough to
> + identify most random memory corruption problems.
> +
This was clearer than the commit log entry at least, although I still
think the parameter name is poor.
> cpuidle.off=1 [CPU_IDLE]
> disable the cpuidle sub-system
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0a22db1..4de55df 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1617,5 +1617,16 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
> unsigned int pages_per_huge_page);
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
>
> +#ifdef CONFIG_DEBUG_PAGEALLOC
> +extern unsigned int _corrupt_dbg;
> +
> +static inline unsigned int corrupt_dbg(void)
> +{
> + return _corrupt_dbg;
> +}
> +#else
> +static inline unsigned int corrupt_dbg(void) { return 0; }
> +#endif /* CONFIG_DEBUG_PAGEALLOC */
> +
> #endif /* __KERNEL__ */
> #endif /* _LINUX_MM_H */
> diff --git a/include/linux/page-debug-flags.h b/include/linux/page-debug-flags.h
> index b0638fd..f63c905 100644
> --- a/include/linux/page-debug-flags.h
> +++ b/include/linux/page-debug-flags.h
> @@ -13,6 +13,7 @@
>
> enum page_debug_flags {
> PAGE_DEBUG_FLAG_POISON, /* Page is poisoned */
> + PAGE_DEBUG_FLAG_CORRUPT,
See, the corrupt name here is misleading. It's not corrupt, it's a
guard page. Until something writes to it, it's not corrupt.
> };
>
> /*
> @@ -21,7 +22,8 @@ enum page_debug_flags {
> */
>
> #ifdef CONFIG_WANT_PAGE_DEBUG_FLAGS
> -#if !defined(CONFIG_PAGE_POISONING) \
> +#if !defined(CONFIG_PAGE_POISONING) && \
> + !defined(CONFIG_DEBUG_PAGEALLOC) \
> /* && !defined(CONFIG_PAGE_DEBUG_SOMETHING_ELSE) && ... */
> #error WANT_PAGE_DEBUG_FLAGS is turned on with no debug features!
> #endif
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index 8b1a477..3c554f0 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -4,6 +4,7 @@ config DEBUG_PAGEALLOC
> depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
> depends on !KMEMCHECK
> select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
> + select WANT_PAGE_DEBUG_FLAGS
Why not add PAGE_CORRUPT (or preferably PAGE_GUARD) in the same pattern
as PAGE_POISONING already uses?
> ---help---
> Unmap pages from the kernel linear mapping after free_pages().
> This results in a large slowdown, but helps to find certain types
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9dd443d..de25c82 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -57,6 +57,7 @@
> #include <linux/ftrace_event.h>
> #include <linux/memcontrol.h>
> #include <linux/prefetch.h>
> +#include <linux/page-debug-flags.h>
>
> #include <asm/tlbflush.h>
> #include <asm/div64.h>
> @@ -403,6 +404,44 @@ static inline void prep_zero_page(struct page *page, int order, gfp_t gfp_flags)
> clear_highpage(page + i);
> }
>
> +#ifdef CONFIG_DEBUG_PAGEALLOC
> +unsigned int _corrupt_dbg;
> +
> +static int __init corrupt_dbg_setup(char *buf)
> +{
> + unsigned long res;
> +
> + if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
> + printk(KERN_ERR "Bad corrupt_dbg value\n");
> + return 0;
> + }
You don't document the limitations of the value for corrupt_dbg (the
setup code above rejects anything larger than MAX_ORDER / 2).
> + _corrupt_dbg = res;
> + printk(KERN_INFO "Setting corrupt debug order to %d\n", _corrupt_dbg);
> + return 0;
> +}
> +__setup("corrupt_dbg=", corrupt_dbg_setup);
> +
> +static inline void set_page_corrupt_dbg(struct page *page)
> +{
> + __set_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
> +}
> +
> +static inline void clear_page_corrupt_dbg(struct page *page)
> +{
> + __clear_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
> +}
> +
> +static inline bool page_is_corrupt_dbg(struct page *page)
> +{
> + return test_bit(PAGE_DEBUG_FLAG_CORRUPT, &page->debug_flags);
> +}
> +
> +#else
> +static inline void set_page_corrupt_dbg(struct page *page) { }
> +static inline void clear_page_corrupt_dbg(struct page *page) { }
> +static inline bool page_is_corrupt_dbg(struct page *page) { return false; }
> +#endif
> +
> static inline void set_page_order(struct page *page, int order)
> {
> set_page_private(page, order);
> @@ -460,6 +499,11 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
> if (page_zone_id(page) != page_zone_id(buddy))
> return 0;
>
> + if (page_is_corrupt_dbg(buddy) && page_order(buddy) == order) {
> + VM_BUG_ON(page_count(buddy) != 0);
> + return 1;
> + }
> +
> if (PageBuddy(buddy) && page_order(buddy) == order) {
> VM_BUG_ON(page_count(buddy) != 0);
> return 1;
> @@ -518,9 +562,15 @@ static inline void __free_one_page(struct page *page,
> break;
>
> /* Our buddy is free, merge with it and move up one order. */
> - list_del(&buddy->lru);
> - zone->free_area[order].nr_free--;
> - rmv_page_order(buddy);
> + if (page_is_corrupt_dbg(buddy)) {
> + clear_page_corrupt_dbg(buddy);
> + set_page_private(page, 0);
> + __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
Why are the buddies not merged?
> + } else {
> + list_del(&buddy->lru);
> + zone->free_area[order].nr_free--;
> + rmv_page_order(buddy);
> + }
> combined_idx = buddy_idx & page_idx;
> page = page + (combined_idx - page_idx);
> page_idx = combined_idx;
> @@ -736,7 +786,7 @@ void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
> * -- wli
> */
> static inline void expand(struct zone *zone, struct page *page,
> - int low, int high, struct free_area *area,
> + unsigned int low, unsigned int high, struct free_area *area,
> int migratetype)
> {
> unsigned long size = 1 << high;
> @@ -746,9 +796,16 @@ static inline void expand(struct zone *zone, struct page *page,
> high--;
> size >>= 1;
> VM_BUG_ON(bad_range(zone, &page[size]));
> - list_add(&page[size].lru, &area->free_list[migratetype]);
> - area->nr_free++;
> - set_page_order(&page[size], high);
> + if (high < corrupt_dbg()) {
> + INIT_LIST_HEAD(&page[size].lru);
> + set_page_corrupt_dbg(&page[size]);
> + set_page_private(&page[size], high);
> + __mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << high));
> + } else {
Because high is a signed integer, I don't think this would necessarily
be optimised away at compile time when DEBUG_PAGEALLOC is not set,
adding a new branch to a heavily executed fast path.
For the fast paths, you should not add new branches if you can. Move the
debugging code to inline functions that only exist when DEBUG_PAGEALLOC
is set so there is no additional overhead in the !CONFIG_DEBUG_PAGEALLOC
case.
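For example (an untested sketch; the helper name is made up here):

	#ifdef CONFIG_DEBUG_PAGEALLOC
	static inline bool set_page_guard_dbg(struct zone *zone,
				struct page *page, unsigned int order)
	{
		if (order >= corrupt_dbg())
			return false;
		INIT_LIST_HEAD(&page->lru);
		set_page_corrupt_dbg(page);
		set_page_private(page, order);
		__mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << order));
		return true;
	}
	#else
	static inline bool set_page_guard_dbg(struct zone *zone,
				struct page *page, unsigned int order)
	{
		/* constant false: the branch in expand() folds away */
		return false;
	}
	#endif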
> + set_page_order(&page[size], high);
> + list_add(&page[size].lru, &area->free_list[migratetype]);
> + area->nr_free++;
> + }
> }
> }
>
> @@ -1756,7 +1813,8 @@ void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...)
> {
> unsigned int filter = SHOW_MEM_FILTER_NODES;
>
> - if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
> + if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs) ||
> + corrupt_dbg() > 0)
> return;
>
> /*
> --
> 1.7.1
>
--
Mel Gorman
SUSE Labs
* Re: [PATCH 4/4] slub: min order when corrupt_dbg
From: Christoph Lameter @ 2011-11-11 14:46 UTC
To: Stanislaw Gruszka
Cc: linux-mm, linux-kernel, Mel Gorman, Andrea Arcangeli,
Andrew Morton, Rafael J. Wysocki
On Fri, 11 Nov 2011, Stanislaw Gruszka wrote:
> Disable SLUB debug facilities and allocate slabs at minimal order when
> corrupt_dbg > 0, to increase the probability of catching random memory
> corruption via a CPU exception.
Just setting slub_max_order to zero on boot has the same effect as
all of this. Setting slub_max_order would only require a small hunk
in kmem_cache_init.
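Something along these lines, presumably (an untested sketch; context
approximate):

	--- a/mm/slub.c
	+++ b/mm/slub.c
	@@ void __init kmem_cache_init(void)
	+	if (corrupt_dbg())
	+		slub_max_order = 0; /* allocate all slabs at minimal order */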
* Re: [PATCH 1/4] mm: remove debug_pagealloc_enabled
From: Stanislaw Gruszka @ 2011-11-14 10:20 UTC
To: Mel Gorman
Cc: linux-mm, linux-kernel, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Ingo Molnar, Christoph Lameter
On Fri, Nov 11, 2011 at 02:12:21PM +0000, Mel Gorman wrote:
> On Fri, Nov 11, 2011 at 01:36:31PM +0100, Stanislaw Gruszka wrote:
> > After (no)bootmem finishes, pages are passed to the buddy allocator.
> > Since debug_pagealloc_enabled is not yet set at that point, the pages
> > are not protected, which is not what we want with
> > CONFIG_DEBUG_PAGEALLOC=y. That could be fixed by calling
> > enable_debug_pagealloc() before free_all_bootmem(), but I do not see
> > any reason why we need that global variable at all. Hence this patch
> > removes it.
> >
> > Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
> > ---
> > arch/x86/mm/pageattr.c | 6 ------
> > include/linux/mm.h | 10 ----------
> > init/main.c | 5 -----
> > mm/debug-pagealloc.c | 3 ---
> > 4 files changed, 0 insertions(+), 24 deletions(-)
> >
> > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> > index f9e5267..5031eef 100644
> > --- a/arch/x86/mm/pageattr.c
> > +++ b/arch/x86/mm/pageattr.c
> > @@ -1334,12 +1334,6 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
> > }
> >
> > /*
> > - * If page allocator is not up yet then do not call c_p_a():
> > - */
> > - if (!debug_pagealloc_enabled)
> > - return;
> > -
> > - /*
>
> According to commit [12d6f21e: x86: do not PSE on
> CONFIG_DEBUG_PAGEALLOC=y], the intention of debug_pagealloc_enabled
> was to force additional testing of large-page splitting by cpa.
> Presumably this was because, when bootmem was retired, all the pages
> would be mapped, forcing the protection to be applied later while the
> system was running, when races would be more interesting.
>
> This patch trades that additional CPA testing for better detection of
> memory corruption with DEBUG_PAGEALLOC. I see no issue with this per
> se, but I'm cc'ing Ingo for comment as it was his patch and this is
> something that should go by the x86 maintainers.
Not sure if I understand all of that (OK, I clearly do not understand;
I do not even know what CPA means: change page address?), but I think
the additional large-page-splitting testing was achieved by this hunk
-#ifdef CONFIG_DEBUG_PAGEALLOC
- /* pse is not compatible with on-the-fly unmapping,
- * disable it even if the cpus claim to support it.
- */
- setup_clear_cpu_cap(X86_FEATURE_PSE);
-#endif
of commit 12d6f21e, because the changelog says:
get more testing of the c_p_a() code done by not turning off
PSE on DEBUG_PAGEALLOC.
But to make PSE and DEBUG_PAGEALLOC work together, debug_pagealloc_enabled
was introduced. The CPA code has since been changed so that PSE and
DEBUG_PAGEALLOC work without problems (I tested that on a PSE-capable
CPU), so I think debug_pagealloc_enabled is unneeded, or am I wrong?
Stanislaw
* Re: [PATCH 2/4] mm: more intensive memory corruption debug
From: Stanislaw Gruszka @ 2011-11-14 10:29 UTC
To: Mel Gorman
Cc: linux-mm, linux-kernel, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter
On Fri, Nov 11, 2011 at 02:29:53PM +0000, Mel Gorman wrote:
> > if (PageBuddy(buddy) && page_order(buddy) == order) {
> > VM_BUG_ON(page_count(buddy) != 0);
> > return 1;
> > @@ -518,9 +562,15 @@ static inline void __free_one_page(struct page *page,
> > break;
> >
> > /* Our buddy is free, merge with it and move up one order. */
> > - list_del(&buddy->lru);
> > - zone->free_area[order].nr_free--;
> > - rmv_page_order(buddy);
> > + if (page_is_corrupt_dbg(buddy)) {
> > + clear_page_corrupt_dbg(buddy);
> > + set_page_private(page, 0);
> > + __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
>
> Why are the buddies not merged?
I believe they are merged, but I'll double check.
> > static inline void expand(struct zone *zone, struct page *page,
> > - int low, int high, struct free_area *area,
> > + unsigned int low, unsigned int high, struct free_area *area,
> > int migratetype)
> > {
> > unsigned long size = 1 << high;
> > @@ -746,9 +796,16 @@ static inline void expand(struct zone *zone, struct page *page,
> > high--;
> > size >>= 1;
> > VM_BUG_ON(bad_range(zone, &page[size]));
> > - list_add(&page[size].lru, &area->free_list[migratetype]);
> > - area->nr_free++;
> > - set_page_order(&page[size], high);
> > + if (high < corrupt_dbg()) {
> > + INIT_LIST_HEAD(&page[size].lru);
> > + set_page_corrupt_dbg(&page[size]);
> > + set_page_private(&page[size], high);
> > + __mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << high));
> > + } else {
>
> Because high is a signed integer, I don't think this would necessarily
> be optimised away at compile time when DEBUG_PAGEALLOC is not set,
> adding a new branch to a heavily executed fast path.
>
> For the fast paths, you should not add new branches if you can. Move the
> debugging code to inline functions that only exist when DEBUG_PAGEALLOC
> is set so there is no additional overhead in the !CONFIG_DEBUG_PAGEALLOC
> case.
I changed "high" type from int to unsigned int in the patch, and checked
that this branch is removed by compiler in !CONFIG_DEBUG_PAGEALLOC case.
But perhaps having this inside preprocessor checks is cleaner, so I'll do
that.
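For reference, with a debug-only helper along the lines you sketched
(a hypothetical set_page_guard_dbg() that compiles to a constant false
when CONFIG_DEBUG_PAGEALLOC is off), the expand() loop body could read:

	while (high > low) {
		area--;
		high--;
		size >>= 1;
		VM_BUG_ON(bad_range(zone, &page[size]));

		if (set_page_guard_dbg(zone, &page[size], high))
			continue;	/* page[size] kept as a guard block */

		set_page_order(&page[size], high);
		list_add(&page[size].lru, &area->free_list[migratetype]);
		area->nr_free++;
	}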
Thanks for the comments, I'll rework and repost.
Stanislaw
* Re: [PATCH 2/4] mm: more intensive memory corruption debug
From: Stanislaw Gruszka @ 2011-11-14 12:23 UTC
To: Mel Gorman
Cc: linux-mm, linux-kernel, Andrea Arcangeli, Andrew Morton,
Rafael J. Wysocki, Christoph Lameter
On Fri, Nov 11, 2011 at 02:29:53PM +0000, Mel Gorman wrote:
> > --- a/mm/Kconfig.debug
> > +++ b/mm/Kconfig.debug
> > @@ -4,6 +4,7 @@ config DEBUG_PAGEALLOC
> > depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
> > depends on !KMEMCHECK
> > select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
> > + select WANT_PAGE_DEBUG_FLAGS
>
> Why not add PAGE_CORRUPT (or preferably PAGE_GUARD) in the same pattern
> as PAGE_POISONING already uses?
An additional CONFIG_PAGE_GUARD variable would be a duplicate of
CONFIG_DEBUG_PAGEALLOC. PAGE_POISONING is needed to compile another
file; no such thing would be needed with PAGE_GUARD, hence I consider
such a variable useless.
Stanislaw