* Re: [PATCH 2/2] mm/debug-pagealloc: cleanup page guard code
2014-11-07 7:35 ` [PATCH 2/2] mm/debug-pagealloc: cleanup page guard code Joonsoo Kim
@ 2014-11-07 9:55 ` Vlastimil Babka
2014-11-09 23:28 ` Gioh Kim
1 sibling, 0 replies; 4+ messages in thread
From: Vlastimil Babka @ 2014-11-07 9:55 UTC
To: Joonsoo Kim, Andrew Morton
Cc: Kirill A. Shutemov, Rik van Riel, Peter Zijlstra, Mel Gorman,
Johannes Weiner, Minchan Kim, Yasuaki Ishimatsu, Zhang Yanfei,
Tang Chen, Naoya Horiguchi, Bartlomiej Zolnierkiewicz,
Wen Congyang, Marek Szyprowski, Michal Nazarewicz, Laura Abbott,
Heesub Shin, Aneesh Kumar K.V, Ritesh Harjani, t.stanislaws,
Gioh Kim, linux-mm, linux-kernel
On 11/07/2014 08:35 AM, Joonsoo Kim wrote:
> The page guard is used by the debug-pagealloc feature. Currently
> it is open-coded, but I think more abstraction of it makes the
> core page allocator code more readable.
>
> There is no functional difference.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/page_alloc.c | 38 +++++++++++++++++++-------------------
> 1 file changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d673f64..c0dbede 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -440,18 +440,29 @@ static int __init debug_guardpage_minorder_setup(char *buf)
> }
> __setup("debug_guardpage_minorder=", debug_guardpage_minorder_setup);
>
> -static inline void set_page_guard_flag(struct page *page)
> +static inline void set_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype)
> {
> __set_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags);
> + INIT_LIST_HEAD(&page->lru);
> + set_page_private(page, order);
> + /* Guard pages are not available for any usage */
> + __mod_zone_freepage_state(zone, -(1 << order), migratetype);
> }
>
> -static inline void clear_page_guard_flag(struct page *page)
> +static inline void clear_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype)
> {
> __clear_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags);
> + set_page_private(page, 0);
> + if (!is_migrate_isolate(migratetype))
> + __mod_zone_freepage_state(zone, (1 << order), migratetype);
> }
> #else
> -static inline void set_page_guard_flag(struct page *page) { }
> -static inline void clear_page_guard_flag(struct page *page) { }
> +static inline void set_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype) {}
> +static inline void clear_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype) {}
> #endif
>
> static inline void set_page_order(struct page *page, unsigned int order)
> @@ -582,12 +593,7 @@ static inline void __free_one_page(struct page *page,
> * merge with it and move up one order.
> */
> if (page_is_guard(buddy)) {
> - clear_page_guard_flag(buddy);
> - set_page_private(buddy, 0);
> - if (!is_migrate_isolate(migratetype)) {
> - __mod_zone_freepage_state(zone, 1 << order,
> - migratetype);
> - }
> + clear_page_guard(zone, buddy, order, migratetype);
> } else {
> list_del(&buddy->lru);
> zone->free_area[order].nr_free--;
> @@ -862,23 +868,17 @@ static inline void expand(struct zone *zone, struct page *page,
> size >>= 1;
> VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
>
> -#ifdef CONFIG_DEBUG_PAGEALLOC
> - if (high < debug_guardpage_minorder()) {
> + if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
> + high < debug_guardpage_minorder()) {
> /*
> * Mark as guard pages (or page), that will allow to
> * merge back to allocator when buddy will be freed.
> * Corresponding page table entries will not be touched,
> * pages will stay not present in virtual address space
> */
> - INIT_LIST_HEAD(&page[size].lru);
> - set_page_guard_flag(&page[size]);
> - set_page_private(&page[size], high);
> - /* Guard pages are not available for any usage */
> - __mod_zone_freepage_state(zone, -(1 << high),
> - migratetype);
> + set_page_guard(zone, &page[size], high, migratetype);
> continue;
> }
> -#endif
> list_add(&page[size].lru, &area->free_list[migratetype]);
> area->nr_free++;
> set_page_order(&page[size], high);
>
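The #ifdef-to-IS_ENABLED() conversion in the last hunk follows a common kernel idiom: both branches stay visible to the compiler, so the debug path cannot silently bitrot in configs that never build it, while constant folding removes the dead branch from the generated code. Below is a minimal standalone sketch of that idiom; CONFIG_EXAMPLE and the one-line IS_ENABLED() here are simplified stand-ins for the real Kconfig symbol and <linux/kconfig.h> machinery, not the patch's code.

	#include <stdio.h>

	#define CONFIG_EXAMPLE 1		/* pretend Kconfig set this to 1 */
	#define IS_ENABLED(option) (option)	/* simplified stand-in */

	static void debug_hook(int order)
	{
		printf("debug hook, order %d\n", order);
	}

	static void expand_block(int order)
	{
		/*
		 * Both branches are always compiled; when CONFIG_EXAMPLE
		 * is 0 the compiler sees 'if (0 && ...)' and discards the
		 * debug call entirely, matching the old #ifdef behavior.
		 */
		if (IS_ENABLED(CONFIG_EXAMPLE) && order < 2)
			debug_hook(order);
		else
			printf("normal path, order %d\n", order);
	}

	int main(void)
	{
		expand_block(1);	/* takes the debug branch */
		expand_block(3);	/* takes the normal branch */
		return 0;
	}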
* Re: [PATCH 2/2] mm/debug-pagealloc: cleanup page guard code
2014-11-07 7:35 ` [PATCH 2/2] mm/debug-pagealloc: cleanup page guard code Joonsoo Kim
2014-11-07 9:55 ` Vlastimil Babka
@ 2014-11-09 23:28 ` Gioh Kim
1 sibling, 0 replies; 4+ messages in thread
From: Gioh Kim @ 2014-11-09 23:28 UTC
To: Joonsoo Kim, Andrew Morton
Cc: Kirill A. Shutemov, Rik van Riel, Peter Zijlstra, Mel Gorman,
Johannes Weiner, Minchan Kim, Yasuaki Ishimatsu, Zhang Yanfei,
Tang Chen, Naoya Horiguchi, Bartlomiej Zolnierkiewicz,
Wen Congyang, Marek Szyprowski, Michal Nazarewicz, Laura Abbott,
Heesub Shin, Aneesh Kumar K.V, Ritesh Harjani, t.stanislaws,
Vlastimil Babka, linux-mm, linux-kernel
On 2014-11-07 4:35 PM, Joonsoo Kim wrote:
> The page guard is used by the debug-pagealloc feature. Currently
> it is open-coded, but I think more abstraction of it makes the
> core page allocator code more readable.
>
> There is no functional difference.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
> mm/page_alloc.c | 38 +++++++++++++++++++-------------------
> 1 file changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d673f64..c0dbede 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -440,18 +440,29 @@ static int __init debug_guardpage_minorder_setup(char *buf)
> }
> __setup("debug_guardpage_minorder=", debug_guardpage_minorder_setup);
>
> -static inline void set_page_guard_flag(struct page *page)
> +static inline void set_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype)
> {
> __set_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags);
> + INIT_LIST_HEAD(&page->lru);
> + set_page_private(page, order);
> + /* Guard pages are not available for any usage */
> + __mod_zone_freepage_state(zone, -(1 << order), migratetype);
> }
>
> -static inline void clear_page_guard_flag(struct page *page)
> +static inline void clear_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype)
> {
> __clear_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags);
> + set_page_private(page, 0);
> + if (!is_migrate_isolate(migratetype))
> + __mod_zone_freepage_state(zone, (1 << order), migratetype);
> }
> #else
> -static inline void set_page_guard_flag(struct page *page) { }
> -static inline void clear_page_guard_flag(struct page *page) { }
> +static inline void set_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype) {}
> +static inline void clear_page_guard(struct zone *zone, struct page *page,
> + unsigned int order, int migratetype) {}
> #endif
>
> static inline void set_page_order(struct page *page, unsigned int order)
> @@ -582,12 +593,7 @@ static inline void __free_one_page(struct page *page,
> * merge with it and move up one order.
> */
> if (page_is_guard(buddy)) {
> - clear_page_guard_flag(buddy);
> - set_page_private(buddy, 0);
> - if (!is_migrate_isolate(migratetype)) {
> - __mod_zone_freepage_state(zone, 1 << order,
> - migratetype);
> - }
> + clear_page_guard(zone, buddy, order, migratetype);
> } else {
> list_del(&buddy->lru);
> zone->free_area[order].nr_free--;
> @@ -862,23 +868,17 @@ static inline void expand(struct zone *zone, struct page *page,
> size >>= 1;
> VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
>
> -#ifdef CONFIG_DEBUG_PAGEALLOC
> - if (high < debug_guardpage_minorder()) {
> + if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
> + high < debug_guardpage_minorder()) {
> /*
> * Mark as guard pages (or page), that will allow to
> * merge back to allocator when buddy will be freed.
> * Corresponding page table entries will not be touched,
> * pages will stay not present in virtual address space
> */
> - INIT_LIST_HEAD(&page[size].lru);
> - set_page_guard_flag(&page[size]);
> - set_page_private(&page[size], high);
> - /* Guard pages are not available for any usage */
> - __mod_zone_freepage_state(zone, -(1 << high),
> - migratetype);
> + set_page_guard(zone, &page[size], high, migratetype);
> continue;
> }
> -#endif
> list_add(&page[size].lru, &area->free_list[migratetype]);
> area->nr_free++;
> set_page_order(&page[size], high);
>
Looks good!
Thanks for your work.
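The shape of the cleanup is worth noting for anyone applying it elsewhere: one helper marks the guard block and debits the free-page accounting, its counterpart undoes both, and empty static inline stubs keep the call sites free of #ifdefs when the feature is compiled out. Here is a self-contained sketch of that pattern; set_guard(), clear_guard(), and FEATURE_ON are illustrative names for this sketch, not the kernel's.

	#include <stdio.h>

	#define FEATURE_ON 1

	static long free_pages;

	#if FEATURE_ON
	static inline void set_guard(unsigned int order)
	{
		/* Guard blocks are withdrawn from the free-page accounting. */
		free_pages -= 1L << order;
	}

	static inline void clear_guard(unsigned int order)
	{
		/* Returning the guard block credits the accounting back. */
		free_pages += 1L << order;
	}
	#else
	/* Empty stubs: callers compile unchanged with the feature off. */
	static inline void set_guard(unsigned int order) {}
	static inline void clear_guard(unsigned int order) {}
	#endif

	int main(void)
	{
		free_pages = 1024;
		set_guard(3);				/* withdraw a 2^3 = 8 page block */
		printf("after set: %ld\n", free_pages);	/* 1016 */
		clear_guard(3);
		printf("after clear: %ld\n", free_pages);	/* 1024 */
		return 0;
	}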