From: Jiang Liu <jiang.liu@huawei.com>
Subject: [RFC PATCH v1 01/33] mm: introduce common helper functions to deal with reserved/managed pages
Date: Tue, 5 Mar 2013 22:54:44 +0800
Message-Id: <1362495317-32682-2-git-send-email-jiang.liu@huawei.com>
In-Reply-To: <1362495317-32682-1-git-send-email-jiang.liu@huawei.com>
References: <1362495317-32682-1-git-send-email-jiang.liu@huawei.com>
To: Andrew Morton, David Rientjes
Cc: Jiang Liu, Wen Congyang, Maciej Rutecki, Chris Clayton,
    Rafael J. Wysocki, Mel Gorman, Minchan Kim, KAMEZAWA Hiroyuki,
    Michal Hocko, Jianguo Wu, Anatolij Gustschin, Aurelien Jacquiot,
    Benjamin Herrenschmidt, Catalin Marinas, Chen Liqin, Chris Metcalf,
    Chris Zankel, David Howells, David S. Miller, Eric Biederman,
    Fenghua Yu, Geert Uytterhoeven, Guan Xuetao, Haavard Skinnemoen,
    Hans-Christian Egtvedt, Heiko Carstens, Helge Deller,
    Hirokazu Takata, H. Peter Anvin, Ingo Molnar, Ivan Kokshaysky,
    James E.J. Bottomley, Jeff Dike, Jeremy Fitzhardinge, Jonas Bonn,
    Koichi Yasutake, Konrad Rzeszutek Wilk, Lennox Wu, Mark Salter,
    Martin Schwidefsky, Matt Turner, Max Filippov, Michael S. Tsirkin,
    Michal Simek, Michel Lespinasse, Mikael Starvik, Mike Frysinger,
    Paul Mackerras, Paul Mundt, Ralf Baechle, Richard Henderson,
    Rik van Riel, Russell King, Rusty Russell, Sam Ravnborg, Tang Chen,
    Thomas Gleixner, Tony Luck, Will Deacon, Yasuaki Ishimatsu,
    Yinghai Lu, Yoshinori Sato, x86@kernel.org,
    xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    virtualization@lists.linux-foundation.org

Code to deal with reserved/managed pages is duplicated by many
architectures, so introduce common helper functions to reduce the
duplication. These helpers will also be used to concentrate the code
that modifies totalram_pages and zone->managed_pages, which makes the
code much clearer.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
---
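For illustration only, not part of this patch: once these helpers are in
place, a typical architecture's duplicated free_initmem() can collapse
into a call to the common code. A minimal sketch, assuming the
architecture keeps its init text/data between the generic __init_begin
and __init_end markers:

	/* arch/<arch>/mm/init.c -- hypothetical conversion */
	void free_initmem(void)
	{
		/* poison = 0: free the init pages without overwriting them */
		free_initmem_default(0);
	}

Likewise, an architecture's free_initrd_mem(start, end) could simply
call free_reserved_area(start, end, 0, "initrd").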
 include/linux/mm.h |   37 +++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c    |   20 ++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7acc9dc..881461c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1295,6 +1295,43 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+/* Helper functions to deal with reserved/managed pages. */
+extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
+					int poison, char *s);
+
+static inline void adjust_managed_page_count(struct page *page, long count)
+{
+	totalram_pages += count;
+}
+
+static inline void __free_reserved_page(struct page *page)
+{
+	ClearPageReserved(page);
+	init_page_count(page);
+	__free_page(page);
+}
+
+static inline void free_reserved_page(struct page *page)
+{
+	__free_reserved_page(page);
+	adjust_managed_page_count(page, 1);
+}
+
+static inline void mark_page_reserved(struct page *page)
+{
+	SetPageReserved(page);
+	adjust_managed_page_count(page, -1);
+}
+
+static inline void free_initmem_default(int poison)
+{
+	extern char __init_begin[], __init_end[];
+
+	free_reserved_area(PAGE_ALIGN((unsigned long)&__init_begin),
+			   ((unsigned long)&__init_end) & PAGE_MASK,
+			   poison, "unused kernel");
+}
+
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 /*
  * With CONFIG_HAVE_MEMBLOCK_NODE_MAP set, an architecture may initialise its
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8fcced7..0fadb09 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5113,6 +5113,26 @@ early_param("movablecore", cmdline_parse_movablecore);
 
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
+unsigned long free_reserved_area(unsigned long start, unsigned long end,
+				 int poison, char *s)
+{
+	unsigned long pages, pos;
+
+	pos = start = PAGE_ALIGN(start);
+	end &= PAGE_MASK;
+	for (pages = 0; pos < end; pos += PAGE_SIZE, pages++) {
+		if (poison)
+			memset((void *)pos, poison, PAGE_SIZE);
+		free_reserved_page(virt_to_page(pos));
+	}
+
+	if (pages && s)
+		pr_info("Freeing %s memory: %luK (%lx - %lx)\n",
+			s, pages << (PAGE_SHIFT - 10), start, end);
+
+	return pages;
+}
+
 /**
  * set_dma_reserve - set the specified number of pages reserved in the first zone
  * @new_dma_reserve: The number of pages to mark reserved
-- 
1.7.9.5