From: Jiang Liu <liuj97@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiang Liu <jiang.liu@huawei.com>, David Rientjes <rientjes@google.com>,
	Wen Congyang <wency@cn.fujitsu.com>, Mel Gorman <mgorman@suse.de>,
	Minchan Kim <minchan@kernel.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Michal Hocko <mhocko@suse.cz>,
	James Bottomley <James.Bottomley@HansenPartnership.com>,
	Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>,
	David Howells <dhowells@redhat.com>, Mark Salter <msalter@redhat.com>,
	Jianguo Wu <wujianguo@huawei.com>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	Michel Lespinasse <walken@google.com>, Rik van Riel <riel@redhat.com>
Subject: [PATCH v8, part3 10/14] mm: use a dedicated lock to protect totalram_pages and zone->managed_pages
Date: Sun, 26 May 2013 21:38:38 +0800	[thread overview]
Message-ID: <1369575522-26405-11-git-send-email-jiang.liu@huawei.com> (raw)
In-Reply-To: <1369575522-26405-1-git-send-email-jiang.liu@huawei.com>

Currently lock_memory_hotplug()/unlock_memory_hotplug() are used to
protect totalram_pages and zone->managed_pages. Besides the memory
hotplug driver, totalram_pages and zone->managed_pages may also be
modified at runtime by other drivers, such as the Xen balloon and
virtio_balloon drivers. For those cases the memory hotplug lock is a
little too heavy, so introduce a dedicated lock to protect
totalram_pages and zone->managed_pages.

Now the locking rules for totalram_pages and zone->managed_pages are
simplified as:
1) no locking for read accesses because they are unsigned long;
2) no locking for write accesses at boot time in single-threaded
   context;
3) serialize write accesses at runtime by acquiring the dedicated
   managed_page_count_lock.

Also adjust zone->managed_pages when freeing reserved pages into the
buddy system, to keep totalram_pages and zone->managed_pages consistent
with each other.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: linux-mm@kvack.org (open list:MEMORY MANAGEMENT)
Cc: linux-kernel@vger.kernel.org (open list)
---
 include/linux/mm.h     |  6 ++----
 include/linux/mmzone.h | 14 ++++++++++----
 mm/page_alloc.c        | 12 ++++++++++++
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a56bcaa..bfe3686 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1309,6 +1309,7 @@ extern void free_initmem(void);
  */
 extern unsigned long free_reserved_area(void *start, void *end,
 					int poison, char *s);
+
 #ifdef CONFIG_HIGHMEM
 /*
  * Free a highmem page into the buddy system, adjusting totalhigh_pages
@@ -1317,10 +1318,7 @@ extern unsigned long free_reserved_area(void *start, void *end,
 extern void free_highmem_page(struct page *page);
 #endif
 
-static inline void adjust_managed_page_count(struct page *page, long count)
-{
-	totalram_pages += count;
-}
+extern void adjust_managed_page_count(struct page *page, long count);
 
 /* Free the reserved page into the buddy system, so it gets managed. */
 static inline void __free_reserved_page(struct page *page)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 131989a..178f166 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -474,10 +474,16 @@ struct zone {
	 * frequently read in proximity to zone->lock.  It's good to
	 * give them a chance of being in the same cacheline.
	 *
-	 * Write access to present_pages and managed_pages at runtime should
-	 * be protected by lock_memory_hotplug()/unlock_memory_hotplug().
-	 * Any reader who can't tolerant drift of present_pages and
-	 * managed_pages should hold memory hotplug lock to get a stable value.
+	 * Write access to present_pages at runtime should be protected by
+	 * lock_memory_hotplug()/unlock_memory_hotplug().  Any reader who can't
+	 * tolerate drift of present_pages should hold the memory hotplug lock
+	 * to get a stable value.
+	 *
+	 * Read access to managed_pages is safe because it is an unsigned
+	 * long.  Write accesses to zone->managed_pages and totalram_pages are
+	 * protected by managed_page_count_lock at runtime.  Ideally only
+	 * adjust_managed_page_count() should be used instead of directly
+	 * touching zone->managed_pages and totalram_pages.
	 */
	unsigned long spanned_pages;
	unsigned long present_pages;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9b50e6a..924c099 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -103,6 +103,9 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
+/* Protect totalram_pages and zone->managed_pages */
+static DEFINE_SPINLOCK(managed_page_count_lock);
+
 unsigned long totalram_pages __read_mostly;
 unsigned long totalreserve_pages __read_mostly;
 /*
@@ -5231,6 +5234,15 @@ early_param("movablecore", cmdline_parse_movablecore);
 
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
+void adjust_managed_page_count(struct page *page, long count)
+{
+	spin_lock(&managed_page_count_lock);
+	page_zone(page)->managed_pages += count;
+	totalram_pages += count;
+	spin_unlock(&managed_page_count_lock);
+}
+EXPORT_SYMBOL(adjust_managed_page_count);
+
 unsigned long free_reserved_area(void *start, void *end,
 				int poison, char *s)
 {
	void *pos;
-- 
1.8.1.2