linux-mm.kvack.org archive mirror
* [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup
@ 2013-01-16  0:24 Cody P Schafer
  2013-01-16  0:24 ` [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function Cody P Schafer
                   ` (16 more replies)
  0 siblings, 17 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas

Summaries:

01 - removes the use of zone_end_pfn as a local var name.
02 - adds zone_end_pfn(), zone_is_initialized(), zone_is_empty() and zone_spans_pfn()
03 - adds a VM_BUG using zone_is_initialized() in __free_one_page()

04 - add ensure_zone_is_initialized() (for memory_hotplug)
05 - use the above addition.

06 - add pgdat_end_pfn() and pgdat_is_empty()

07,08,09,10,11,12,16,17 - use the new helpers

13 - avoid repeating checks for section in page flags by adding a define.
14 - memory hotplug: factor out zone+pgdat growth.
15 - add debugging message to VM_BUG check.

As a general concern: spanned_pages & start_pfn (in pgdat & zone) are supposed
to be locked (via a seqlock) when read (due to changes to them via
memory_hotplug), but very few (only 1?) of their users appear to actually lock
them.
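As an illustration of that concern, here is a minimal, single-threaded C sketch of the read pattern callers are expected to use. The struct and helpers below are simplified stand-ins for the real mmzone.h seqlock interfaces, not kernel code: an even span_seq means the fields are stable, odd means a writer is active.

```c
#include <assert.h>

/* Toy stand-in for the fields protected by the zone span seqlock. */
struct toy_zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
	unsigned int span_seq;	/* even = stable, odd = writer active */
};

static unsigned int zone_span_seqbegin(const struct toy_zone *z)
{
	return z->span_seq;
}

static int zone_span_seqretry(const struct toy_zone *z, unsigned int seq)
{
	/* retry if a writer was active at seqbegin or has run since */
	return (seq & 1) || z->span_seq != seq;
}

/* The read pattern most callers skip: loop until a consistent
 * snapshot of zone_start_pfn/spanned_pages has been observed. */
static unsigned long read_zone_end_pfn(const struct toy_zone *z)
{
	unsigned int seq;
	unsigned long end;

	do {
		seq = zone_span_seqbegin(z);
		end = z->zone_start_pfn + z->spanned_pages;
	} while (zone_span_seqretry(z, seq));

	return end;
}
```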

--

 include/linux/mm.h     |  8 ++++--
 include/linux/mmzone.h | 34 ++++++++++++++++++++++---
 mm/compaction.c        | 10 ++++----
 mm/kmemleak.c          |  5 ++--
 mm/memory_hotplug.c    | 68 ++++++++++++++++++++++++++++----------------------
 mm/page_alloc.c        | 31 +++++++++++++----------
 mm/vmstat.c            |  2 +-
 7 files changed, 100 insertions(+), 58 deletions(-)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


* [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  1:08   ` Dave Hansen
  2013-01-16  0:24 ` [PATCH 02/17] mmzone: add various zone_*() helper functions Cody P Schafer
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Patches that follow add an inline function zone_end_pfn(), which
conflicts with the naming of a local variable in isolate_freepages().

Rename the variable so it does not conflict.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/compaction.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index c62bd06..1b52528 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -644,7 +644,7 @@ static void isolate_freepages(struct zone *zone,
 				struct compact_control *cc)
 {
 	struct page *page;
-	unsigned long high_pfn, low_pfn, pfn, zone_end_pfn, end_pfn;
+	unsigned long high_pfn, low_pfn, pfn, z_end_pfn, end_pfn;
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;
 
@@ -663,7 +663,7 @@ static void isolate_freepages(struct zone *zone,
 	 */
 	high_pfn = min(low_pfn, pfn);
 
-	zone_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	z_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
 
 	/*
 	 * Isolate free pages until enough are available to migrate the
@@ -706,7 +706,7 @@ static void isolate_freepages(struct zone *zone,
 		 * only scans within a pageblock
 		 */
 		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
-		end_pfn = min(end_pfn, zone_end_pfn);
+		end_pfn = min(end_pfn, z_end_pfn);
 		isolated = isolate_freepages_block(cc, pfn, end_pfn,
 						   freelist, false);
 		nr_freepages += isolated;
-- 
1.8.0.3


* [PATCH 02/17] mmzone: add various zone_*() helper functions.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
  2013-01-16  0:24 ` [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  1:19   ` Dave Hansen
  2013-01-16  1:20   ` Dave Hansen
  2013-01-16  0:24 ` [PATCH 03/17] mm/page_alloc: add a VM_BUG in __free_one_page() if the zone is uninitialized Cody P Schafer
                   ` (14 subsequent siblings)
  16 siblings, 2 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Add zone_is_initialized(), zone_is_empty(), zone_spans_pfn(), and
zone_end_pfn().

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 include/linux/mmzone.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 73b64a3..696cb7c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -543,6 +543,26 @@ static inline int zone_is_oom_locked(const struct zone *zone)
 	return test_bit(ZONE_OOM_LOCKED, &zone->flags);
 }
 
+static inline unsigned long zone_end_pfn(const struct zone *zone)
+{
+	return zone->zone_start_pfn + zone->spanned_pages;
+}
+
+static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
+{
+	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
+}
+
+static inline bool zone_is_initialized(struct zone *zone)
+{
+	return !!zone->wait_table;
+}
+
+static inline bool zone_is_empty(struct zone *zone)
+{
+	return zone->spanned_pages == 0;
+}
+
 /*
  * The "priority" of VM scanning is how much of the queues we will scan in one
  * go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
-- 
1.8.0.3
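To show the semantics of the four helpers outside the kernel tree, here is a self-contained sketch against a pared-down toy struct (an assumption: wait_table is reduced to a bare pointer standing in for the real wait-queue table). Note that zone_end_pfn() returns the first pfn *past* the zone, so zone_spans_pfn() is a half-open interval check.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the fields the new helpers read. */
struct toy_zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
	void *wait_table;	/* stand-in for the real wait-queue table */
};

/* First pfn past the zone (half-open interval convention). */
static unsigned long zone_end_pfn(const struct toy_zone *zone)
{
	return zone->zone_start_pfn + zone->spanned_pages;
}

static bool zone_spans_pfn(const struct toy_zone *zone, unsigned long pfn)
{
	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
}

/* An initialized zone has had its wait table allocated. */
static bool zone_is_initialized(const struct toy_zone *zone)
{
	return zone->wait_table != NULL;
}

static bool zone_is_empty(const struct toy_zone *zone)
{
	return zone->spanned_pages == 0;
}
```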


* [PATCH 03/17] mm/page_alloc: add a VM_BUG in __free_one_page() if the zone is uninitialized.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
  2013-01-16  0:24 ` [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function Cody P Schafer
  2013-01-16  0:24 ` [PATCH 02/17] mmzone: add various zone_*() helper functions Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 04/17] mm: add helper ensure_zone_is_initialized() Cody P Schafer
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Freeing pages to uninitialized zones is not handled by
__free_one_page(), and should never happen when the code is correct.

Ran into this while writing some code that dynamically onlines extra
zones.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df2022f..da5a5ec 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -532,6 +532,8 @@ static inline void __free_one_page(struct page *page,
 	unsigned long uninitialized_var(buddy_idx);
 	struct page *buddy;
 
+	VM_BUG_ON(!zone_is_initialized(zone));
+
 	if (unlikely(PageCompound(page)))
 		if (unlikely(destroy_compound_page(page, order)))
 			return;
-- 
1.8.0.3


* [PATCH 04/17] mm: add helper ensure_zone_is_initialized()
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (2 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 03/17] mm/page_alloc: add a VM_BUG in __free_one_page() if the zone is uninitialized Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 05/17] mm/memory_hotplug: use ensure_zone_is_initialized() Cody P Schafer
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

ensure_zone_is_initialized() checks whether a zone is empty and not yet
initialized (the state it is typically in after being created during
memory hotplug) and, if so, calls init_currently_empty_zone() to
initialize the zone.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index d04ed87..875bdfe 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -253,6 +253,17 @@ static void fix_zone_id(struct zone *zone, unsigned long start_pfn,
 		set_page_links(pfn_to_page(pfn), zid, nid, pfn);
 }
 
+/* Can fail with -ENOMEM from allocating a wait table with vmalloc() or
+ * alloc_bootmem_node_nopanic() */
+static int __ref ensure_zone_is_initialized(struct zone *zone,
+			unsigned long start_pfn, unsigned long num_pages)
+{
+	if (!zone_is_initialized(zone))
+		return init_currently_empty_zone(zone, start_pfn, num_pages,
+						 MEMMAP_HOTPLUG);
+	return 0;
+}
+
 static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 		unsigned long start_pfn, unsigned long end_pfn)
 {
-- 
1.8.0.3


* [PATCH 05/17] mm/memory_hotplug: use ensure_zone_is_initialized()
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (3 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 04/17] mm: add helper ensure_zone_is_initialized() Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 06/17] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate Cody P Schafer
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Remove open coding of ensure_zone_is_initialized().

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 875bdfe..8e352fe 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -271,12 +271,9 @@ static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 	unsigned long flags;
 	unsigned long z1_start_pfn;
 
-	if (!z1->wait_table) {
-		ret = init_currently_empty_zone(z1, start_pfn,
-			end_pfn - start_pfn, MEMMAP_HOTPLUG);
-		if (ret)
-			return ret;
-	}
+	ret = ensure_zone_is_initialized(z1, start_pfn, end_pfn - start_pfn);
+	if (ret)
+		return ret;
 
 	pgdat_resize_lock(z1->zone_pgdat, &flags);
 
@@ -316,12 +313,9 @@ static int __meminit move_pfn_range_right(struct zone *z1, struct zone *z2,
 	unsigned long flags;
 	unsigned long z2_end_pfn;
 
-	if (!z2->wait_table) {
-		ret = init_currently_empty_zone(z2, start_pfn,
-			end_pfn - start_pfn, MEMMAP_HOTPLUG);
-		if (ret)
-			return ret;
-	}
+	ret = ensure_zone_is_initialized(z2, start_pfn, end_pfn - start_pfn);
+	if (ret)
+		return ret;
 
 	pgdat_resize_lock(z1->zone_pgdat, &flags);
 
@@ -374,16 +368,13 @@ static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 	int nid = pgdat->node_id;
 	int zone_type;
 	unsigned long flags;
+	int ret;
 
 	zone_type = zone - pgdat->node_zones;
-	if (!zone->wait_table) {
-		int ret;
+	ret = ensure_zone_is_initialized(zone, phys_start_pfn, nr_pages);
+	if (ret)
+		return ret;
 
-		ret = init_currently_empty_zone(zone, phys_start_pfn,
-						nr_pages, MEMMAP_HOTPLUG);
-		if (ret)
-			return ret;
-	}
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	grow_zone_span(zone, phys_start_pfn, phys_start_pfn + nr_pages);
 	grow_pgdat_span(zone->zone_pgdat, phys_start_pfn,
-- 
1.8.0.3


* [PATCH 06/17] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (4 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 05/17] mm/memory_hotplug: use ensure_zone_is_initialized() Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 07/17] mm/page_alloc: use zone_spans_pfn() instead of open coding Cody P Schafer
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Add pgdat_end_pfn() and pgdat_is_empty() helpers which match the similar
zone_*() functions.

Change node_end_pfn() to be a wrapper of pgdat_end_pfn().

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 include/linux/mmzone.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 696cb7c..d7abff0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -772,11 +772,17 @@ typedef struct pglist_data {
 #define nid_page_nr(nid, pagenr) 	pgdat_page_nr(NODE_DATA(nid),(pagenr))
 
 #define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
+#define node_end_pfn(nid) pgdat_end_pfn(NODE_DATA(nid))
 
-#define node_end_pfn(nid) ({\
-	pg_data_t *__pgdat = NODE_DATA(nid);\
-	__pgdat->node_start_pfn + __pgdat->node_spanned_pages;\
-})
+static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
+{
+	return pgdat->node_start_pfn + pgdat->node_spanned_pages;
+}
+
+static inline bool pgdat_is_empty(pg_data_t *pgdat)
+{
+	return !pgdat->node_start_pfn && !pgdat->node_spanned_pages;
+}
 
 #include <linux/memory_hotplug.h>
 
-- 
1.8.0.3
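A self-contained sketch of the two new node-level helpers, mirroring the zone-level ones above (an assumption: pg_data_t is pared down to only the two fields these helpers read):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the pgdat fields the new helpers read. */
struct toy_pgdat {
	unsigned long node_start_pfn;
	unsigned long node_spanned_pages;
};

/* First pfn past the node, matching zone_end_pfn()'s convention. */
static unsigned long pgdat_end_pfn(const struct toy_pgdat *pgdat)
{
	return pgdat->node_start_pfn + pgdat->node_spanned_pages;
}

static bool pgdat_is_empty(const struct toy_pgdat *pgdat)
{
	return !pgdat->node_start_pfn && !pgdat->node_spanned_pages;
}
```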


* [PATCH 07/17] mm/page_alloc: use zone_spans_pfn() instead of open coding.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (5 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 06/17] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 08/17] mm/page_alloc: use zone_spans_pfn() instead of open coded checks Cody P Schafer
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Use zone_spans_pfn() instead of open coding pfn ownership checks.

This is split out from the following patch because it could slightly
degrade the generated code: pre-patch, the code uses its knowledge that
start_pfn < end_pfn to cut down on the number of comparisons; post-patch,
the compiler has to figure that out.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index da5a5ec..3911c1a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -978,9 +978,9 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	end_pfn = start_pfn + pageblock_nr_pages - 1;
 
 	/* Do not cross zone boundaries */
-	if (start_pfn < zone->zone_start_pfn)
+	if (!zone_spans_pfn(zone, start_pfn))
 		start_page = page;
-	if (end_pfn >= zone->zone_start_pfn + zone->spanned_pages)
+	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype);
-- 
1.8.0.3


* [PATCH 08/17] mm/page_alloc: use zone_spans_pfn() instead of open coded checks.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (6 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 07/17] mm/page_alloc: use zone_spans_pfn() instead of open coding Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 09/17] mm/page_alloc: use zone_end_pfn() & zone_spans_pfn() Cody P Schafer
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

In two VM_BUG_ON()s, avoid open coding checks of a zone's pfn ownership.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3911c1a..c5d70ce 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -242,9 +242,7 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 
 	do {
 		seq = zone_span_seqbegin(zone);
-		if (pfn >= zone->zone_start_pfn + zone->spanned_pages)
-			ret = 1;
-		else if (pfn < zone->zone_start_pfn)
+		if (!zone_spans_pfn(zone, pfn))
 			ret = 1;
 	} while (zone_span_seqretry(zone, seq));
 
@@ -5639,8 +5637,7 @@ void set_pageblock_flags_group(struct page *page, unsigned long flags,
 	pfn = page_to_pfn(page);
 	bitmap = get_pageblock_bitmap(zone, pfn);
 	bitidx = pfn_to_bitidx(zone, pfn);
-	VM_BUG_ON(pfn < zone->zone_start_pfn);
-	VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);
+	VM_BUG_ON(!zone_spans_pfn(zone, pfn));
 
 	for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
 		if (flags & value)
-- 
1.8.0.3


* [PATCH 09/17] mm/page_alloc: use zone_end_pfn() & zone_spans_pfn()
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (7 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 08/17] mm/page_alloc: use zone_spans_pfn() instead of open coded checks Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 10/17] mm/vmstat: use zone_end_pfn() instead of opencoding Cody P Schafer
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Change several open coded zone_end_pfn()s and a zone_spans_pfn() in the
page allocator to use the provided helper functions.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5d70ce..f8ed277 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1272,7 +1272,7 @@ void mark_free_pages(struct zone *zone)
 
 	spin_lock_irqsave(&zone->lock, flags);
 
-	max_zone_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	max_zone_pfn = zone_end_pfn(zone);
 	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
 		if (pfn_valid(pfn)) {
 			struct page *page = pfn_to_page(pfn);
@@ -3775,7 +3775,7 @@ static void setup_zone_migrate_reserve(struct zone *zone)
 	 * the block.
 	 */
 	start_pfn = zone->zone_start_pfn;
-	end_pfn = start_pfn + zone->spanned_pages;
+	end_pfn = zone_end_pfn(zone);
 	start_pfn = roundup(start_pfn, pageblock_nr_pages);
 	reserve = roundup(min_wmark_pages(zone), pageblock_nr_pages) >>
 							pageblock_order;
@@ -3889,7 +3889,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		 * pfn out of zone.
 		 */
 		if ((z->zone_start_pfn <= pfn)
-		    && (pfn < z->zone_start_pfn + z->spanned_pages)
+		    && (pfn < zone_end_pfn(z))
 		    && !(pfn & (pageblock_nr_pages - 1)))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 
@@ -4617,7 +4617,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 		 * for the buddy allocator to function correctly.
 		 */
 		start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
-		end = pgdat->node_start_pfn + pgdat->node_spanned_pages;
+		end = pgdat_end_pfn(pgdat);
 		end = ALIGN(end, MAX_ORDER_NR_PAGES);
 		size =  (end - start) * sizeof(struct page);
 		map = alloc_remap(pgdat->node_id, size);
@@ -5735,8 +5735,7 @@ bool is_pageblock_removable_nolock(struct page *page)
 
 	zone = page_zone(page);
 	pfn = page_to_pfn(page);
-	if (zone->zone_start_pfn > pfn ||
-			zone->zone_start_pfn + zone->spanned_pages <= pfn)
+	if (!zone_spans_pfn(zone, pfn))
 		return false;
 
 	return !has_unmovable_pages(zone, page, 0, true);
-- 
1.8.0.3


* [PATCH 10/17] mm/vmstat: use zone_end_pfn() instead of opencoding
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (8 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 09/17] mm/page_alloc: use zone_end_pfn() & zone_spans_pfn() Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 11/17] mm/kmemleak: use node_{start,end}_pfn() Cody P Schafer
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

In vmstat, use zone_end_pfn() instead of an open-coded version.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/vmstat.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 9800306..ca99641 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -890,7 +890,7 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
 	int mtype;
 	unsigned long pfn;
 	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = start_pfn + zone->spanned_pages;
+	unsigned long end_pfn = zone_end_pfn(zone);
 	unsigned long count[MIGRATE_TYPES] = { 0, };
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
-- 
1.8.0.3


* [PATCH 11/17] mm/kmemleak: use node_{start,end}_pfn()
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (9 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 10/17] mm/vmstat: use zone_end_pfn() instead of opencoding Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 12/17] mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same Cody P Schafer
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Instead of open coding, use the existing node_start_pfn() and
node_end_pfn() helpers.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/kmemleak.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 752a705..83dd5fb 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1300,9 +1300,8 @@ static void kmemleak_scan(void)
 	 */
 	lock_memory_hotplug();
 	for_each_online_node(i) {
-		pg_data_t *pgdat = NODE_DATA(i);
-		unsigned long start_pfn = pgdat->node_start_pfn;
-		unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
+		unsigned long start_pfn = node_start_pfn(i);
+		unsigned long end_pfn = node_end_pfn(i);
 		unsigned long pfn;
 
 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
-- 
1.8.0.3


* [PATCH 12/17] mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (10 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 11/17] mm/kmemleak: use node_{start,end}_pfn() Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 13/17] mm: add SECTION_IN_PAGE_FLAGS Cody P Schafer
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer,
	Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Replace an open-coded pgdat_end_pfn() with the helper function.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8e352fe..0a74b86a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -189,7 +189,7 @@ void register_page_bootmem_info_node(struct pglist_data *pgdat)
 	}
 
 	pfn = pgdat->node_start_pfn;
-	end_pfn = pfn + pgdat->node_spanned_pages;
+	end_pfn = pgdat_end_pfn(pgdat);
 
 	/* register_section info */
 	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-- 
1.8.0.3


* [PATCH 13/17] mm: add SECTION_IN_PAGE_FLAGS
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (11 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 12/17] mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 14/17] mm/memory_hotplug: factor out zone+pgdat growth Cody P Schafer
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Instead of directly testing a combination of config options to determine
whether the section number is kept in page->flags, add a macro that names
the condition.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 include/linux/mm.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 66e2f7c..ef69564 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -625,6 +625,10 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 #define NODE_NOT_IN_PAGE_FLAGS
 #endif
 
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#define SECTION_IN_PAGE_FLAGS
+#endif
+
 /*
  * Define the bit shifts to access each section.  For non-existent
  * sections we define the shift as 0; that plus a 0 mask ensures
@@ -727,7 +731,7 @@ static inline struct zone *page_zone(const struct page *page)
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
 }
 
-#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#ifdef SECTION_IN_PAGE_FLAGS
 static inline void set_page_section(struct page *page, unsigned long section)
 {
 	page->flags &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
@@ -757,7 +761,7 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 {
 	set_page_zone(page, zone);
 	set_page_node(page, node);
-#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#ifdef SECTION_IN_PAGE_FLAGS
 	set_page_section(page, pfn_to_section_nr(pfn));
 #endif
 }
-- 
1.8.0.3
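The pattern can be demonstrated standalone. In the sketch below, the CONFIG_* defines are hand-set stand-ins rather than real Kconfig output: with SPARSEMEM enabled but SPARSEMEM_VMEMMAP absent, the section number must be carried in page->flags, and the named macro captures exactly that combination.

```c
#include <assert.h>

/* Hand-set stand-ins for Kconfig output (assumption for this sketch). */
#define CONFIG_SPARSEMEM 1
/* CONFIG_SPARSEMEM_VMEMMAP deliberately left undefined */

/* Name the combination once, instead of repeating the test at each use. */
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define SECTION_IN_PAGE_FLAGS
#endif

/* Expose the result so it can be checked at run time. */
static int section_in_page_flags(void)
{
#ifdef SECTION_IN_PAGE_FLAGS
	return 1;
#else
	return 0;
#endif
}
```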


* [PATCH 14/17] mm/memory_hotplug: factor out zone+pgdat growth.
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (12 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 13/17] mm: add SECTION_IN_PAGE_FLAGS Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  1:27   ` Dave Hansen
  2013-01-16  0:24 ` [PATCH 15/17] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries() Cody P Schafer
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Create a new function grow_pgdat_and_zone() which handles locking and
growth of a zone and of the pgdat it is associated with.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0a74b86a..c6149a3 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -361,6 +361,16 @@ static void grow_pgdat_span(struct pglist_data *pgdat, unsigned long start_pfn,
 					pgdat->node_start_pfn;
 }
 
+static void grow_pgdat_and_zone(struct zone *zone, unsigned long start_pfn,
+		unsigned long end_pfn)
+{
+	unsigned long flags;
+	pgdat_resize_lock(zone->zone_pgdat, &flags);
+	grow_zone_span(zone, start_pfn, end_pfn);
+	grow_pgdat_span(zone->zone_pgdat, start_pfn, end_pfn);
+	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+}
+
 static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
@@ -375,11 +385,7 @@ static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 	if (ret)
 		return ret;
 
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	grow_zone_span(zone, phys_start_pfn, phys_start_pfn + nr_pages);
-	grow_pgdat_span(zone->zone_pgdat, phys_start_pfn,
-			phys_start_pfn + nr_pages);
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	grow_pgdat_and_zone(zone, phys_start_pfn, phys_start_pfn + nr_pages);
 	memmap_init_zone(nr_pages, nid, zone_type,
 			 phys_start_pfn, MEMMAP_HOTPLUG);
 	return 0;
-- 
1.8.0.3
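The lock-then-grow pattern this patch factors out can be sketched in userspace. Everything below is a simplified stand-in: a pthread mutex plays the role of pgdat_resize_lock(), and grow_span() approximates the grow_zone_span()/grow_pgdat_span() pair, which are not shown in the hunk:

```c
#include <assert.h>
#include <pthread.h>

struct zone_s  { unsigned long start_pfn, spanned_pages; };
struct pgdat_s { unsigned long start_pfn, spanned_pages;
		 pthread_mutex_t resize_lock; };

/* Widen a [start, start + spanned) span so it also covers [new_start, new_end). */
static void grow_span(unsigned long *start, unsigned long *spanned,
		      unsigned long new_start, unsigned long new_end)
{
	unsigned long end = *start + *spanned;

	if (*spanned == 0 || new_start < *start)
		*start = new_start;
	if (*spanned == 0 || new_end > end)
		end = new_end;
	*spanned = end - *start;
}

/* Analogue of grow_pgdat_and_zone(): one lock/unlock around both updates,
 * so readers under the seqlock never see the zone and pgdat disagree. */
static void grow_pgdat_and_zone(struct pgdat_s *pgdat, struct zone_s *zone,
				unsigned long start_pfn, unsigned long end_pfn)
{
	pthread_mutex_lock(&pgdat->resize_lock);
	grow_span(&zone->start_pfn, &zone->spanned_pages, start_pfn, end_pfn);
	grow_span(&pgdat->start_pfn, &pgdat->spanned_pages, start_pfn, end_pfn);
	pthread_mutex_unlock(&pgdat->resize_lock);
}
```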


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 15/17] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries()
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (13 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 14/17] mm/memory_hotplug: factor out zone+pgdat growth Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  1:34   ` Dave Hansen
  2013-01-16  0:24 ` [PATCH 16/17] mm/memory_hotplug: use zone_end_pfn() instead of open coding Cody P Schafer
  2013-01-16  0:24 ` [PATCH 17/17] mm/compaction: use zone_end_pfn() Cody P Schafer
  16 siblings, 1 reply; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Add a debug message which prints when a page is found outside of the
boundaries of the zone it should belong to. Format is:
	"page $pfn outside zone [ $start_pfn - $end_pfn ]"

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f8ed277..f1783cf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -239,13 +239,20 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 	int ret = 0;
 	unsigned seq;
 	unsigned long pfn = page_to_pfn(page);
+	unsigned long sp, start_pfn;
 
 	do {
 		seq = zone_span_seqbegin(zone);
+		start_pfn = zone->zone_start_pfn;
+		sp = zone->spanned_pages;
 		if (!zone_spans_pfn(zone, pfn))
 			ret = 1;
 	} while (zone_span_seqretry(zone, seq));
 
+	if (ret)
+		pr_debug("page %lu outside zone [ %lu - %lu ]\n",
+			pfn, start_pfn, start_pfn + sp);
+
 	return ret;
 }
 
-- 
1.8.0.3
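The retry loop the patch extends can be mimicked in a single-threaded sketch. The seq field and the begin/retry conditions below stand in for the kernel's zone_span_seqbegin()/zone_span_seqretry(), using the usual convention that an odd sequence count means a resize is in progress:

```c
#include <stdio.h>

struct zone_span {
	unsigned seq;              /* bumped twice per resize; odd while resizing */
	unsigned long start_pfn;
	unsigned long spanned_pages;
};

static int pfn_outside_span(const struct zone_span *z, unsigned long pfn)
{
	unsigned long start_pfn, end_pfn;
	unsigned seq;
	int ret = 0;

	do {
		seq = z->seq;                     /* zone_span_seqbegin() */
		start_pfn = z->start_pfn;         /* snapshot under this seq, */
		end_pfn = z->start_pfn + z->spanned_pages;
		ret = (pfn < start_pfn || pfn >= end_pfn);
	} while ((seq & 1) || seq != z->seq);     /* zone_span_seqretry() */

	/* Print the snapshot taken in the same pass that set ret, as the
	 * patch does, so the message matches the values actually tested. */
	if (ret)
		printf("page %lu outside zone [ %lu - %lu ]\n",
		       pfn, start_pfn, end_pfn);
	return ret;
}
```

Snapshotting start_pfn and spanned_pages inside the seqlock loop is the point of the patch: values read outside the loop could come from a different (torn) resize than the one zone_spans_pfn() saw.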


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 16/17] mm/memory_hotplug: use zone_end_pfn() instead of open coding
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (14 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 15/17] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries() Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  0:24 ` [PATCH 17/17] mm/compaction: use zone_end_pfn() Cody P Schafer
  16 siblings, 0 replies; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Switch to using zone_end_pfn() in move_pfn_range_left() and
move_pfn_range_right() instead of open coding the same.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c6149a3..515b917 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -278,7 +278,7 @@ static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 	pgdat_resize_lock(z1->zone_pgdat, &flags);
 
 	/* can't move pfns which are higher than @z2 */
-	if (end_pfn > z2->zone_start_pfn + z2->spanned_pages)
+	if (end_pfn > zone_end_pfn(z2))
 		goto out_fail;
 	/* the move out part mast at the left most of @z2 */
 	if (start_pfn > z2->zone_start_pfn)
@@ -294,7 +294,7 @@ static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 		z1_start_pfn = start_pfn;
 
 	resize_zone(z1, z1_start_pfn, end_pfn);
-	resize_zone(z2, end_pfn, z2->zone_start_pfn + z2->spanned_pages);
+	resize_zone(z2, end_pfn, zone_end_pfn(z2));
 
 	pgdat_resize_unlock(z1->zone_pgdat, &flags);
 
@@ -323,15 +323,15 @@ static int __meminit move_pfn_range_right(struct zone *z1, struct zone *z2,
 	if (z1->zone_start_pfn > start_pfn)
 		goto out_fail;
 	/* the move out part mast at the right most of @z1 */
-	if (z1->zone_start_pfn + z1->spanned_pages >  end_pfn)
+	if (zone_end_pfn(z1) >  end_pfn)
 		goto out_fail;
 	/* must included/overlap */
-	if (start_pfn >= z1->zone_start_pfn + z1->spanned_pages)
+	if (start_pfn >= zone_end_pfn(z1))
 		goto out_fail;
 
 	/* use end_pfn for z2's end_pfn if z2 is empty */
 	if (z2->spanned_pages)
-		z2_end_pfn = z2->zone_start_pfn + z2->spanned_pages;
+		z2_end_pfn = zone_end_pfn(z2);
 	else
 		z2_end_pfn = end_pfn;
 
-- 
1.8.0.3


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 17/17] mm/compaction: use zone_end_pfn()
  2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
                   ` (15 preceding siblings ...)
  2013-01-16  0:24 ` [PATCH 16/17] mm/memory_hotplug: use zone_end_pfn() instead of open coding Cody P Schafer
@ 2013-01-16  0:24 ` Cody P Schafer
  2013-01-16  1:39   ` Dave Hansen
  16 siblings, 1 reply; 24+ messages in thread
From: Cody P Schafer @ 2013-01-16  0:24 UTC (permalink / raw)
  To: Linux MM; +Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Switch to using zone_end_pfn() instead of open coding the same.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/compaction.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 1b52528..ea66be3 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -85,7 +85,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
 static void __reset_isolation_suitable(struct zone *zone)
 {
 	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	unsigned long end_pfn = zone_end_pfn(zone);
 	unsigned long pfn;
 
 	zone->compact_cached_migrate_pfn = start_pfn;
@@ -663,7 +663,7 @@ static void isolate_freepages(struct zone *zone,
 	 */
 	high_pfn = min(low_pfn, pfn);
 
-	z_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	z_end_pfn = zone_end_pfn(zone);
 
 	/*
 	 * Isolate free pages until enough are available to migrate the
@@ -920,7 +920,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 {
 	int ret;
 	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	unsigned long end_pfn = zone_end_pfn(zone);
 
 	ret = compaction_suitable(zone, cc->order);
 	switch (ret) {
-- 
1.8.0.3


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function
  2013-01-16  0:24 ` [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function Cody P Schafer
@ 2013-01-16  1:08   ` Dave Hansen
  0 siblings, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2013-01-16  1:08 UTC (permalink / raw)
  To: Cody P Schafer; +Cc: Linux MM, LKML, Andrew Morton, Catalin Marinas

On 01/15/2013 04:24 PM, Cody P Schafer wrote:
> Patches that follow add an inline function zone_end_pfn(), which
> conflicts with the naming of a local variable in isolate_freepages().
> 
> Rename the variable so it does not conflict.

It's probably worth a note here that you _will_ be migrating this use
over to the new function anyway.

> @@ -706,7 +706,7 @@ static void isolate_freepages(struct zone *zone,
>  		 * only scans within a pageblock
>  		 */
>  		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
> -		end_pfn = min(end_pfn, zone_end_pfn);
> +		end_pfn = min(end_pfn, z_end_pfn);

Is there any reason not to just completely get rid of z_end_pfn (in the
later patches after you introduce zone_end_pfn() of course):

> +		end_pfn = min(end_pfn, zone_end_pfn(zone));

I wouldn't be completely opposed to you just introducing zone_end_pfn()
and doing all the replacements in a single patch.  It would make it
somewhat easier to review, and it would also save the juggling you have
to do with this one.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 02/17] mmzone: add various zone_*() helper functions.
  2013-01-16  0:24 ` [PATCH 02/17] mmzone: add various zone_*() helper functions Cody P Schafer
@ 2013-01-16  1:19   ` Dave Hansen
  2013-01-16  1:20   ` Dave Hansen
  1 sibling, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2013-01-16  1:19 UTC (permalink / raw)
  To: Cody P Schafer
  Cc: Linux MM, LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

On 01/15/2013 04:24 PM, Cody P Schafer wrote:
> +static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
> +{
> +	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
> +}

This needs some parentheses, just for readability.  There's also no
crime in breaking it up to be multi-line if you want.

> +static inline bool zone_is_initialized(struct zone *zone)
> +{
> +	return !!zone->wait_table;
> +}
> +
> +static inline bool zone_is_empty(struct zone *zone)
> +{
> +	return zone->spanned_pages == 0;
> +}
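The helpers under discussion can be sketched stand-alone with the extra parentheses suggested above. struct zone here is trimmed to just the fields the helpers touch so the sketch compiles by itself; it is not the kernel layout:

```c
#include <stdbool.h>

struct zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
	void *wait_table;
};

/* Exclusive end: first pfn past the zone. */
static inline unsigned long zone_end_pfn(const struct zone *zone)
{
	return zone->zone_start_pfn + zone->spanned_pages;
}

static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
{
	return (zone->zone_start_pfn <= pfn) && (pfn < zone_end_pfn(zone));
}

static inline bool zone_is_initialized(const struct zone *zone)
{
	return !!zone->wait_table;
}

static inline bool zone_is_empty(const struct zone *zone)
{
	return zone->spanned_pages == 0;
}
```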




^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 02/17] mmzone: add various zone_*() helper functions.
  2013-01-16  0:24 ` [PATCH 02/17] mmzone: add various zone_*() helper functions Cody P Schafer
  2013-01-16  1:19   ` Dave Hansen
@ 2013-01-16  1:20   ` Dave Hansen
  1 sibling, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2013-01-16  1:20 UTC (permalink / raw)
  To: Cody P Schafer
  Cc: Linux MM, LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

On 01/15/2013 04:24 PM, Cody P Schafer wrote:
> +static inline bool zone_is_empty(struct zone *zone)
> +{
> +	return zone->spanned_pages == 0;
> +}

Why did you choose spanned_pages for this?


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 14/17] mm/memory_hotplug: factor out zone+pgdat growth.
  2013-01-16  0:24 ` [PATCH 14/17] mm/memory_hotplug: factor out zone+pgdat growth Cody P Schafer
@ 2013-01-16  1:27   ` Dave Hansen
  0 siblings, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2013-01-16  1:27 UTC (permalink / raw)
  To: Cody P Schafer; +Cc: Linux MM, LKML, Andrew Morton, Catalin Marinas

On 01/15/2013 04:24 PM, Cody P Schafer wrote:
> Create a new function grow_pgdat_and_zone() which handles locking +
> growth of a zone & the pgdat which it is associated with.

Why is this being factored out?  Will it be reused somewhere?


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 15/17] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries()
  2013-01-16  0:24 ` [PATCH 15/17] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries() Cody P Schafer
@ 2013-01-16  1:34   ` Dave Hansen
  0 siblings, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2013-01-16  1:34 UTC (permalink / raw)
  To: Cody P Schafer; +Cc: Linux MM, LKML, Andrew Morton, Catalin Marinas

On 01/15/2013 04:24 PM, Cody P Schafer wrote:
> Add a debug message which prints when a page is found outside of the
> boundaries of the zone it should belong to. Format is:
> 	"page $pfn outside zone [ $start_pfn - $end_pfn ]"

I'd make sure to say 'pfn' here, so that it's explicitly stated to be
a pfn and not a 'struct page'.

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f8ed277..f1783cf 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -239,13 +239,20 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>  	int ret = 0;
>  	unsigned seq;
>  	unsigned long pfn = page_to_pfn(page);
> +	unsigned long sp, start_pfn;

I think calling this zone_spanned is probably just fine.  Shouldn't take
up too much room.

>  	do {
>  		seq = zone_span_seqbegin(zone);
> +		start_pfn = zone->zone_start_pfn;
> +		sp = zone->spanned_pages;
>  		if (!zone_spans_pfn(zone, pfn))
>  			ret = 1;
>  	} while (zone_span_seqretry(zone, seq));
> 
> +	if (ret)
> +		pr_debug("page %lu outside zone [ %lu - %lu ]\n",
> +			pfn, start_pfn, start_pfn + sp);
> +
>  	return ret;
>  }

Is there a way we could also fit in something to disambiguate the zones?
 I can imagine a scenario where two zones might have identical
start/spanned_pages, so they might be impossible to tell apart in a
message like this.  Maybe we could add the NUMA node or the
DMA/Normal/Highmem text?


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 17/17] mm/compaction: use zone_end_pfn()
  2013-01-16  0:24 ` [PATCH 17/17] mm/compaction: use zone_end_pfn() Cody P Schafer
@ 2013-01-16  1:39   ` Dave Hansen
  0 siblings, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2013-01-16  1:39 UTC (permalink / raw)
  To: Cody P Schafer; +Cc: Linux MM, LKML, Andrew Morton, Catalin Marinas

On 01/15/2013 04:24 PM, Cody P Schafer wrote:
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 1b52528..ea66be3 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -85,7 +85,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
>  static void __reset_isolation_suitable(struct zone *zone)
>  {
>  	unsigned long start_pfn = zone->zone_start_pfn;
> -	unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;
> +	unsigned long end_pfn = zone_end_pfn(zone);
>  	unsigned long pfn;
> 
>  	zone->compact_cached_migrate_pfn = start_pfn;
> @@ -663,7 +663,7 @@ static void isolate_freepages(struct zone *zone,
>  	 */
>  	high_pfn = min(low_pfn, pfn);
> 
> -	z_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
> +	z_end_pfn = zone_end_pfn(zone);
> 
>  	/*
>  	 * Isolate free pages until enough are available to migrate the
> @@ -920,7 +920,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
>  {
>  	int ret;
>  	unsigned long start_pfn = zone->zone_start_pfn;
> -	unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;
> +	unsigned long end_pfn = zone_end_pfn(zone);
> 
>  	ret = compaction_suitable(zone, cc->order);
>  	switch (ret) {

I do think these are a _wee_ bit _too_ broken out.  In this case, it's
highly beneficial to just be able to look in the same email to make sure
that, "yeah, zone_end_pfn() is the same as the code that it replaces".
The fact that it was defined 15 patches ago makes it a bit harder to
review.  It's much nicer to review if there's _one_ patch that does the
"define this new function and do all of the replacements".

Anyway, the series looks good.  Feel free to add my:

Reviewed-by: Dave Hansen <dave@linux.vnet.ibm.com>


^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2013-01-16  1:39 UTC | newest]

Thread overview: 24+ messages
2013-01-16  0:24 [PATCH 00/17] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
2013-01-16  0:24 ` [PATCH 01/17] mm/compaction: rename var zone_end_pfn to avoid conflicts with new function Cody P Schafer
2013-01-16  1:08   ` Dave Hansen
2013-01-16  0:24 ` [PATCH 02/17] mmzone: add various zone_*() helper functions Cody P Schafer
2013-01-16  1:19   ` Dave Hansen
2013-01-16  1:20   ` Dave Hansen
2013-01-16  0:24 ` [PATCH 03/17] mm/page_alloc: add a VM_BUG in __free_one_page() if the zone is uninitialized Cody P Schafer
2013-01-16  0:24 ` [PATCH 04/17] mm: add helper ensure_zone_is_initialized() Cody P Schafer
2013-01-16  0:24 ` [PATCH 05/17] mm/memory_hotplug: use ensure_zone_is_initialized() Cody P Schafer
2013-01-16  0:24 ` [PATCH 06/17] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate Cody P Schafer
2013-01-16  0:24 ` [PATCH 07/17] mm/page_alloc: use zone_spans_pfn() instead of open coding Cody P Schafer
2013-01-16  0:24 ` [PATCH 08/17] mm/page_alloc: use zone_spans_pfn() instead of open coded checks Cody P Schafer
2013-01-16  0:24 ` [PATCH 09/17] mm/page_alloc: use zone_end_pfn() & zone_spans_pfn() Cody P Schafer
2013-01-16  0:24 ` [PATCH 10/17] mm/vmstat: use zone_end_pfn() instead of opencoding Cody P Schafer
2013-01-16  0:24 ` [PATCH 11/17] mm/kmemleak: use node_{start,end}_pfn() Cody P Schafer
2013-01-16  0:24 ` [PATCH 12/17] mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same Cody P Schafer
2013-01-16  0:24 ` [PATCH 13/17] mm: add SECTION_IN_PAGE_FLAGS Cody P Schafer
2013-01-16  0:24 ` [PATCH 14/17] mm/memory_hotplug: factor out zone+pgdat growth Cody P Schafer
2013-01-16  1:27   ` Dave Hansen
2013-01-16  0:24 ` [PATCH 15/17] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries() Cody P Schafer
2013-01-16  1:34   ` Dave Hansen
2013-01-16  0:24 ` [PATCH 16/17] mm/memory_hotplug: use zone_end_pfn() instead of open coding Cody P Schafer
2013-01-16  0:24 ` [PATCH 17/17] mm/compaction: use zone_end_pfn() Cody P Schafer
2013-01-16  1:39   ` Dave Hansen
