* [Patch v2] mm/sparse: only sub-section aligned range would be populated
@ 2020-07-03  3:18 Wei Yang
From: Wei Yang @ 2020-07-03  3:18 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, david, Wei Yang

There are two code paths which invoke __populate_section_memmap():

  * sparse_init_nid()
  * sparse_add_section()

In both cases, the memory range is guaranteed to be sub-section aligned, as the sketch after this list illustrates:

  * sparse_init_nid() is passed PAGES_PER_SECTION, which is a whole
    multiple of PAGES_PER_SUBSECTION
  * the range is validated by check_pfn_span() before
    sparse_add_section() is called
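
A minimal user-space sketch of the arithmetic involved (assuming a typical
x86_64 configuration with 4K pages, so PAGES_PER_SUBSECTION is 512 and
PAGES_PER_SECTION is 32768; the constants below are illustrative, the real
ones live in include/linux/mmzone.h):

  #include <assert.h>

  /* Illustrative values for a 4K-page x86_64 build. */
  #define PAGE_SHIFT            12
  #define PAGES_PER_SUBSECTION  (1UL << (21 - PAGE_SHIFT))   /* 512   */
  #define PAGES_PER_SECTION     (1UL << (27 - PAGE_SHIFT))   /* 32768 */
  #define IS_ALIGNED(x, a)      (((x) & ((a) - 1)) == 0)

  int main(void)
  {
          /* A section is a whole number of sub-sections, so the
           * sparse_init_nid() path, which works in units of
           * PAGES_PER_SECTION, is sub-section aligned by construction. */
          assert(IS_ALIGNED(PAGES_PER_SECTION, PAGES_PER_SUBSECTION));

          /* Hypothetical section-sized range starting at the 4th section. */
          unsigned long pfn = 4 * PAGES_PER_SECTION;
          unsigned long nr_pages = PAGES_PER_SECTION;
          assert(IS_ALIGNED(pfn, PAGES_PER_SUBSECTION));
          assert(IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION));
          return 0;
  }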

Also, in the counterpart of __populate_section_memmap() we don't do any
such calculation or check, since the range has already been validated by
check_pfn_span() in __remove_pages().

Drop the alignment calculation to keep the code simple and consistent with
its counterpart, and warn once (and bail out) if an unaligned range is ever
passed in.
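
For reference, the rounding being removed did roughly the following in
user-space terms (a sketch only; ALIGN and PAGE_SUBSECTION_MASK are
open-coded here and PAGES_PER_SUBSECTION is assumed to be 512):

  #include <stdio.h>

  #define PAGES_PER_SUBSECTION  512UL                           /* assumed */
  #define PAGE_SUBSECTION_MASK  (~(PAGES_PER_SUBSECTION - 1))
  #define ALIGN(x, a)           (((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
          /* Hypothetical unaligned input. */
          unsigned long pfn = 100, nr_pages = 1000;

          /* Old behaviour: silently widen the range to sub-section
           * boundaries before populating the memmap. */
          unsigned long end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
          pfn &= PAGE_SUBSECTION_MASK;
          nr_pages = end - pfn;

          printf("pfn=%lu nr_pages=%lu\n", pfn, nr_pages); /* pfn=0 nr_pages=1536 */
          return 0;
  }

Since both callers already guarantee aligned input, this widening never
changes anything in practice, which is why it can be replaced by a check
that simply refuses unaligned ranges.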

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>

---
v2:
  * add a WARN_ON_ONCE() for unaligned ranges, as suggested by David
---
 mm/sparse-vmemmap.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0db7738d76e9..8d3a1b6287c5 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
 struct page * __meminit __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-	unsigned long start;
-	unsigned long end;
-
-	/*
-	 * The minimum granularity of memmap extensions is
-	 * PAGES_PER_SUBSECTION as allocations are tracked in the
-	 * 'subsection_map' bitmap of the section.
-	 */
-	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
-	pfn &= PAGE_SUBSECTION_MASK;
-	nr_pages = end - pfn;
-
-	start = (unsigned long) pfn_to_page(pfn);
-	end = start + nr_pages * sizeof(struct page);
+	unsigned long start = (unsigned long) pfn_to_page(pfn);
+	unsigned long end = start + nr_pages * sizeof(struct page);
+
+	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
+		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
+		return NULL;
 
 	if (vmemmap_populate(start, end, nid, altmap))
 		return NULL;
-- 
2.20.1 (Apple Git-117)


