From: Mike Rapoport <rppt@kernel.org>
To: Qian Cai <qcai@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Baoquan He <bhe@redhat.com>, David Hildenbrand <david@redhat.com>,
	Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@kernel.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	stable@vger.kernel.org, Stephen Rothwell <sfr@canb.auug.org.au>,
	Linux Next Mailing List <linux-next@vger.kernel.org>
Subject: Re: [PATCH v2 2/2] mm: fix initialization of struct page for holes in memory layout
Date: Mon, 11 Jan 2021 19:47:58 +0200	[thread overview]
Message-ID: <20210111174758.GE1106298@kernel.org> (raw)
In-Reply-To: <782e710eac32b1ab3bf9713bcd6afcbc9483e16c.camel@redhat.com>

On Mon, Jan 11, 2021 at 10:06:43AM -0500, Qian Cai wrote:
> On Sun, 2021-01-10 at 17:39 +0200, Mike Rapoport wrote:
> > On Wed, Jan 06, 2021 at 04:04:21PM -0500, Qian Cai wrote:
> > > On Wed, 2021-01-06 at 10:05 +0200, Mike Rapoport wrote:
> > > > I think we trigger PF_POISONED_CHECK() in PageSlab(), then
> > > > fffffffffffffffe is "accessed" from VM_BUG_ON_PAGE().
> > > > 
> > > > It seems to me that we are not initializing struct pages for holes at
> > > > the node boundaries because zones are already clamped to exclude those
> > > > holes.
> > > > 
> > > > Can you please try to see if the patch below will produce any useful info:
> > > 
> > > [    0.000000] init_unavailable_range: spfn: 8c, epfn: 9b, zone: DMA, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 1f7be, epfn: 1f9fe, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 28784, epfn: 288e4, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 298b9, epfn: 298bd, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 29923, epfn: 29931, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 29933, epfn: 29941, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 29945, epfn: 29946, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 29ff9, epfn: 2a823, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 33a23, epfn: 33a53, zone: DMA32, node: 0
> > > [    0.000000] init_unavailable_range: spfn: 78000, epfn: 100000, zone: DMA32, node: 0
> > > ...
> > > [  572.222563][ T2302] kpagecount_read: pfn 47f380 is poisoned
> > ...
> > > [  590.570032][ T2302] kpagecount_read: pfn 47ffff is poisoned
> > > [  604.268653][ T2302] kpagecount_read: pfn 87ff80 is poisoned
> > ...
> > > [  604.611698][ T2302] kpagecount_read: pfn 87ffbc is poisoned
> > > [  617.484205][ T2302] kpagecount_read: pfn c7ff80 is poisoned
> > ...
> > > [  618.212344][ T2302] kpagecount_read: pfn c7ffff is poisoned
> > > [  633.134228][ T2302] kpagecount_read: pfn 107ff80 is poisoned
> > ...
> > > [  633.874087][ T2302] kpagecount_read: pfn 107ffff is poisoned
> > > [  647.686412][ T2302] kpagecount_read: pfn 147ff80 is poisoned
> > ...
> > > [  648.425548][ T2302] kpagecount_read: pfn 147ffff is poisoned
> > > [  663.692630][ T2302] kpagecount_read: pfn 187ff80 is poisoned
> > ...
> > > [  664.432671][ T2302] kpagecount_read: pfn 187ffff is poisoned
> > > [  675.462757][ T2302] kpagecount_read: pfn 1c7ff80 is poisoned
> > ...
> > > [  676.202548][ T2302] kpagecount_read: pfn 1c7ffff is poisoned
> > > [  687.121605][ T2302] kpagecount_read: pfn 207ff80 is poisoned
> > ...
> > > [  687.860981][ T2302] kpagecount_read: pfn 207ffff is poisoned
> > 
> > The e820 map has a hole near the end of each node and these holes are not
> > initialized with init_unavailable_range() after it was interleaved with
> > memmap initialization because such holes are not accounted by
> > zone->spanned_pages.
> > 
> > Yet, I still cannot really understand how this never triggered
> > 
> > 	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
> > 
> > before v5.7 as all the struct pages for these holes would have zone=0 and
> > node=0 ... 
> > 
> > @Qian, can you please boot your system with memblock=debug and share the
> > logs?
> > 
> 
> http://people.redhat.com/qcai/memblock.txt

Thanks!

So, we have these large allocations for the memory maps:

memblock_alloc_exact_nid_raw: 266338304 bytes align=0x200000 nid=0 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x000000046f400000-0x000000047f1fffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=1 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x000000086fe00000-0x000000087fdfffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=2 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x0000000c6fe00000-0x0000000c7fdfffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=3 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x000000106fe00000-0x000000107fdfffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=4 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x000000146fe00000-0x000000147fdfffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=5 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x000000186fe00000-0x000000187fdfffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=6 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x0000001c6fe00000-0x0000001c7fdfffff] memblock_alloc_range_nid+0x108/0x1b6
memblock_alloc_exact_nid_raw: 268435456 bytes align=0x200000 nid=7 from=0x0000000001000000 max_addr=0x0000000000000000 sparse_init_nid+0x13b/0x519
memblock_reserve: [0x000000206fc00000-0x000000207fbfffff] memblock_alloc_range_nid+0x108/0x1b6

These allocations always end up right at the end of each node, so the last
several pageblocks of a node are never used by the page allocator.
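(For scale, assuming the usual 64-byte struct page: each of the 268435456-byte
allocations is 256 MiB of memmap, i.e. struct pages for 4M PFNs, or the memory
map of a 16 GiB node.)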
The reserved memmap blocks mask the wrong zone=0 links in the struct pages
corresponding to the holes near the end of each node, so the
VM_BUG_ON_PAGE(!zone_spans_pfn) never triggers.
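
For reference, the check that stays silent is roughly the following
(simplified from include/linux/mmzone.h). The struct pages in those holes
keep zone=0/node=0 links, so any such pfn reaching the allocator paths would
fall outside ZONE_DMA's span and trip the VM_BUG_ON_PAGE():

static inline unsigned long zone_end_pfn(const struct zone *zone)
{
	/* zone span as clamped during init; boundary holes fall outside it */
	return zone->zone_start_pfn + zone->spanned_pages;
}

static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
{
	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
}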

I'm going to send v3 soon that should take better care of the zone links.

--
Sincerely yours,
Mike.


Thread overview: 18+ messages
2020-12-09 21:43 [PATCH v2 0/2] mm: fix initialization of struct page for holes in memory layout Mike Rapoport
2020-12-09 21:43 ` [PATCH v2 1/2] mm: memblock: enforce overlap of memory.memblock and memory.reserved Mike Rapoport
2020-12-10  9:28   ` Greg KH
2020-12-14 10:11   ` David Hildenbrand
2020-12-14 11:12     ` Mike Rapoport
2020-12-14 11:18       ` David Hildenbrand
2020-12-14 13:58         ` Andrea Arcangeli
2020-12-09 21:43 ` [PATCH v2 2/2] mm: fix initialization of struct page for holes in memory layout Mike Rapoport
2020-12-10  1:51   ` Andrea Arcangeli
2020-12-10  9:29   ` Greg KH
2021-01-04 19:03   ` Qian Cai
2021-01-05  8:24     ` Mike Rapoport
2021-01-05 18:45       ` Qian Cai
2021-01-06  8:05         ` Mike Rapoport
2021-01-06 21:04           ` Qian Cai
2021-01-10 15:39             ` Mike Rapoport
2021-01-11 15:06               ` Qian Cai
2021-01-11 17:47                 ` Mike Rapoport [this message]
