linux-mm.kvack.org archive mirror
From: Brian Foster <bfoster@redhat.com>
To: linux-mm@kvack.org
Cc: Matthew Wilcox <willy@infradead.org>,
	Oleksandr Natalenko <oleksandr@natalenko.name>
Subject: Re: [PATCH] mm/huge_memory: don't clear active swapcache entry from page->private
Date: Mon, 17 Oct 2022 12:14:52 -0400	[thread overview]
Message-ID: <Y01/fHiaoWv7Iq7W@bfoster> (raw)
In-Reply-To: <20220906190602.1626037-1-bfoster@redhat.com>

On Tue, Sep 06, 2022 at 03:06:02PM -0400, Brian Foster wrote:
> If a swap cache resident hugepage is passed into
> __split_huge_page(), the tail pages are incrementally split off and
> each offset in the swap cache covered by the hugepage is updated to
> point to the associated subpage instead of the original head page.
> As a final step, each subpage is individually passed to
> free_page_and_swap_cache() to free the associated swap cache entry
> and release the page. This eventually lands in
> delete_from_swap_cache(), which refers to page->private for the
> swp_entry_t, which in turn encodes the swap address space and page
> offset information.
> 
> The problem here is that the earlier call to
> __split_huge_page_tail() clears page->private of each tail page in
> the hugepage. This means that the swap entry passed to
> __delete_from_swap_cache() is zeroed, resulting in a bogus address
> space and offset tuple for the swapcache update. If DEBUG_VM is
> enabled, this results in a BUG() in the latter function upon
> detection of the old value in the swap address space not matching
> the page being removed.
> 
> The ramifications are less clear if DEBUG_VM is not enabled. In the
> particular stress-ng workload that reproduces this problem, this
> reliably occurs via MADV_PAGEOUT, which eventually triggers swap
> cache reclaim before the madvise() call returns. The swap cache
> reclaim sequence attempts to reuse the entry that should have been
> freed by the delete operation, but since that failed to correctly
> update the swap address space, swap cache reclaim attempts to look
> up the already freed page still stored at said offset and falls into
> a tight loop in find_get_page() -> __filemap_get_folio() due to
> repetitive folio_try_get_rcu() (reference count update) failures.
> This leads to a soft lockup BUG and never seems to recover.
> 
> To avoid this problem, update __split_huge_page_tail() to not clear
> page->private when the associated page has the swap cache flag set.
> Note that this flag is transferred to the tail page by the preceding
> ->flags update.
> 
> Fixes: b653db77350c7 ("mm: Clear page->private when splitting or migrating a page")
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
> 
> Original bug report is here [1]. I figure there's probably at least a
> couple different ways to fix this problem, but I started with what
> seemed most straightforward. Thoughts appreciated..
> 

Ping? I can still reproduce this on latest kernels as of last week or
so..

Brian

> Brian
> 
> [1] https://lore.kernel.org/linux-mm/YxDyZLfBdFHK1Y1P@bfoster/
> 
>  mm/huge_memory.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e9414ee57c5b..c2ddbb81a743 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2445,7 +2445,8 @@ static void __split_huge_page_tail(struct page *head, int tail,
>  			page_tail);
>  	page_tail->mapping = head->mapping;
>  	page_tail->index = head->index + tail;
> -	page_tail->private = 0;
> +	if (!PageSwapCache(page_tail))
> +		page_tail->private = 0;
>  
>  	/* Page flags must be visible before we make the page non-compound. */
>  	smp_wmb();
> -- 
> 2.37.1
> 
> 




Thread overview: 4+ messages
2022-09-06 19:06 [PATCH] mm/huge_memory: don't clear active swapcache entry from page->private Brian Foster
2022-10-17 16:14 ` Brian Foster [this message]
2022-10-18 13:39 ` Kirill A. Shutemov
2022-10-18 17:41   ` Brian Foster
