From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:34420 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1754461AbdJIMtF (ORCPT );
	Mon, 9 Oct 2017 08:49:05 -0400
Subject: Patch "mm: avoid marking swap cached page as lazyfree" has been added to the 4.13-stable tree
To: shli@fb.com, akpm@linux-foundation.org, asavkov@redhat.com,
	gregkh@linuxfoundation.org, hannes@cmpxchg.org, hdanton@sina.com,
	hughd@google.com, mgorman@techsingularity.net, mhocko@suse.com,
	minchan@kernel.org, riel@redhat.com, torvalds@linux-foundation.org
Cc:
From:
Date: Mon, 09 Oct 2017 14:49:08 +0200
Message-ID: <1507553348112208@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID:

This is a note to let you know that I've just added the patch titled

    mm: avoid marking swap cached page as lazyfree

to the 4.13-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-avoid-marking-swap-cached-page-as-lazyfree.patch
and it can be found in the queue-4.13 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let me know about it.


>From 24c92eb7dce0a299b8e1a8c5fa585844a53bf7f0 Mon Sep 17 00:00:00 2001
From: Shaohua Li
Date: Tue, 3 Oct 2017 16:15:29 -0700
Subject: mm: avoid marking swap cached page as lazyfree

From: Shaohua Li

commit 24c92eb7dce0a299b8e1a8c5fa585844a53bf7f0 upstream.

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked).  There is no lock to prevent the page from being
added to the swap cache between these two steps by page reclaim.  Page
reclaim could add the page to the swap cache and unmap it.  After page
reclaim, the page is added back to the LRU.  At that time, we probably
start draining the per-cpu pagevec and mark the page lazyfree.
So the page could end up in a state with SwapBacked cleared and
PG_swapcache set.  On the next refault at that virtual address,
do_swap_page can find the page in the swap cache, but the page reports
PageSwapCache false because SwapBacked isn't set, so do_swap_page bails
out and does nothing.  The task keeps running into the fault handler.

Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Link: http://lkml.kernel.org/r/6537ef3814398c0073630b03f176263bc81f0902.1506446061.git.shli@fb.com
Signed-off-by: Shaohua Li
Reported-by: Artem Savkov
Tested-by: Artem Savkov
Reviewed-by: Rik van Riel
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
Acked-by: Minchan Kim
Cc: Hillf Danton
Cc: Hugh Dickins
Cc: Mel Gorman
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/swap.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/swap.c
+++ b/mm/swap.c
@@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page
 			    void *arg)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);

 		del_page_from_lru_list(page, lruvec,
@@ -665,7 +665,7 @@ void deactivate_file_page(struct page *p
 void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);

 		get_page(page);


Patches currently in stable-queue which might be from shli@fb.com are

queue-4.13/mm-hugetlb-soft_offline-save-compound-page-order-before-page-migration.patch
queue-4.13/mm-fix-data-corruption-caused-by-lazyfree-page.patch
queue-4.13/mm-avoid-marking-swap-cached-page-as-lazyfree.patch