From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrea Arcangeli, Jann Horn,
 "Kirill A. Shutemov", Linus Torvalds
Subject: [PATCH 4.19 190/267] mm: thp: make the THP mapcount atomic against __split_huge_pmd_locked()
Date: Fri, 19 Jun 2020 16:32:55 +0200
Message-Id: <20200619141657.855794321@linuxfoundation.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200619141648.840376470@linuxfoundation.org>
References: <20200619141648.840376470@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Andrea Arcangeli

commit c444eb564fb16645c172d550359cb3d75fe8a040 upstream.

Write protect anon page faults require an accurate mapcount to decide
whether or not to break the COW. This is implemented in the THP path
with reuse_swap_page() ->
page_trans_huge_map_swapcount()/page_trans_huge_mapcount().

If the COW triggers while the other processes sharing the page are
under a huge pmd split, to do an accurate reading, we must ensure the
mapcount isn't computed while it's being transferred from the head
page to the tail pages.
reuse_swap_page() already runs serialized by the page lock, so it's
enough to add the page lock around __split_huge_pmd_locked too, in
order to add the missing serialization.

Note: the commit in "Fixes" is just to facilitate the backporting,
because the code before that commit didn't try to do an accurate THP
mapcount calculation and it instead used page_count() to decide
whether or not to COW. Both the page_count and the pin_count are
THP-wide refcounts, so they're inaccurate if used in
reuse_swap_page(). Reverting that commit (besides the unrelated fix to
the local anon_vma assignment) would also have opened the window for
memory corruption side effects in certain workloads, as documented in
that commit's header.

Signed-off-by: Andrea Arcangeli
Suggested-by: Jann Horn
Reported-by: Jann Horn
Acked-by: Kirill A. Shutemov
Fixes: 6d0a07edd17c ("mm: thp: calculate the mapcount correctly for THP pages during WP faults")
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/huge_memory.c |   31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2273,6 +2273,8 @@ void __split_huge_pmd(struct vm_area_str
 	spinlock_t *ptl;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long haddr = address & HPAGE_PMD_MASK;
+	bool was_locked = false;
+	pmd_t _pmd;
 
 	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE);
 	ptl = pmd_lock(mm, pmd);
@@ -2282,11 +2284,32 @@ void __split_huge_pmd(struct vm_area_str
 	 * pmd against. Otherwise we can end up replacing wrong page.
 	 */
 	VM_BUG_ON(freeze && !page);
-	if (page && page != pmd_page(*pmd))
-		goto out;
+	if (page) {
+		VM_WARN_ON_ONCE(!PageLocked(page));
+		was_locked = true;
+		if (page != pmd_page(*pmd))
+			goto out;
+	}
 
+repeat:
 	if (pmd_trans_huge(*pmd)) {
-		page = pmd_page(*pmd);
+		if (!page) {
+			page = pmd_page(*pmd);
+			if (unlikely(!trylock_page(page))) {
+				get_page(page);
+				_pmd = *pmd;
+				spin_unlock(ptl);
+				lock_page(page);
+				spin_lock(ptl);
+				if (unlikely(!pmd_same(*pmd, _pmd))) {
+					unlock_page(page);
+					put_page(page);
+					page = NULL;
+					goto repeat;
+				}
+				put_page(page);
+			}
+		}
 		if (PageMlocked(page))
 			clear_page_mlock(page);
 	} else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
@@ -2294,6 +2317,8 @@ void __split_huge_pmd(struct vm_area_str
 		__split_huge_pmd_locked(vma, pmd, haddr, freeze);
 out:
 	spin_unlock(ptl);
+	if (!was_locked && page)
+		unlock_page(page);
 	/*
 	 * No need to double call mmu_notifier->invalidate_range() callback.
 	 * They are 3 cases to consider inside __split_huge_pmd_locked():
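
For readers less familiar with the locking dance in the second hunk, below is
a minimal userspace sketch of the same pattern, not part of the patch: while
holding a spinlock we trylock a sleeping lock that nominally ranks above it;
on failure we snapshot the guarded state, drop the spinlock, take the
sleeping lock, retake the spinlock and revalidate, retrying if the state
changed. That is what the trylock_page()/pmd_same() loop above does, with the
get_page()/put_page() pinning omitted here. It assumes pthreads, and every
identifier in it is invented for illustration.

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t ptl;          /* stand-in for the pmd lock   */
static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
                                        /* stand-in for the page lock  */
static unsigned long generation;        /* stand-in for the pmd value  */

static void split_locked(void)
{
	/* stand-in for __split_huge_pmd_locked(): mutate the guarded state */
	generation++;
}

static void split(void)
{
	pthread_spin_lock(&ptl);
repeat:
	if (pthread_mutex_trylock(&page_lock) != 0) {
		/*
		 * Wrong lock order: snapshot the state (the kernel copies
		 * the pmd and pins the page), drop the spinlock, sleep on
		 * the mutex, retake the spinlock and recheck the snapshot.
		 */
		unsigned long snap = generation;

		pthread_spin_unlock(&ptl);
		pthread_mutex_lock(&page_lock);
		pthread_spin_lock(&ptl);
		if (snap != generation) {
			/* state changed while unlocked: retry from scratch */
			pthread_mutex_unlock(&page_lock);
			goto repeat;
		}
	}
	/* both locks held over a consistent state: safe to split */
	split_locked();
	pthread_mutex_unlock(&page_lock);
	pthread_spin_unlock(&ptl);
}

int main(void)
{
	pthread_spin_init(&ptl, PTHREAD_PROCESS_PRIVATE);
	split();
	printf("generation after split: %lu\n", generation);
	return 0;
}

The point of the revalidation step, here as in the patch, is that any state
observed before the spinlock was dropped may be stale by the time both locks
are finally held, so the split only proceeds once the check passes with both
locks taken in the correct order.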