From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Dan Horák, Michael Ellerman, Sasha Levin
Subject: [PATCH 4.19 030/129] powerpc/64s/radix: Fix soft dirty tracking
Date: Mon, 28 Aug 2023 12:12:04 +0200
Message-ID: <20230828101154.204268374@linuxfoundation.org>
In-Reply-To: <20230828101153.030066927@linuxfoundation.org>
References: <20230828101153.030066927@linuxfoundation.org>

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Michael Ellerman

[ Upstream commit 66b2ca086210732954a7790d63d35542936fc664 ]

It was reported that soft dirty tracking doesn't work when using the
Radix MMU.

The tracking is supposed to work by clearing the soft dirty bit for a
mapping and then write protecting the PTE. If/when the page is written
to, a page fault occurs and the soft dirty bit is added back via
pte_mkdirty(). For example in wp_page_reuse():

	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
		update_mmu_cache(vma, vmf->address, vmf->pte);

Unfortunately on radix _PAGE_SOFTDIRTY is being dropped by
radix__ptep_set_access_flags(), called from ptep_set_access_flags(),
meaning the soft dirty bit is not set even though the page has been
written to.

Fix it by adding _PAGE_SOFTDIRTY to the set of bits that are able to be
changed in radix__ptep_set_access_flags().
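As background (not part of the patch itself), the soft dirty mechanism being
described is driven from userspace through /proc: writing "4" to
/proc/<pid>/clear_refs clears the soft-dirty bits and write-protects the PTEs,
and bit 55 of each /proc/<pid>/pagemap entry reports whether the page has been
written since. Below is a minimal sketch of that round trip; the file name and
output format are illustrative only, and on an affected radix kernel the
second check would be expected to still report the bit as clear.

/* soft_dirty_check.c - illustrative sketch only, not part of this patch. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static int page_soft_dirty(void *addr)
{
	uint64_t entry;
	long psize = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	/* One 64-bit entry per virtual page; bit 55 is the soft-dirty flag. */
	if (pread(fd, &entry, sizeof(entry),
		  ((uintptr_t)addr / psize) * sizeof(entry)) != sizeof(entry))
		entry = 0;
	close(fd);
	return (int)((entry >> 55) & 1);
}

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	char *page = mmap(NULL, psize, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int fd;

	if (page == MAP_FAILED)
		return 1;
	page[0] = 1;				/* fault the page in */

	/* Writing "4" clears soft-dirty bits and write-protects the PTEs. */
	fd = open("/proc/self/clear_refs", O_WRONLY);
	if (fd < 0)
		return 1;
	write(fd, "4", 1);
	close(fd);

	printf("after clear_refs: soft-dirty=%d\n", page_soft_dirty(page));
	page[0] = 2;				/* write fault -> wp_page_reuse() */
	printf("after write:      soft-dirty=%d\n", page_soft_dirty(page));
	return 0;
}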
Fixes: b0b5e9b13047 ("powerpc/mm/radix: Add radix pte #defines")
Cc: stable@vger.kernel.org # v4.7+
Reported-by: Dan Horák
Link: https://lore.kernel.org/r/20230511095558.56663a50f86bdc4cd97700b7@danny.cz
Signed-off-by: Michael Ellerman
Link: https://msgid.link/20230511114224.977423-1-mpe@ellerman.id.au
Signed-off-by: Sasha Levin
---
 arch/powerpc/mm/pgtable-radix.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 9ee235fca4278..75cbedaac5d26 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -1041,8 +1041,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
 				  pte_t entry, unsigned long address, int psize)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
-					      _PAGE_RW | _PAGE_EXEC);
+	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
+					      _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
 
 	unsigned long change = pte_val(entry) ^ pte_val(*ptep);
 	/*
-- 
2.40.1
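To make the effect of the mask change concrete, here is a small standalone
sketch of the masking logic. The bit values are placeholders, not the real
powerpc definitions; the point is only that with the old mask the soft-dirty
bit never survives into "set", so it is never written back into the PTE.

/* mask_demo.c - placeholder bit values, for illustration only. */
#include <stdio.h>

#define _PAGE_ACCESSED   0x01UL		/* invented values, not powerpc's */
#define _PAGE_DIRTY      0x02UL
#define _PAGE_RW         0x04UL
#define _PAGE_EXEC       0x08UL
#define _PAGE_SOFT_DIRTY 0x10UL

int main(void)
{
	/* entry as built by wp_page_reuse(): writable, dirty and soft-dirty */
	unsigned long entry = _PAGE_DIRTY | _PAGE_SOFT_DIRTY | _PAGE_RW;

	unsigned long old_mask = _PAGE_DIRTY | _PAGE_ACCESSED |
				 _PAGE_RW | _PAGE_EXEC;
	unsigned long new_mask = _PAGE_DIRTY | _PAGE_SOFT_DIRTY |
				 _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC;

	/* Before the fix, soft-dirty is filtered out of "set" and therefore
	 * never ORed back into the PTE by radix__ptep_set_access_flags(). */
	printf("old set: %#lx (soft-dirty %s)\n", entry & old_mask,
	       (entry & old_mask & _PAGE_SOFT_DIRTY) ? "kept" : "lost");
	printf("new set: %#lx (soft-dirty %s)\n", entry & new_mask,
	       (entry & new_mask & _PAGE_SOFT_DIRTY) ? "kept" : "lost");
	return 0;
}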