From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
 Greg Kroah-Hartman, patches@lists.linux.dev, Huang Ying, Will Deacon,
 Anshuman Khandual, Ryan Roberts, Gavin Shan, Ard Biesheuvel,
 "Matthew Wilcox (Oracle)", Yicong Yang, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Catalin Marinas, Sasha Levin
Subject: [PATCH 6.1 085/157] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
Date: Mon, 27 Oct 2025 19:35:46 +0100
Message-ID: <20251027183503.554043304@linuxfoundation.org>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251027183501.227243846@linuxfoundation.org>
References: <20251027183501.227243846@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.1-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Huang Ying

[ Upstream commit 143937ca51cc6ae2fccc61a1cb916abb24cd34f5 ]

The current pte_mkwrite_novma() makes a PTE dirty unconditionally.  This
can wrongly mark pages that are never written as dirty.  For example,
do_swap_page() may map exclusive pages with writable and clean PTEs if
the VMA is writable and the page fault is for read access.  However, the
current pte_mkwrite_novma() implementation always dirties the PTE.  This
can cause unnecessary disk writes if the pages are never written before
being reclaimed.

So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if the
PTE_DIRTY bit is set, making it possible for a PTE to be writable and
clean.

The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
Move PTE_RDONLY bit handling out of set_pte_at()").  Before that commit,
pte_mkwrite() only set the PTE_WRITE bit, while set_pte_at() cleared the
PTE_RDONLY bit only if both the PTE_WRITE and PTE_DIRTY bits were set.
To test the performance impact of the patch, on an arm64 server machine,
run 16 redis-server processes on socket 1 and 16 memtier_benchmark
processes on socket 0 with mostly get transactions (that is,
redis-server will mostly only read memory).  The memory footprint of
redis-server is larger than the available memory, so swap out/in will be
triggered.  Test results show that the patch can avoid most swapping out
because the pages are mostly clean, and the benchmark throughput
improves by ~23.9% in the test.

Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
Signed-off-by: Huang Ying
Cc: Will Deacon
Cc: Anshuman Khandual
Cc: Ryan Roberts
Cc: Gavin Shan
Cc: Ard Biesheuvel
Cc: Matthew Wilcox (Oracle)
Cc: Yicong Yang
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Catalin Marinas
Signed-off-by: Catalin Marinas
Signed-off-by: Sasha Levin
---
 arch/arm64/include/asm/pgtable.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 426c3cb3e3bb1..62326f249aa71 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -183,7 +183,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
 static inline pte_t pte_mkwrite(pte_t pte)
 {
 	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
-	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
+	if (pte_sw_dirty(pte))
+		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
 	return pte;
 }
-- 
2.51.0