From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id CE7E920F88
	for ; Fri, 21 Jul 2023 19:12:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 26C8FC433CB;
	Fri, 21 Jul 2023 19:12:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1689966744;
	bh=6GGXTQ9muik6pDc4jpUit5pJjAdyxFU9aGvn4wKDsE4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=2ngtlGf9e2msouO4V1kyWbHQu4CBmkLH2aINE3pRJqUc1KBIBAWk31xNyCCmIFx6t
	 kO6ZLwhuikacMwy+KPpMhzIKME5oJiJulFk/R35Rmwlrkiua0m5ToJB6VcWTowKxz4
	 CG3M1uj6IaOTJZ9hUvpJMlT1TdFK/PzBMXelbJOU=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman , patches@lists.linux.dev, Ryan Roberts ,
	Zi Yan , SeongJae Park , "Mike Rapoport (IBM)" , Christoph Hellwig ,
	"Kirill A. Shutemov" , Lorenzo Stoakes , "Matthew Wilcox (Oracle)" ,
	"Uladzislau Rezki (Sony)" , Yu Zhao , Andrew Morton
Subject: [PATCH 5.15 453/532] mm/damon/ops-common: atomically test and clear young on ptes and pmds
Date: Fri, 21 Jul 2023 18:05:57 +0200
Message-ID: <20230721160639.111033639@linuxfoundation.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230721160614.695323302@linuxfoundation.org>
References: <20230721160614.695323302@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Ryan Roberts

commit c11d34fa139e4b0fb4249a30f37b178353533fa1 upstream.

It is racy to non-atomically read a pte, then clear the young bit, then
write it back, as this could discard dirty information.  Further, it is
bad practice to directly set a pte entry within a table.  Instead,
clearing young must go through the arch-provided helper,
ptep_test_and_clear_young(), to ensure it is modified atomically and to
give the arch code visibility, allowing it to check (and potentially
modify) the operation.
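
To make the race concrete, here is a simplified sketch of the pattern
this patch removes; the interleaving point and the variable name are
illustrative only, not taken from the tree:

	pte_t entry = *pte;		/* 1: read pte: young=1, dirty=0 */
					/* 2: hardware writes to the page
					 *    and sets dirty=1 in the pte */
	*pte = pte_mkold(entry);	/* 3: stale value written back; the
					 *    dirty=1 from step 2 is lost */

Because step 3 stores a value that was read before step 2, a modified
page can end up looking clean and may never be written back.
ptep_test_and_clear_young() avoids this by clearing young with an
atomic read-modify-write (for example, a locked test-and-clear of the
accessed bit on x86), so concurrent hardware updates to other pte bits
survive.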

Link: https://lkml.kernel.org/r/20230602092949.545577-3-ryan.roberts@arm.com
Fixes: 3f49584b262c ("mm/damon: implement primitives for the virtual memory address spaces")
Signed-off-by: Ryan Roberts
Reviewed-by: Zi Yan
Reviewed-by: SeongJae Park
Reviewed-by: Mike Rapoport (IBM)
Cc: Christoph Hellwig
Cc: Kirill A. Shutemov
Cc: Lorenzo Stoakes
Cc: Matthew Wilcox (Oracle)
Cc: Uladzislau Rezki (Sony)
Cc: Yu Zhao
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: SeongJae Park
Signed-off-by: Greg Kroah-Hartman
---
 mm/damon/vaddr.c |   20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -393,7 +393,7 @@ static struct page *damon_get_page(unsig
 	return page;
 }
 
-static void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm,
+static void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma,
 		unsigned long addr)
 {
 	bool referenced = false;
@@ -402,13 +402,11 @@ static void damon_ptep_mkold(pte_t *pte,
 	if (!page)
 		return;
 
-	if (pte_young(*pte)) {
+	if (ptep_test_and_clear_young(vma, addr, pte))
 		referenced = true;
-		*pte = pte_mkold(*pte);
-	}
 
 #ifdef CONFIG_MMU_NOTIFIER
-	if (mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE))
+	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE))
 		referenced = true;
 #endif /* CONFIG_MMU_NOTIFIER */
 
@@ -419,7 +417,7 @@ static void damon_ptep_mkold(pte_t *pte,
 	put_page(page);
 }
 
-static void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm,
+static void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma,
 		unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -429,13 +427,11 @@ static void damon_pmdp_mkold(pmd_t *pmd,
 	if (!page)
 		return;
 
-	if (pmd_young(*pmd)) {
+	if (pmdp_test_and_clear_young(vma, addr, pmd))
 		referenced = true;
-		*pmd = pmd_mkold(*pmd);
-	}
 
 #ifdef CONFIG_MMU_NOTIFIER
-	if (mmu_notifier_clear_young(mm, addr,
+	if (mmu_notifier_clear_young(vma->vm_mm, addr,
 				addr + ((1UL) << HPAGE_PMD_SHIFT)))
 		referenced = true;
 #endif /* CONFIG_MMU_NOTIFIER */
@@ -462,7 +458,7 @@ static int damon_mkold_pmd_entry(pmd_t *
 		}
 
 		if (pmd_huge(*pmd)) {
-			damon_pmdp_mkold(pmd, walk->mm, addr);
+			damon_pmdp_mkold(pmd, walk->vma, addr);
 			spin_unlock(ptl);
 			return 0;
 		}
@@ -474,7 +470,7 @@ static int damon_mkold_pmd_entry(pmd_t *
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (!pte_present(*pte))
 		goto out;
-	damon_ptep_mkold(pte, walk->mm, addr);
+	damon_ptep_mkold(pte, walk->vma, addr);
 out:
 	pte_unmap_unlock(pte, ptl);
 	return 0;
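
For reference, the arch-provided helper that the patch switches to is
implemented on x86 roughly as below (paraphrased from memory of
arch/x86/mm/pgtable.c; exact details vary by kernel version):

int ptep_test_and_clear_young(struct vm_area_struct *vma,
			      unsigned long addr, pte_t *ptep)
{
	int ret = 0;

	/*
	 * Clear the accessed ("young") bit with an atomic
	 * test-and-clear so a concurrent hardware update of,
	 * e.g., the dirty bit cannot be lost.
	 */
	if (pte_young(*ptep))
		ret = test_and_clear_bit(_PAGE_BIT_ACCESSED,
					 (unsigned long *) &ptep->pte);

	return ret;
}

The single test_and_clear_bit() is a locked bit operation, so clearing
young cannot overwrite a dirty bit that the hardware sets concurrently,
and funnelling all young-clearing through this helper gives the
architecture the visibility described in the changelog above.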