From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, David Hildenbrand, Alistair Popple, Alex Shi, Danilo Krummrich, Dave Airlie, Jann Horn, Jason Gunthorpe, Jerome Glisse, John Hubbard, Jonathan Corbet, Karol Herbst, Liam Howlett, Lorenzo Stoakes, Lyude, "Masami Hiramatsu (Google)", Oleg Nesterov, Pasha Tatashin, Peter Xu, "Peter Zijlstra (Intel)", SeongJae Park, Simona Vetter, Vlastimil Babka, Yanteng Si, Barry Song, Andrew Morton, Sasha Levin
Subject: [PATCH 6.12 196/423] kernel/events/uprobes: handle device-exclusive entries correctly in __replace_page()
Date: Tue, 8 Apr 2025 12:48:42 +0200
Message-ID: <20250408104850.300016619@linuxfoundation.org>
In-Reply-To: <20250408104845.675475678@linuxfoundation.org>
References: <20250408104845.675475678@linuxfoundation.org>
X-Mailer: git-send-email 2.49.0
User-Agent: quilt/0.68
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: David Hildenbrand

[ Upstream commit 096cbb80ab3fd85a9035ec17a1312c2a7db8bc8c ]

Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we can
return with a device-exclusive entry from page_vma_mapped_walk().

__replace_page() is not prepared for that, so teach it about these PFN swap
PTEs.
Note that device-private entries are so far not applicable on that path,
because GUP would never have returned such folios (conversion to
device-private happens by page migration, not in-place conversion of the
PTE).

There is a race between GUP and us locking the folio to look it up using
page_vma_mapped_walk(), so this is likely a fix (unless something else could
prevent that race, but it doesn't look like it).

pte_pfn() on something that is not a present pte could give us garbage, and
we'd wrongly mess up the mapcount because it was already adjusted by calling
folio_remove_rmap_pte() when making the entry device-exclusive.

Link: https://lkml.kernel.org/r/20250210193801.781278-9-david@redhat.com
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
Tested-by: Alistair Popple
Cc: Alex Shi
Cc: Danilo Krummrich
Cc: Dave Airlie
Cc: Jann Horn
Cc: Jason Gunthorpe
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Jonathan Corbet
Cc: Karol Herbst
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Lyude
Cc: "Masami Hiramatsu (Google)"
Cc: Oleg Nesterov
Cc: Pasha Tatashin
Cc: Peter Xu
Cc: Peter Zijlstra (Intel)
Cc: SeongJae Park
Cc: Simona Vetter
Cc: Vlastimil Babka
Cc: Yanteng Si
Cc: Barry Song
Signed-off-by: Andrew Morton
Signed-off-by: Sasha Levin
---
 kernel/events/uprobes.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 4fdc08ca0f3cb..b0909c3839fd9 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -167,6 +167,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
 	int err;
 	struct mmu_notifier_range range;
+	pte_t pte;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
 				addr + PAGE_SIZE);
@@ -186,6 +187,16 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	if (!page_vma_mapped_walk(&pvmw))
 		goto unlock;
 	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
+	pte = ptep_get(pvmw.pte);
+
+	/*
+	 * Handle PFN swap PTES, such as device-exclusive ones, that actually
+	 * map pages: simply trigger GUP again to fix it up.
+	 */
+	if (unlikely(!pte_present(pte))) {
+		page_vma_mapped_walk_done(&pvmw);
+		goto unlock;
+	}
 
 	if (new_page) {
 		folio_get(new_folio);
@@ -200,7 +211,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		inc_mm_counter(mm, MM_ANONPAGES);
 	}
 
-	flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte)));
+	flush_cache_page(vma, addr, pte_pfn(pte));
 	ptep_clear_flush(vma, addr, pvmw.pte);
 	if (new_page)
 		set_pte_at(mm, addr, pvmw.pte,
-- 
2.39.5
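
For readers less familiar with PFN swap PTEs, a minimal sketch of why
pte_pfn() must only be interpreted after the pte_present() check added above:
for a present PTE the hardware bits encode a PFN, while a device-exclusive
entry encodes a software swp_entry_t, so pte_pfn() on it returns garbage.
This is an illustrative snippet only, not code from the patch; the function
name sketch_pte_to_page() is hypothetical, while pte_present(), pte_pfn(),
pfn_to_page(), pte_to_swp_entry(), is_device_exclusive_entry() and
pfn_swap_entry_to_page() are existing kernel helpers.

/*
 * Illustrative sketch (hypothetical helper, simplified error handling):
 * how a present PTE and a device-exclusive PFN swap PTE are decoded.
 */
static struct page *sketch_pte_to_page(pte_t pte)
{
	if (pte_present(pte))
		return pfn_to_page(pte_pfn(pte));	/* PFN bits are valid */

	/* Non-present: the bits are a swp_entry_t, not a PFN. */
	if (is_device_exclusive_entry(pte_to_swp_entry(pte)))
		return pfn_swap_entry_to_page(pte_to_swp_entry(pte));

	return NULL;	/* other swap/migration entries: not handled here */
}

__replace_page() does not need to decode the entry at all: it simply bails
out for the non-present case and lets GUP fault the page back in on retry,
which is the simpler and sufficient fix taken by the hunk above.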