From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, David Hildenbrand,
	Alistair Popple, Alex Shi, Danilo Krummrich, Dave Airlie, Jann Horn,
	Jason Gunthorpe, Jerome Glisse, John Hubbard, Jonathan Corbet,
	Karol Herbst, Liam Howlett, Lorenzo Stoakes, Lyude,
	"Masami Hiramatsu (Google)", Oleg Nesterov, Pasha Tatashin, Peter Xu,
	"Peter Zijlstra (Intel)", SeongJae Park, Simona Vetter,
	Vlastimil Babka, Yanteng Si, Barry Song, Andrew Morton, Sasha Levin
Subject: [PATCH 6.13 241/499] kernel/events/uprobes: handle device-exclusive entries correctly in __replace_page()
Date: Tue, 8 Apr 2025 12:47:33 +0200
Message-ID: <20250408104857.227376244@linuxfoundation.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250408104851.256868745@linuxfoundation.org>
References: <20250408104851.256868745@linuxfoundation.org>
User-Agent: quilt/0.68
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: David Hildenbrand

[ Upstream commit 096cbb80ab3fd85a9035ec17a1312c2a7db8bc8c ]

Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we
can return with a device-exclusive entry from page_vma_mapped_walk().
__replace_page() is not prepared for that, so teach it about these PFN
swap PTEs.
Note that device-private entries are so far not applicable on that path,
because GUP would never have returned such folios (conversion to
device-private happens by page migration, not in-place conversion of the
PTE).

There is a race between GUP and us locking the folio to look it up using
page_vma_mapped_walk(), so this is likely a fix (unless something else
could prevent that race, but it doesn't look like it). pte_pfn() on
something that is not a present pte could give us garbage, and we'd
wrongly mess up the mapcount because it was already adjusted by calling
folio_remove_rmap_pte() when making the entry device-exclusive.

Link: https://lkml.kernel.org/r/20250210193801.781278-9-david@redhat.com
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
Tested-by: Alistair Popple
Cc: Alex Shi
Cc: Danilo Krummrich
Cc: Dave Airlie
Cc: Jann Horn
Cc: Jason Gunthorpe
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Jonathan Corbet
Cc: Karol Herbst
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Lyude
Cc: "Masami Hiramatsu (Google)"
Cc: Oleg Nesterov
Cc: Pasha Tatashin
Cc: Peter Xu
Cc: Peter Zijlstra (Intel)
Cc: SeongJae Park
Cc: Simona Vetter
Cc: Vlastimil Babka
Cc: Yanteng Si
Cc: Barry Song
Signed-off-by: Andrew Morton
Signed-off-by: Sasha Levin
---
 kernel/events/uprobes.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 3c34761c9ae73..5e43a3e3f414f 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -173,6 +173,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
 	int err;
 	struct mmu_notifier_range range;
+	pte_t pte;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
 				addr + PAGE_SIZE);
@@ -192,6 +193,16 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	if (!page_vma_mapped_walk(&pvmw))
 		goto unlock;
 	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
+	pte = ptep_get(pvmw.pte);
+
+	/*
+	 * Handle PFN swap PTES, such as device-exclusive ones, that actually
+	 * map pages: simply trigger GUP again to fix it up.
+	 */
+	if (unlikely(!pte_present(pte))) {
+		page_vma_mapped_walk_done(&pvmw);
+		goto unlock;
+	}
 
 	if (new_page) {
 		folio_get(new_folio);
@@ -206,7 +217,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		inc_mm_counter(mm, MM_ANONPAGES);
 	}
 
-	flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte)));
+	flush_cache_page(vma, addr, pte_pfn(pte));
 	ptep_clear_flush(vma, addr, pvmw.pte);
 	if (new_page)
 		set_pte_at(mm, addr, pvmw.pte,
-- 
2.39.5
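
For readers who want the result at a glance, the patched region of
__replace_page() reads roughly as follows once the hunks above are applied.
This is just the post-patch code assembled from the context and added lines
shown in this mail, with surrounding code elided and two orientation comments
added; it is not a substitute for the patch itself.

	if (!page_vma_mapped_walk(&pvmw))
		goto unlock;
	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
	/* Read the PTE once; the same value feeds the pte_pfn() call below. */
	pte = ptep_get(pvmw.pte);

	/*
	 * Handle PFN swap PTES, such as device-exclusive ones, that actually
	 * map pages: simply trigger GUP again to fix it up.
	 */
	if (unlikely(!pte_present(pte))) {
		page_vma_mapped_walk_done(&pvmw);
		goto unlock;
	}
	...
	/* Only reached for a present PTE, so pte_pfn() cannot hand back garbage. */
	flush_cache_page(vma, addr, pte_pfn(pte));
	ptep_clear_flush(vma, addr, pvmw.pte);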