From: Greg Kroah-Hartman
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Greg Kroah-Hartman, Andrew Morton, David Hildenbrand, Jason Gunthorpe, John Hubbard, Peter Xu
Subject: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
Date: Thu, 23 Apr 2026 16:28:04 +0200
Message-ID: <2026042303-vendor-outright-b9d2@gregkh>

The !CONFIG_MMU implementation of __get_user_pages_locked() takes a
bare get_page() reference for each page regardless of foll_flags:

	if (pages[i])
		get_page(pages[i]);

This is reached from pin_user_pages*() with FOLL_PIN set.
unpin_user_page() is shared between the MMU and NOMMU configurations
and unconditionally calls gup_put_folio(..., FOLL_PIN), which
subtracts GUP_PIN_COUNTING_BIAS (1024) from the folio refcount. So
pin adds 1, but unpin subtracts 1024.

If a user maps a page (refcount 1), registers it 1023 times as an
io_uring fixed buffer (1023 pin_user_pages() calls -> refcount 1024),
then unregisters: the first unpin_user_page() subtracts 1024, the
refcount hits 0, and the page is freed and returned to the buddy
allocator. The remaining 1022 unpins write into whatever was
reallocated, and the user's VMA still maps the freed page (NOMMU has
no MMU to invalidate it). Reallocating the page for an io_uring
pbuf_ring then lets userspace corrupt the new owner's data through
the stale mapping.
Use try_grab_folio(), which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN
and 1 for FOLL_GET, mirroring the CONFIG_MMU path so that pin and
unpin are symmetric.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Peter Xu
Reported-by: Anthropic
Assisted-by: gkh_clanker_t1000
Signed-off-by: Greg Kroah-Hartman
---
v2:
 - drop huge comment
 - rework error return value based on David's suggestion (heck,
   pretty much the full patch was written by him now)

Link to v1: https://lore.kernel.org/r/2026042334-acutely-unadorned-e05c@gregkh

 mm/gup.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ad9ded39609c..2f6f95a167af 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1983,6 +1983,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 	struct vm_area_struct *vma;
 	bool must_unlock = false;
 	vm_flags_t vm_flags;
+	int ret, err = -EFAULT;
 	long i;
 
 	if (!nr_pages)
@@ -2019,8 +2020,14 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 
 		if (pages) {
 			pages[i] = virt_to_page((void *)start);
-			if (pages[i])
-				get_page(pages[i]);
+			if (!pages[i])
+				break;
+			ret = try_grab_folio(page_folio(pages[i]), 1, foll_flags);
+			if (unlikely(ret)) {
+				pages[i] = NULL;
+				err = ret;
+				break;
+			}
 		}
 
 		start = (start + PAGE_SIZE) & PAGE_MASK;
@@ -2031,7 +2038,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		*locked = 0;
 	}
 
-	return i ? : -EFAULT;
+	return i ? : err;
 }
 #endif /* !CONFIG_MMU */
-- 
2.54.0