From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Greg Kroah-Hartman, Andrew Morton,
	David Hildenbrand, Jason Gunthorpe, John Hubbard, Peter Xu
Subject: [PATCH] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
Date: Thu, 23 Apr 2026 14:31:35 +0200
Message-ID: <2026042334-acutely-unadorned-e05c@gregkh>
X-Mailer: git-send-email 2.54.0

The !CONFIG_MMU implementation of __get_user_pages_locked() takes a
bare get_page() reference for each page regardless of foll_flags:

	if (pages[i])
		get_page(pages[i]);

This is reached from pin_user_pages*() with FOLL_PIN set.
unpin_user_page() is shared between the MMU and NOMMU configurations
and unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
GUP_PIN_COUNTING_BIAS (1024) from the folio refcount. This means that a
pin adds 1, but an unpin subtracts 1024.

If a user maps a page (refcount 1), registers it 1023 times as an
io_uring fixed buffer (1023 pin_user_pages() calls -> refcount 1024),
and then unregisters: the first unpin_user_page() subtracts 1024, the
refcount hits 0, and the page is freed and returned to the buddy
allocator. The remaining 1022 unpins write into whatever was
reallocated, and the user's VMA still maps the freed page (NOMMU has no
MMU to invalidate it). Reallocating the page for an io_uring pbuf_ring
then lets userspace corrupt the new owner's data through the stale
mapping.
Use try_grab_folio(), which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and
1 for FOLL_GET, mirroring the CONFIG_MMU path so that pin and unpin are
symmetric.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Peter Xu
Reported-by: Anthropic
Assisted-by: gkh_clanker_t1000
Signed-off-by: Greg Kroah-Hartman
---
My first foray into -mm, eeek!

Anyway, this was a crazy report sent to me, and I knocked up this
change. I have a reproducer if people need/want to see that as well
(it's for nommu systems, so be wary of it.)

If I should drop the huge comment, I'll be glad to respin, but I
thought it was good to try to document this somewhere, as it didn't
seem obvious, at least to me, what was going on...

thanks,

greg k-h

 mm/gup.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ad9ded39609c..c8744fb8a395 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2019,8 +2019,27 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 
 		if (pages) {
 			pages[i] = virt_to_page((void *)start);
-			if (pages[i])
-				get_page(pages[i]);
+			if (pages[i]) {
+				/*
+				 * pin_user_pages*() arrives here with FOLL_PIN
+				 * set; unpin_user_page() (which is not
+				 * !CONFIG_MMU-specific) calls
+				 * gup_put_folio(..., FOLL_PIN) which subtracts
+				 * GUP_PIN_COUNTING_BIAS (1024). A bare
+				 * get_page() here adds only 1, so 1023 pins on
+				 * a fresh page bring refcount to 1024 and a
+				 * single unpin then frees it out from under the
+				 * remaining 1022 pins and any live VMA
+				 * mappings. Use the same grab path as the MMU
+				 * implementation so pin and unpin are
+				 * symmetric.
+				 */
+				if (try_grab_folio(page_folio(pages[i]), 1,
+						   foll_flags)) {
+					pages[i] = NULL;
+					break;
+				}
+			}
 		}
 
 		start = (start + PAGE_SIZE) & PAGE_MASK;
-- 
2.54.0