From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 15 May 2026 09:31:15 +0000 (UTC)
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References:
 <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
X-Mailer: aerc 0.21.0
Subject: Re: [PATCH v2 00/22] mm: Add __GFP_UNMAPPED
From: Brendan Jackman
To: Gregory Price, "Vlastimil Babka (SUSE)"
Cc: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
 Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes, Sumit Garg, Will Deacon, "Kalyazin, Nikita",
 "Itazuri, Takahiro", Andy Lutomirski, David Kaplan, Thomas Gleixner,
 Yosry Ahmed
Content-Type: text/plain; charset="UTF-8"

On Wed May 13, 2026 at 5:59 PM UTC, Gregory Price wrote:
> On Wed, May 13, 2026 at 07:38:01PM +0200, Vlastimil Babka (SUSE) wrote:
>> On 5/13/26 19:28, Gregory Price wrote:
>> >
>> > Hm. I'm not quite wrapping my head around the TLB issue fully.
>> >
>> > If there's no kernel direct mapping, and there's no userland mapping,
>> > the stale TLB entry comes from... the page formerly being present in
>> > the page tables and a stale TLB entry lying about after the page is
>> > freed?
>>
>> It's the direct mapping: we assume it's always there and unchanged, and
>> only the kernel can access the contents through it. So nobody flushes
>> it when freeing any pages. Userspace processes can't exploit anything
>> stale there, in the absence of kernel UAF bugs (or e.g. Meltdown-like
>> CPU bugs).
>>
>
> Ah, I follow.
>
> If everything is default-unmapped, then you don't have to worry about
> this issue - except when a stolen block is returned or an ephemeral
> mapping is unmapped after the operation.
>
> pivoting...
>
> On the GFP front, I wonder if you could factor out the core of
> alloc_frozen_pages_noprof() and add alloc_unmapped_pages_noprof()
> which adds (alloc_flags |= ALLOC_UNMAPPED) instead of adding
> __GFP_UNMAPPED.
>
> I have been considering something similar for __GFP_PRIVATE, but this
> has the added downside of increasing the surface of the buddy for each
> new narrow use case (in my case, private nodes; in this case, unmapped
> allocations).
>
> unless of course we nip that in the bud with something like
>
> struct page *
> alloc_pages_special(enum buddy_context ctxt, gfp_t gfp_mask, ...)
> {
>         switch (ctxt) {
>         ... internal-only details about how that case is handled ...
>         }
> }
>
> and just go ahead and allow the buddy to grow internally without adding
> new GFP flags or an infinite number of interfaces.

Yeah, this is what I'm thinking too. I don't think growing the
interface is such a big deal if we can put it in mm/internal.h. For
__GFP_UNMAPPED and ASI's equivalent, we would eventually want to expose
the functionality outside of mm/, but that doesn't mean we have to
directly expose the page allocator interface itself. Do you think it's
a similar story for __GFP_PRIVATE?

Anyway, my initial thought was a variant of alloc_pages() that lets you
directly specify alloc flags alongside/instead of GFP flags. That's
actually a bit fiddly, though, since the mapping from GFP flags to
alloc flags isn't a clean division. Maybe it should be?

> Of course that means users have to know the context in which they're
> being allocated. Right now you can kind of "transiently cheat" by
> passing a GFP flag through a bunch of interfaces, and that makes
> certain allocations reachable - but maybe we should not be encouraging
> that kind of design for these kinds of allocator extensions?

Hm, for __GFP_UNMAPPED (and __GFP_SENSITIVE in the future), it has
nothing to do with the allocation context.
It's really expressing something about the page, i.e.:

- __GFP_SENSITIVE means "we might put user data in this page"
- __GFP_UNMAPPED means "we might put user data in this page, and I know
  the kernel doesn't need to access it in the direct map"

So, for those cases, I think a GFP flag is actually conceptually
correct; the only reason I can see to avoid it is bitmap space.