From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 11 May 2026 11:55:40 -0400
From: "Michael S. Tsirkin"
To: Gregory Price
Cc: linux-kernel@vger.kernel.org, "David Hildenbrand (Arm)", Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Muchun Song, Oscar Salvador, Andrew Morton,
	Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
	Zi Yan, Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, Hugh Dickins, Matthew Brost, Joshua Hahn, Rakie Kim,
	Byungchul Park, Ying Huang, Alistair Popple, Christoph Lameter,
	David Rientjes, Roman Gushchin, Harry Yoo, Axel Rasmussen,
	Yuanchu Xie, Wei Xu, Chris Li, Kairui Song, Kemeng Shi, Nhat Pham,
	Baoquan He, virtualization@lists.linux.dev, linux-mm@kvack.org,
	Andrea Arcangeli, Hao Li
Subject: Re: [PATCH resend v6 03/30] mm: thread user_addr through page
	allocator for cache-friendly zeroing
Message-ID: <20260511114853-mutt-send-email-mst@kernel.org>
References: <9b53972f4854c1064b92cefc464f51949afeb83f.1778489843.git.mst@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Mon, May 11, 2026 at 11:37:37AM -0400, Gregory Price wrote:
> On Mon, May 11, 2026 at 05:01:55AM -0400, Michael S. Tsirkin wrote:
> > Thread a user virtual address from vma_alloc_folio() down through
> > the page allocator to post_alloc_hook(). This is plumbing
> > preparation for a subsequent patch that will use user_addr to
> > call folio_zero_user() for cache-friendly zeroing of user pages.
> >
> > The user_addr is stored in struct alloc_context and flows through:
> > vma_alloc_folio -> folio_alloc_mpol -> __alloc_pages_mpol ->
> > __alloc_frozen_pages -> get_page_from_freelist -> prep_new_page ->
> > post_alloc_hook
>
> This is the nitty-est of all nits, but when doing this can we please
> prefer stack style?
>
> 	vma_alloc_folio
> 	folio_alloc_mpol
> 	__alloc_pages_mpol
> 	__alloc_frozen_pages
> 	get_page_from_freelist
> 	prep_new_page
> 	post_alloc_hook
>
> Claude has a bad habit of writing changelog changes this way, and it's
> painful for a human to try to read.

Sure.

> > USER_ADDR_NONE ((unsigned long)-1) is used for non-user
> > allocations, since address 0 is a valid userspace mapping.
> >
> > +/*
> > + * Sentinel for user_addr: indicates a non-user allocation.
> > + * Cannot use 0 because address 0 is a valid userspace mapping.
> > + */
> > +#define USER_ADDR_NONE ((unsigned long)-1)
>
> Ehm, hm. Does -1 hold as a non-user address across all architectures?
>
> What about in linear addressing / no VM mode?

this is used on a fault. I don't think there are any faults then?
But maybe FAULT_ADDR_NONE would be clearer.

> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index 7ccbda35b9ad..ee35c5367abc 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -337,7 +337,7 @@ static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
> >  static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
> >  		struct mempolicy *mpol, pgoff_t ilx, int nid)
> >  {
> > -	return folio_alloc_noprof(gfp, order);
> > +	return __folio_alloc_noprof(gfp, order, numa_node_id(), NULL);
> >  }
> >  #endif
> >
>
> This change seems out of place unless i'm missing something.

Don't remember. Could be from a change I reverted. I'll look.

> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f24bf49be047..a999f3ead852 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1806,7 +1806,8 @@ struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio)
> >  }
> >
> >  static struct folio *alloc_buddy_frozen_folio(int order, gfp_t gfp_mask,
> > -		int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry)
> > +		int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry,
> > +		unsigned long addr)
>
> user_addr?
uaddr? ok

> > @@ -1823,7 +1824,7 @@ static struct folio *alloc_buddy_frozen_folio(int order, gfp_t gfp_mask,
> >  	if (alloc_try_hard)
> >  		gfp_mask |= __GFP_RETRY_MAYFAIL;
> >
> > -	folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask);
> > +	folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask, addr);
>
> Not on you, but the changes in hugetlb.c as a whole are :[
>
> We do all of this to pass USER_ADDR_NONE all over the place, but the
> alternative is having a separate function specifically for user-land
> bound allocations.
>
> So the trade off is:
> 	a) churn the current interface for everyone
> 	b) add a user_ variant and know people will just get it wrong

I was also explicitly asked not to proliferate too many new APIs.

> IIRC you said the consequence of a caller getting this wrong is subtle
> corruption, and this was related to cache flushing for the provided
> user address?

Yes.

> Stupid question: Does this not apply to kernel allocations as well? Or
> is it simply a matter of the cache having stale data that could leak,
> and therefore it's not a concern in-kernel?
>
> ~Gregory

Not a concern since we zero through the kernel address.

-- 
MST