From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 12 May 2026 17:05:54 -0400
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: "David Hildenbrand (Arm)", Jason Wang, Xuan Zhuo, Eugenio Pérez,
	Muchun Song, Oscar Salvador, Andrew Morton, Lorenzo Stoakes,
	"Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
	Zi Yan, Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, Hugh Dickins, Matthew Brost, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo,
	Axel Rasmussen, Yuanchu Xie, Wei Xu, Chris Li, Kairui Song,
	Kemeng Shi, Nhat Pham, Baoquan He, virtualization@lists.linux.dev,
	linux-mm@kvack.org, Andrea Arcangeli
Subject: [PATCH v7 09/31] mm: use folio_zero_user for user pages in post_alloc_hook
Message-ID: <45d1ea85b574399459a64fdba28fcf04abfa3e7e.1778616612.git.mst@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1

When post_alloc_hook() needs to zero a page for an explicit __GFP_ZERO
allocation for a user page (user_addr is set), use folio_zero_user()
instead of kernel_init_pages(). This zeros near the faulting address
last, keeping those cachelines hot for the impending user access.

folio_zero_user() is only used for explicit __GFP_ZERO, not for
init_on_alloc. On architectures with virtually-indexed caches (e.g.,
ARM), clear_user_highpage() performs per-line cache operations; using
it for init_on_alloc would add overhead that kernel_init_pages()
avoids (the page fault path flushes the cache at PTE installation time
regardless).

No functional change yet: current callers do not pass __GFP_ZERO for
user pages (they zero at the callsite instead). Subsequent patches
will convert them.

Signed-off-by: Michael S. Tsirkin
Assisted-by: Claude:claude-opus-4-6
---
 mm/page_alloc.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index db387dd6b813..76f39dd026ff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1861,9 +1861,20 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 		for (i = 0; i != 1 << order; ++i)
 			page_kasan_tag_reset(page + i);
 	}
-	/* If memory is still not initialized, initialize it now. */
-	if (init)
-		kernel_init_pages(page, 1 << order);
+	/*
+	 * If memory is still not initialized, initialize it now.
+	 * When __GFP_ZERO was explicitly requested and user_addr is set,
+	 * use folio_zero_user() which zeros near the faulting address
+	 * last, keeping those cachelines hot. For init_on_alloc, use
+	 * kernel_init_pages() to avoid unnecessary cache flush overhead
+	 * on architectures with virtually-indexed caches.
+	 */
+	if (init) {
+		if ((gfp_flags & __GFP_ZERO) && user_addr != USER_ADDR_NONE)
+			folio_zero_user(page_folio(page), user_addr);
+		else
+			kernel_init_pages(page, 1 << order);
+	}
 
 	set_page_owner(page, order, gfp_flags);
 	page_table_check_alloc(page, order);
-- 
MST