From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 14 May 2026 14:00:31 -0400
From: "Michael S. Tsirkin"
To: Gregory Price
Cc: linux-kernel@vger.kernel.org, "David Hildenbrand (Arm)", Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Muchun Song, Oscar Salvador, Andrew Morton,
	Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
	Zi Yan, Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, Hugh Dickins, Matthew Brost, Joshua Hahn, Rakie Kim,
	Byungchul Park, Ying Huang, Alistair Popple, Christoph Lameter,
	David Rientjes, Roman Gushchin, Harry Yoo, Axel Rasmussen,
	Yuanchu Xie, Wei Xu, Chris Li, Kairui Song, Kemeng Shi, Nhat Pham,
	Baoquan He, virtualization@lists.linux.dev, linux-mm@kvack.org,
	Andrea Arcangeli
Subject: Re: [PATCH v7 09/31] mm: use folio_zero_user for user pages in post_alloc_hook
Message-ID: <20260514135214-mutt-send-email-mst@kernel.org>
References: <45d1ea85b574399459a64fdba28fcf04abfa3e7e.1778616612.git.mst@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Thu, May 14, 2026 at 09:49:33AM -0400, Gregory Price wrote:
> On Tue, May 12, 2026 at 05:05:54PM -0400, Michael S. Tsirkin wrote:
> > When post_alloc_hook() needs to zero a page for an explicit
> > __GFP_ZERO allocation for a user page (user_addr is set), use
> > folio_zero_user() instead of kernel_init_pages(). This zeros near
> > the faulting address last, keeping those cachelines hot for the
> > impending user access.
> >
> > folio_zero_user() is only used for explicit __GFP_ZERO, not for
> > init_on_alloc. On architectures with virtually-indexed caches
> > (e.g., ARM), clear_user_highpage() performs per-line cache
> > operations; using it for init_on_alloc would add overhead that
> > kernel_init_pages() avoids (the page fault path flushes the
> > cache at PTE installation time regardless).
> >
> > No functional change yet: current callers do not pass __GFP_ZERO
> > for user pages (they zero at the callsite instead). Subsequent
> > patches will convert them.
> >
> > Signed-off-by: Michael S. Tsirkin
> > Assisted-by: Claude:claude-opus-4-6
> > ---
> >  mm/page_alloc.c | 17 ++++++++++++++---
> >  1 file changed, 14 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index db387dd6b813..76f39dd026ff 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1861,9 +1861,20 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> >  		for (i = 0; i != 1 << order; ++i)
> >  			page_kasan_tag_reset(page + i);
> >  	}
> > -	/* If memory is still not initialized, initialize it now. */
> > -	if (init)
> > -		kernel_init_pages(page, 1 << order);
> > +	/*
> > +	 * If memory is still not initialized, initialize it now.
> > +	 * When __GFP_ZERO was explicitly requested and user_addr is set,
> > +	 * use folio_zero_user() which zeros near the faulting address
> > +	 * last, keeping those cachelines hot. For init_on_alloc, use
> > +	 * kernel_init_pages() to avoid unnecessary cache flush overhead
> > +	 * on architectures with virtually-indexed caches.
> > +	 */
> > +	if (init) {
> > +		if ((gfp_flags & __GFP_ZERO) && user_addr != USER_ADDR_NONE)
> > +			folio_zero_user(page_folio(page), user_addr);
> > +		else
> > +			kernel_init_pages(page, 1 << order);
> > +	}
>
> Open question but not necessarily in-scope:
>
> Should __GFP_ZERO just be implied if (user_addr != USER_ADDR_NONE)?

There are calls with no __GFP_ZERO but they do not allocate userspace pages.

- drm_pagemap.c: GFP_HIGHUSER -- no zero. But this is a DRM device page
  migration, the page content is preserved from the source.
- test_hmm.c: GFP_HIGHUSER_MOVABLE -- no zero. Test driver, pages get
  content from the device.
- mm/ksm.c: GFP_HIGHUSER_MOVABLE -- no zero. KSM merges identical pages,
  content comes from the source page (copy).
- mm/memory.c: new_folio = GFP_HIGHUSER_MOVABLE -- no zero. This is CoW,
  content is copied from the old page.
- mm/userfaultfd.c: GFP_HIGHUSER_MOVABLE -- no zero. Content comes from
  userspace via userfaultfd.
- arm64/fault.c: __GFP_ZEROTAGS, not __GFP_ZERO. MTE tag zeroing, not
  page zeroing. The page is zeroed separately.

> Putting aside how that's done without introducing another gfp flag
> (maybe something explicit like `alloc_pages_nozero(...)`), it seems
> like a very short jump to just adding __GFP_ZERO to any user-alloc by
> default.
>
> I'd be curious to know how many callers across the system omit
> __GFP_ZERO when allocating a user-page, and whether there might be
> scenarios where we subtly miss it (seems unlikely and narrow, but very
> possibly something a driver could do unintentionally).
>
> ~Gregory

I'd do this on top if possible.

-- 
MST