From: Marco Elver
Date: Thu, 28 Aug 2025 10:43:16 +0200
Subject: Re: [PATCH v1 34/36] kfence: drop nth_page() usage
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Brendan Jackman, Christoph Lameter, Dennis Zhou, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe, Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com, kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds, linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song, netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy, Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev, Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
In-Reply-To: <20250827220141.262669-35-david@redhat.com>
References: <20250827220141.262669-1-david@redhat.com> <20250827220141.262669-35-david@redhat.com>

On Thu, 28 Aug 2025 at 00:11, 'David Hildenbrand' via kasan-dev wrote:
>
> We want to get rid of nth_page(), and kfence init code is the last user.
>
> Unfortunately, we might actually walk a PFN range where the pages are
> not contiguous, because we might be allocating an area from memblock
> that could span memory sections in problematic kernel configs (SPARSEMEM
> without SPARSEMEM_VMEMMAP).
>
> We could check whether the page range is contiguous using
> page_range_contiguous() and fail kfence init, or make kfence
> incompatible with these problematic kernel configs.
>
> Let's keep it simple and just use pfn_to_page() by iterating PFNs.
>
> Cc: Alexander Potapenko
> Cc: Marco Elver
> Cc: Dmitry Vyukov
> Signed-off-by: David Hildenbrand

Reviewed-by: Marco Elver

Thanks.
> ---
>  mm/kfence/core.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 0ed3be100963a..727c20c94ac59 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -594,15 +594,14 @@ static void rcu_guarded_free(struct rcu_head *h)
>   */
>  static unsigned long kfence_init_pool(void)
>  {
> -	unsigned long addr;
> -	struct page *pages;
> +	unsigned long addr, start_pfn;
>  	int i;
>
>  	if (!arch_kfence_init_pool())
>  		return (unsigned long)__kfence_pool;
>
>  	addr = (unsigned long)__kfence_pool;
> -	pages = virt_to_page(__kfence_pool);
> +	start_pfn = PHYS_PFN(virt_to_phys(__kfence_pool));
>
>  	/*
>  	 * Set up object pages: they must have PGTY_slab set to avoid freeing
> @@ -613,11 +612,12 @@ static unsigned long kfence_init_pool(void)
>  	 * enters __slab_free() slow-path.
>  	 */
>  	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(nth_page(pages, i));
> +		struct slab *slab;
>
>  		if (!i || (i % 2))
>  			continue;
>
> +		slab = page_slab(pfn_to_page(start_pfn + i));
>  		__folio_set_slab(slab_folio(slab));
>  #ifdef CONFIG_MEMCG
>  		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
> @@ -665,10 +665,12 @@ static unsigned long kfence_init_pool(void)
>
>  reset_slab:
>  	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(nth_page(pages, i));
> +		struct slab *slab;
>
>  		if (!i || (i % 2))
>  			continue;
> +
> +		slab = page_slab(pfn_to_page(start_pfn + i));
>  #ifdef CONFIG_MEMCG
>  		slab->obj_exts = 0;
>  #endif
> --
> 2.50.1