Date: Tue, 26 Aug 2025 14:03:16 +0100
From: Alexandru Elisei
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Alexander Potapenko, Andrew Morton, Brendan Jackman, Christoph Lameter, Dennis Zhou, Dmitry Vyukov, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe, Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com, kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds, linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver, Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song, netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy, Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev, Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: Re: [PATCH RFC 21/35] mm/cma: refuse handing out non-contiguous page ranges
References: <20250821200701.1329277-1-david@redhat.com> <20250821200701.1329277-22-david@redhat.com>

Hi David,

On Tue, Aug 26, 2025 at 01:04:33PM +0200, David Hildenbrand wrote:
..
> > Just so I can better understand the problem being fixed, I guess you can have
> > two consecutive pfns with non-consecutive associated struct page if you have two
> > adjacent memory sections spanning the same physical memory region, is that
> > correct?
>
> Exactly. Essentially on SPARSEMEM without SPARSEMEM_VMEMMAP it is not
> guaranteed that
>
> 	pfn_to_page(pfn + 1) == pfn_to_page(pfn) + 1
>
> when we cross memory section boundaries.
>
> It can be the case for early boot memory if we allocated consecutive areas
> from memblock when allocating the memmap (struct pages) per memory section,
> but it's not guaranteed.
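If I turn that invariant into code, the check that can fail when crossing a
section boundary would look roughly like this (a sketch for discussion only;
the helper name is made up, while pfn_to_page() and struct page are the
kernel's own):

	static bool pages_are_memmap_contiguous(unsigned long start_pfn,
						unsigned long nr_pages)
	{
		struct page *expected = pfn_to_page(start_pfn);
		unsigned long i;

		for (i = 0; i < nr_pages; i++) {
			/*
			 * Holds within one memory section; once start_pfn + i
			 * crosses into the next section, that section's memmap
			 * may live at an unrelated virtual address.
			 */
			if (pfn_to_page(start_pfn + i) != expected + i)
				return false;
		}
		return true;
	}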
Thank you for the explanation, but I'm a bit confused by the last paragraph.
I think what you're saying is that we can also have the reverse problem,
where consecutive struct page pointers represent non-consecutive pfns,
because the memmap allocations happened to return consecutive virtual
addresses. Is that right? (The reverse check I have in mind is sketched
below.)

If that's correct, I don't think it applies to CMA, which hands out
physically contiguous memory. Or were you just explaining the other side of
the problem, and I'm overthinking it?

Thanks,
Alex
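P.S. The reverse check I mean, again as a rough sketch (the helper name is
made up; page_to_pfn() and struct page are the kernel's own):

	static bool pages_span_contiguous_pfns(struct page *page,
					       unsigned long nr_pages)
	{
		unsigned long first_pfn = page_to_pfn(page);
		unsigned long i;

		for (i = 0; i < nr_pages; i++) {
			/*
			 * Consecutive struct pages in the memmap do not have
			 * to describe consecutive pfns.
			 */
			if (page_to_pfn(page + i) != first_pfn + i)
				return false;
		}
		return true;
	}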