From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Andrew Morton, Linus Torvalds, Jason Gunthorpe,
    Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
    Suren Baghdasaryan, Michal Hocko, Jens Axboe, Marek Szyprowski,
    Robin Murphy, John Hubbard, Peter Xu, Alexander Potapenko, Marco Elver,
    Dmitry Vyukov, Brendan Jackman, Johannes Weiner, Zi Yan, Dennis Zhou,
    Tejun Heo, Christoph Lameter, Muchun Song, Oscar Salvador,
    x86@kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-mips@vger.kernel.org, linux-s390@vger.kernel.org,
    linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
    intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-mmc@vger.kernel.org, linux-arm-kernel@axis.com,
    linux-scsi@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux.dev, linux-mm@kvack.org,
    io-uring@vger.kernel.org, iommu@lists.linux.dev,
    kasan-dev@googlegroups.com, wireguard@lists.zx2c4.com,
    netdev@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-riscv@lists.infradead.org, Albert Ou, Alexander Gordeev,
    Alexandre Ghiti, Alex Dubov, Alex Williamson, Andreas Larsson,
    Borislav Petkov, Brett Creeley, Catalin Marinas, Christian Borntraeger,
    Christophe Leroy, Damien Le Moal, Dave Hansen, David Airlie,
    "David S. Miller", Doug Gilbert, Heiko Carstens, Herbert Xu,
    Huacai Chen, Ingo Molnar, "James E.J. Bottomley", Jani Nikula,
    "Jason A. Donenfeld", Jason Gunthorpe, Jesper Nilsson, Joonas Lahtinen,
    Kevin Tian, Lars Persson, Madhavan Srinivasan, "Martin K. Petersen",
    Maxim Levitsky, Michael Ellerman, Nicholas Piggin, Niklas Cassel,
    Palmer Dabbelt, Paul Walmsley, Rodrigo Vivi, Shameer Kolothum,
    Shuah Khan, Simona Vetter, Sven Schnelle, Thomas Bogendoerfer,
    Thomas Gleixner, Tvrtko Ursulin, Ulf Hansson, Vasily Gorbik,
    WANG Xuerui, Will Deacon, Yishai Hadas
Subject: [PATCH RFC 00/35] mm: remove nth_page()
Date: Thu, 21 Aug 2025 22:06:26 +0200
Message-ID: <20250821200701.1329277-1-david@redhat.com>

This is based on mm-unstable and was cross-compiled heavily. I should
probably have dropped the RFC label already, but I first want to hear
whether I overlooked some corner case (SG entries?), and I need to do at
least a bit more testing.

I will only CC non-MM folks on the cover letter and on the respective
patch, so as not to flood too many inboxes (the lists receive all
patches).

---

As discussed recently with Linus, nth_page() is just nasty and we would
like to remove it.

To recap, the reason we currently need nth_page() within a folio is that
on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP) the memmap
is allocated per memory section. While buddy allocations cannot cross
memory section boundaries, hugetlb and dax folios can. So when crossing
a memory section, "page++" could do the wrong thing. Instead, nth_page()
on these problematic configs always goes from page->pfn, to then go from
(++pfn)->page, which is rather nasty (see the sketch below). Likely,
many people have no idea when nth_page() is required and when it might
be dropped.

We refer to such problematic PFN ranges as "non-contiguous pages". If we
only deal with "contiguous pages", there is no need for nth_page().

Besides that "obvious" folio case, we might end up using nth_page()
within CMA allocations (which, again, could span memory sections), and
in one corner case (kfence) when processing memblock allocations (again,
could span memory sections).

So let's handle all that, add sanity checks, and remove nth_page().

Patch #1  -> #5  : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups
Patch #6  -> #12 : disallow folios to have non-contiguous pages
Patch #13 -> #20 : remove nth_page() usage within folios
Patch #21        : disallow CMA allocations of non-contiguous pages
Patch #22 -> #31 : sanity-check + remove nth_page() usage within SG entry
Patch #32        : sanity-check + remove nth_page() usage in
                   unpin_user_page_range_dirty_lock()
Patch #33        : remove nth_page() in kfence
Patch #34        : adjust stale comment regarding nth_page
Patch #35        : mm: remove nth_page()

A lot of this is inspired by the discussion at [1] between Linus, Jason
and me, so kudos to them.

[1] https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u
Howlett" Cc: Vlastimil Babka Cc: Mike Rapoport Cc: Suren Baghdasaryan Cc: Michal Hocko Cc: Jens Axboe Cc: Marek Szyprowski Cc: Robin Murphy Cc: John Hubbard Cc: Peter Xu Cc: Alexander Potapenko Cc: Marco Elver Cc: Dmitry Vyukov Cc: Brendan Jackman Cc: Johannes Weiner Cc: Zi Yan Cc: Dennis Zhou Cc: Tejun Heo Cc: Christoph Lameter Cc: Muchun Song Cc: Oscar Salvador Cc: x86@kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-s390@vger.kernel.org Cc: linux-crypto@vger.kernel.org Cc: linux-ide@vger.kernel.org Cc: intel-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Cc: linux-mmc@vger.kernel.org Cc: linux-arm-kernel@axis.com Cc: linux-scsi@vger.kernel.org Cc: kvm@vger.kernel.org Cc: virtualization@lists.linux.dev Cc: linux-mm@kvack.org Cc: io-uring@vger.kernel.org Cc: iommu@lists.linux.dev Cc: kasan-dev@googlegroups.com Cc: wireguard@lists.zx2c4.com Cc: netdev@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: linux-riscv@lists.infradead.org David Hildenbrand (35): mm: stop making SPARSEMEM_VMEMMAP user-selectable arm64: Kconfig: drop superfluous "select SPARSEMEM_VMEMMAP" s390/Kconfig: drop superfluous "select SPARSEMEM_VMEMMAP" x86/Kconfig: drop superfluous "select SPARSEMEM_VMEMMAP" wireguard: selftests: remove CONFIG_SPARSEMEM_VMEMMAP=y from qemu kernel config mm/page_alloc: reject unreasonable folio/compound page sizes in alloc_contig_range_noprof() mm/memremap: reject unreasonable folio/compound page sizes in memremap_pages() mm/hugetlb: check for unreasonable folio sizes when registering hstate mm/mm_init: make memmap_init_compound() look more like prep_compound_page() mm/hugetlb: cleanup hugetlb_folio_init_tail_vmemmap() mm: sanity-check maximum folio size in folio_set_order() mm: limit folio/compound page sizes in problematic kernel configs mm: simplify folio_page() and folio_page_idx() mm/mm/percpu-km: drop nth_page() usage within single allocation fs: hugetlbfs: remove nth_page() usage within folio in adjust_range_hwpoison() mm/pagewalk: drop nth_page() usage within folio in folio_walk_start() mm/gup: drop nth_page() usage within folio when recording subpages io_uring/zcrx: remove "struct io_copy_cache" and one nth_page() usage io_uring/zcrx: remove nth_page() usage within folio mips: mm: convert __flush_dcache_pages() to __flush_dcache_folio_pages() mm/cma: refuse handing out non-contiguous page ranges dma-remap: drop nth_page() in dma_common_contiguous_remap() scatterlist: disallow non-contigous page ranges in a single SG entry ata: libata-eh: drop nth_page() usage within SG entry drm/i915/gem: drop nth_page() usage within SG entry mspro_block: drop nth_page() usage within SG entry memstick: drop nth_page() usage within SG entry mmc: drop nth_page() usage within SG entry scsi: core: drop nth_page() usage within SG entry vfio/pci: drop nth_page() usage within SG entry crypto: remove nth_page() usage within SG entry mm/gup: drop nth_page() usage in unpin_user_page_range_dirty_lock() kfence: drop nth_page() usage block: update comment of "struct bio_vec" regarding nth_page() mm: remove nth_page() arch/arm64/Kconfig | 1 - arch/mips/include/asm/cacheflush.h | 11 +++-- arch/mips/mm/cache.c | 8 ++-- arch/s390/Kconfig | 1 - arch/x86/Kconfig | 1 - crypto/ahash.c | 4 +- crypto/scompress.c | 8 ++-- drivers/ata/libata-sff.c | 6 +-- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 2 +- drivers/memstick/core/mspro_block.c | 3 +- drivers/memstick/host/jmb38x_ms.c | 3 +- drivers/memstick/host/tifm_ms.c | 3 +- drivers/mmc/host/tifm_sd.c | 4 +- 
 arch/arm64/Kconfig | 1 -
 arch/mips/include/asm/cacheflush.h | 11 +++--
 arch/mips/mm/cache.c | 8 ++--
 arch/s390/Kconfig | 1 -
 arch/x86/Kconfig | 1 -
 crypto/ahash.c | 4 +-
 crypto/scompress.c | 8 ++--
 drivers/ata/libata-sff.c | 6 +--
 drivers/gpu/drm/i915/gem/i915_gem_pages.c | 2 +-
 drivers/memstick/core/mspro_block.c | 3 +-
 drivers/memstick/host/jmb38x_ms.c | 3 +-
 drivers/memstick/host/tifm_ms.c | 3 +-
 drivers/mmc/host/tifm_sd.c | 4 +-
 drivers/mmc/host/usdhi6rol0.c | 4 +-
 drivers/scsi/scsi_lib.c | 3 +-
 drivers/scsi/sg.c | 3 +-
 drivers/vfio/pci/pds/lm.c | 3 +-
 drivers/vfio/pci/virtio/migrate.c | 3 +-
 fs/hugetlbfs/inode.c | 25 ++++------
 include/crypto/scatterwalk.h | 4 +-
 include/linux/bvec.h | 7 +--
 include/linux/mm.h | 48 +++++++++++++++----
 include/linux/page-flags.h | 5 +-
 include/linux/scatterlist.h | 4 +-
 io_uring/zcrx.c | 34 ++++---------
 kernel/dma/remap.c | 2 +-
 mm/Kconfig | 3 +-
 mm/cma.c | 36 +++++++++-----
 mm/gup.c | 13 +++--
 mm/hugetlb.c | 23 ++++-----
 mm/internal.h | 1 +
 mm/kfence/core.c | 17 ++++---
 mm/memremap.c | 3 ++
 mm/mm_init.c | 13 ++---
 mm/page_alloc.c | 5 +-
 mm/pagewalk.c | 2 +-
 mm/percpu-km.c | 2 +-
 mm/util.c | 33 +++++++++++++
 tools/testing/scatterlist/linux/mm.h | 1 -
 .../selftests/wireguard/qemu/kernel.config | 1 -
 40 files changed, 203 insertions(+), 150 deletions(-)

base-commit: c0e3b3f33ba7b767368de4afabaf7c1ddfdc3872
-- 
2.50.1