From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 00/36] mm: remove nth_page()
Date: Thu, 28 Aug 2025 00:01:04 +0200
Message-ID: <20250827220141.262669-1-david@redhat.com>

This is based on mm-unstable. I will only CC non-MM folks on the cover
letter and the respective patch to not flood too many inboxes (the
lists receive all patches).
--

As discussed recently with Linus, nth_page() is just nasty and we would
like to remove it.

To recap, the reason we currently need nth_page() within a folio is
because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP),
the memmap is allocated per memory section. While buddy allocations
cannot cross memory section boundaries, hugetlb and dax folios can.
So crossing a memory section means that "page++" could do the wrong
thing. Instead, nth_page() on these problematic configs always goes
from page->pfn, to then go from (++pfn)->page, which is rather nasty
(a rough sketch of the helper follows the patch overview below).
Likely, many people have no idea when nth_page() is required and when
it might be dropped.

We refer to such problematic PFN ranges as "non-contiguous pages".
If we only deal with "contiguous pages", there is no need for
nth_page().

Besides that "obvious" folio case, we might end up using nth_page()
within CMA allocations (again, could span memory sections), and in one
corner case (kfence) when processing memblock allocations (again,
could span memory sections).

So let's handle all that, add sanity checks, and remove nth_page().

Patch #1  -> #5  : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups
Patch #6  -> #13 : disallow folios to have non-contiguous pages
Patch #14 -> #20 : remove nth_page() usage within folios
Patch #21        : disallow CMA allocations of non-contiguous pages
Patch #22 -> #32 : sanity-check + remove nth_page() usage within SG entry
Patch #33        : sanity-check + remove nth_page() usage in
                   unpin_user_page_range_dirty_lock()
Patch #34        : remove nth_page() in kfence
Patch #35        : adjust stale comment regarding nth_page
Patch #36        : mm: remove nth_page()

A lot of this is inspired by the discussion at [1] between Linus,
Jason and me, so kudos to them.

[1] https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u
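To make the pointer-vs-PFN distinction concrete, this is roughly how
nth_page() is defined today (a sketch modeled on include/linux/mm.h as
of this base; the point of the series is to make the PFN detour in the
first variant unnecessary so the helper can go away entirely):

/*
 * Sketch of today's nth_page(): without SPARSEMEM_VMEMMAP the memmap
 * is allocated per memory section, so "page + n" may step out of the
 * current section's memmap and the helper must detour via the PFN.
 */
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#else
#define nth_page(page, n)	((page) + (n))
#endif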
RFC -> v1:
* "wireguard: selftests: remove CONFIG_SPARSEMEM_VMEMMAP=y from qemu
   kernel config"
 -> Mention that it was never really relevant for the test
* "mm/mm_init: make memmap_init_compound() look more like
   prep_compound_page()"
 -> Mention the setup of page links
* "mm: limit folio/compound page sizes in problematic kernel configs"
 -> Improve comment for PUD handling, mentioning hugetlb and dax
* "mm: simplify folio_page() and folio_page_idx()"
 -> Call variable "n" (a conceptual sketch follows below)
* "mm/hugetlb: cleanup hugetlb_folio_init_tail_vmemmap()"
 -> Keep __init_single_page() and refer to the usage of
    memblock_reserved_mark_noinit()
* "fs: hugetlbfs: cleanup folio in adjust_range_hwpoison()"
* "fs: hugetlbfs: remove nth_page() usage within folio in
   adjust_range_hwpoison()"
 -> Separate nth_page() removal from cleanups
 -> Further improve cleanups
* "io_uring/zcrx: remove nth_page() usage within folio"
 -> Keep the io_copy_cache for now and limit to nth_page() removal
* "mm/gup: drop nth_page() usage within folio when recording subpages"
 -> Clean up record_subpages a bit
* "mm/cma: refuse handing out non-contiguous page ranges"
 -> Replace another instance of "pfn_to_page(pfn)" where we already
    have the page
* "scatterlist: disallow non-contigous page ranges in a single SG entry"
 -> We have to EXPORT the symbol. I thought about moving it to
    mm_inline.h, but I really don't want to include that in
    include/linux/scatterlist.h
* "ata: libata-eh: drop nth_page() usage within SG entry"
* "mspro_block: drop nth_page() usage within SG entry"
* "memstick: drop nth_page() usage within SG entry"
* "mmc: drop nth_page() usage within SG entry"
 -> Keep PAGE_SHIFT
* "scsi: scsi_lib: drop nth_page() usage within SG entry"
* "scsi: sg: drop nth_page() usage within SG entry"
 -> Split patches, keep PAGE_SHIFT
* "crypto: remove nth_page() usage within SG entry"
 -> Keep PAGE_SHIFT
* "kfence: drop nth_page() usage"
 -> Keep modifying i and use "start_pfn" only instead
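For the in-folio case, the net effect can be sketched as follows. This
is illustrative only: the *_sketch names below are hypothetical and not
the exact code of the folio_page()/folio_page_idx() simplification; the
idea is just that once folios are guaranteed to only consist of
contiguous pages, indexing within a folio is plain pointer arithmetic
on every kernel config:

/*
 * Illustrative sketch (hypothetical *_sketch names): with folios
 * guaranteed to span only contiguous pages, no PFN detour is needed
 * when navigating within a folio.
 */
static inline struct page *folio_page_sketch(struct folio *folio,
					     unsigned long n)
{
	return &folio->page + n;
}

static inline unsigned long folio_page_idx_sketch(struct folio *folio,
						  struct page *page)
{
	return page - &folio->page;
}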
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Jason Gunthorpe
Cc: Lorenzo Stoakes
Cc: "Liam R. Howlett"
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Michal Hocko
Cc: Jens Axboe
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: John Hubbard
Cc: Peter Xu
Cc: Alexander Potapenko
Cc: Marco Elver
Cc: Dmitry Vyukov
Cc: Brendan Jackman
Cc: Johannes Weiner
Cc: Zi Yan
Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: Muchun Song
Cc: Oscar Salvador
Cc: x86@kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-ide@vger.kernel.org
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Cc: linux-mmc@vger.kernel.org
Cc: linux-arm-kernel@axis.com
Cc: linux-scsi@vger.kernel.org
Cc: kvm@vger.kernel.org
Cc: virtualization@lists.linux.dev
Cc: linux-mm@kvack.org
Cc: io-uring@vger.kernel.org
Cc: iommu@lists.linux.dev
Cc: kasan-dev@googlegroups.com
Cc: wireguard@lists.zx2c4.com
Cc: netdev@vger.kernel.org
Cc: linux-kselftest@vger.kernel.org
Cc: linux-riscv@lists.infradead.org

David Hildenbrand (36):
  mm: stop making SPARSEMEM_VMEMMAP user-selectable
  arm64: Kconfig: drop superfluous "select SPARSEMEM_VMEMMAP"
  s390/Kconfig: drop superfluous "select SPARSEMEM_VMEMMAP"
  x86/Kconfig: drop superfluous "select SPARSEMEM_VMEMMAP"
  wireguard: selftests: remove CONFIG_SPARSEMEM_VMEMMAP=y from qemu kernel config
  mm/page_alloc: reject unreasonable folio/compound page sizes in alloc_contig_range_noprof()
  mm/memremap: reject unreasonable folio/compound page sizes in memremap_pages()
  mm/hugetlb: check for unreasonable folio sizes when registering hstate
  mm/mm_init: make memmap_init_compound() look more like prep_compound_page()
  mm: sanity-check maximum folio size in folio_set_order()
  mm: limit folio/compound page sizes in problematic kernel configs
  mm: simplify folio_page() and folio_page_idx()
  mm/hugetlb: cleanup hugetlb_folio_init_tail_vmemmap()
  mm/percpu-km: drop nth_page() usage within single allocation
  fs: hugetlbfs: remove nth_page() usage within folio in adjust_range_hwpoison()
  fs: hugetlbfs: cleanup folio in adjust_range_hwpoison()
  mm/pagewalk: drop nth_page() usage within folio in folio_walk_start()
  mm/gup: drop nth_page() usage within folio when recording subpages
  io_uring/zcrx: remove nth_page() usage within folio
  mips: mm: convert __flush_dcache_pages() to __flush_dcache_folio_pages()
  mm/cma: refuse handing out non-contiguous page ranges
  dma-remap: drop nth_page() in dma_common_contiguous_remap()
  scatterlist: disallow non-contigous page ranges in a single SG entry
  ata: libata-eh: drop nth_page() usage within SG entry
  drm/i915/gem: drop nth_page() usage within SG entry
  mspro_block: drop nth_page() usage within SG entry
  memstick: drop nth_page() usage within SG entry
  mmc: drop nth_page() usage within SG entry
  scsi: scsi_lib: drop nth_page() usage within SG entry
  scsi: sg: drop nth_page() usage within SG entry
  vfio/pci: drop nth_page() usage within SG entry
  crypto: remove nth_page() usage within SG entry
  mm/gup: drop nth_page() usage in unpin_user_page_range_dirty_lock()
  kfence: drop nth_page() usage
  block: update comment of "struct bio_vec" regarding nth_page()
  mm: remove nth_page()

 arch/arm64/Kconfig                            |  1 -
 arch/mips/include/asm/cacheflush.h            | 11 +++--
 arch/mips/mm/cache.c                          |  8 ++--
 arch/s390/Kconfig                             |  1 -
 arch/x86/Kconfig                              |  1 -
 crypto/ahash.c                                |  4 +-
 crypto/scompress.c                            |  8 ++--
 drivers/ata/libata-sff.c                      |  6 +--
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  2 +-
 drivers/memstick/core/mspro_block.c           |  3 +-
 drivers/memstick/host/jmb38x_ms.c             |  3 +-
 drivers/memstick/host/tifm_ms.c               |  3 +-
 drivers/mmc/host/tifm_sd.c                    |  4 +-
 drivers/mmc/host/usdhi6rol0.c                 |  4 +-
 drivers/scsi/scsi_lib.c                       |  3 +-
 drivers/scsi/sg.c                             |  3 +-
 drivers/vfio/pci/pds/lm.c                     |  3 +-
 drivers/vfio/pci/virtio/migrate.c             |  3 +-
 fs/hugetlbfs/inode.c                          | 33 +++++--------
 include/crypto/scatterwalk.h                  |  4 +-
 include/linux/bvec.h                          |  7 +--
 include/linux/mm.h                            | 48 +++++++++++++++----
 include/linux/page-flags.h                    |  5 +-
 include/linux/scatterlist.h                   |  3 +-
 io_uring/zcrx.c                               |  4 +-
 kernel/dma/remap.c                            |  2 +-
 mm/Kconfig                                    |  3 +-
 mm/cma.c                                      | 39 +++++++++------
 mm/gup.c                                      | 14 ++++--
 mm/hugetlb.c                                  | 22 +++++----
 mm/internal.h                                 |  1 +
 mm/kfence/core.c                              | 12 +++--
 mm/memremap.c                                 |  3 ++
 mm/mm_init.c                                  | 15 +++---
 mm/page_alloc.c                               |  5 +-
 mm/pagewalk.c                                 |  2 +-
 mm/percpu-km.c                                |  2 +-
 mm/util.c                                     | 34 +++++++++++++
 tools/testing/scatterlist/linux/mm.h          |  1 -
 .../selftests/wireguard/qemu/kernel.config    |  1 -
 40 files changed, 202 insertions(+), 129 deletions(-)

base-commit: efa7612003b44c220551fd02466bfbad5180fc83
-- 
2.50.1