From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, "David S. Miller",
	Andreas Larsson, Alexander Potapenko, Andrew Morton, Brendan Jackman,
	Christoph Lameter, Dennis Zhou, Dmitry Vyukov,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe,
	Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com,
	kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds,
	linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver,
	Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song,
	netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy,
	Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev,
	Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: [PATCH RFC 01/35] mm: stop making SPARSEMEM_VMEMMAP user-selectable
Date: Thu, 21 Aug 2025 22:06:27 +0200
Message-ID: <20250821200701.1329277-2-david@redhat.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250821200701.1329277-1-david@redhat.com>
References: <20250821200701.1329277-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In an ideal world, we wouldn't have to deal with SPARSEMEM without
SPARSEMEM_VMEMMAP, but in particular for 32bit, SPARSEMEM_VMEMMAP is
considered too costly and is consequently not supported.

However, if an architecture does support SPARSEMEM with
SPARSEMEM_VMEMMAP, let's forbid the user from disabling VMEMMAP, just
like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is
supported, don't allow using SPARSEMEM without SPARSEMEM_VMEMMAP.
This implies that the option to not use SPARSEMEM_VMEMMAP will now be
gone for loongarch, powerpc, riscv and sparc. All architectures only
enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really
be a big downside to using the VMEMMAP (quite the contrary).

This is a preparation for not supporting (1) folio sizes that exceed a
single memory section and (2) CMA allocations of non-contiguous page
ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want
to limit the possible impact as much as possible (e.g., gigantic
hugetlb page allocations suddenly failing).

Cc: Huacai Chen
Cc: WANG Xuerui
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: "David S. Miller"
Cc: Andreas Larsson
Signed-off-by: David Hildenbrand
---
 mm/Kconfig | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 4108bcd967848..330d0e698ef96 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -439,9 +439,8 @@ config SPARSEMEM_VMEMMAP_ENABLE
 	bool
 
 config SPARSEMEM_VMEMMAP
-	bool "Sparse Memory virtual memmap"
+	def_bool y
 	depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
-	default y
 	help
 	  SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
 	  pfn_to_page and page_to_pfn operations.  This is the most
-- 
2.50.1
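
[Editorial note on the Kconfig wiring: an architecture advertises VMEMMAP
support by providing SPARSEMEM_VMEMMAP_ENABLE from its own Kconfig; with
this patch, SPARSEMEM_VMEMMAP then follows automatically whenever
SPARSEMEM is in use. A minimal sketch of an opted-in architecture, for
illustration only and not quoting any particular arch/*/Kconfig:

	# Hypothetical arch/<arch>/Kconfig fragment: the arch opts in to
	# VMEMMAP only for its 64bit configurations.
	config ARCH_SPARSEMEM_ENABLE
		def_bool y
		select SPARSEMEM_VMEMMAP_ENABLE if 64BIT

	# With this patch applied, mm/Kconfig then enables the former user
	# prompt unconditionally once the architecture has opted in:
	config SPARSEMEM_VMEMMAP
		def_bool y
		depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE

On such an architecture, CONFIG_SPARSEMEM_VMEMMAP can no longer be unset
via menuconfig; 32bit configurations, which do not select
SPARSEMEM_VMEMMAP_ENABLE, are unaffected.]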