From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, "David S. Miller",
	Andreas Larsson, Alexander Potapenko, Andrew Morton, Brendan Jackman,
	Christoph Lameter, Dennis Zhou, Dmitry Vyukov,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe,
	Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com,
	kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds,
	linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver,
	Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song,
	netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy,
	Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev,
	Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: [PATCH RFC 01/35] mm: stop making SPARSEMEM_VMEMMAP user-selectable
Date: Thu, 21 Aug 2025 22:06:27 +0200
Message-ID: <20250821200701.1329277-2-david@redhat.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250821200701.1329277-1-david@redhat.com>
References: <20250821200701.1329277-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In an ideal world, we wouldn't have to deal with SPARSEMEM without
SPARSEMEM_VMEMMAP, but in particular for 32bit, SPARSEMEM_VMEMMAP is
considered too costly and is consequently not supported.

However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP,
let's forbid the user from disabling VMEMMAP, just like we already do for
arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow
using SPARSEMEM without SPARSEMEM_VMEMMAP.
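As a minimal sketch of the resulting Kconfig behavior (the ILLUSTRATIVE_ARCH
symbol is hypothetical and not part of this patch; only the SPARSEMEM_VMEMMAP
block below mirrors the actual change in the diff):

	# An architecture advertises VMEMMAP support by selecting the
	# (real) opt-in symbol SPARSEMEM_VMEMMAP_ENABLE.
	config ILLUSTRATIVE_ARCH
		bool "Hypothetical 64-bit architecture"
		select SPARSEMEM_VMEMMAP_ENABLE

	# With this patch, SPARSEMEM_VMEMMAP no longer has a user prompt:
	# whenever SPARSEMEM and SPARSEMEM_VMEMMAP_ENABLE are set, it is
	# forced to y and cannot be disabled in the config.
	config SPARSEMEM_VMEMMAP
		def_bool y
		depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE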
This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone
for loongarch, powerpc, riscv and sparc. All architectures only enable
SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big
downside to using the VMEMMAP (quite the contrary).

This is a preparation for no longer supporting, in SPARSEMEM configs
without SPARSEMEM_VMEMMAP, (1) folio sizes that exceed a single memory
section and (2) CMA allocations of non-contiguous page ranges, whereby we
want to limit the possible impact as much as possible (e.g., gigantic
hugetlb page allocations suddenly failing).

Cc: Huacai Chen
Cc: WANG Xuerui
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: "David S. Miller"
Cc: Andreas Larsson
Signed-off-by: David Hildenbrand
---
 mm/Kconfig | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 4108bcd967848..330d0e698ef96 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -439,9 +439,8 @@ config SPARSEMEM_VMEMMAP_ENABLE
 	bool
 
 config SPARSEMEM_VMEMMAP
-	bool "Sparse Memory virtual memmap"
+	def_bool y
 	depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
-	default y
 	help
 	  SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
 	  pfn_to_page and page_to_pfn operations.  This is the most
-- 
2.50.1