From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Huacai Chen, WANG Xuerui, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, "David S. Miller",
	Andreas Larsson, Alexander Potapenko, Andrew Morton, Brendan Jackman,
	Christoph Lameter, Dennis Zhou, Dmitry Vyukov,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe,
	Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com,
	kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds,
	linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver,
	Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song,
	netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy,
	Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev,
	Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: [PATCH RFC 01/35] mm: stop making SPARSEMEM_VMEMMAP user-selectable
Date: Thu, 21 Aug 2025 22:06:27 +0200
Message-ID: <20250821200701.1329277-2-david@redhat.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250821200701.1329277-1-david@redhat.com>
References: <20250821200701.1329277-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In an ideal world, we wouldn't have to deal with SPARSEMEM without
SPARSEMEM_VMEMMAP at all, but on 32bit in particular SPARSEMEM_VMEMMAP is
considered too costly and is consequently not supported.

However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP,
let's forbid the user from disabling VMEMMAP, just like arm64, s390 and
x86 already do. In other words, if SPARSEMEM_VMEMMAP is supported, don't
allow SPARSEMEM to be used without SPARSEMEM_VMEMMAP.
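As a rough sketch of the pattern referenced above (not part of this
patch; the config entry name below is a placeholder, only the selected
symbols are the real ones touched here): an architecture opts in by
selecting SPARSEMEM_VMEMMAP_ENABLE, and with SPARSEMEM_VMEMMAP changed to
def_bool y it is then always enabled once its dependencies are met, with
no user prompt:

	# Illustrative arch-side Kconfig excerpt only; ARCH_FOO_SPARSEMEM_ENABLE
	# is a made-up name standing in for an architecture's sparsemem option.
	config ARCH_FOO_SPARSEMEM_ENABLE
		def_bool y
		# Advertise that this architecture supports a virtually mapped
		# memmap; mm/Kconfig's SPARSEMEM_VMEMMAP (now def_bool y) then
		# switches on automatically whenever SPARSEMEM is used.
		select SPARSEMEM_VMEMMAP_ENABLE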
This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone
for loongarch, powerpc, riscv and sparc. All architectures only enable
SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big
downside to using the VMEMMAP (quite the contrary).

This is a preparation for no longer supporting, in SPARSEMEM configs
without SPARSEMEM_VMEMMAP, (1) folio sizes that exceed a single memory
section and (2) CMA allocations of non-contiguous page ranges; we want to
limit the possible impact of that as much as possible (e.g., gigantic
hugetlb page allocations suddenly failing).

Cc: Huacai Chen
Cc: WANG Xuerui
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: "David S. Miller"
Cc: Andreas Larsson
Signed-off-by: David Hildenbrand
---
 mm/Kconfig | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 4108bcd967848..330d0e698ef96 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -439,9 +439,8 @@ config SPARSEMEM_VMEMMAP_ENABLE
 	bool
 
 config SPARSEMEM_VMEMMAP
-	bool "Sparse Memory virtual memmap"
+	def_bool y
 	depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
-	default y
 	help
 	  SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
 	  pfn_to_page and page_to_pfn operations.  This is the most
-- 
2.50.1