Date: Sun, 30 Nov 2025 23:37:18 +0000
From: Al Viro
To: Linus Torvalds
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Subject: Re: [RFC][alpha] saner vmalloc handling (was Re: [Bug report] hash_name() may cross page boundary and trigger sleep in RCU context)
Message-ID: <20251130233718.GY3538@ZenIV>
References: <20251126090505.3057219-1-wozizhi@huaweicloud.com> <20251126185545.GC3538@ZenIV> <20251129033728.GH3538@ZenIV> <20251130030146.GN3538@ZenIV>

On Sun, Nov 30, 2025 at 02:16:13PM -0800, Linus Torvalds wrote:
> On Sat, 29 Nov 2025 at 19:01, Al Viro wrote:
> >
> > + Default is 8Gb total and under normal circumstances, this is so
> > + far and above what is needed as to be laughable. However, there are
> > + certain applications (such as benchmark-grade in-kernel web serving)
> > + that can make use of as much vmalloc space as is available.
>
> I wonder if we even need the config variable?
>
> Because this reads like the whole feature exists due to the old 'tux'
> web server thing (from the early 2000's - long long gone, never merged
> upstream).
>
> So I'm not sure there are any actual real use-cases for tons of
> vmalloc space on alpha.
>
> Anyway, I see no real objections to the patch, only a "maybe it could
> be cut down even more".

FWIW, I'm trying to figure out what's going on with amd64 in that area;
we used to do allocate-on-demand until 2020, when Joerg went for "let's
preallocate them" and killed arch_sync_kernel_mappings(), which got
reverted soon after, only to be brought back once Joerg had fixed the
bug in preallocation.  It stayed that way until this August, when

commit 6659d027998083fbb6d42a165b0c90dc2e8ba989
Author: Harry Yoo
Date:   Mon Aug 18 11:02:06 2025 +0900

    x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()

happened, with reference to

commit 8d400913c231bd1da74067255816453f96cd35b0
Author: Oscar Salvador
Date:   Thu Apr 29 22:57:19 2021 -0700

    x86/vmemmap: handle unpopulated sub-pmd ranges

What I don't understand is how that manages to avoid the same race - on
#PF amd64 does not bother with the vmalloc_fault() logic.  The exact
same scenario with two vmalloc() calls on different CPUs would seem to
apply here as well...

Which callers of arch_sync_kernel_mappings() are involved?  If it's
anything in mm/vmalloc.c, I really don't see how that could be correct;
if it's about apply_to_page_range() and the calls never hit vmalloc
space, we might be OK, but it would be nice to have that described
somewhere...

Am I missing something obvious here?