From: Nam Cao
To: Michael Neuling, patchwork-bot+linux-riscv@kernel.org
Cc: linux-riscv@lists.infradead.org, pjw@kernel.org, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, alex@ghiti.fr, ajones@ventanamicro.com,
 cleger@rivosinc.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/5] riscv: Cleanup and deduplicate unaligned access speed probe
In-Reply-To:
References: <177524102078.1406513.13713846519162509736.git-patchwork-notify@kernel.org>
Date: Tue, 07 Apr 2026 09:35:19 +0200
Message-ID: <87wlyj4408.fsf@yellow.woof>

Michael Neuling writes:
>> This series was applied to riscv/linux.git (for-next)
>> by Paul Walmsley :
>
>> Here is the summary with links:
>>   - [1/5] riscv: Clean up & optimize unaligned scalar access probe
>>     https://git.kernel.org/riscv/c/c202d70b2244
>
> I think this is causing a regression (SHA1 actually 6455c6c11827).
> Fast unaligned accesses are no longer being set ever.
>
> Analysis from Claude (Opus 4.6) with Chris Mason's kernel patch review
> skills:

I should start using these AIs..

> --
>
> diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
> index b36a6a56f4..1f4c128d73 100644
> --- a/arch/riscv/kernel/unaligned_access_speed.c
> +++ b/arch/riscv/kernel/unaligned_access_speed.c
>
> [ ... ]
>
> -arch_initcall(check_unaligned_access_all_cpus);
> +late_initcall(check_unaligned_access_all_cpus);
>
> With this change, check_unaligned_access_all_cpus() now runs at
> late_initcall (level 7), but lock_and_set_unaligned_access_static_branch()
> remains at arch_initcall_sync (level 3s):

...

> Does this mean fast_unaligned_access_speed_key is never enabled at boot,
> even on hardware with fast unaligned access? The comment in
> set_unaligned_access_static_branches() says "This will be called after
> check_unaligned_access_all_cpus", which is no longer true with this
> ordering change.

Thanks, you are indeed right. This affects do_csum()'s performance.

The patch below should resolve the issue. I will send a proper patch
later today after I have tested it on my hardware.

Nam

diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
index 485ab1d105d3..96ba80e6ea32 100644
--- a/arch/riscv/kernel/unaligned_access_speed.c
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -244,7 +244,7 @@ static int __init lock_and_set_unaligned_access_static_branch(void)
 	return 0;
 }
 
-arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
+late_initcall_sync(lock_and_set_unaligned_access_static_branch);
 
 static int riscv_online_cpu(unsigned int cpu)
 {