* [PATCH v2] riscv: hwprobe: Fix stale vDSO data for late-initialized keys at boot
@ 2025-05-22 7:33 Jingwei Wang
2025-05-23 1:40 ` Yanteng Si
2025-05-27 14:16 ` Jesse Taube
0 siblings, 2 replies; 3+ messages in thread
From: Jingwei Wang @ 2025-05-22 7:33 UTC (permalink / raw)
To: linux-riscv
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Andrew Jones, Conor Dooley, Clément Léger,
Charlie Jenkins, Jesse Taube, Yixun Lan, Tsukasa OI, stable,
Jingwei Wang
The riscv_hwprobe vDSO data is populated by init_hwprobe_vdso_data(),
an arch_initcall_sync. However, underlying data for some keys, like
RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF, is determined asynchronously.
Specifically, the per_cpu(vector_misaligned_access, cpu) values are set
by the vec_check_unaligned_access_speed_all_cpus kthread. This kthread
is spawned by an earlier arch_initcall (check_unaligned_access_all_cpus)
and may complete its benchmark *after* init_hwprobe_vdso_data() has
already populated the vDSO with default/stale values.
So, refresh the vDSO data for the affected keys (e.g.,
MISALIGNED_VECTOR_PERF), ensuring they reflect the final boot-time values.
Tested by comparing vDSO and syscall results for the affected keys
(e.g., MISALIGNED_VECTOR_PERF); they now match their final
boot-time values.
Reported-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Closes: https://lore.kernel.org/linux-riscv/760d637b-b13b-4518-b6bf-883d55d44e7f@irq.a4lg.com/
Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
Cc: stable@vger.kernel.org
Signed-off-by: Jingwei Wang <wangjingwei@iscas.ac.cn>
---
Changes in v2:
- Addressed feedback from Yixun regarding #ifdef CONFIG_MMU usage.
- Updated commit message to provide a high-level summary.
- Added Fixes tag for commit e7c9d66e313b.
v1: https://lore.kernel.org/linux-riscv/20250521052754.185231-1-wangjingwei@iscas.ac.cn/T/#u
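
For reference, a minimal user-space check along the lines described above is
sketched below. It is an illustration only, not part of the patch; it assumes
a glibc that provides the __riscv_hwprobe() wrapper (routed through the vDSO
when available) and kernel UAPI headers that define __NR_riscv_hwprobe and the
hwprobe key constants.

/*
 * Hypothetical test sketch (not part of this patch): compare the
 * vDSO-backed hwprobe result against the raw syscall for
 * RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF. Before this fix, the two
 * could disagree after boot.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/hwprobe.h>	/* __riscv_hwprobe(), struct riscv_hwprobe (assumed glibc support) */
#include <asm/hwprobe.h>	/* RISCV_HWPROBE_KEY_* */

int main(void)
{
	struct riscv_hwprobe vdso_pair = {
		.key = RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF,
	};
	struct riscv_hwprobe sys_pair = vdso_pair;

	/* glibc wrapper: takes the vDSO fast path when the key is cached there. */
	if (__riscv_hwprobe(&vdso_pair, 1, 0, NULL, 0))
		return 1;

	/* Raw syscall: always asks the kernel directly. */
	if (syscall(__NR_riscv_hwprobe, &sys_pair, 1, 0, NULL, 0))
		return 1;

	printf("vdso=%lld syscall=%lld -> %s\n",
	       (long long)vdso_pair.value, (long long)sys_pair.value,
	       vdso_pair.value == sys_pair.value ? "match" : "MISMATCH");
	return 0;
}
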
arch/riscv/include/asm/hwprobe.h | 6 ++++++
arch/riscv/kernel/sys_hwprobe.c | 16 ++++++++++++++++
arch/riscv/kernel/unaligned_access_speed.c | 2 +-
3 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/hwprobe.h b/arch/riscv/include/asm/hwprobe.h
index 1f690fea0e03de6a..58dc847d86c7f2b0 100644
--- a/arch/riscv/include/asm/hwprobe.h
+++ b/arch/riscv/include/asm/hwprobe.h
@@ -40,4 +40,10 @@ static inline bool riscv_hwprobe_pair_cmp(struct riscv_hwprobe *pair,
return pair->value == other_pair->value;
}
+#ifdef CONFIG_MMU
+void riscv_hwprobe_vdso_sync(__s64 sync_key);
+#else
+static inline void riscv_hwprobe_vdso_sync(__s64 sync_key) { };
+#endif /* CONFIG_MMU */
+
#endif
diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index 249aec8594a92a80..2e3e612b7ac6fd57 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -17,6 +17,7 @@
#include <asm/vector.h>
#include <asm/vendor_extensions/thead_hwprobe.h>
#include <vdso/vsyscall.h>
+#include <vdso/datapage.h>
static void hwprobe_arch_id(struct riscv_hwprobe *pair,
@@ -500,6 +501,21 @@ static int __init init_hwprobe_vdso_data(void)
arch_initcall_sync(init_hwprobe_vdso_data);
+void riscv_hwprobe_vdso_sync(__s64 sync_key)
+{
+ struct vdso_arch_data *avd = vdso_k_arch_data;
+ struct riscv_hwprobe pair;
+
+ pair.key = sync_key;
+ hwprobe_one_pair(&pair, cpu_online_mask);
+ /*
+ * Update vDSO data for the given key.
+ * Currently for non-ID key updates (e.g. MISALIGNED_VECTOR_PERF),
+ * so 'homogeneous_cpus' is not re-evaluated here.
+ */
+ avd->all_cpu_hwprobe_values[sync_key] = pair.value;
+}
+
#endif /* CONFIG_MMU */
SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs,
diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
index 585d2dcf2dab1ccb..81bc4997350acc87 100644
--- a/arch/riscv/kernel/unaligned_access_speed.c
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -375,7 +375,7 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
{
schedule_on_each_cpu(check_vector_unaligned_access);
-
+ riscv_hwprobe_vdso_sync(RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF);
return 0;
}
#else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
--
2.49.0
* Re: [PATCH v2] riscv: hwprobe: Fix stale vDSO data for late-initialized keys at boot
2025-05-22 7:33 [PATCH v2] riscv: hwprobe: Fix stale vDSO data for late-initialized keys at boot Jingwei Wang
@ 2025-05-23 1:40 ` Yanteng Si
2025-05-27 14:16 ` Jesse Taube
1 sibling, 0 replies; 3+ messages in thread
From: Yanteng Si @ 2025-05-23 1:40 UTC (permalink / raw)
To: Jingwei Wang, linux-riscv
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Andrew Jones, Conor Dooley, Clément Léger,
Charlie Jenkins, Jesse Taube, Yixun Lan, Tsukasa OI, stable
On 5/22/25 3:33 PM, Jingwei Wang wrote:
> The riscv_hwprobe vDSO data is populated by init_hwprobe_vdso_data(),
> an arch_initcall_sync. However, underlying data for some keys, like
> RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF, is determined asynchronously.
>
> Specifically, the per_cpu(vector_misaligned_access, cpu) values are set
> by the vec_check_unaligned_access_speed_all_cpus kthread. This kthread
> is spawned by an earlier arch_initcall (check_unaligned_access_all_cpus)
> and may complete its benchmark *after* init_hwprobe_vdso_data() has
> already populated the vDSO with default/stale values.
>
> So, refresh the vDSO data for the affected keys (e.g.,
> MISALIGNED_VECTOR_PERF), ensuring they reflect the final boot-time values.
>
> Tested by comparing vDSO and syscall results for the affected keys
> (e.g., MISALIGNED_VECTOR_PERF); they now match their final
> boot-time values.
>
> Reported-by: Tsukasa OI <research_trasio@irq.a4lg.com>
> Closes: https://lore.kernel.org/linux-riscv/760d637b-b13b-4518-b6bf-883d55d44e7f@irq.a4lg.com/
> Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jingwei Wang <wangjingwei@iscas.ac.cn>
> ---
> Changes in v2:
> - Addressed feedback from Yixun regarding #ifdef CONFIG_MMU usage.
> - Updated commit message to provide a high-level summary.
> - Added Fixes tag for commit e7c9d66e313b.
>
> v1: https://lore.kernel.org/linux-riscv/20250521052754.185231-1-wangjingwei@iscas.ac.cn/T/#u
>
> arch/riscv/include/asm/hwprobe.h | 6 ++++++
> arch/riscv/kernel/sys_hwprobe.c | 16 ++++++++++++++++
> arch/riscv/kernel/unaligned_access_speed.c | 2 +-
> 3 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/hwprobe.h b/arch/riscv/include/asm/hwprobe.h
> index 1f690fea0e03de6a..58dc847d86c7f2b0 100644
> --- a/arch/riscv/include/asm/hwprobe.h
> +++ b/arch/riscv/include/asm/hwprobe.h
> @@ -40,4 +40,10 @@ static inline bool riscv_hwprobe_pair_cmp(struct riscv_hwprobe *pair,
> return pair->value == other_pair->value;
> }
>
> +#ifdef CONFIG_MMU
> +void riscv_hwprobe_vdso_sync(__s64 sync_key);
> +#else
> +static inline void riscv_hwprobe_vdso_sync(__s64 sync_key) { };
> +#endif /* CONFIG_MMU */
> +
> #endif
> diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
> index 249aec8594a92a80..2e3e612b7ac6fd57 100644
> --- a/arch/riscv/kernel/sys_hwprobe.c
> +++ b/arch/riscv/kernel/sys_hwprobe.c
> @@ -17,6 +17,7 @@
> #include <asm/vector.h>
> #include <asm/vendor_extensions/thead_hwprobe.h>
> #include <vdso/vsyscall.h>
> +#include <vdso/datapage.h>
>
>
> static void hwprobe_arch_id(struct riscv_hwprobe *pair,
> @@ -500,6 +501,21 @@ static int __init init_hwprobe_vdso_data(void)
>
> arch_initcall_sync(init_hwprobe_vdso_data);
>
> +void riscv_hwprobe_vdso_sync(__s64 sync_key)
> +{
> + struct vdso_arch_data *avd = vdso_k_arch_data;
> + struct riscv_hwprobe pair;
> +
> + pair.key = sync_key;
> + hwprobe_one_pair(&pair, cpu_online_mask);
> + /*
> + * Update vDSO data for the given key.
> + * Currently for non-ID key updates (e.g. MISALIGNED_VECTOR_PERF),
> + * so 'homogeneous_cpus' is not re-evaluated here.
> + */
> + avd->all_cpu_hwprobe_values[sync_key] = pair.value;
> +}
> +
> #endif /* CONFIG_MMU */
>
> SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs,
> diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
> index 585d2dcf2dab1ccb..81bc4997350acc87 100644
> --- a/arch/riscv/kernel/unaligned_access_speed.c
> +++ b/arch/riscv/kernel/unaligned_access_speed.c
> @@ -375,7 +375,7 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
> static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
> {
> schedule_on_each_cpu(check_vector_unaligned_access);
> -
Although nothing stipulates that a blank line must precede the
return statement, removing it is unrelated to what this patch is
meant to fix, so please keep the blank line here.
> + riscv_hwprobe_vdso_sync(RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF);
> return 0;
> }
> #else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
LGTM, so:
Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Thanks,
Yanteng
* Re: [PATCH v2] riscv: hwprobe: Fix stale vDSO data for late-initialized keys at boot
2025-05-22 7:33 [PATCH v2] riscv: hwprobe: Fix stale vDSO data for late-initialized keys at boot Jingwei Wang
2025-05-23 1:40 ` Yanteng Si
@ 2025-05-27 14:16 ` Jesse Taube
1 sibling, 0 replies; 3+ messages in thread
From: Jesse Taube @ 2025-05-27 14:16 UTC (permalink / raw)
To: Jingwei Wang
Cc: linux-riscv, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Alexandre Ghiti, Andrew Jones, Conor Dooley,
Clément Léger, Charlie Jenkins, Yixun Lan, Tsukasa OI,
stable
On Thu, May 22, 2025 at 12:34 AM Jingwei Wang <wangjingwei@iscas.ac.cn> wrote:
>
> The riscv_hwprobe vDSO data is populated by init_hwprobe_vdso_data(),
> an arch_initcall_sync. However, underlying data for some keys, like
> RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF, is determined asynchronously.
>
> Specifically, the per_cpu(vector_misaligned_access, cpu) values are set
> by the vec_check_unaligned_access_speed_all_cpus kthread. This kthread
> is spawned by an earlier arch_initcall (check_unaligned_access_all_cpus)
> and may complete its benchmark *after* init_hwprobe_vdso_data() has
> already populated the vDSO with default/stale values.
>
> So, refresh the vDSO data for the affected keys (e.g.,
> MISALIGNED_VECTOR_PERF), ensuring they reflect the final boot-time values.
>
> Tested by comparing vDSO and syscall results for the affected keys
> (e.g., MISALIGNED_VECTOR_PERF); they now match their final
> boot-time values.
>
> Reported-by: Tsukasa OI <research_trasio@irq.a4lg.com>
> Closes: https://lore.kernel.org/linux-riscv/760d637b-b13b-4518-b6bf-883d55d44e7f@irq.a4lg.com/
> Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jingwei Wang <wangjingwei@iscas.ac.cn>
> ---
> Changes in v2:
> - Addressed feedback from Yixun regarding #ifdef CONFIG_MMU usage.
> - Updated commit message to provide a high-level summary.
> - Added Fixes tag for commit e7c9d66e313b.
>
> v1: https://lore.kernel.org/linux-riscv/20250521052754.185231-1-wangjingwei@iscas.ac.cn/T/#u
>
> arch/riscv/include/asm/hwprobe.h | 6 ++++++
> arch/riscv/kernel/sys_hwprobe.c | 16 ++++++++++++++++
> arch/riscv/kernel/unaligned_access_speed.c | 2 +-
> 3 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/hwprobe.h b/arch/riscv/include/asm/hwprobe.h
> index 1f690fea0e03de6a..58dc847d86c7f2b0 100644
> --- a/arch/riscv/include/asm/hwprobe.h
> +++ b/arch/riscv/include/asm/hwprobe.h
> @@ -40,4 +40,10 @@ static inline bool riscv_hwprobe_pair_cmp(struct riscv_hwprobe *pair,
> return pair->value == other_pair->value;
> }
>
> +#ifdef CONFIG_MMU
> +void riscv_hwprobe_vdso_sync(__s64 sync_key);
> +#else
> +static inline void riscv_hwprobe_vdso_sync(__s64 sync_key) { };
> +#endif /* CONFIG_MMU */
> +
> #endif
> diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
> index 249aec8594a92a80..2e3e612b7ac6fd57 100644
> --- a/arch/riscv/kernel/sys_hwprobe.c
> +++ b/arch/riscv/kernel/sys_hwprobe.c
> @@ -17,6 +17,7 @@
> #include <asm/vector.h>
> #include <asm/vendor_extensions/thead_hwprobe.h>
> #include <vdso/vsyscall.h>
> +#include <vdso/datapage.h>
>
>
> static void hwprobe_arch_id(struct riscv_hwprobe *pair,
> @@ -500,6 +501,21 @@ static int __init init_hwprobe_vdso_data(void)
>
> arch_initcall_sync(init_hwprobe_vdso_data);
>
> +void riscv_hwprobe_vdso_sync(__s64 sync_key)
> +{
> + struct vdso_arch_data *avd = vdso_k_arch_data;
> + struct riscv_hwprobe pair;
> +
> + pair.key = sync_key;
> + hwprobe_one_pair(&pair, cpu_online_mask);
> + /*
> + * Update vDSO data for the given key.
> + * Currently for non-ID key updates (e.g. MISALIGNED_VECTOR_PERF),
> + * so 'homogeneous_cpus' is not re-evaluated here.
> + */
> + avd->all_cpu_hwprobe_values[sync_key] = pair.value;
> +}
> +
> #endif /* CONFIG_MMU */
>
> SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs,
> diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
> index 585d2dcf2dab1ccb..81bc4997350acc87 100644
> --- a/arch/riscv/kernel/unaligned_access_speed.c
> +++ b/arch/riscv/kernel/unaligned_access_speed.c
> @@ -375,7 +375,7 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
> static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
> {
> schedule_on_each_cpu(check_vector_unaligned_access);
> -
> + riscv_hwprobe_vdso_sync(RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF);
> return 0;
> }
> #else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
> --
> 2.49.0
>
Reviewed-by: Jesse Taube <jesse@rivosinc.com>
Thanks,
Jesse Taube