* ERRATUM_858921 is broken on 5.15 kernel
@ 2023-01-05 13:33 Yogesh Lal
2023-01-05 14:12 ` Mark Rutland
2023-01-05 14:22 ` Marc Zyngier
0 siblings, 2 replies; 7+ messages in thread
From: Yogesh Lal @ 2023-01-05 13:33 UTC (permalink / raw)
To: mark.rutland, maz, daniel.lezcano, tglx, linux-arm-kernel
Cc: linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org
Hi,
We are observing an issue on Cortex-A73 cores where the ERRATUM_858921
workaround is broken.
On the 5.15 kernel, arch_timer_enable_workaround() installs the
arm64_858921_read_cntpct_el0 and arm64_858921_read_cntvct_el0 hooks during
timer registration and marks the workaround as in use via the following path:
arch_timer_enable_workaround->atomic_set(&timer_unstable_counter_workaround_in_use,
1);
[code snap]
564 static
565 void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa,
566                                   bool local)
567 {
568         int i;
569
570         if (local) {
571                 __this_cpu_write(timer_unstable_counter_workaround, wa);
572         } else {
573                 for_each_possible_cpu(i)
574                         per_cpu(timer_unstable_counter_workaround, i) = wa;
575         }
576
577         if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
578                 atomic_set(&timer_unstable_counter_workaround_in_use, 1);
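
For reference, arch_timer_counter_has_wa() (not shown above) just reports
whether that flag was ever set. The snippet below is a paraphrase for
readability, not a verbatim copy of the 5.15 source:

static atomic_t timer_unstable_counter_workaround_in_use = ATOMIC_INIT(0);

/* True once any CPU has enabled a counter-read workaround. */
static bool arch_timer_counter_has_wa(void)
{
        return atomic_read(&timer_unstable_counter_workaround_in_use);
}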
Based on the above workaround enablement, the appropriate counter read
function is then selected:
1008 static void __init arch_counter_register(unsigned type)
1009 {
1010         u64 start_count;
1011
1012         /* Register the CP15 based counter if we have one */
1013         if (type & ARCH_TIMER_TYPE_CP15) {
1014                 u64 (*rd)(void);
1015
1016                 if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
1017                     arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
1018                         if (arch_timer_counter_has_wa())
1019                                 rd = arch_counter_get_cntvct_stable;
1020                         else
1021                                 rd = arch_counter_get_cntvct;
1022                 } else {
1023                         if (arch_timer_counter_has_wa())
1024                                 rd = arch_counter_get_cntpct_stable;
1025                         else
1026                                 rd = arch_counter_get_cntpct;
1027                 }
[snap]
1043         /* 56 bits minimum, so we assume worst case rollover */
1044         sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);
As our boot cores are not affected by the erratum, sched_clock_register()
ends up registering the !arch_timer_counter_has_wa() callback (the plain,
non-workaround read function).
Now, when an erratum-affected core boots up, sched_clock has already been
registered with the !arch_timer_counter_has_wa() path. As
sched_clock_register() is not per-CPU, arch_timer_read_counter will always
point to the !arch_timer_counter_has_wa() function.
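
To illustrate the ordering problem with a stand-alone sketch (hypothetical
names, plain userspace C rather than kernel code): sched_clock latches
whatever read function it is handed at registration time, so whatever was
selected when the boot CPU registered the clock stays in use, regardless of
what happens once the affected core comes online:

#include <stdio.h>

/* Stand-ins for the plain and workaround-aware counter readers. */
static unsigned long long read_plain(void)  { return 1; }
static unsigned long long read_stable(void) { return 2; }

/* Stand-in for arch_timer_read_counter. */
static unsigned long long (*timer_read_fn)(void) = read_plain;

/* Stand-in for sched_clock: it stores the pointer value it was handed. */
static unsigned long long (*sched_clock_read)(void);

static void sched_clock_register_sketch(unsigned long long (*read)(void))
{
        sched_clock_read = read;        /* snapshot taken once, at boot */
}

int main(void)
{
        /* Boot CPU (not affected by the erratum) registers sched_clock. */
        sched_clock_register_sketch(timer_read_fn);

        /* An affected A73 comes online later and retargets the pointer... */
        timer_read_fn = read_stable;

        /* ...but sched_clock keeps calling the snapshot it took earlier. */
        printf("sched_clock reader: %llu, current timer reader: %llu\n",
               sched_clock_read(), timer_read_fn());
        return 0;
}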
It looks like this bug is a side effect of the following patch:
commit 0ea415390cd345b7d09e8c9ebd4b68adfe873043
Author: Marc Zyngier <marc.zyngier@arm.com>
Date: Mon Apr 8 16:49:07 2019 +0100
clocksource/arm_arch_timer: Use arch_timer_read_counter to access
stable counters
Instead of always going via arch_counter_get_cntvct_stable to
access the
counter workaround, let's have arch_timer_read_counter point to the
right method.
For that, we need to track whether any CPU in the system has a
workaround for the counter. This is done by having an atomic variable
tracking this.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Thanks
Yogesh Lal

* Re: ERRATUM_858921 is broken on 5.15 kernel
2023-01-05 13:33 ERRATUM_858921 is broken on 5.15 kernel Yogesh Lal
@ 2023-01-05 14:12 ` Mark Rutland
2023-01-06 16:38 ` Yogesh Lal
2023-01-05 14:22 ` Marc Zyngier
1 sibling, 1 reply; 7+ messages in thread
From: Mark Rutland @ 2023-01-05 14:12 UTC (permalink / raw)
To: Yogesh Lal
Cc: maz, daniel.lezcano, tglx, linux-arm-kernel, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org

On Thu, Jan 05, 2023 at 07:03:48PM +0530, Yogesh Lal wrote:
> Hi,
>
> We are observing issue on A73 core where ERRATUM_858921 is broken.

Do you *only* see this issue on v5.15.y, or is mainline (e.g. v6.2-rc2) also
broken?

I don't see any fix that fits your exact description below, but I do see that
we've made a bunch of changes in this area since.

>
> On 5.15 kernel arch_timer_enable_workaround is set by reading
> arm64_858921_read_cntpct_el0 and arm64_858921_read_cntvct_el0 during timer
> register using following path.
>
> arch_timer_enable_workaround->atomic_set(&timer_unstable_counter_workaround_in_use,
> 1);
>
> [code snap]
> 564 static
> 565 void arch_timer_enable_workaround(const struct
> arch_timer_erratum_workaround *wa,
> 566 bool local)
> 567 {
> 568 int i;
> 569
> 570 if (local) {
> 571 __this_cpu_write(timer_unstable_counter_workaround, wa);
> 572 } else {
> 573 for_each_possible_cpu(i)
> 574 per_cpu(timer_unstable_counter_workaround, i) = wa;
> 575 }
> 576
> 577 if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
> 578 atomic_set(&timer_unstable_counter_workaround_in_use, 1);
>
>
> and based on above workaround enablement , appropriate function to get
> counter is used.
>
> 1008 static void __init arch_counter_register(unsigned type)
> 1009 {
> 1010 u64 start_count;
> 1011
> 1012 /* Register the CP15 based counter if we have one */
> 1013 if (type & ARCH_TIMER_TYPE_CP15) {
> 1014 u64 (*rd)(void);
> 1015
> 1016 if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
> 1017 arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
> 1018 if (arch_timer_counter_has_wa())
> 1019 rd = arch_counter_get_cntvct_stable;
> 1020 else
> 1021 rd = arch_counter_get_cntvct;
> 1022 } else {
> 1023 if (arch_timer_counter_has_wa())
> 1024 rd = arch_counter_get_cntpct_stable;
> 1025 else
> 1026 rd = arch_counter_get_cntpct;
> 1027 }
> [snap]
> 1043 /* 56 bits minimum, so we assume worst case rollover */
> 1044 sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);
>
>
> As our boot cores are not impacted by errata sched_clock_register() will
> register !arch_timer_counter_has_wa() callback.

It would be helpful to mention this fact (that the system is big.LITTLE, and
the boot cores are not Cortex-A73) earlier in the report.

> Now when errata impacted core boots up and sched_clock_register already
> register will !arch_timer_counter_has_wa() path.
> As sched_clock_register is not per_cpu bases so arch_timer_read_counter will
> always point to !arch_timer_counter_has_wa() function calls.

Hmm... yes, AFAICT this cannot work unless the affected CPUs are up before we
probe, and it doesn't make much sense for arch_counter_register() to look at
arch_timer_counter_has_wa() since it can be called before all CPUs are up.

> Looks like this bug is side effect of following patch:
>
> commit 0ea415390cd345b7d09e8c9ebd4b68adfe873043
> Author: Marc Zyngier <marc.zyngier@arm.com>
> Date: Mon Apr 8 16:49:07 2019 +0100
>
> clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable
> counters
>
> Instead of always going via arch_counter_get_cntvct_stable to access the
> counter workaround, let's have arch_timer_read_counter point to the
> right method.
>
> For that, we need to track whether any CPU in the system has a
> workaround for the counter. This is done by having an atomic variable
> tracking this.
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
>

Yeah, that does look to be broken, but I think there are further issues anyway
(e.g. late onlining).

AFAICT we need to detect this *stupidly early* in the CPU bringup path in order
to handle this safely, which is quite painful.

What a great.

Thanks,
Mark.

* Re: ERRATUM_858921 is broken on 5.15 kernel
2023-01-05 14:12 ` Mark Rutland
@ 2023-01-06 16:38 ` Yogesh Lal
0 siblings, 0 replies; 7+ messages in thread
From: Yogesh Lal @ 2023-01-06 16:38 UTC (permalink / raw)
To: Mark Rutland
Cc: maz, daniel.lezcano, tglx, linux-arm-kernel, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org

On 1/5/2023 7:42 PM, Mark Rutland wrote:
> On Thu, Jan 05, 2023 at 07:03:48PM +0530, Yogesh Lal wrote:
>> Hi,
>>
>> We are observing issue on A73 core where ERRATUM_858921 is broken.
> Do you *only* see this issue on v5.15.y, or is mainline (e.g. v6.2-rc2) also
> broken?

Checked the code path and it looks like it's broken on mainline also.

> I don't see any fix that fits your exact description below, but I do see that
> we've made a bunch of changes in this area since.
>
>> On 5.15 kernel arch_timer_enable_workaround is set by reading
>> arm64_858921_read_cntpct_el0 and arm64_858921_read_cntvct_el0 during timer
>> register using following path.
>>
>> arch_timer_enable_workaround->atomic_set(&timer_unstable_counter_workaround_in_use,
>> 1);
>>
>> [code snap]
>> 564 static
>> 565 void arch_timer_enable_workaround(const struct
>> arch_timer_erratum_workaround *wa,
>> 566 bool local)
>> 567 {
>> 568 int i;
>> 569
>> 570 if (local) {
>> 571 __this_cpu_write(timer_unstable_counter_workaround, wa);
>> 572 } else {
>> 573 for_each_possible_cpu(i)
>> 574 per_cpu(timer_unstable_counter_workaround, i) = wa;
>> 575 }
>> 576
>> 577 if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
>> 578 atomic_set(&timer_unstable_counter_workaround_in_use, 1);
>>
>>
>> and based on above workaround enablement , appropriate function to get
>> counter is used.
>>
>> 1008 static void __init arch_counter_register(unsigned type)
>> 1009 {
>> 1010 u64 start_count;
>> 1011
>> 1012 /* Register the CP15 based counter if we have one */
>> 1013 if (type & ARCH_TIMER_TYPE_CP15) {
>> 1014 u64 (*rd)(void);
>> 1015
>> 1016 if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
>> 1017 arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
>> 1018 if (arch_timer_counter_has_wa())
>> 1019 rd = arch_counter_get_cntvct_stable;
>> 1020 else
>> 1021 rd = arch_counter_get_cntvct;
>> 1022 } else {
>> 1023 if (arch_timer_counter_has_wa())
>> 1024 rd = arch_counter_get_cntpct_stable;
>> 1025 else
>> 1026 rd = arch_counter_get_cntpct;
>> 1027 }
>> [snap]
>> 1043 /* 56 bits minimum, so we assume worst case rollover */
>> 1044 sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);
>>
>>
>> As our boot cores are not impacted by errata sched_clock_register() will
>> register !arch_timer_counter_has_wa() callback.
> It would be helpful to mention this fact (that the system is big.LITTLE, and
> the boot cores are not Cortex-A73) earlier in the report.

Will take care.

>
>> Now when errata impacted core boots up and sched_clock_register already
>> register will !arch_timer_counter_has_wa() path.
>> As sched_clock_register is not per_cpu bases so arch_timer_read_counter will
>> always point to !arch_timer_counter_has_wa() function calls.
> Hmm... yes, AFAICT this cannot work unless the affected CPUs are up before we
> probe, and it doesn't make much sense for arch_counter_register() to look at
> arch_timer_counter_has_wa() since it can be called before all CPUs are up.

>
>> Looks like this bug is side effect of following patch:
>>
>> commit 0ea415390cd345b7d09e8c9ebd4b68adfe873043
>> Author: Marc Zyngier <marc.zyngier@arm.com>
>> Date: Mon Apr 8 16:49:07 2019 +0100
>>
>> clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable
>> counters
>>
>> Instead of always going via arch_counter_get_cntvct_stable to access the
>> counter workaround, let's have arch_timer_read_counter point to the
>> right method.
>>
>> For that, we need to track whether any CPU in the system has a
>> workaround for the counter. This is done by having an atomic variable
>> tracking this.
>>
>> Acked-by: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: Will Deacon <will.deacon@arm.com>
>>
> Yeah, that does look to be broken, but I think there are futher issues anyway
> (e.g. late onlining).
>
> AFAICT we need to detect this *stupidly early* in the CPU bringup path in order
> to handle this safely, which is quite painful.
>
> What a great.
>
> Thanks,
> Mark.

* Re: ERRATUM_858921 is broken on 5.15 kernel
2023-01-05 13:33 ERRATUM_858921 is broken on 5.15 kernel Yogesh Lal
2023-01-05 14:12 ` Mark Rutland
@ 2023-01-05 14:22 ` Marc Zyngier
2023-01-09  6:52 ` Yogesh Lal
1 sibling, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2023-01-05 14:22 UTC (permalink / raw)
To: Yogesh Lal
Cc: mark.rutland, daniel.lezcano, tglx, linux-arm-kernel, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org

On Thu, 05 Jan 2023 13:33:48 +0000,
Yogesh Lal <quic_ylal@quicinc.com> wrote:
>
> Hi,
>
> We are observing issue on A73 core where ERRATUM_858921 is broken.
>
> On 5.15 kernel arch_timer_enable_workaround is set by reading
> arm64_858921_read_cntpct_el0 and arm64_858921_read_cntvct_el0 during
> timer register using following path.

Have you checked whether the issue is still present on 6.1?

>
> arch_timer_enable_workaround->atomic_set(&timer_unstable_counter_workaround_in_use,
> 1);
>
> [code snap]
> 564 static
> 565 void arch_timer_enable_workaround(const struct
> arch_timer_erratum_workaround *wa,
> 566 bool local)
> 567 {
> 568 int i;
> 569
> 570 if (local) {
> 571 __this_cpu_write(timer_unstable_counter_workaround, wa);
> 572 } else {
> 573 for_each_possible_cpu(i)
> 574 per_cpu(timer_unstable_counter_workaround, i) = wa;
> 575 }
> 576
> 577 if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
> 578 atomic_set(&timer_unstable_counter_workaround_in_use, 1);
>
>
> and based on above workaround enablement , appropriate function to get
> counter is used.
>
> 1008 static void __init arch_counter_register(unsigned type)
> 1009 {
> 1010 u64 start_count;
> 1011
> 1012 /* Register the CP15 based counter if we have one */
> 1013 if (type & ARCH_TIMER_TYPE_CP15) {
> 1014 u64 (*rd)(void);
> 1015
> 1016 if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
> 1017 arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
> 1018 if (arch_timer_counter_has_wa())
> 1019 rd = arch_counter_get_cntvct_stable;
> 1020 else
> 1021 rd = arch_counter_get_cntvct;
> 1022 } else {
> 1023 if (arch_timer_counter_has_wa())
> 1024 rd = arch_counter_get_cntpct_stable;
> 1025 else
> 1026 rd = arch_counter_get_cntpct;
> 1027 }
> [snap]
> 1043 /* 56 bits minimum, so we assume worst case rollover */
> 1044 sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);
>
>
> As our boot cores are not impacted by errata sched_clock_register()
> will register !arch_timer_counter_has_wa() callback.
>
> Now when errata impacted core boots up and sched_clock_register
> already register will !arch_timer_counter_has_wa() path.
> As sched_clock_register is not per_cpu bases so
> arch_timer_read_counter will always point to
> !arch_timer_counter_has_wa() function calls.

Please try the following hack, only compile tested as I do not have
access to any affected HW, and report whether this solves your issue
or not. Note that this is based on 6.2-rc2.

Thanks,

        M.

diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index e09d4427f604..a7cf0a2c86d3 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -230,6 +230,28 @@ static u64 arch_counter_read_cc(const struct cyclecounter *cc)
 	return arch_timer_read_counter();
 }
 
+static bool arch_timer_counter_has_wa(void);
+
+static u64 (*arch_counter_get_read_fn(void))(void)
+{
+	u64 (*rd)(void);
+
+	if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
+	    arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
+		if (arch_timer_counter_has_wa())
+			rd = arch_counter_get_cntvct_stable;
+		else
+			rd = arch_counter_get_cntvct;
+	} else {
+		if (arch_timer_counter_has_wa())
+			rd = arch_counter_get_cntpct_stable;
+		else
+			rd = arch_counter_get_cntpct;
+	}
+
+	return rd;
+}
+
 static struct clocksource clocksource_counter = {
 	.name = "arch_sys_counter",
 	.id = CSID_ARM_ARCH_COUNTER,
@@ -571,8 +593,10 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
 			per_cpu(timer_unstable_counter_workaround, i) = wa;
 	}
 
-	if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
+	if (wa->read_cntvct_el0 || wa->read_cntpct_el0) {
 		atomic_set(&timer_unstable_counter_workaround_in_use, 1);
+		arch_timer_read_counter = arch_counter_get_read_fn();
+	}
 
 	/*
 	 * Don't use the vdso fastpath if errata require using the
@@ -641,7 +665,7 @@ static bool arch_timer_counter_has_wa(void)
 #else
 #define arch_timer_check_ool_workaround(t,a) do { } while(0)
 #define arch_timer_this_cpu_has_cntvct_wa() ({false;})
-#define arch_timer_counter_has_wa() ({false;})
+static bool arch_timer_counter_has_wa(void) { return false; }
 #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */
 
 static __always_inline irqreturn_t timer_handler(const int access,
@@ -1079,22 +1103,7 @@ static void __init arch_counter_register(unsigned type)
 
 	/* Register the CP15 based counter if we have one */
 	if (type & ARCH_TIMER_TYPE_CP15) {
-		u64 (*rd)(void);
-
-		if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
-		    arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
-			if (arch_timer_counter_has_wa())
-				rd = arch_counter_get_cntvct_stable;
-			else
-				rd = arch_counter_get_cntvct;
-		} else {
-			if (arch_timer_counter_has_wa())
-				rd = arch_counter_get_cntpct_stable;
-			else
-				rd = arch_counter_get_cntpct;
-		}
-
-		arch_timer_read_counter = rd;
+		arch_timer_read_counter = arch_counter_get_read_fn();
 		clocksource_counter.vdso_clock_mode = vdso_default;
 	} else {
 		arch_timer_read_counter = arch_counter_get_cntvct_mem;

-- 
Without deviation from the norm, progress is not possible.

* Re: ERRATUM_858921 is broken on 5.15 kernel
2023-01-05 14:22 ` Marc Zyngier
@ 2023-01-09  6:52 ` Yogesh Lal
2023-01-09 10:39 ` Marc Zyngier
0 siblings, 1 reply; 7+ messages in thread
From: Yogesh Lal @ 2023-01-09 6:52 UTC (permalink / raw)
To: Marc Zyngier
Cc: mark.rutland, daniel.lezcano, tglx, linux-arm-kernel, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org

On 1/5/2023 7:52 PM, Marc Zyngier wrote:
> On Thu, 05 Jan 2023 13:33:48 +0000,
> Yogesh Lal <quic_ylal@quicinc.com> wrote:
>> Hi,
>>
>> We are observing issue on A73 core where ERRATUM_858921 is broken.
>>
>> On 5.15 kernel arch_timer_enable_workaround is set by reading
>> arm64_858921_read_cntpct_el0 and arm64_858921_read_cntvct_el0 during
>> timer register using following path.
> Have you checked whether the issue is still present on 6.1?

Yes, it's present there as well.

>
>> arch_timer_enable_workaround->atomic_set(&timer_unstable_counter_workaround_in_use,
>> 1);
>>
>> [code snap]
>> 564 static
>> 565 void arch_timer_enable_workaround(const struct
>> arch_timer_erratum_workaround *wa,
>> 566 bool local)
>> 567 {
>> 568 int i;
>> 569
>> 570 if (local) {
>> 571 __this_cpu_write(timer_unstable_counter_workaround, wa);
>> 572 } else {
>> 573 for_each_possible_cpu(i)
>> 574 per_cpu(timer_unstable_counter_workaround, i) = wa;
>> 575 }
>> 576
>> 577 if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
>> 578 atomic_set(&timer_unstable_counter_workaround_in_use, 1);
>>
>>
>> and based on above workaround enablement , appropriate function to get
>> counter is used.
>>
>> 1008 static void __init arch_counter_register(unsigned type)
>> 1009 {
>> 1010 u64 start_count;
>> 1011
>> 1012 /* Register the CP15 based counter if we have one */
>> 1013 if (type & ARCH_TIMER_TYPE_CP15) {
>> 1014 u64 (*rd)(void);
>> 1015
>> 1016 if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
>> 1017 arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
>> 1018 if (arch_timer_counter_has_wa())
>> 1019 rd = arch_counter_get_cntvct_stable;
>> 1020 else
>> 1021 rd = arch_counter_get_cntvct;
>> 1022 } else {
>> 1023 if (arch_timer_counter_has_wa())
>> 1024 rd = arch_counter_get_cntpct_stable;
>> 1025 else
>> 1026 rd = arch_counter_get_cntpct;
>> 1027 }
>> [snap]
>> 1043 /* 56 bits minimum, so we assume worst case rollover */
>> 1044 sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);
>>
>>
>> As our boot cores are not impacted by errata sched_clock_register()
>> will register !arch_timer_counter_has_wa() callback.
>>
>> Now when errata impacted core boots up and sched_clock_register
>> already register will !arch_timer_counter_has_wa() path.
>> As sched_clock_register is not per_cpu bases so
>> arch_timer_read_counter will always point to
>> !arch_timer_counter_has_wa() function calls.
> Please try the following hack, only compile tested as I do not have
> access to any affected HW, and report whether this solves your issue
> or not. Note that this is based on 6.2-rc2.

Tested it on the affected h/w, but it looks like sched_clock is still
pointing to the !arch_timer_counter_has_wa() function, maybe because
sched_clock_register() only runs once, while a core not affected by the
erratum is booting.

1007 static void __init arch_counter_register(unsigned type)
1008 {
[snap]
1043         /* 56 bits minimum, so we assume worst case rollover */
1044         sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);

Also, update_clock_read_data() is called with the
!arch_timer_counter_has_wa() read function:

153 void sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
[snap]
183         cd.actual_read_sched_clock = read;
184
185         rd.read_sched_clock = read;
186         rd.sched_clock_mask = new_mask;
187         rd.mult = new_mult;
188         rd.shift = new_shift;
189         rd.epoch_cyc = new_epoch;
190         rd.epoch_ns = ns;
191
192         update_clock_read_data(&rd);

> Thanks,
>
> M.
>
> diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> index e09d4427f604..a7cf0a2c86d3 100644
> --- a/drivers/clocksource/arm_arch_timer.c
> +++ b/drivers/clocksource/arm_arch_timer.c
> @@ -230,6 +230,28 @@ static u64 arch_counter_read_cc(const struct cyclecounter *cc)
>  	return arch_timer_read_counter();
>  }
>  
> +static bool arch_timer_counter_has_wa(void);
> +
> +static u64 (*arch_counter_get_read_fn(void))(void)
> +{
> +	u64 (*rd)(void);
> +
> +	if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
> +	    arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
> +		if (arch_timer_counter_has_wa())
> +			rd = arch_counter_get_cntvct_stable;
> +		else
> +			rd = arch_counter_get_cntvct;
> +	} else {
> +		if (arch_timer_counter_has_wa())
> +			rd = arch_counter_get_cntpct_stable;
> +		else
> +			rd = arch_counter_get_cntpct;
> +	}
> +
> +	return rd;
> +}
> +
>  static struct clocksource clocksource_counter = {
>  	.name = "arch_sys_counter",
>  	.id = CSID_ARM_ARCH_COUNTER,
> @@ -571,8 +593,10 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
>  			per_cpu(timer_unstable_counter_workaround, i) = wa;
>  	}
>  
> -	if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
> +	if (wa->read_cntvct_el0 || wa->read_cntpct_el0) {
>  		atomic_set(&timer_unstable_counter_workaround_in_use, 1);
> +		arch_timer_read_counter = arch_counter_get_read_fn();
> +	}
>  
>  	/*
>  	 * Don't use the vdso fastpath if errata require using the
> @@ -641,7 +665,7 @@ static bool arch_timer_counter_has_wa(void)
>  #else
>  #define arch_timer_check_ool_workaround(t,a) do { } while(0)
>  #define arch_timer_this_cpu_has_cntvct_wa() ({false;})
> -#define arch_timer_counter_has_wa() ({false;})
> +static bool arch_timer_counter_has_wa(void) { return false; }
>  #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */
>  
>  static __always_inline irqreturn_t timer_handler(const int access,
> @@ -1079,22 +1103,7 @@ static void __init arch_counter_register(unsigned type)
>  
>  	/* Register the CP15 based counter if we have one */
>  	if (type & ARCH_TIMER_TYPE_CP15) {
> -		u64 (*rd)(void);
> -
> -		if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
> -		    arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
> -			if (arch_timer_counter_has_wa())
> -				rd = arch_counter_get_cntvct_stable;
> -			else
> -				rd = arch_counter_get_cntvct;
> -		} else {
> -			if (arch_timer_counter_has_wa())
> -				rd = arch_counter_get_cntpct_stable;
> -			else
> -				rd = arch_counter_get_cntpct;
> -		}
> -
> -		arch_timer_read_counter = rd;
> +		arch_timer_read_counter = arch_counter_get_read_fn();
>  		clocksource_counter.vdso_clock_mode = vdso_default;
>  	} else {
>  		arch_timer_read_counter = arch_counter_get_cntvct_mem;
>

* Re: ERRATUM_858921 is broken on 5.15 kernel
2023-01-09  6:52 ` Yogesh Lal
@ 2023-01-09 10:39 ` Marc Zyngier
2023-01-12 12:47 ` Yogesh Lal
0 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2023-01-09 10:39 UTC (permalink / raw)
To: Yogesh Lal
Cc: mark.rutland, daniel.lezcano, tglx, linux-arm-kernel, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org

On Mon, 09 Jan 2023 06:52:20 +0000,
Yogesh Lal <quic_ylal@quicinc.com> wrote:
>
> tested it on affected h/w but looks like sched_clock is still pointing
> to !arch_timer_counter_has_wa() function calls,
> may be due to sched_clock_register will register once during non
> errata impacted core booting.

Ah, of course. We register the function itself instead of an
indirection. Please try this on top of the previous patch.

Thanks,

        M.

diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index a7cf0a2c86d3..8232c86b9e7c 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -217,7 +217,12 @@ static notrace u64 arch_counter_get_cntvct(void)
  * to exist on arm64. arm doesn't use this before DT is probed so even
  * if we don't have the cp15 accessors we won't have a problem.
  */
-u64 (*arch_timer_read_counter)(void) __ro_after_init = arch_counter_get_cntvct;
+static u64 (*__arch_timer_read_counter)(void) __ro_after_init = arch_counter_get_cntvct;
+
+u64 arch_timer_read_counter(void)
+{
+	return __arch_timer_read_counter();
+}
 EXPORT_SYMBOL_GPL(arch_timer_read_counter);
 
 static u64 arch_counter_read(struct clocksource *cs)
@@ -595,7 +600,7 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
 
 	if (wa->read_cntvct_el0 || wa->read_cntpct_el0) {
 		atomic_set(&timer_unstable_counter_workaround_in_use, 1);
-		arch_timer_read_counter = arch_counter_get_read_fn();
+		__arch_timer_read_counter = arch_counter_get_read_fn();
 	}
 
 	/*
@@ -1103,10 +1108,10 @@ static void __init arch_counter_register(unsigned type)
 
 	/* Register the CP15 based counter if we have one */
 	if (type & ARCH_TIMER_TYPE_CP15) {
-		arch_timer_read_counter = arch_counter_get_read_fn();
+		__arch_timer_read_counter = arch_counter_get_read_fn();
 		clocksource_counter.vdso_clock_mode = vdso_default;
 	} else {
-		arch_timer_read_counter = arch_counter_get_cntvct_mem;
+		__arch_timer_read_counter = arch_counter_get_cntvct_mem;
 	}
 
 	width = arch_counter_get_width();
diff --git a/include/clocksource/arm_arch_timer.h b/include/clocksource/arm_arch_timer.h
index e3c3816d19ba..3ac297a756e8 100644
--- a/include/clocksource/arm_arch_timer.h
+++ b/include/clocksource/arm_arch_timer.h
@@ -89,7 +89,7 @@ struct arch_timer_mem {
 #ifdef CONFIG_ARM_ARCH_TIMER
 
 extern u32 arch_timer_get_rate(void);
-extern u64 (*arch_timer_read_counter)(void);
+extern u64 arch_timer_read_counter(void);
 extern struct arch_timer_kvm_info *arch_timer_get_kvm_info(void);
 extern bool arch_timer_evtstrm_available(void);
 
-- 
Without deviation from the norm, progress is not possible.

* Re: ERRATUM_858921 is broken on 5.15 kernel
2023-01-09 10:39 ` Marc Zyngier
@ 2023-01-12 12:47 ` Yogesh Lal
0 siblings, 0 replies; 7+ messages in thread
From: Yogesh Lal @ 2023-01-12 12:47 UTC (permalink / raw)
To: Marc Zyngier
Cc: mark.rutland, daniel.lezcano, tglx, linux-arm-kernel, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org

On 1/9/2023 4:09 PM, Marc Zyngier wrote:
> On Mon, 09 Jan 2023 06:52:20 +0000,
> Yogesh Lal <quic_ylal@quicinc.com> wrote:
>> tested it on affected h/w but looks like sched_clock is still pointing
>> to !arch_timer_counter_has_wa() function calls,
>> may be due to sched_clock_register will register once during non
>> errata impacted core booting.
> Ah, of course. We register the function itself instead of an
> indirection. Please try this on top of the previous patch.

Thanks for the patch, it's working fine now.

>
> Thanks,
>
> M.
>
> diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> index a7cf0a2c86d3..8232c86b9e7c 100644
> --- a/drivers/clocksource/arm_arch_timer.c
> +++ b/drivers/clocksource/arm_arch_timer.c
> @@ -217,7 +217,12 @@ static notrace u64 arch_counter_get_cntvct(void)
>   * to exist on arm64. arm doesn't use this before DT is probed so even
>   * if we don't have the cp15 accessors we won't have a problem.
>   */
> -u64 (*arch_timer_read_counter)(void) __ro_after_init = arch_counter_get_cntvct;
> +static u64 (*__arch_timer_read_counter)(void) __ro_after_init = arch_counter_get_cntvct;
> +
> +u64 arch_timer_read_counter(void)
> +{
> +	return __arch_timer_read_counter();
> +}
>  EXPORT_SYMBOL_GPL(arch_timer_read_counter);
>  
>  static u64 arch_counter_read(struct clocksource *cs)
> @@ -595,7 +600,7 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
>  
>  	if (wa->read_cntvct_el0 || wa->read_cntpct_el0) {
>  		atomic_set(&timer_unstable_counter_workaround_in_use, 1);
> -		arch_timer_read_counter = arch_counter_get_read_fn();
> +		__arch_timer_read_counter = arch_counter_get_read_fn();
>  	}
>  
>  	/*
> @@ -1103,10 +1108,10 @@ static void __init arch_counter_register(unsigned type)
>  
>  	/* Register the CP15 based counter if we have one */
>  	if (type & ARCH_TIMER_TYPE_CP15) {
> -		arch_timer_read_counter = arch_counter_get_read_fn();
> +		__arch_timer_read_counter = arch_counter_get_read_fn();
>  		clocksource_counter.vdso_clock_mode = vdso_default;
>  	} else {
> -		arch_timer_read_counter = arch_counter_get_cntvct_mem;
> +		__arch_timer_read_counter = arch_counter_get_cntvct_mem;
>  	}
>  
>  	width = arch_counter_get_width();
> diff --git a/include/clocksource/arm_arch_timer.h b/include/clocksource/arm_arch_timer.h
> index e3c3816d19ba..3ac297a756e8 100644
> --- a/include/clocksource/arm_arch_timer.h
> +++ b/include/clocksource/arm_arch_timer.h
> @@ -89,7 +89,7 @@ struct arch_timer_mem {
>  #ifdef CONFIG_ARM_ARCH_TIMER
>  
>  extern u32 arch_timer_get_rate(void);
> -extern u64 (*arch_timer_read_counter)(void);
> +extern u64 arch_timer_read_counter(void);
>  extern struct arch_timer_kvm_info *arch_timer_get_kvm_info(void);
>  extern bool arch_timer_evtstrm_available(void);
>