* [PATCH] hrtimers: calculate expires_next after all timers are executed
@ 2014-03-17 14:37 Stanislav Fomichev
  2014-03-19 10:59 ` Thomas Gleixner
  0 siblings, 1 reply; 15+ messages in thread
From: Stanislav Fomichev @ 2014-03-17 14:37 UTC (permalink / raw)
  To: tglx, john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf
  Cc: linux-kernel

I think I'm hitting a particularly subtle issue with a NOHZ_IDLE kernel.

The sequence is as follows:
- CPU enters idle, we disable the tick
- the hrtimer interrupt fires (for hrtimer_wakeup)
- for clock base #1 (REALTIME) we wake up a SCHED_RT thread and
  start the RT period timer (from start_rt_bandwidth) on clock base #0 (MONOTONIC)
- because we already checked the expiry time for clock base #0,
  we end up programming the wrong wakeup time (minutes away, from tick_sched_timer)
- then we exit the idle loop and restart the tick;
  but because tick_sched_timer is not leftmost (the leftmost one
  is the RT period timer) we don't program it

So in the end I see a running CPU without a tick interrupt.
This eventually leads to an RCU stall on that CPU: rcu_gp_kthread
is not woken up because there is no tick (which is how
I started digging into this).

The proposed fix runs the expired timers first and only then looks for
the next expiry time across all clock bases.
Signed-off-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
---
 kernel/hrtimer.c | 38 ++++++++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index 09094361dce5..63805f9f9531 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -1282,6 +1282,9 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
  */
 void hrtimer_interrupt(struct clock_event_device *dev)
 {
+	struct hrtimer_clock_base *base;
+	struct timerqueue_node *node;
+	struct hrtimer *timer;
 	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
 	ktime_t expires_next, now, entry_time, delta;
 	int i, retries = 0;
@@ -1304,8 +1307,6 @@ retry:
 	cpu_base->expires_next.tv64 = KTIME_MAX;

 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		struct hrtimer_clock_base *base;
-		struct timerqueue_node *node;
 		ktime_t basenow;

 		if (!(cpu_base->active_bases & (1 << i)))
@@ -1315,8 +1316,6 @@ retry:
 		basenow = ktime_add(now, base->offset);

 		while ((node = timerqueue_getnext(&base->active))) {
-			struct hrtimer *timer;
-
 			timer = container_of(node, struct hrtimer, node);

 			/*
@@ -1332,22 +1331,33 @@ retry:
 			 * timer will have to trigger a wakeup anyway.
 			 */

-			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
-				ktime_t expires;
-
-				expires = ktime_sub(hrtimer_get_expires(timer),
-						    base->offset);
-				if (expires.tv64 < 0)
-					expires.tv64 = KTIME_MAX;
-				if (expires.tv64 < expires_next.tv64)
-					expires_next = expires;
+			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
 				break;
-			}

 			__run_hrtimer(timer, &basenow);
 		}
 	}

+	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
+		ktime_t expires;
+
+		if (!(cpu_base->active_bases & (1 << i)))
+			continue;
+
+		base = cpu_base->clock_base + i;
+
+		node = timerqueue_getnext(&base->active);
+		if (!node)
+			continue;
+
+		timer = container_of(node, struct hrtimer, node);
+		expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
+		if (expires.tv64 < 0)
+			expires.tv64 = KTIME_MAX;
+		if (expires.tv64 < expires_next.tv64)
+			expires_next = expires;
+	}
+
 	/*
 	 * Store the new expiry value so the migration code can verify
 	 * against it.
--
1.8.3.2

^ permalink raw reply related	[flat|nested] 15+ messages in thread
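The failure mode boils down to computing the next expiry while callbacks can still enqueue timers on bases that were already scanned. The difference between the buggy single-pass shape and the proposed two-pass shape can be sketched with a small userspace model (all names here are illustrative, not kernel API; `TMAX` stands in for KTIME_MAX, and the base-1 callback mimics start_rt_bandwidth() arming a timer on base 0):

```c
#include <stdio.h>
#include <string.h>

#define NBASES 2
#define MAXT   8
#define TMAX   (1L << 40)	/* stand-in for KTIME_MAX */

struct base { long t[MAXT]; int n; };

static void enqueue(struct base *b, long expires)
{
	int i = b->n++;
	while (i > 0 && b->t[i - 1] > expires) {	/* keep sorted, t[0] leftmost */
		b->t[i] = b->t[i - 1];
		i--;
	}
	b->t[i] = expires;
}

static void pop(struct base *b)
{
	memmove(&b->t[0], &b->t[1], --b->n * sizeof(long));
}

/* Firing a timer on base 1 wakes a thread, which arms a new timer on
 * base 0 -- mimicking the start_rt_bandwidth() path in the report. */
static void run_timer(struct base *bases, int idx, long now)
{
	pop(&bases[idx]);
	if (idx == 1)
		enqueue(&bases[0], now + 5);
}

/* Buggy shape: fold each base's next expiry in while scanning it. */
static long interrupt_single_pass(struct base *bases, long now)
{
	long expires_next = TMAX;
	for (int i = 0; i < NBASES; i++) {
		while (bases[i].n && bases[i].t[0] <= now)
			run_timer(bases, i, now);
		if (bases[i].n && bases[i].t[0] < expires_next)
			expires_next = bases[i].t[0];
	}
	return expires_next;
}

/* Fixed shape: run every expired timer first, then scan all bases. */
static long interrupt_two_pass(struct base *bases, long now)
{
	for (int i = 0; i < NBASES; i++)
		while (bases[i].n && bases[i].t[0] <= now)
			run_timer(bases, i, now);

	long expires_next = TMAX;
	for (int i = 0; i < NBASES; i++)
		if (bases[i].n && bases[i].t[0] < expires_next)
			expires_next = bases[i].t[0];
	return expires_next;
}
```

With base 0 holding a distant timer (the tick_sched_timer stand-in at 10000) and base 1 holding a timer due now, the single-pass variant programs 10000 because base 0 was scanned before the callback queued a timer at now + 5 there; the two-pass variant picks up the freshly queued timer.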
* Re: [PATCH] hrtimers: calculate expires_next after all timers are executed
  2014-03-17 14:37 [PATCH] hrtimers: calculate expires_next after all timers are executed Stanislav Fomichev
@ 2014-03-19 10:59 ` Thomas Gleixner
  2014-03-19 14:16   ` [PATCH v2] " Stanislav Fomichev
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2014-03-19 10:59 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf, linux-kernel

On Mon, 17 Mar 2014, Stanislav Fomichev wrote:
> I think I'm hitting a particularly subtle issue with a NOHZ_IDLE kernel.
>
> The sequence is as follows:
> - CPU enters idle, we disable the tick
> - the hrtimer interrupt fires (for hrtimer_wakeup)
> - for clock base #1 (REALTIME) we wake up a SCHED_RT thread and
>   start the RT period timer (from start_rt_bandwidth) on clock base #0 (MONOTONIC)
> - because we already checked the expiry time for clock base #0,
>   we end up programming the wrong wakeup time (minutes away, from tick_sched_timer)
> - then we exit the idle loop and restart the tick;
>   but because tick_sched_timer is not leftmost (the leftmost one
>   is the RT period timer) we don't program it
>
> So in the end I see a running CPU without a tick interrupt.
> This eventually leads to an RCU stall on that CPU: rcu_gp_kthread
> is not woken up because there is no tick (which is how
> I started digging into this).

Nice analysis!

> The proposed fix runs the expired timers first and only then looks for
> the next expiry time across all clock bases.
>
> Signed-off-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
> ---
>  kernel/hrtimer.c | 38 ++++++++++++++++++++++++--------------
>  1 file changed, 24 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
> index 09094361dce5..63805f9f9531 100644
> --- a/kernel/hrtimer.c
> +++ b/kernel/hrtimer.c
> @@ -1282,6 +1282,9 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
>   */
>  void hrtimer_interrupt(struct clock_event_device *dev)
>  {
> +	struct hrtimer_clock_base *base;
> +	struct timerqueue_node *node;
> +	struct hrtimer *timer;
>  	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
>  	ktime_t expires_next, now, entry_time, delta;
>  	int i, retries = 0;
> @@ -1304,8 +1307,6 @@ retry:
>  	cpu_base->expires_next.tv64 = KTIME_MAX;
>
>  	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> -		struct hrtimer_clock_base *base;
> -		struct timerqueue_node *node;
>  		ktime_t basenow;
>
>  		if (!(cpu_base->active_bases & (1 << i)))
> @@ -1315,8 +1316,6 @@ retry:
>  		basenow = ktime_add(now, base->offset);
>
>  		while ((node = timerqueue_getnext(&base->active))) {
> -			struct hrtimer *timer;
> -
>  			timer = container_of(node, struct hrtimer, node);
>
>  			/*
> @@ -1332,22 +1331,33 @@ retry:
>  			 * timer will have to trigger a wakeup anyway.
>  			 */
>
> -			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
> -				ktime_t expires;
> -
> -				expires = ktime_sub(hrtimer_get_expires(timer),
> -						    base->offset);
> -				if (expires.tv64 < 0)
> -					expires.tv64 = KTIME_MAX;
> -				if (expires.tv64 < expires_next.tv64)
> -					expires_next = expires;
> +			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
>  				break;
> -			}
>
>  			__run_hrtimer(timer, &basenow);
>  		}
>  	}
>
> +	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> +		ktime_t expires;
> +
> +		if (!(cpu_base->active_bases & (1 << i)))
> +			continue;
> +
> +		base = cpu_base->clock_base + i;
> +
> +		node = timerqueue_getnext(&base->active);
> +		if (!node)
> +			continue;
> +
> +		timer = container_of(node, struct hrtimer, node);
> +		expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
> +		if (expires.tv64 < 0)
> +			expires.tv64 = KTIME_MAX;
> +		if (expires.tv64 < expires_next.tv64)
> +			expires_next = expires;

Actually we should store the expiry time of the first timer in a base
in the base itself. That needs to be done at enqueue time and

> +			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
>  				break;

here, right before we break out of handling the base. When the last
timer of the base goes away, set the value to KTIME_MAX. If we store
it with the base offset already applied

	base->next = ktime_sub(hrtimer_get_expires(timer), base->offset);

then the loop to find the next expiry in the interrupt becomes trivial
and fast.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 15+ messages in thread
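The caching scheme suggested here, keep the leftmost expiry in the clock base itself and let the interrupt-side scan collapse to a min() over cached values, can be sketched as a userspace model (illustrative names only, not the kernel implementation; `TMAX` stands in for KTIME_MAX):

```c
#include <stdio.h>

#define TMAX (1L << 40)
#define MAXT 8

/* Illustrative stand-in for hrtimer_clock_base: the sorted queue plus
 * a cached copy of the leftmost timer's expiry. */
struct tbase {
	long t[MAXT];
	int  n;
	long next;	/* expiry of t[0], or TMAX when the base is empty */
};

static void base_init(struct tbase *b) { b->n = 0; b->next = TMAX; }

static void base_enqueue(struct tbase *b, long expires)
{
	int i = b->n++;
	while (i > 0 && b->t[i - 1] > expires) {	/* keep t[0] leftmost */
		b->t[i] = b->t[i - 1];
		i--;
	}
	b->t[i] = expires;
	if (i == 0)		/* new leftmost timer: refresh the cache */
		b->next = expires;
}

static void base_remove_first(struct tbase *b)
{
	for (int i = 1; i < b->n; i++)
		b->t[i - 1] = b->t[i];
	b->n--;
	b->next = b->n ? b->t[0] : TMAX;	/* last timer gone: park at TMAX */
}

/* The interrupt-side scan becomes a trivial min() over cached values. */
static long next_expiry(struct tbase *bases, int nbases)
{
	long expires_next = TMAX;
	for (int i = 0; i < nbases; i++)
		if (bases[i].next < expires_next)
			expires_next = bases[i].next;
	return expires_next;
}
```

The point of the design is that enqueue and remove, the only places where the leftmost timer can change, each pay O(1) extra to keep the cache valid, so the hot interrupt path never has to touch the timer queues at all.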
* [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-03-19 10:59 ` Thomas Gleixner
@ 2014-03-19 14:16   ` Stanislav Fomichev
  2014-03-24  8:42     ` Stanislav Fomichev
                       ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Stanislav Fomichev @ 2014-03-19 14:16 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf, linux-kernel

I think I'm hitting a particularly subtle issue with a NOHZ_IDLE kernel.

The sequence is as follows:
- CPU enters idle, we disable the tick
- the hrtimer interrupt fires (for hrtimer_wakeup)
- for clock base #1 (REALTIME) we wake up a SCHED_RT thread and
  start the RT period timer (from start_rt_bandwidth) on clock base #0 (MONOTONIC)
- because we already checked the expiry time for clock base #0,
  we end up programming the wrong wakeup time (minutes away, from tick_sched_timer)
- then we exit the idle loop and restart the tick;
  but because tick_sched_timer is not leftmost (the leftmost one
  is the RT period timer) we don't program it

So in the end I see a running CPU without a tick interrupt.
This eventually leads to an RCU stall on that CPU: rcu_gp_kthread
is not woken up because there is no tick (which is how
I started digging into this).

The proposed fix runs the expired timers first and only then looks for
the next expiry time across all clock bases.
Signed-off-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
---
 include/linux/hrtimer.h |  2 ++
 kernel/hrtimer.c        | 41 +++++++++++++++++++++++++++++++----------
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index d19a5c2d2270..520a671f90ee 100644
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -141,6 +141,7 @@ struct hrtimer_sleeper {
  * @get_time:	function to retrieve the current time of the clock
  * @softirq_time: the time when running the hrtimer queue in the softirq
  * @offset:	offset of this clock to the monotonic base
+ * @next:	time of the next event on this clock base
  */
 struct hrtimer_clock_base {
 	struct hrtimer_cpu_base	*cpu_base;
@@ -151,6 +152,7 @@ struct hrtimer_clock_base {
 	ktime_t			(*get_time)(void);
 	ktime_t			softirq_time;
 	ktime_t			offset;
+	ktime_t			next;
 };

 enum hrtimer_base_type {
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index 09094361dce5..d284411e6dad 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -882,6 +882,8 @@ EXPORT_SYMBOL_GPL(hrtimer_forward);
 static int enqueue_hrtimer(struct hrtimer *timer,
 			   struct hrtimer_clock_base *base)
 {
+	ktime_t expires;
+
 	debug_activate(timer);

 	timerqueue_add(&base->active, &timer->node);
@@ -893,6 +895,10 @@ static int enqueue_hrtimer(struct hrtimer *timer,
 	 */
 	timer->state |= HRTIMER_STATE_ENQUEUED;

+	expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
+	if (ktime_compare(base->next, expires) > 0)
+		base->next = expires;
+
 	return (&timer->node == base->active.next);
 }

@@ -929,8 +935,10 @@ static void __remove_hrtimer(struct hrtimer *timer,
 		}
 #endif
 	}
-	if (!timerqueue_getnext(&base->active))
+	if (!timerqueue_getnext(&base->active)) {
 		base->cpu_base->active_bases &= ~(1 << base->index);
+		base->next = ktime_set(KTIME_SEC_MAX, 0);
+	}
 out:
 	timer->state = newstate;
 }
@@ -1282,6 +1290,7 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
  */
 void hrtimer_interrupt(struct clock_event_device *dev)
 {
+	struct hrtimer_clock_base *base;
 	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
 	ktime_t expires_next, now, entry_time, delta;
 	int i, retries = 0;
@@ -1304,7 +1313,6 @@ retry:
 	cpu_base->expires_next.tv64 = KTIME_MAX;

 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		struct hrtimer_clock_base *base;
 		struct timerqueue_node *node;
 		ktime_t basenow;

@@ -1333,14 +1341,8 @@ retry:
 			 */

 			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
-				ktime_t expires;
-
-				expires = ktime_sub(hrtimer_get_expires(timer),
-						    base->offset);
-				if (expires.tv64 < 0)
-					expires.tv64 = KTIME_MAX;
-				if (expires.tv64 < expires_next.tv64)
-					expires_next = expires;
+				base->next = ktime_sub(hrtimer_get_expires(timer),
+						       base->offset);
 				break;
 			}

@@ -1349,6 +1351,25 @@ retry:
 	}

 	/*
+	 * Because timer handler may add new timer on a different clock base,
+	 * we need to find next expiry only after we execute all timers.
+	 */
+	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
+		ktime_t expires;
+
+		if (!(cpu_base->active_bases & (1 << i)))
+			continue;
+
+		base = cpu_base->clock_base + i;
+		expires = base->next;
+
+		if (expires.tv64 < 0)
+			expires.tv64 = KTIME_MAX;
+		if (expires.tv64 < expires_next.tv64)
+			expires_next = expires;
+	}
+
+	/*
 	 * Store the new expiry value so the migration code can verify
 	 * against it.
 	 */
--
1.8.3.2

^ permalink raw reply related	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-03-19 14:16 ` [PATCH v2] " Stanislav Fomichev
@ 2014-03-24  8:42   ` Stanislav Fomichev
  2014-03-24  9:04     ` Thomas Gleixner
  2014-06-04 15:47   ` Stanislav Fomichev
  2014-06-22 14:46   ` Thomas Gleixner
  2 siblings, 1 reply; 15+ messages in thread
From: Stanislav Fomichev @ 2014-03-24 8:42 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: linux-kernel

Hi,

Any comment on the v2?

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-03-24  8:42 ` Stanislav Fomichev
@ 2014-03-24  9:04   ` Thomas Gleixner
  2014-05-05  9:26     ` Stanislav Fomichev
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2014-03-24 9:04 UTC (permalink / raw)
  To: Stanislav Fomichev; +Cc: linux-kernel

On 2014-03-24 09:42, Stanislav Fomichev wrote:
> Hi,
>
> Any comment on the v2?

I'm traveling and it's on my todo list.

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-03-24  9:04 ` Thomas Gleixner
@ 2014-05-05  9:26   ` Stanislav Fomichev
  0 siblings, 0 replies; 15+ messages in thread
From: Stanislav Fomichev @ 2014-05-05 9:26 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: linux-kernel

> > Any comment on the v2?
>
> I'm traveling and it's on my todo list

Ping? It's been a while since you added it to your todo list :-)

Btw, I have a couple of machines that have been working without any
issues with this patch applied...

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-03-19 14:16 ` [PATCH v2] " Stanislav Fomichev
  2014-03-24  8:42   ` Stanislav Fomichev
@ 2014-06-04 15:47   ` Stanislav Fomichev
  2014-06-04 20:06     ` Thomas Gleixner
  2014-06-22 14:46   ` Thomas Gleixner
  2 siblings, 1 reply; 15+ messages in thread
From: Stanislav Fomichev @ 2014-06-04 15:47 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf, linux-kernel

Thomas, any estimate on when you'll be able to review this patch?
It's been almost 3 months since I sent it. Is there any particular
reason you don't want to pull it (on my machines it's been working
fine without any issues)?

On Wed, Mar 19, 2014 at 06:16:15PM +0400, Stanislav Fomichev wrote:
> I think I'm hitting a particularly subtle issue with a NOHZ_IDLE kernel.
>
> The sequence is as follows:
> - CPU enters idle, we disable the tick
> - the hrtimer interrupt fires (for hrtimer_wakeup)
> - for clock base #1 (REALTIME) we wake up a SCHED_RT thread and
>   start the RT period timer (from start_rt_bandwidth) on clock base #0 (MONOTONIC)
> - because we already checked the expiry time for clock base #0,
>   we end up programming the wrong wakeup time (minutes away, from tick_sched_timer)
> - then we exit the idle loop and restart the tick;
>   but because tick_sched_timer is not leftmost (the leftmost one
>   is the RT period timer) we don't program it
>
> So in the end I see a running CPU without a tick interrupt.
> This eventually leads to an RCU stall on that CPU: rcu_gp_kthread
> is not woken up because there is no tick (which is how
> I started digging into this).
>
> The proposed fix runs the expired timers first and only then looks for
> the next expiry time across all clock bases.
>
> Signed-off-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
> ---
>  include/linux/hrtimer.h |  2 ++
>  kernel/hrtimer.c        | 41 +++++++++++++++++++++++++++++++----------
>  2 files changed, 33 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
> index d19a5c2d2270..520a671f90ee 100644
> --- a/include/linux/hrtimer.h
> +++ b/include/linux/hrtimer.h
> @@ -141,6 +141,7 @@ struct hrtimer_sleeper {
>   * @get_time:	function to retrieve the current time of the clock
>   * @softirq_time: the time when running the hrtimer queue in the softirq
>   * @offset:	offset of this clock to the monotonic base
> + * @next:	time of the next event on this clock base
>   */
>  struct hrtimer_clock_base {
>  	struct hrtimer_cpu_base	*cpu_base;
> @@ -151,6 +152,7 @@ struct hrtimer_clock_base {
>  	ktime_t			(*get_time)(void);
>  	ktime_t			softirq_time;
>  	ktime_t			offset;
> +	ktime_t			next;
>  };
>
>  enum hrtimer_base_type {
> diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
> index 09094361dce5..d284411e6dad 100644
> --- a/kernel/hrtimer.c
> +++ b/kernel/hrtimer.c
> @@ -882,6 +882,8 @@ EXPORT_SYMBOL_GPL(hrtimer_forward);
>  static int enqueue_hrtimer(struct hrtimer *timer,
>  			   struct hrtimer_clock_base *base)
>  {
> +	ktime_t expires;
> +
>  	debug_activate(timer);
>
>  	timerqueue_add(&base->active, &timer->node);
> @@ -893,6 +895,10 @@ static int enqueue_hrtimer(struct hrtimer *timer,
>  	 */
>  	timer->state |= HRTIMER_STATE_ENQUEUED;
>
> +	expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
> +	if (ktime_compare(base->next, expires) > 0)
> +		base->next = expires;
> +
>  	return (&timer->node == base->active.next);
>  }
>
> @@ -929,8 +935,10 @@ static void __remove_hrtimer(struct hrtimer *timer,
>  		}
>  #endif
>  	}
> -	if (!timerqueue_getnext(&base->active))
> +	if (!timerqueue_getnext(&base->active)) {
>  		base->cpu_base->active_bases &= ~(1 << base->index);
> +		base->next = ktime_set(KTIME_SEC_MAX, 0);
> +	}
>  out:
>  	timer->state = newstate;
>  }
> @@ -1282,6 +1290,7 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
>   */
>  void hrtimer_interrupt(struct clock_event_device *dev)
>  {
> +	struct hrtimer_clock_base *base;
>  	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
>  	ktime_t expires_next, now, entry_time, delta;
>  	int i, retries = 0;
> @@ -1304,7 +1313,6 @@ retry:
>  	cpu_base->expires_next.tv64 = KTIME_MAX;
>
>  	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> -		struct hrtimer_clock_base *base;
>  		struct timerqueue_node *node;
>  		ktime_t basenow;
>
> @@ -1333,14 +1341,8 @@ retry:
>  			 */
>
>  			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
> -				ktime_t expires;
> -
> -				expires = ktime_sub(hrtimer_get_expires(timer),
> -						    base->offset);
> -				if (expires.tv64 < 0)
> -					expires.tv64 = KTIME_MAX;
> -				if (expires.tv64 < expires_next.tv64)
> -					expires_next = expires;
> +				base->next = ktime_sub(hrtimer_get_expires(timer),
> +						       base->offset);
>  				break;
>  			}
>
> @@ -1349,6 +1351,25 @@ retry:
>  	}
>
>  	/*
> +	 * Because timer handler may add new timer on a different clock base,
> +	 * we need to find next expiry only after we execute all timers.
> +	 */
> +	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> +		ktime_t expires;
> +
> +		if (!(cpu_base->active_bases & (1 << i)))
> +			continue;
> +
> +		base = cpu_base->clock_base + i;
> +		expires = base->next;
> +
> +		if (expires.tv64 < 0)
> +			expires.tv64 = KTIME_MAX;
> +		if (expires.tv64 < expires_next.tv64)
> +			expires_next = expires;
> +	}
> +
> +	/*
>  	 * Store the new expiry value so the migration code can verify
>  	 * against it.
>  	 */
> --
> 1.8.3.2
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-06-04 15:47 ` Stanislav Fomichev
@ 2014-06-04 20:06   ` Thomas Gleixner
  0 siblings, 0 replies; 15+ messages in thread
From: Thomas Gleixner @ 2014-06-04 20:06 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf, linux-kernel

On Wed, 4 Jun 2014, Stanislav Fomichev wrote:
> Thomas, any estimate on when you'll be able to review this patch?
> It's been almost 3 months since I sent it. Is there any particular
> reason you don't want to pull it (on my machines it's been working
> fine without any issues)?

It's in my ever growing backlog. :(

I'm trying to get it into 3.16 though.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-03-19 14:16 ` [PATCH v2] " Stanislav Fomichev
  2014-03-24  8:42   ` Stanislav Fomichev
  2014-06-04 15:47   ` Stanislav Fomichev
@ 2014-06-22 14:46   ` Thomas Gleixner
  2014-06-23 11:03     ` Stanislav Fomichev
  2014-12-26 23:34     ` Stephen Boyd
  2 siblings, 2 replies; 15+ messages in thread
From: Thomas Gleixner @ 2014-06-22 14:46 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf, linux-kernel

On Wed, 19 Mar 2014, Stanislav Fomichev wrote:
> + * @next:	time of the next event on this clock base

What initializes that? It's 0 to begin with.

> @@ -893,6 +895,10 @@ static int enqueue_hrtimer(struct hrtimer *timer,
>  	 */
>  	timer->state |= HRTIMER_STATE_ENQUEUED;
>
> +	expires = ktime_sub(hrtimer_get_expires(timer), base->offset);

This does not work when time gets set and the offset changes. We need
to store the absolute expiry time and subtract the offset at
evaluation time.

> @@ -929,8 +935,10 @@ static void __remove_hrtimer(struct hrtimer *timer,
>  		}
>  #endif
>  	}
> -	if (!timerqueue_getnext(&base->active))
> +	if (!timerqueue_getnext(&base->active)) {
>  		base->cpu_base->active_bases &= ~(1 << base->index);
> +		base->next = ktime_set(KTIME_SEC_MAX, 0);
> +	}

And what updates base->next if there are timers pending?

> +	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> +		ktime_t expires;

So this adds the third incarnation of finding the next expiring timer
to the code. Not really helpful.

Untested patch which addresses the issues below.

Thanks,

	tglx

-------------------->

 include/linux/hrtimer.h |    2
 kernel/hrtimer.c        |  162 ++++++++++++++++++++++++++----------------------
 2 files changed, 90 insertions(+), 74 deletions(-)

Index: linux/include/linux/hrtimer.h
===================================================================
--- linux.orig/include/linux/hrtimer.h
+++ linux/include/linux/hrtimer.h
@@ -141,6 +141,7 @@ struct hrtimer_sleeper {
  * @get_time:	function to retrieve the current time of the clock
  * @softirq_time: the time when running the hrtimer queue in the softirq
  * @offset:	offset of this clock to the monotonic base
+ * @expires_next: absolute time of the next event on this clock base
  */
 struct hrtimer_clock_base {
 	struct hrtimer_cpu_base	*cpu_base;
@@ -151,6 +152,7 @@ struct hrtimer_clock_base {
 	ktime_t			(*get_time)(void);
 	ktime_t			softirq_time;
 	ktime_t			offset;
+	ktime_t			expires_next;
 };

 enum hrtimer_base_type {
Index: linux/kernel/hrtimer.c
===================================================================
--- linux.orig/kernel/hrtimer.c
+++ linux/kernel/hrtimer.c
@@ -494,6 +494,36 @@ static inline void debug_deactivate(stru
 	trace_hrtimer_cancel(timer);
 }

+/*
+ * Find the next expiring timer and return the expiry in absolute
+ * CLOCK_MONOTONIC time.
+ */
+static ktime_t hrtimer_get_expires_next(struct hrtimer_cpu_base *cpu_base)
+{
+	ktime_t expires, expires_next = { .tv64 = KTIME_MAX };
+	unsigned long idx, active = cpu_base->active_bases;
+	struct hrtimer_clock_base *base;
+
+	while (active) {
+		idx = __ffs(active);
+		active &= ~(1UL << idx);
+
+		base = cpu_base->clock_base + idx;
+		/* Adjust to CLOCK_MONOTONIC */
+		expires = ktime_sub(base->expires_next, base->offset);
+
+		if (expires.tv64 < expires_next.tv64)
+			expires_next = expires;
+	}
+	/*
+	 * If clock_was_set() has changed base->offset the result
+	 * might be negative. Fix it up to prevent a false positive in
+	 * clockevents_program_event()
+	 */
+	expires.tv64 = 0;
+	return expires_next.tv64 < 0 ? expires : expires_next;
+}
+
 /* High resolution timer related functions */
 #ifdef CONFIG_HIGH_RES_TIMERS

@@ -542,32 +572,7 @@ static inline int hrtimer_hres_active(vo
 static void
 hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
 {
-	int i;
-	struct hrtimer_clock_base *base = cpu_base->clock_base;
-	ktime_t expires, expires_next;
-
-	expires_next.tv64 = KTIME_MAX;
-
-	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) {
-		struct hrtimer *timer;
-		struct timerqueue_node *next;
-
-		next = timerqueue_getnext(&base->active);
-		if (!next)
-			continue;
-		timer = container_of(next, struct hrtimer, node);
-
-		expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
-		/*
-		 * clock_was_set() has changed base->offset so the
-		 * result might be negative. Fix it up to prevent a
-		 * false positive in clockevents_program_event()
-		 */
-		if (expires.tv64 < 0)
-			expires.tv64 = 0;
-		if (expires.tv64 < expires_next.tv64)
-			expires_next = expires;
-	}
+	ktime_t expires_next = hrtimer_get_expires_next(cpu_base);

 	if (skip_equal && expires_next.tv64 == cpu_base->expires_next.tv64)
 		return;
@@ -891,6 +896,8 @@ EXPORT_SYMBOL_GPL(hrtimer_forward);
 static int enqueue_hrtimer(struct hrtimer *timer,
 			   struct hrtimer_clock_base *base)
 {
+	int leftmost;
+
 	debug_activate(timer);

 	timerqueue_add(&base->active, &timer->node);
@@ -902,7 +909,13 @@ static int enqueue_hrtimer(struct hrtime
 	 */
 	timer->state |= HRTIMER_STATE_ENQUEUED;

-	return (&timer->node == base->active.next);
+	leftmost = &timer->node == base->active.next;
+	/*
+	 * If the new timer is the leftmost, update clock_base->expires_next.
+	 */
+	if (leftmost)
+		base->expires_next = hrtimer_get_expires(timer);
+	return leftmost;
 }

 /*
@@ -919,27 +932,45 @@ static void __remove_hrtimer(struct hrti
 			     struct hrtimer_clock_base *base,
 			     unsigned long newstate, int reprogram)
 {
-	struct timerqueue_node *next_timer;
+	struct timerqueue_node *next;
+	struct hrtimer *next_timer;
+	bool first;
+
 	if (!(timer->state & HRTIMER_STATE_ENQUEUED))
 		goto out;

-	next_timer = timerqueue_getnext(&base->active);
+	/*
+	 * If this is not the first timer of the clock base, we just
+	 * remove it. No updates, no reprogramming.
+	 */
+	first = timerqueue_getnext(&base->active) == &timer->node;
 	timerqueue_del(&base->active, &timer->node);
-	if (&timer->node == next_timer) {
+	if (!first)
+		goto out;
+
+	/*
+	 * Update cpu base and clock base. This needs to be done
+	 * before calling into hrtimer_force_reprogram() as that
+	 * relies on active_bases and expires_next.
+	 */
+	next = timerqueue_getnext(&base->active);
+	if (!next) {
+		base->cpu_base->active_bases &= ~(1 << base->index);
+		base->expires_next.tv64 = KTIME_MAX;
+	} else {
+		next_timer = container_of(next, struct hrtimer, node);
+		base->expires_next = hrtimer_get_expires(next_timer);
+	}
+
 #ifdef CONFIG_HIGH_RES_TIMERS
-		/* Reprogram the clock event device. if enabled */
-		if (reprogram && hrtimer_hres_active()) {
-			ktime_t expires;
-
-			expires = ktime_sub(hrtimer_get_expires(timer),
-					    base->offset);
-			if (base->cpu_base->expires_next.tv64 == expires.tv64)
-				hrtimer_force_reprogram(base->cpu_base, 1);
-		}
-#endif
+	if (reprogram && hrtimer_hres_active()) {
+		ktime_t expires;
+
+		expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
+		if (base->cpu_base->expires_next.tv64 == expires.tv64)
+			hrtimer_force_reprogram(base->cpu_base, 1);
 	}
-	if (!timerqueue_getnext(&base->active))
-		base->cpu_base->active_bases &= ~(1 << base->index);
+#endif
 out:
 	timer->state = newstate;
 }
@@ -1154,35 +1185,21 @@ EXPORT_SYMBOL_GPL(hrtimer_get_remaining)
 ktime_t hrtimer_get_next_event(void)
 {
 	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
-	struct hrtimer_clock_base *base = cpu_base->clock_base;
-	ktime_t delta, mindelta = { .tv64 = KTIME_MAX };
+	ktime_t next, delta = { .tv64 = KTIME_MAX };
 	unsigned long flags;
-	int i;

 	raw_spin_lock_irqsave(&cpu_base->lock, flags);

 	if (!hrtimer_hres_active()) {
-		for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) {
-			struct hrtimer *timer;
-			struct timerqueue_node *next;
-
-			next = timerqueue_getnext(&base->active);
-			if (!next)
-				continue;
-
-			timer = container_of(next, struct hrtimer, node);
-			delta.tv64 = hrtimer_get_expires_tv64(timer);
-			delta = ktime_sub(delta, base->get_time());
-			if (delta.tv64 < mindelta.tv64)
-				mindelta.tv64 = delta.tv64;
-		}
+		next = hrtimer_get_expires_next(cpu_base);
+		delta = ktime_sub(next, ktime_get());
+		if (delta.tv64 < 0)
+			delta.tv64 = 0;
 	}

 	raw_spin_unlock_irqrestore(&cpu_base->lock, flags);

-	if (mindelta.tv64 < 0)
-		mindelta.tv64 = 0;
-	return mindelta;
+	return delta;
 }
 #endif

@@ -1292,6 +1309,7 @@ static void __run_hrtimer(struct hrtimer
  */
 void hrtimer_interrupt(struct clock_event_device *dev)
 {
+	struct hrtimer_clock_base *base;
 	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
 	ktime_t expires_next, now, entry_time, delta;
 	int i, retries = 0;
@@ -1303,7 +1321,6 @@ void hrtimer_interrupt(struct clock_even
 	raw_spin_lock(&cpu_base->lock);
 	entry_time = now = hrtimer_update_base(cpu_base);
 retry:
-	expires_next.tv64 = KTIME_MAX;
 	/*
 	 * We set expires_next to KTIME_MAX here with cpu_base->lock
 	 * held to prevent that a timer is enqueued in our queue via
@@ -1314,7 +1331,6 @@ retry:
 	cpu_base->expires_next.tv64 = KTIME_MAX;

 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		struct hrtimer_clock_base *base;
 		struct timerqueue_node *node;
 		ktime_t basenow;

@@ -1342,23 +1358,20 @@ retry:
 			 * timer will have to trigger a wakeup anyway.
 			 */

-			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
-				ktime_t expires;
-
-				expires = ktime_sub(hrtimer_get_expires(timer),
-						    base->offset);
-				if (expires.tv64 < 0)
-					expires.tv64 = KTIME_MAX;
-				if (expires.tv64 < expires_next.tv64)
-					expires_next = expires;
+			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
 				break;
-			}

 			__run_hrtimer(timer, &basenow);
 		}
 	}

 	/*
+	 * We might have dropped cpu_base->lock and the callbacks
+	 * might have added timers. Find the timer which expires first.
+	 */
+	expires_next = hrtimer_get_expires_next(cpu_base);
+
+	/*
 	 * Store the new expiry value so the migration code can verify
 	 * against it.
 	 */
@@ -1678,6 +1691,7 @@ static void init_hrtimers_cpu(int cpu)
 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
 		cpu_base->clock_base[i].cpu_base = cpu_base;
 		timerqueue_init_head(&cpu_base->clock_base[i].active);
+		cpu_base->clock_base[i].expires_next.tv64 = KTIME_MAX;
 	}

 	hrtimer_init_hres(cpu_base);

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed
  2014-06-22 14:46 ` Thomas Gleixner
@ 2014-06-23 11:03   ` Stanislav Fomichev
  2014-06-23 17:19     ` Thomas Gleixner
  0 siblings, 1 reply; 15+ messages in thread
From: Stanislav Fomichev @ 2014-06-23 11:03 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm,
	mbohan, david.vrabel, david.engraf, linux-kernel

Hi Thomas,

> > + * @next:	time of the next event on this clock base
>
> What initializes that? It's 0 to begin with.

I thought I could skip initialization because I update base->next in
the interrupt or in __remove_hrtimer, like:
- enqueue_timer, base->next is 0
- reprogram device
- device fires -> hrtimer_interrupt
- __run_hrtimer
- __remove_hrtimer
  - if last, base->next = KTIME_MAX
  - otherwise base->next = ktime_sub(hrtimer_get_expires(timer), base->offset)
    in hrtimer_interrupt

> > @@ -893,6 +895,10 @@ static int enqueue_hrtimer(struct hrtimer *timer,
> >  	 */
> >  	timer->state |= HRTIMER_STATE_ENQUEUED;
> >
> > +	expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
>
> This does not work when time gets set and the offset changes. We need
> to store the absolute expiry time and subtract the offset at
> evaluation time.

Hm, looking at this code after a while, it seems I don't need to update
base->next in enqueue_hrtimer at all. It's enough to set it to KTIME_MAX
in __remove_hrtimer, or to the actual value upon breaking out of the
__run_hrtimer loop in hrtimer_interrupt.

> > @@ -929,8 +935,10 @@ static void __remove_hrtimer(struct hrtimer *timer,
> >  		}
> >  #endif
> >  	}
> > -	if (!timerqueue_getnext(&base->active))
> > +	if (!timerqueue_getnext(&base->active)) {
> >  		base->cpu_base->active_bases &= ~(1 << base->index);
> > +		base->next = ktime_set(KTIME_SEC_MAX, 0);
> > +	}
>
> And what updates base->next if there are timers pending?

See above: hrtimer_interrupt updates it before breaking out of the loop,
and __remove_hrtimer sets it to KTIME_MAX if the last timer goes away.

> > +	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> > +		ktime_t expires;
>
> So this adds the third incarnation of finding the next expiring timer
> to the code. Not really helpful.

I didn't really think about all the other places; refactoring may come
in another patch.

> Untested patch which addresses the issues below.

Aside from a small nitpick below, it looks reasonable; I'll try to run
it on a couple of machines. Should I send you a v3 afterwards with the
changelog, or would a Tested-by be enough?

> +	while (active) {
> +		idx = __ffs(active);
> +		active &= ~(1UL << idx);

Is there any reason you did that instead of the conventional:

	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
		if (!(cpu_base->active_bases & (1 << i)))
			continue;
		...
	}

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed 2014-06-23 11:03 ` Stanislav Fomichev @ 2014-06-23 17:19 ` Thomas Gleixner 2014-06-23 18:04 ` Stanislav Fomichev 0 siblings, 1 reply; 15+ messages in thread From: Thomas Gleixner @ 2014-06-23 17:19 UTC (permalink / raw) To: Stanislav Fomichev Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm, mbohan, david.vrabel, david.engraf, linux-kernel On Mon, 23 Jun 2014, Stanislav Fomichev wrote: > > > > + * @next: time of the next event on this clock base > > > > What initializes that? It's 0 to begin with. > I thought I can skip initialization because I update base->next > in the interrupt or in __remove_hrtimer, like: > - enqueue_timer, base->next is 0 Right, and that makes inconsistent state. We can do without such magic, really. > - reprogram device > - device fires -> hrtimer_interrupt > - __run_hrtimer > - __remove_hrtimer > - if last base->next = KTIME_MAX > - otherwise base->next = ktime_sub(hrtimer_get_expires(timer), base->offset) > in hrtimer_interrupt > > > > @@ -893,6 +895,10 @@ static int enqueue_hrtimer(struct hrtimer *timer, > > > */ > > > timer->state |= HRTIMER_STATE_ENQUEUED; > > > > > > + expires = ktime_sub(hrtimer_get_expires(timer), base->offset); > > > > This does not work when time gets set and the offset changes. We need > > to store the absolute expiry time and subtract the offset at > > evaluation time. > Hm, looking at this code after a while it seems I don't need to update > base->next in enqueue_hrtimer. It's enough to set it to KTIME_MAX in > __remove_hrtimer or to actual value upon breaking from __run_hrtimer > loop in hrtimer_interrupt. Huch? Why don't you need to update in enqueue_hrtimer? That's the whole point of this excercise. 
hrtimer_interrupt()
    lock(cpu_base)
    if (not_expired(base0))
        base0->next = expiry;
    if (expired(base1))
        unlock(cpu_base)
                /* remote enqueue */
                lock(cpu_base)
        run_timer_fn()
            enqueue_to(base0)
        unlock(cpu_base)
        lock(cpu_base)
    evaluate_base_next()

So in case the enqueued timer is earlier than base0->next, you are
looking at the wrong data. Same as in the current code and why you
started to look into this at all.

> > > @@ -929,8 +935,10 @@ static void __remove_hrtimer(struct hrtimer *timer,
> > >  	}
> > >  #endif
> > >  	}
> > > -	if (!timerqueue_getnext(&base->active))
> > > +	if (!timerqueue_getnext(&base->active)) {
> > >  		base->cpu_base->active_bases &= ~(1 << base->index);
> > > +		base->next = ktime_set(KTIME_SEC_MAX, 0);
> > > +	}
> >
> > And what updates base->next if there are timers pending?
> See above, hrtimer_interrupt updates it before breaking or sets to
> KTIME_MAX in __remove_hrtimer if it's the last one.

See above WHY it does NOT work.

> > > +	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
> > > +		ktime_t expires;
> >
> > So this adds the third incarnation of finding the next expiring timer
> > to the code. Not really helpful.
> Didn't really think about all the other places, refactoring may come in
> another patch.

No, we're not going to add another one in the first place as I know
how MAY COME works: it translates to NEVER, unless I do it myself.

> > Untested patch which addresses the issues below.
> Aside from a small nitpick below, looks reasonable, I'll try to run it on a
> couple of machines.
> Should I send you a v3 afterwards with the changelog or
> tested-by would be enough?

Tested-by is fine. I can cobble a changelog together.

> > +	while (active) {
> > +		idx = __ffs(active);
> > +		active &= ~(1UL << idx);
> Is there any reason you did that instead of conventional:

I thought about using __ffs before, just never got around to it.

Thanks,

	tglx
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed 2014-06-23 17:19 ` Thomas Gleixner @ 2014-06-23 18:04 ` Stanislav Fomichev 2014-06-24 17:18 ` Stanislav Fomichev 0 siblings, 1 reply; 15+ messages in thread From: Stanislav Fomichev @ 2014-06-23 18:04 UTC (permalink / raw) To: Thomas Gleixner Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm, mbohan, david.vrabel, david.engraf, linux-kernel > So in case the enqueued timer is earlier than base0->next, you are > looking at the wrong data. Same as in the current code and why you > started to look into this at all. Right, even if we do a local enqueue from higher base timer into lower base we still need to update base->next in the enqueue. > See above WHY it does NOT work. It works, but without reprogramming in __remove_hrtimer we can end up with a spurious tick which will just rearm clockdev and do no useful work. At least no stall :-/ > No, we're not going to add another one in the first place as I know > how MAY COME works: it translates to NEVER, unless I do it myself. I'm fine with that, the point I'm trying to make is that I didn't even think about any cleanup/refactoring in the first place, I just tried to get an issue fixed and get some feedback. It's even better if we can also do a cleanup in the process. > > > Untested patch which addresses the issues below. > > Aside from a small nitpick below, looks reasonable, I'll try to run it on a > > couple of machines. > > Should I send you a v3 afterwards with the changelog or > > tested-by would be enough? > > Tested-by is fine. I can cobble a changelog together. Ok, I have a couple of machines running for ~5 hours without any visible issues, but let's give it at least a day. > > > + while (active) { > > > + idx = __ffs(active); > > > + active &= ~(1UL << idx); > > Is there any reason you did that instead of conventional: > > I thought about using __ffs before, just never came around it. 
Nothing against it, seems totally legit, just looks inconsistent. Now we have one place where we do __ffs stuff and other places where we do for(;HRTIMER_MAX_CLOCK_BASES;).
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed 2014-06-23 18:04 ` Stanislav Fomichev @ 2014-06-24 17:18 ` Stanislav Fomichev 0 siblings, 0 replies; 15+ messages in thread
From: Stanislav Fomichev @ 2014-06-24 17:18 UTC (permalink / raw)
To: Thomas Gleixner
Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm, mbohan, david.vrabel, david.engraf, linux-kernel

> > Tested-by is fine. I can cobble a changelog together.

No issues after more than a day of running, I think you can use my

Tested-By: Stanislav Fomichev <stfomichev@yandex-team.ru>
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed 2014-06-22 14:46 ` Thomas Gleixner 2014-06-23 11:03 ` Stanislav Fomichev @ 2014-12-26 23:34 ` Stephen Boyd 2014-12-29 12:24 ` Stanislav Fomichev 1 sibling, 1 reply; 15+ messages in thread From: Stephen Boyd @ 2014-12-26 23:34 UTC (permalink / raw) To: Thomas Gleixner, Stanislav Fomichev Cc: john.stultz, prarit, paul.gortmaker, juri.lelli, ddaney.cavm, mbohan, david.vrabel, david.engraf, linux-kernel On 06/22/2014 07:46 AM, Thomas Gleixner wrote: >> + for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) { >> + ktime_t expires; > So this adds the third incarnation of finding the next expiring timer > to the code. Not really helpful. > > Untested patch which addresses the issues below. We think we're running into the same issue that's fixed by this patch, but the occurrence is very rare so reproducing is hard. Did this ever go anywhere? > > --------------------> > include/linux/hrtimer.h | 2 > kernel/hrtimer.c | 162 ++++++++++++++++++++++++++---------------------- > 2 files changed, 90 insertions(+), 74 deletions(-) > > Index: linux/include/linux/hrtimer.h > =================================================================== > --- linux.orig/include/linux/hrtimer.h > +++ linux/include/linux/hrtimer.h > @@ -141,6 +141,7 @@ struct hrtimer_sleeper { > * @get_time: function to retrieve the current time of the clock > * @softirq_time: the time when running the hrtimer queue in the softirq > * @offset: offset of this clock to the monotonic base > + * @expires_next: absolute time of the next event on this clock base > */ > struct hrtimer_clock_base { > struct hrtimer_cpu_base *cpu_base; > @@ -151,6 +152,7 @@ struct hrtimer_clock_base { > ktime_t (*get_time)(void); > ktime_t softirq_time; > ktime_t offset; > + ktime_t expires_next; > }; > > enum hrtimer_base_type { > Index: linux/kernel/hrtimer.c > =================================================================== > --- linux.orig/kernel/hrtimer.c > +++ 
linux/kernel/hrtimer.c > @@ -494,6 +494,36 @@ static inline void debug_deactivate(stru > trace_hrtimer_cancel(timer); > } > > +/* > + * Find the next expiring timer and return the expiry in absolute > + * CLOCK_MONOTONIC time. > + */ > +static ktime_t hrtimer_get_expires_next(struct hrtimer_cpu_base *cpu_base) > +{ > + ktime_t expires, expires_next = { .tv64 = KTIME_MAX }; > + unsigned long idx, active = cpu_base->active_bases; > + struct hrtimer_clock_base *base; > + > + while (active) { > + idx = __ffs(active); > + active &= ~(1UL << idx); > + > + base = cpu_base->clock_base + idx; > + /* Adjust to CLOCK_MONOTONIC */ > + expires = ktime_sub(base->expires_next, base->offset); > + > + if (expires.tv64 < expires_next.tv64) > + expires_next = expires; > + } > + /* > + * If clock_was_set() has changed base->offset the result > + * might be negative. Fix it up to prevent a false positive in > + * clockevents_program_event() > + */ > + expires.tv64 = 0; > + return expires_next.tv64 < 0 ? expires : expires_next; > +} > + > /* High resolution timer related functions */ > #ifdef CONFIG_HIGH_RES_TIMERS > > @@ -542,32 +572,7 @@ static inline int hrtimer_hres_active(vo > static void > hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal) > { > - int i; > - struct hrtimer_clock_base *base = cpu_base->clock_base; > - ktime_t expires, expires_next; > - > - expires_next.tv64 = KTIME_MAX; > - > - for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) { > - struct hrtimer *timer; > - struct timerqueue_node *next; > - > - next = timerqueue_getnext(&base->active); > - if (!next) > - continue; > - timer = container_of(next, struct hrtimer, node); > - > - expires = ktime_sub(hrtimer_get_expires(timer), base->offset); > - /* > - * clock_was_set() has changed base->offset so the > - * result might be negative. 
Fix it up to prevent a > - * false positive in clockevents_program_event() > - */ > - if (expires.tv64 < 0) > - expires.tv64 = 0; > - if (expires.tv64 < expires_next.tv64) > - expires_next = expires; > - } > + ktime_t expires_next = hrtimer_get_expires_next(cpu_base); > > if (skip_equal && expires_next.tv64 == cpu_base->expires_next.tv64) > return; > @@ -891,6 +896,8 @@ EXPORT_SYMBOL_GPL(hrtimer_forward); > static int enqueue_hrtimer(struct hrtimer *timer, > struct hrtimer_clock_base *base) > { > + int leftmost; > + > debug_activate(timer); > > timerqueue_add(&base->active, &timer->node); > @@ -902,7 +909,13 @@ static int enqueue_hrtimer(struct hrtime > */ > timer->state |= HRTIMER_STATE_ENQUEUED; > > - return (&timer->node == base->active.next); > + leftmost = &timer->node == base->active.next; > + /* > + * If the new timer is the leftmost, update clock_base->expires_next. > + */ > + if (leftmost) > + base->expires_next = hrtimer_get_expires(timer); > + return leftmost; > } > > /* > @@ -919,27 +932,45 @@ static void __remove_hrtimer(struct hrti > struct hrtimer_clock_base *base, > unsigned long newstate, int reprogram) > { > - struct timerqueue_node *next_timer; > + struct timerqueue_node *next; > + struct hrtimer *next_timer; > + bool first; > + > if (!(timer->state & HRTIMER_STATE_ENQUEUED)) > goto out; > > - next_timer = timerqueue_getnext(&base->active); > + /* > + * If this is not the first timer of the clock base, we just > + * remove it. No updates, no reprogramming. > + */ > + first = timerqueue_getnext(&base->active) == &timer->node; > timerqueue_del(&base->active, &timer->node); > - if (&timer->node == next_timer) { > + if (!first) > + goto out; > + > + /* > + * Update cpu base and clock base. This needs to be done > + * before calling into hrtimer_force_reprogram() as that > + * relies on active_bases and expires_next. 
> + */ > + next = timerqueue_getnext(&base->active); > + if (!next) { > + base->cpu_base->active_bases &= ~(1 << base->index); > + base->expires_next.tv64 = KTIME_MAX; > + } else { > + next_timer = container_of(next, struct hrtimer, node); > + base->expires_next = hrtimer_get_expires(next_timer); > + } > + > #ifdef CONFIG_HIGH_RES_TIMERS > - /* Reprogram the clock event device. if enabled */ > - if (reprogram && hrtimer_hres_active()) { > - ktime_t expires; > - > - expires = ktime_sub(hrtimer_get_expires(timer), > - base->offset); > - if (base->cpu_base->expires_next.tv64 == expires.tv64) > - hrtimer_force_reprogram(base->cpu_base, 1); > - } > -#endif > + if (reprogram && hrtimer_hres_active()) { > + ktime_t expires; > + > + expires = ktime_sub(hrtimer_get_expires(timer), base->offset); > + if (base->cpu_base->expires_next.tv64 == expires.tv64) > + hrtimer_force_reprogram(base->cpu_base, 1); > } > - if (!timerqueue_getnext(&base->active)) > - base->cpu_base->active_bases &= ~(1 << base->index); > +#endif > out: > timer->state = newstate; > } > @@ -1154,35 +1185,21 @@ EXPORT_SYMBOL_GPL(hrtimer_get_remaining) > ktime_t hrtimer_get_next_event(void) > { > struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases); > - struct hrtimer_clock_base *base = cpu_base->clock_base; > - ktime_t delta, mindelta = { .tv64 = KTIME_MAX }; > + ktime_t next, delta = { .tv64 = KTIME_MAX }; > unsigned long flags; > - int i; > > raw_spin_lock_irqsave(&cpu_base->lock, flags); > > if (!hrtimer_hres_active()) { > - for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) { > - struct hrtimer *timer; > - struct timerqueue_node *next; > - > - next = timerqueue_getnext(&base->active); > - if (!next) > - continue; > - > - timer = container_of(next, struct hrtimer, node); > - delta.tv64 = hrtimer_get_expires_tv64(timer); > - delta = ktime_sub(delta, base->get_time()); > - if (delta.tv64 < mindelta.tv64) > - mindelta.tv64 = delta.tv64; > - } > + next = hrtimer_get_expires_next(cpu_base); > + 
delta = ktime_sub(next, ktime_get()); > + if (delta.tv64 < 0) > + delta.tv64 = 0; > } > > raw_spin_unlock_irqrestore(&cpu_base->lock, flags); > > - if (mindelta.tv64 < 0) > - mindelta.tv64 = 0; > - return mindelta; > + return delta; > } > #endif > > @@ -1292,6 +1309,7 @@ static void __run_hrtimer(struct hrtimer > */ > void hrtimer_interrupt(struct clock_event_device *dev) > { > + struct hrtimer_clock_base *base; > struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases); > ktime_t expires_next, now, entry_time, delta; > int i, retries = 0; > @@ -1303,7 +1321,6 @@ void hrtimer_interrupt(struct clock_even > raw_spin_lock(&cpu_base->lock); > entry_time = now = hrtimer_update_base(cpu_base); > retry: > - expires_next.tv64 = KTIME_MAX; > /* > * We set expires_next to KTIME_MAX here with cpu_base->lock > * held to prevent that a timer is enqueued in our queue via > @@ -1314,7 +1331,6 @@ retry: > cpu_base->expires_next.tv64 = KTIME_MAX; > > for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) { > - struct hrtimer_clock_base *base; > struct timerqueue_node *node; > ktime_t basenow; > > @@ -1342,23 +1358,20 @@ retry: > * timer will have to trigger a wakeup anyway. > */ > > - if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) { > - ktime_t expires; > - > - expires = ktime_sub(hrtimer_get_expires(timer), > - base->offset); > - if (expires.tv64 < 0) > - expires.tv64 = KTIME_MAX; > - if (expires.tv64 < expires_next.tv64) > - expires_next = expires; > + if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) > break; > - } > > __run_hrtimer(timer, &basenow); > } > } > > /* > + * We might have dropped cpu_base->lock and the callbacks > + * might have added timers. Find the timer which expires first. > + */ > + expires_next = hrtimer_get_expires_next(cpu_base); > + > + /* > * Store the new expiry value so the migration code can verify > * against it. 
> */ > @@ -1678,6 +1691,7 @@ static void init_hrtimers_cpu(int cpu) > for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) { > cpu_base->clock_base[i].cpu_base = cpu_base; > timerqueue_init_head(&cpu_base->clock_base[i].active); > + cpu_base->clock_base[i].expires_next.tv64 = KTIME_MAX; > } > > hrtimer_init_hres(cpu_base); > -- Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2] hrtimers: calculate expires_next after all timers are executed 2014-12-26 23:34 ` Stephen Boyd @ 2014-12-29 12:24 ` Stanislav Fomichev 0 siblings, 0 replies; 15+ messages in thread
From: Stanislav Fomichev @ 2014-12-29 12:24 UTC (permalink / raw)
To: Stephen Boyd, Thomas Gleixner
Cc: john.stultz@linaro.org, prarit@redhat.com, paul.gortmaker@windriver.com, juri.lelli@gmail.com, ddaney.cavm@gmail.com, mbohan@codeaurora.org, david.vrabel@citrix.com, david.engraf@sysgo.com, linux-kernel@vger.kernel.org

No, Thomas hasn't pushed the fix yet.

27.12.2014, 02:34, "Stephen Boyd" <sboyd@codeaurora.org>:
> On 06/22/2014 07:46 AM, Thomas Gleixner wrote:
>>> +	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
>>> +		ktime_t expires;
>> So this adds the third incarnation of finding the next expiring timer
>> to the code. Not really helpful.
>>
>> Untested patch which addresses the issues below.
>
> We think we're running into the same issue that's fixed by this patch,
> but the occurrence is very rare so reproducing is hard. Did this ever go
> anywhere?
>> --------------------> >> include/linux/hrtimer.h | 2 >> kernel/hrtimer.c | 162 ++++++++++++++++++++++++++---------------------- >> 2 files changed, 90 insertions(+), 74 deletions(-) >> >> Index: linux/include/linux/hrtimer.h >> =================================================================== >> --- linux.orig/include/linux/hrtimer.h >> +++ linux/include/linux/hrtimer.h >> @@ -141,6 +141,7 @@ struct hrtimer_sleeper { >> * @get_time: function to retrieve the current time of the clock >> * @softirq_time: the time when running the hrtimer queue in the softirq >> * @offset: offset of this clock to the monotonic base >> + * @expires_next: absolute time of the next event on this clock base >> */ >> struct hrtimer_clock_base { >> struct hrtimer_cpu_base *cpu_base; >> @@ -151,6 +152,7 @@ struct hrtimer_clock_base { >> ktime_t (*get_time)(void); >> ktime_t softirq_time; >> ktime_t offset; >> + ktime_t expires_next; >> }; >> >> enum hrtimer_base_type { >> Index: linux/kernel/hrtimer.c >> =================================================================== >> --- linux.orig/kernel/hrtimer.c >> +++ linux/kernel/hrtimer.c >> @@ -494,6 +494,36 @@ static inline void debug_deactivate(stru >> trace_hrtimer_cancel(timer); >> } >> >> +/* >> + * Find the next expiring timer and return the expiry in absolute >> + * CLOCK_MONOTONIC time. >> + */ >> +static ktime_t hrtimer_get_expires_next(struct hrtimer_cpu_base *cpu_base) >> +{ >> + ktime_t expires, expires_next = { .tv64 = KTIME_MAX }; >> + unsigned long idx, active = cpu_base->active_bases; >> + struct hrtimer_clock_base *base; >> + >> + while (active) { >> + idx = __ffs(active); >> + active &= ~(1UL << idx); >> + >> + base = cpu_base->clock_base + idx; >> + /* Adjust to CLOCK_MONOTONIC */ >> + expires = ktime_sub(base->expires_next, base->offset); >> + >> + if (expires.tv64 < expires_next.tv64) >> + expires_next = expires; >> + } >> + /* >> + * If clock_was_set() has changed base->offset the result >> + * might be negative. 
Fix it up to prevent a false positive in >> + * clockevents_program_event() >> + */ >> + expires.tv64 = 0; >> + return expires_next.tv64 < 0 ? expires : expires_next; >> +} >> + >> /* High resolution timer related functions */ >> #ifdef CONFIG_HIGH_RES_TIMERS >> >> @@ -542,32 +572,7 @@ static inline int hrtimer_hres_active(vo >> static void >> hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal) >> { >> - int i; >> - struct hrtimer_clock_base *base = cpu_base->clock_base; >> - ktime_t expires, expires_next; >> - >> - expires_next.tv64 = KTIME_MAX; >> - >> - for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) { >> - struct hrtimer *timer; >> - struct timerqueue_node *next; >> - >> - next = timerqueue_getnext(&base->active); >> - if (!next) >> - continue; >> - timer = container_of(next, struct hrtimer, node); >> - >> - expires = ktime_sub(hrtimer_get_expires(timer), base->offset); >> - /* >> - * clock_was_set() has changed base->offset so the >> - * result might be negative. Fix it up to prevent a >> - * false positive in clockevents_program_event() >> - */ >> - if (expires.tv64 < 0) >> - expires.tv64 = 0; >> - if (expires.tv64 < expires_next.tv64) >> - expires_next = expires; >> - } >> + ktime_t expires_next = hrtimer_get_expires_next(cpu_base); >> >> if (skip_equal && expires_next.tv64 == cpu_base->expires_next.tv64) >> return; >> @@ -891,6 +896,8 @@ EXPORT_SYMBOL_GPL(hrtimer_forward); >> static int enqueue_hrtimer(struct hrtimer *timer, >> struct hrtimer_clock_base *base) >> { >> + int leftmost; >> + >> debug_activate(timer); >> >> timerqueue_add(&base->active, &timer->node); >> @@ -902,7 +909,13 @@ static int enqueue_hrtimer(struct hrtime >> */ >> timer->state |= HRTIMER_STATE_ENQUEUED; >> >> - return (&timer->node == base->active.next); >> + leftmost = &timer->node == base->active.next; >> + /* >> + * If the new timer is the leftmost, update clock_base->expires_next. 
>> + */ >> + if (leftmost) >> + base->expires_next = hrtimer_get_expires(timer); >> + return leftmost; >> } >> >> /* >> @@ -919,27 +932,45 @@ static void __remove_hrtimer(struct hrti >> struct hrtimer_clock_base *base, >> unsigned long newstate, int reprogram) >> { >> - struct timerqueue_node *next_timer; >> + struct timerqueue_node *next; >> + struct hrtimer *next_timer; >> + bool first; >> + >> if (!(timer->state & HRTIMER_STATE_ENQUEUED)) >> goto out; >> >> - next_timer = timerqueue_getnext(&base->active); >> + /* >> + * If this is not the first timer of the clock base, we just >> + * remove it. No updates, no reprogramming. >> + */ >> + first = timerqueue_getnext(&base->active) == &timer->node; >> timerqueue_del(&base->active, &timer->node); >> - if (&timer->node == next_timer) { >> + if (!first) >> + goto out; >> + >> + /* >> + * Update cpu base and clock base. This needs to be done >> + * before calling into hrtimer_force_reprogram() as that >> + * relies on active_bases and expires_next. >> + */ >> + next = timerqueue_getnext(&base->active); >> + if (!next) { >> + base->cpu_base->active_bases &= ~(1 << base->index); >> + base->expires_next.tv64 = KTIME_MAX; >> + } else { >> + next_timer = container_of(next, struct hrtimer, node); >> + base->expires_next = hrtimer_get_expires(next_timer); >> + } >> + >> #ifdef CONFIG_HIGH_RES_TIMERS >> - /* Reprogram the clock event device. 
if enabled */ >> - if (reprogram && hrtimer_hres_active()) { >> - ktime_t expires; >> - >> - expires = ktime_sub(hrtimer_get_expires(timer), >> - base->offset); >> - if (base->cpu_base->expires_next.tv64 == expires.tv64) >> - hrtimer_force_reprogram(base->cpu_base, 1); >> - } >> -#endif >> + if (reprogram && hrtimer_hres_active()) { >> + ktime_t expires; >> + >> + expires = ktime_sub(hrtimer_get_expires(timer), base->offset); >> + if (base->cpu_base->expires_next.tv64 == expires.tv64) >> + hrtimer_force_reprogram(base->cpu_base, 1); >> } >> - if (!timerqueue_getnext(&base->active)) >> - base->cpu_base->active_bases &= ~(1 << base->index); >> +#endif >> out: >> timer->state = newstate; >> } >> @@ -1154,35 +1185,21 @@ EXPORT_SYMBOL_GPL(hrtimer_get_remaining) >> ktime_t hrtimer_get_next_event(void) >> { >> struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases); >> - struct hrtimer_clock_base *base = cpu_base->clock_base; >> - ktime_t delta, mindelta = { .tv64 = KTIME_MAX }; >> + ktime_t next, delta = { .tv64 = KTIME_MAX }; >> unsigned long flags; >> - int i; >> >> raw_spin_lock_irqsave(&cpu_base->lock, flags); >> >> if (!hrtimer_hres_active()) { >> - for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) { >> - struct hrtimer *timer; >> - struct timerqueue_node *next; >> - >> - next = timerqueue_getnext(&base->active); >> - if (!next) >> - continue; >> - >> - timer = container_of(next, struct hrtimer, node); >> - delta.tv64 = hrtimer_get_expires_tv64(timer); >> - delta = ktime_sub(delta, base->get_time()); >> - if (delta.tv64 < mindelta.tv64) >> - mindelta.tv64 = delta.tv64; >> - } >> + next = hrtimer_get_expires_next(cpu_base); >> + delta = ktime_sub(next, ktime_get()); >> + if (delta.tv64 < 0) >> + delta.tv64 = 0; >> } >> >> raw_spin_unlock_irqrestore(&cpu_base->lock, flags); >> >> - if (mindelta.tv64 < 0) >> - mindelta.tv64 = 0; >> - return mindelta; >> + return delta; >> } >> #endif >> >> @@ -1292,6 +1309,7 @@ static void __run_hrtimer(struct hrtimer >> 
*/ >> void hrtimer_interrupt(struct clock_event_device *dev) >> { >> + struct hrtimer_clock_base *base; >> struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases); >> ktime_t expires_next, now, entry_time, delta; >> int i, retries = 0; >> @@ -1303,7 +1321,6 @@ void hrtimer_interrupt(struct clock_even >> raw_spin_lock(&cpu_base->lock); >> entry_time = now = hrtimer_update_base(cpu_base); >> retry: >> - expires_next.tv64 = KTIME_MAX; >> /* >> * We set expires_next to KTIME_MAX here with cpu_base->lock >> * held to prevent that a timer is enqueued in our queue via >> @@ -1314,7 +1331,6 @@ retry: >> cpu_base->expires_next.tv64 = KTIME_MAX; >> >> for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) { >> - struct hrtimer_clock_base *base; >> struct timerqueue_node *node; >> ktime_t basenow; >> >> @@ -1342,23 +1358,20 @@ retry: >> * timer will have to trigger a wakeup anyway. >> */ >> >> - if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) { >> - ktime_t expires; >> - >> - expires = ktime_sub(hrtimer_get_expires(timer), >> - base->offset); >> - if (expires.tv64 < 0) >> - expires.tv64 = KTIME_MAX; >> - if (expires.tv64 < expires_next.tv64) >> - expires_next = expires; >> + if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) >> break; >> - } >> >> __run_hrtimer(timer, &basenow); >> } >> } >> >> /* >> + * We might have dropped cpu_base->lock and the callbacks >> + * might have added timers. Find the timer which expires first. >> + */ >> + expires_next = hrtimer_get_expires_next(cpu_base); >> + >> + /* >> * Store the new expiry value so the migration code can verify >> * against it. >> */ >> @@ -1678,6 +1691,7 @@ static void init_hrtimers_cpu(int cpu) >> for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) { >> cpu_base->clock_base[i].cpu_base = cpu_base; >> timerqueue_init_head(&cpu_base->clock_base[i].active); >> + cpu_base->clock_base[i].expires_next.tv64 = KTIME_MAX; >> } >> >> hrtimer_init_hres(cpu_base); > > -- > Qualcomm Innovation Center, Inc. 
is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
end of thread, other threads:[~2014-12-29 12:25 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed -- links below jump to the message on this page):
2014-03-17 14:37 [PATCH] hrtimers: calculate expires_next after all timers are executed Stanislav Fomichev
2014-03-19 10:59 ` Thomas Gleixner
2014-03-19 14:16 ` [PATCH v2] " Stanislav Fomichev
2014-03-24  8:42 ` Stanislav Fomichev
2014-03-24  9:04 ` Thomas Gleixner
2014-05-05  9:26 ` Stanislav Fomichev
2014-06-04 15:47 ` Stanislav Fomichev
2014-06-04 20:06 ` Thomas Gleixner
2014-06-22 14:46 ` Thomas Gleixner
2014-06-23 11:03 ` Stanislav Fomichev
2014-06-23 17:19 ` Thomas Gleixner
2014-06-23 18:04 ` Stanislav Fomichev
2014-06-24 17:18 ` Stanislav Fomichev
2014-12-26 23:34 ` Stephen Boyd
2014-12-29 12:24 ` Stanislav Fomichev