* [PATCH RT 2/7] timer: Invoke timer_start_debug() where it makes sense
From: Daniel Wagner @ 2018-04-04 7:16 UTC
To: linux-kernel
Cc: linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, stable, rt, Sebastian Andrzej Siewior
From: Thomas Gleixner <tglx@linutronix.de>
The timer start debug function is called before the proper timer base is
set. As a consequence, the trace data contains stale CPU and flags
values.

Call the debug function after setting the new base and flags.
Fixes: 500462a9de65 ("timers: Switch to a non-cascading wheel")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Cc: rt@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/time/timer.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index a8246d79cb5a..6b322aea1c46 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -838,8 +838,6 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
 	if (!ret && pending_only)
 		goto out_unlock;
 
-	debug_activate(timer, expires);
-
 	new_base = get_target_base(base, pinned);
 
 	if (base != new_base) {
@@ -854,6 +852,8 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
 		base = switch_timer_base(timer, base, new_base);
 	}
 
+	debug_activate(timer, expires);
+
 	timer->expires = expires;
 	internal_add_timer(base, timer);
--
2.14.3
* [PATCH RT 6/7] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"
From: Daniel Wagner @ 2018-04-04 7:16 UTC
To: linux-kernel
Cc: linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, stable, Sebastian Andrzej Siewior
From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:
   refill_stock()
     get_cpu_var()
     drain_stock()
       res_counter_uncharge()
         res_counter_uncharge_until()
           spin_lock() <== boom
But commit 3e32cb2e0a12b ("mm: memcontrol: lockless page counters") replaced
the calls to res_counter_uncharge() in drain_stock() with the lockless
function page_counter_uncharge(). There is no longer a spin lock on that
path, and thus no more reason to have that local lock.
Cc: <stable@vger.kernel.org>
Reported-by: Haiyang HY1 Tan <tanhy1@lenovo.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
[bigeasy: That upstream commit appeared in v3.19, while the patch in
question went into v3.18.7-rt2, and v3.18 still seems to be maintained.
So I guess that v3.18 would still need the local locks that we are about
to remove here. I am not sure whether any earlier versions have the
patch backported.
The stable tag here is because Haiyang reported (and debugged) a crash
in 4.4-RT with this patch applied (which has get_cpu_light() instead of
the local locks it gained in v4.9-RT).
https://lkml.kernel.org/r/05AA4EC5C6EC1D48BE2CDCFF3AE0B8A637F78A15@CNMAILEX04.lenovo.com
]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
mm/memcontrol.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 493b4986d5dc..56f67a15937b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1925,17 +1925,14 @@ static void drain_local_stock(struct work_struct *dummy)
  */
 static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
-	struct memcg_stock_pcp *stock;
-	int cpu = get_cpu_light();
-
-	stock = &per_cpu(memcg_stock, cpu);
+	struct memcg_stock_pcp *stock = &get_cpu_var(memcg_stock);
 
 	if (stock->cached != memcg) { /* reset if necessary */
 		drain_stock(stock);
 		stock->cached = memcg;
 	}
 	stock->nr_pages += nr_pages;
-	put_cpu_light();
+	put_cpu_var(memcg_stock);
 }
 
 /*
--
2.14.3