From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
References: <20200914204209.256266093@linutronix.de>
 <20200914204441.794954043@linutronix.de>
From: Valentin Schneider
Subject: Re: [patch 08/13] sched: Clenaup PREEMPT_COUNT leftovers
In-reply-to: <20200914204441.794954043@linutronix.de>
Date: Wed, 16 Sep 2020 11:56:23 +0100
Message-ID:
MIME-Version: 1.0
Content-Type: text/plain
To: Thomas Gleixner
Cc: LKML, linux-arch@vger.kernel.org, Linus Torvalds,
 Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra, Juri Lelli,
 Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
 Mel Gorman, Daniel Bristot de Oliveira, Richard Henderson,
 Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org, Jeff Dike,
 Richard Weinberger, Anton Ivanov, linux-um@lists.infradead.org,
 Brian Cain, linux-hexagon@vger.kernel.org, Geert Uytterhoeven,
 linux-m68k@lists.linux-m68k.org, Ingo Molnar, Will Deacon,
 Andrew Morton, linux-mm@kvack.org, Russell King,
 linux-arm-kernel@lists.infradead.org, Chris Zankel, Max Filippov,
 linux-xtensa@linux-xtensa.org, Jani Nikula, Joonas Lahtinen,
 Rodrigo Vivi, David Airlie, Daniel Vetter,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 "Paul E. McKenney", Josh Triplett, Mathieu Desnoyers, Lai Jiangshan,
 Shuah Khan, rcu@vger.kernel.org, linux-kselftest@vger.kernel.org
List-ID:

On 14/09/20 21:42, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
>
> Signed-off-by: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Juri Lelli
> Cc: Vincent Guittot
> Cc: Dietmar Eggemann
> Cc: Steven Rostedt
> Cc: Ben Segall
> Cc: Mel Gorman
> Cc: Daniel Bristot de Oliveira

Small nit below;

Reviewed-by: Valentin Schneider

> ---
>  kernel/sched/core.c |    6 +-----
>  lib/Kconfig.debug   |    1 -
>  2 files changed, 1 insertion(+), 6 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(
>  	 * finish_task_switch() for details.
>  	 *
>  	 * finish_task_switch() will drop rq->lock() and lower preempt_count
> -	 * and the preempt_enable() will end up enabling preemption (on
> -	 * PREEMPT_COUNT kernels).

I suppose this wanted to be s/PREEMPT_COUNT/PREEMPT/ in the first place, which
ought to be still relevant.

> +	 * and the preempt_enable() will end up enabling preemption.
>  	 */
>
>  	rq = finish_task_switch(prev);