Message-Id: <20160712165021.413609982@goodmis.org>
Date: Tue, 12 Jul 2016 12:49:56 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker
Subject: [PATCH RT 06/11] arm: lazy preempt: correct resched condition
References: <20160712164950.490572026@goodmis.org>

3.12.61-rt82-rc1 stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

If we get out of preempt_schedule_irq() then we check for NEED_RESCHED and call the former function again if set, because the preemption counter has to be zero at this point. However, the counter for lazy preempt might not be zero, therefore we have to check the counter before looking at the need_resched_lazy flag.
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt
---
 arch/arm/kernel/entry-armv.S | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 8c5e809c1f07..96eb4d26a5c1 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -234,7 +234,11 @@ svc_preempt:
 	bne	1b
 	tst	r0, #_TIF_NEED_RESCHED_LAZY
 	moveq	pc, r8			@ go again
-	b	1b
+	ldr	r0, [tsk, #TI_PREEMPT_LAZY]	@ get preempt lazy count
+	teq	r0, #0			@ if preempt lazy count != 0
+	beq	1b
+	mov	pc, r8			@ go again
+
 #endif

 __und_fault:
--
2.8.1