linux-rt-users.vger.kernel.org archive mirror

* PATCH[2.6.32] scheduler patch
@ 2014-08-12  0:36 Sadasivan Shaiju
  0 siblings, 0 replies; 2+ messages in thread
From: Sadasivan Shaiju @ 2014-08-12  0:36 UTC (permalink / raw)
  To: linux-rt-users; +Cc: shaiju_sada

[-- Attachment #1: Type: text/plain, Size: 1277 bytes --]

Hi,

I work for MontaVista (Cavium Inc.) as a Technical Lead.  I would like to
push some of our kernel patches to the rt community (2.6.32 kernel with the
2.6.33 rt patch) so that they can make their way into the mainline.  These
patches have been reviewed and approved by our system architect, and I
request that you include them in the mainline.  This issue was reported by
our customer Cisco.

Problem Description:
  In some cases a task's state is set incorrectly, resulting in a hung
  task.

Root Cause:
  Trying to claim the BKL while PREEMPT_ACTIVE is set will result in
  __schedule() returning immediately in __mutex_lock_common().  This means
  the task state will not be set to running by the wakeup, and it also
  means that the kernel will just sit there and spin waiting for the
  mutex, which is bad.

  This occurs in __cond_resched(), which calls schedule() with
  PREEMPT_ACTIVE set.  The other places that call schedule() with
  PREEMPT_ACTIVE set have special code that plays with the BKL.  A
  simplified sketch of the failing interaction is given below.

How Solved:
  To fix this, the release and reacquisition of the BKL are moved outside
  of the code that sets the PREEMPT_ACTIVE bit; a hypothetical caller that
  can hit the problem is sketched below.
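
  As an illustration of how this can be hit, consider a hypothetical
  2.6.32-era code path (example_flush() and do_a_chunk_of_work() are
  made-up names for this sketch, not taken from the original report): a
  task that holds the BKL and calls cond_resched() enters __cond_resched()
  with PREEMPT_ACTIVE set, and before this patch the BKL was released and
  reacquired inside __schedule(), i.e. while that bit was still set.

	#include <linux/smp_lock.h>	/* lock_kernel()/unlock_kernel() */
	#include <linux/sched.h>	/* cond_resched() */

	/* assumed helper: returns nonzero while there is still work to do */
	static int do_a_chunk_of_work(void);

	/* hypothetical legacy BKL user; the code that actually hit this
	 * is not named in the report */
	static int example_flush(void)
	{
		lock_kernel();
		while (do_a_chunk_of_work())
			cond_resched();	/* schedule() runs with PREEMPT_ACTIVE set */
		unlock_kernel();
		return 0;
	}

  If the BKL is contended when it is reacquired on the way out of
  schedule(), the task ends up in the __mutex_lock_common() loop sketched
  above with PREEMPT_ACTIVE still set, which is exactly the situation the
  patch avoids by moving the BKL handling out of the PREEMPT_ACTIVE section.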


I request that you include the above patch in the mainline.  If you have
any questions, please contact me at sshaiju@mvista.com
(shaiju_sada@yahoo.com).


Regards,
Shaiju.

[-- Attachment #2: 5913-Fix-BKL-problems-leading-to-bad-task-state.patch --]
[-- Type: application/octet-stream, Size: 2728 bytes --]

From 76ac3b7555ad581109f7ade28fb0ffdde27e5b42 Mon Sep 17 00:00:00 2001
From: Corey Minyard <cminyard@mvista.com>
Date: Fri, 29 Jun 2012 16:03:24 -0500
Subject: [PATCH] Fix BKL problems leading to bad task state

Source: MontaVista Software, LLC
MR: 51480
Type: Defect Fix
Disposition: Local
ChangeID: 11777c35bcd2b3ba09137c1ecd0ee5fe724d2b71
Description:

Trying to claim the BKL while PREEMPT_ACTIVE is set will result in
__schedule returning immediately in __mutex_lock_common().  This means
the task state won't be set to running by the wakeup, and it also means
that the kernel will just sit there and spin waiting for the mutex, which
is bad.

This occurs in __cond_resched, which calls schedule() with PREEMPT_ACTIVE
set.  The other places that call schedule() with PREEMPT_ACTIVE set have
special code that plays with the BKL.

To fix this, move releasing and reclaiming the BKL to outside setting
the PREEMPT_ACTIVE bit.

Note that since there is no BKL any more, there is no need to send this upstream.

Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Randy Vinson <rvinson@mvista.com>
Signed-off-by: Sadasivan Shaiju <sshaiju@mvista.com>
---
 kernel/sched.c |   29 +++++++++++++++++++++--------
 1 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 4e2d0de..7c773e5 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5876,21 +5876,15 @@ pick_next_task(struct rq *rq)
 /*
  * schedule() is the main scheduler function.
  */
-asmlinkage void __sched __schedule(void)
+static void __sched __schedule_nobkl(int cpu, struct rq *rq)
 {
 	struct task_struct *prev, *next;
 	unsigned long *switch_count;
-	struct rq *rq;
-	int cpu;
 
-	cpu = smp_processor_id();
-	rq = cpu_rq(cpu);
 	rcu_sched_qs(cpu);
 	prev = rq->curr;
 	switch_count = &prev->nivcsw;
 
-	release_kernel_lock(prev);
-
 	schedule_debug(prev);
 
 	preempt_disable();
@@ -5942,7 +5936,17 @@ asmlinkage void __sched __schedule(void)
 	}
 
 	post_schedule(rq);
+}
 
+asmlinkage void __sched __schedule(void)
+{
+	struct rq *rq;
+	int cpu;
+
+	cpu = smp_processor_id();
+	rq = cpu_rq(cpu);
+	release_kernel_lock(rq->curr);
+	__schedule_nobkl(cpu, rq);
 	reacquire_kernel_lock(current);
 }
 
@@ -7260,10 +7264,19 @@ static inline int should_resched(void)
 
 static void __cond_resched(void)
 {
+	struct rq *rq;
+	int cpu;
+
 	do {
+		local_irq_disable();
+		cpu = smp_processor_id();
+		rq = cpu_rq(cpu);
+		release_kernel_lock(rq->curr);
 		add_preempt_count(PREEMPT_ACTIVE);
-		schedule();
+		__schedule_nobkl(cpu, rq);
 		sub_preempt_count(PREEMPT_ACTIVE);
+		reacquire_kernel_lock(current);
+		local_irq_enable();
 
 		/*
 		 * Check again in case we missed a preemption opportunity
-- 
1.7.0.1


