linux-rt-users.vger.kernel.org archive mirror
* [PATCH 0/2] fixes for 33-rt4 found while testing on powerpc
@ 2010-03-02 21:51 Paul Gortmaker
  2010-03-02 21:51 ` [PATCH 1/2] powerpc: replace kmap_atomic with kmap in pte_offset_map Paul Gortmaker
  0 siblings, 1 reply; 3+ messages in thread
From: Paul Gortmaker @ 2010-03-02 21:51 UTC (permalink / raw)
  To: linux-rt-users

These two commits were originally done by Kevin to solve problems on a
2.6.31-based RT tree.  They carry forward onto 33-rt4 as-is, and I've
done a quick sanity test on an sbc8641d board (dual-core powerpc SMP).

Paul.



* [PATCH 1/2] powerpc: replace kmap_atomic with kmap in pte_offset_map
  2010-03-02 21:51 [PATCH 0/2] fixes for 33-rt4 found while testing on powerpc Paul Gortmaker
@ 2010-03-02 21:51 ` Paul Gortmaker
  2010-03-02 21:51   ` [PATCH 2/2] rt: reserve TASK_STOPPED state when blocking on a spin lock Paul Gortmaker
  0 siblings, 1 reply; 3+ messages in thread
From: Paul Gortmaker @ 2010-03-02 21:51 UTC (permalink / raw)
  To: linux-rt-users

From: Kevin Hao <kexin.hao@windriver.com>

The pte_offset_map/pte_offset_map_nested macros use kmap_atomic to get
the virtual address of the pte table, but kmap_atomic disables
preemption.  Hence we get a call trace on preempt-rt if we acquire a
spin lock (a sleeping lock there) after invoking
pte_offset_map/pte_offset_map_nested.  To fix it, I've replaced
kmap_atomic with kmap in these macros.
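
For context, a simplified illustration of the pattern that breaks
(illustration only, not part of the patch; the variables are assumed to
come from an ordinary page table walk):

    pte = pte_offset_map(pmd, addr);  /* kmap_atomic(): disables preemption */
    ptl = pte_lockptr(mm, pmd);
    spin_lock(ptl);                   /* sleeping lock on preempt-rt: splat */
    /* ... examine or update *pte ... */
    spin_unlock(ptl);
    pte_unmap(pte);                   /* kunmap_atomic(): re-enables preemption */

With kmap() the mapping no longer disables preemption, so sleeping in
spin_lock() is legal; the trade-off is that kmap() itself may sleep and
is slower than kmap_atomic().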

Signed-off-by: Kevin Hao <kexin.hao@windriver.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 arch/powerpc/include/asm/pgtable-ppc32.h |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc32.h b/arch/powerpc/include/asm/pgtable-ppc32.h
index 55646ad..a838099 100644
--- a/arch/powerpc/include/asm/pgtable-ppc32.h
+++ b/arch/powerpc/include/asm/pgtable-ppc32.h
@@ -307,6 +307,17 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
 #define pte_offset_kernel(dir, addr)	\
 	((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(addr))
+#ifdef CONFIG_PREEMPT_RT
+#define pte_offset_map(dir, addr)		\
+	((pte_t *) kmap(pmd_page(*(dir))) + pte_index(addr))
+#define pte_offset_map_nested(dir, addr)	\
+	((pte_t *) kmap(pmd_page(*(dir))) + pte_index(addr))
+
+#define pte_unmap(pte)	\
+	kunmap((struct page *)_ALIGN_DOWN((unsigned int)pte, PAGE_SIZE))
+#define pte_unmap_nested(pte)	\
+	kunmap((struct page *)_ALIGN_DOWN((unsigned int)pte, PAGE_SIZE))
+#else
 #define pte_offset_map(dir, addr)		\
 	((pte_t *) kmap_atomic(pmd_page(*(dir)), KM_PTE0) + pte_index(addr))
 #define pte_offset_map_nested(dir, addr)	\
@@ -314,6 +325,7 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 
 #define pte_unmap(pte)		kunmap_atomic(pte, KM_PTE0)
 #define pte_unmap_nested(pte)	kunmap_atomic(pte, KM_PTE1)
+#endif
 
 /*
  * Encode and decode a swap entry.
-- 
1.6.5.2



* [PATCH 2/2] rt: reserve TASK_STOPPED state when blocking on a spin lock
  2010-03-02 21:51 ` [PATCH 1/2] powerpc: replace kmap_atomic with kmap in pte_offset_map Paul Gortmaker
@ 2010-03-02 21:51   ` Paul Gortmaker
  0 siblings, 0 replies; 3+ messages in thread
From: Paul Gortmaker @ 2010-03-02 21:51 UTC (permalink / raw)
  To: linux-rt-users

From: Kevin Hao <kexin.hao@windriver.com>

When a process handles a SIGSTOP signal, it sets its state to
TASK_STOPPED, acquires tasklist_lock and notifies the parent of the
status change.  But in the rt kernel the process state changes to
TASK_UNINTERRUPTIBLE if it blocks on the tasklist_lock, which is a
sleeping lock there.  So if we send a SIGCONT signal to the process at
that point, the SIGCONT does nothing because the process is not in the
TASK_STOPPED state, and it remains stopped.  Of course this is not what
we want.  Preserving the TASK_STOPPED state when blocking on a spin
lock fixes this bug.
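
To make the race concrete, a rough sketch of the two sides (simplified,
not the actual signal.c/rtmutex.c code):

    /* stopping task, roughly as in do_signal_stop() */
    __set_current_state(TASK_STOPPED);
    read_lock(&tasklist_lock);         /* sleeping lock on preempt-rt */
    /* while blocked, rt_set_current_blocked_state() switches the task
     * to TASK_UNINTERRUPTIBLE and TASK_STOPPED is lost */
    do_notify_parent_cldstop(current, CLD_STOPPED);
    read_unlock(&tasklist_lock);

    /* sender of SIGCONT, roughly as in prepare_signal() */
    wake_up_state(t, __TASK_STOPPED);  /* state does not match, wakeup is lost */

With the change below, the task keeps TASK_STOPPED while it blocks on
the lock, so the wakeup from SIGCONT is no longer missed.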

Signed-off-by: Kevin Hao <kexin.hao@windriver.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 kernel/rtmutex.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 16bfa1c..23dd443 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -757,8 +757,9 @@ rt_set_current_blocked_state(unsigned long saved_state)
 	 * saved_state. Now we can ignore further wakeups as we will
 	 * return in state running from our "spin" sleep.
 	 */
-	if (saved_state == TASK_INTERRUPTIBLE)
-		block_state = TASK_INTERRUPTIBLE;
+	if (saved_state == TASK_INTERRUPTIBLE ||
+		saved_state == TASK_STOPPED)
+		block_state = saved_state;
 	else
 		block_state = TASK_UNINTERRUPTIBLE;
 
-- 
1.6.5.2


