* [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
@ 2005-06-25 14:53 Marcelo Tosatti
2005-06-25 22:24 ` Dan Malek
0 siblings, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-25 14:53 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
Hi,
The following patch adds code to skip flushing of TLBs in the pinned TLB region
(assuming that it is contiguous), thus preserving the pinned region.
It also introduces CONFIG_DEBUG_PIN_TLBIE to catch overlapping invalidates
(as suggested by Dan).
It could be smarter and aware of non-contiguous regions (instead of a simple
<start,end> tuple), but I'm not sure that's worth it at the moment.
Dan: I don't think ioremap() is an issue because it never works inside the
kernel's static virtual address space (which is the only one we're interested
in having pinned at the moment).
Comments on improvements are very welcome.
tree d682449fa55448a446081d8e9fc0fed8f92bf812
parent f17c5c6e4e1d1b7e8b01f323dfd2bd2197a0743f
author Marcelo <marcelo@xeon.cnet> 1119729961 -0300
committer Marcelo Tosatti <marcelo.tosatti@cyclades.com> 1119729961 -0300
Introduce pin_area_start and pin_area_end to hold info about pinned area.
Use that information in map_page() to skip invalidation of TLB in case
of overlapping address (to preserve the large pinned TLB).
Introduce a debugging aid in _tlbie() to catch overlapping invalidations,
governed by CONFIG_DEBUG_PIN_TLBIE.
diff --git a/arch/ppc/Kconfig b/arch/ppc/Kconfig
--- a/arch/ppc/Kconfig
+++ b/arch/ppc/Kconfig
@@ -1296,6 +1296,11 @@ config BOOT_LOAD
config PIN_TLB
bool "Pinned Kernel TLBs (860 ONLY)"
depends on ADVANCED_OPTIONS && 8xx
+
+config DEBUG_PIN_TLBIE
+ bool "Check for overlapping TLB invalidates inside the pinned area"
+ depends on ADVANCED_OPTIONS && 8xx && PIN_TLB
+
endmenu
source "drivers/Kconfig"
diff --git a/arch/ppc/kernel/misc.S b/arch/ppc/kernel/misc.S
--- a/arch/ppc/kernel/misc.S
+++ b/arch/ppc/kernel/misc.S
@@ -565,6 +565,19 @@ _GLOBAL(_tlbie)
SYNC_601
isync
#else /* CONFIG_SMP */
+#ifdef CONFIG_DEBUG_PIN_TLBIE
+/* check if the address being invalidated overlaps with the pinned region */
+ lis r4,(pin_area_start)@ha
+ lwz r5,(pin_area_start)@l(4)
+ cmplw r3, r5
+ blt 11f
+ lis r4,(pin_area_end)@ha
+ lwz r5,(pin_area_end)@l(4)
+ cmplw r3, r5
+ bge 11f
+ trap
+#endif
+11:
tlbie r3
sync
#endif /* CONFIG_SMP */
diff --git a/arch/ppc/mm/init.c b/arch/ppc/mm/init.c
--- a/arch/ppc/mm/init.c
+++ b/arch/ppc/mm/init.c
@@ -112,6 +112,12 @@ unsigned long __max_memory;
/* max amount of low RAM to map in */
unsigned long __max_low_memory = MAX_LOW_MEM;
+/* should be a per-platform definition */
+#ifdef CONFIG_PIN_TLB
+unsigned long pin_area_start = KERNELBASE;
+unsigned long pin_area_end = KERNELBASE + 0x00800000;
+#endif
+
void show_mem(void)
{
int i,free = 0,total = 0,reserved = 0;
diff --git a/arch/ppc/mm/pgtable.c b/arch/ppc/mm/pgtable.c
--- a/arch/ppc/mm/pgtable.c
+++ b/arch/ppc/mm/pgtable.c
@@ -274,6 +274,11 @@ void ioport_unmap(void __iomem *addr)
EXPORT_SYMBOL(ioport_map);
EXPORT_SYMBOL(ioport_unmap);
+#ifdef CONFIG_PIN_TLB
+extern unsigned long pin_area_start;
+extern unsigned long pin_area_end;
+#endif
+
int
map_page(unsigned long va, phys_addr_t pa, int flags)
{
@@ -290,7 +295,10 @@ map_page(unsigned long va, phys_addr_t p
err = 0;
set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, __pgprot(flags)));
if (mem_init_done)
- flush_HPTE(0, va, pmd_val(*pd));
+#ifdef CONFIG_PIN_TLB
+ if (va < pin_area_start || va >= pin_area_end)
+#endif
+ flush_HPTE(0, va, pmd_val(*pd));
}
spin_unlock(&init_mm.page_table_lock);
return err;
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-25 14:53 [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Marcelo Tosatti
@ 2005-06-25 22:24 ` Dan Malek
2005-06-26 14:30 ` Marcelo Tosatti
2005-06-27 14:28 ` [PATCH] 8xx: tlbie debugging aid (try #2) Marcelo Tosatti
0 siblings, 2 replies; 30+ messages in thread
From: Dan Malek @ 2005-06-25 22:24 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
On Jun 25, 2005, at 10:53 AM, Marcelo Tosatti wrote:
> Dan: I dont think ioremap() is an issue because it never works inside
> the
> kernel's static virtual address space (which is the only one we're
> interested
> in having pinned at the moment).
Take a close look at the initialization code. I believe it also
pins the IMMR space, which is subject to ioremap().
> source "drivers/Kconfig"
> diff --git a/arch/ppc/kernel/misc.S b/arch/ppc/kernel/misc.S
> --- a/arch/ppc/kernel/misc.S
> +++ b/arch/ppc/kernel/misc.S
> @@ -565,6 +565,19 @@ _GLOBAL(_tlbie)
> SYNC_601
> isync
> #else /* CONFIG_SMP */
> +#ifdef CONFIG_DEBUG_PIN_TLBIE
> +/* check if the address being invalidated overlaps with the pinned
> region */
> + lis r4,(pin_area_start)@ha
> + lwz r5,(pin_area_start)@l(4)
> + cmplw r3, r5
> + blt 11f
> + lis r4,(pin_area_end)@ha
> + lwz r5,(pin_area_end)@l(4)
> + cmplw r3, r5
> + bge 11f
> + trap
> +#endif
> +11:
> tlbie r3
> sync
We don't need this kind of assembly code on the 8xx. Just define
_tlbie as a macro (which has always been done) and write this debug
stuff as C code.
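For instance, something like this (a rough sketch only; __tlbie here stands for
the renamed assembly routine and the pin_area_* variables are the ones from the
patch above):

extern void __tlbie(unsigned long address);	/* the old asm _tlbie */
extern unsigned long pin_area_start, pin_area_end;

void _tlbie(unsigned long address)
{
#ifdef CONFIG_DEBUG_PIN_TLBIE
	/* complain if the invalidate falls inside the pinned range */
	WARN_ON(address >= pin_area_start && address < pin_area_end);
#endif
	__tlbie(address);
}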
> +#ifdef CONFIG_PIN_TLB
> +unsigned long pin_area_start = KERNELBASE;
> +unsigned long pin_area_end = KERNELBASE + 0x00800000;
> +#endif
This only covers the kernel instruction space. We pin 24M bytes
of data plus 8M bytes of IMMR.
> +#ifdef CONFIG_PIN_TLB
> + if (va < pin_area_start || va >= pin_area_end)
> +#endif
> + flush_HPTE(0, va, pmd_val(*pd));
We really want to see this generate an error. We shouldn't be
calling this on any of the pinned spaces. In the case of initially
mapping the kernel space, we should set up the page tables but
not call down this far, so we never get here.
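I.e. (a sketch of the intent only, not actual code) the map_page() hunk above
would rather warn than silently skip:

	if (mem_init_done) {
#ifdef CONFIG_PIN_TLB
		/* nothing should be remapping a pinned range this late */
		WARN_ON(va >= pin_area_start && va < pin_area_end);
#endif
		flush_HPTE(0, va, pmd_val(*pd));
	}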
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-25 22:24 ` Dan Malek
@ 2005-06-26 14:30 ` Marcelo Tosatti
2005-06-27 13:39 ` Marcelo Tosatti
2005-06-27 14:28 ` [PATCH] 8xx: tlbie debugging aid (try #2) Marcelo Tosatti
1 sibling, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-26 14:30 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
Hi Dan,
On Sat, Jun 25, 2005 at 06:24:47PM -0400, Dan Malek wrote:
>
> On Jun 25, 2005, at 10:53 AM, Marcelo Tosatti wrote:
>
> >Dan: I dont think ioremap() is an issue because it never works inside
> >the
> >kernel's static virtual address space (which is the only one we're
> >interested
> >in having pinned at the moment).
>
> Take a close look at the initialization code. I believe it also
> pins the IMMR space, which is subject to ioremap().
OK. Now that makes me think that the IMMR pinned entry is also always
thrashed by the tlbie at map_page() :(
The IMMR space is a 16kB window (correct?), so I wonder if it might
be better to use the occupied pinned slot for another, more heavily accessed
region (an 8MB one preferably!).
> > source "drivers/Kconfig"
> >diff --git a/arch/ppc/kernel/misc.S b/arch/ppc/kernel/misc.S
> >--- a/arch/ppc/kernel/misc.S
> >+++ b/arch/ppc/kernel/misc.S
> >@@ -565,6 +565,19 @@ _GLOBAL(_tlbie)
> > SYNC_601
> > isync
> > #else /* CONFIG_SMP */
> >+#ifdef CONFIG_DEBUG_PIN_TLBIE
> >+/* check if the address being invalidated overlaps with the pinned
> >region */
> >+ lis r4,(pin_area_start)@ha
> >+ lwz r5,(pin_area_start)@l(4)
> >+ cmplw r3, r5
> >+ blt 11f
> >+ lis r4,(pin_area_end)@ha
> >+ lwz r5,(pin_area_end)@l(4)
> >+ cmplw r3, r5
> >+ bge 11f
> >+ trap
> >+#endif
> >+11:
> > tlbie r3
> > sync
>
> We don't need this kind of assembly code on the 8xx. Just define
> _tlbie as a macro (which has always been done) and write this debug
> stuff as C code.
OK, makes sense.
> >+#ifdef CONFIG_PIN_TLB
> >+unsigned long pin_area_start = KERNELBASE;
> >+unsigned long pin_area_end = KERNELBASE + 0x00800000;
> >+#endif
>
> This only covers the kernel instruction space. We pin 24M bytes
> of data plus 8M bytes of IMMR.
Ok, I'll represent the pinned regions by a node structure ordered on a
linked list and use that for both map_page() and the tlbie debugging aid.
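Something along these lines, as a first sketch of the data structure (names
illustrative):

#include <linux/list.h>

/* one node per pinned virtual range, kept on a global list */
struct pinned_range {
	unsigned long start, end;		/* [start, end) */
	struct list_head pin_list;
};

static LIST_HEAD(pin_range_root);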
> >+#ifdef CONFIG_PIN_TLB
> >+ if (va < pin_area_start || va >= pin_area_end)
> >+#endif
> >+ flush_HPTE(0, va, pmd_val(*pd));
>
> We really want to see this generate an error. We shouldn't be
> calling this on any of the pinned spaces. In the case of initially
> mapping the kernel space, we should set up the page tables but
> not call this far down that we get here.
But the page tables are set up at this level:
int
map_page(unsigned long va, phys_addr_t pa, int flags)
{
pmd_t *pd;
pte_t *pg;
int err = -ENOMEM;
spin_lock(&init_mm.page_table_lock);
/* Use upper 10 bits of VA to index the first level map */
pd = pmd_offset(pgd_offset_k(va), va);
/* Use middle 10 bits of VA to index the second-level map */
pg = pte_alloc_kernel(&init_mm, pd, va);
if (pg != 0) {
err = 0;
set_pte(pg, pfn_pte(pa >> PAGE_SHIFT, __pgprot(flags)));
if (mem_init_done)
#ifdef CONFIG_PIN_TLB
if (va < pin_area_start || va > pin_area_end)
#endif
flush_HPTE(0, va, pmd_val(*pd));
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-26 14:30 ` Marcelo Tosatti
@ 2005-06-27 13:39 ` Marcelo Tosatti
2005-06-27 20:46 ` Dan Malek
0 siblings, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-27 13:39 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
On Sun, Jun 26, 2005 at 11:30:04AM -0300, Marcelo Tosatti wrote:
>
> Hi Dan,
>
> On Sat, Jun 25, 2005 at 06:24:47PM -0400, Dan Malek wrote:
> >
> > On Jun 25, 2005, at 10:53 AM, Marcelo Tosatti wrote:
> >
> > >Dan: I dont think ioremap() is an issue because it never works inside
> > >the
> > >kernel's static virtual address space (which is the only one we're
> > >interested
> > >in having pinned at the moment).
> >
> > Take a close look at the initialization code. I believe it also
> > pins the IMMR space, which is subject to ioremap().
>
> OK. Now that makes me think that the IMMR pinned entry is also always
> thrashed by the tlbie at map_page() :(
Bullshit, map_page()'s call to flush_HPTE() is conditioned by "mem_init_done",
and mapin_ram() is called before mem_init_done is set.
> The IMMR space is a 16kB window (correct?), so I wonder if it might
> be better to the use occupied pinned slot for another more accessed
> region (an 8MB one preferably!).
That's still some food for thought...
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-27 13:39 ` Marcelo Tosatti
@ 2005-06-27 20:46 ` Dan Malek
2005-06-28 6:30 ` Benjamin Herrenschmidt
0 siblings, 1 reply; 30+ messages in thread
From: Dan Malek @ 2005-06-27 20:46 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
On Jun 27, 2005, at 9:39 AM, Marcelo Tosatti wrote:
> Bullshit, map_page()'s call for flush_HPTE() is conditioned by
> "mem_init_done",
> and mapin_ram() is called before mem_init_done is set.
Excuse me?
>> The IMMR space is a 16kB window (correct?), so I wonder if it might
>> be better to the use occupied pinned slot for another more accessed
>> region (an 8MB one preferably!).
>
> Thats still some food for thought...
Actually, most 8xx systems map the IMMR, plus other external
devices (including flash memory), into a packed space at the top
of the address range. The 8M mapping catches the IMMR space,
but also many of these other devices subject to ioremap(). In
some systems, they trade off mapping the IMMR, or data pages,
to get an additional 8M of peripheral IO space.
The data space pinning is the big trade off right now. You can
pin 4 entries, or 32M of space. If you only have 16M of real
memory, the other two 8M entries can be used to cover IO space.
This is one (of a few) advantages of using dynamic large
pages instead of pinned entries. You could map more 8M
spaces for IO. The problem is all of these have been custom
initialization and mapping code.
You see, this just keeps growing in features and complexity :-)
It would be nice for ioremap() to consider multiple, dynamic 8M
pages on 8xx like it does BATs on traditional PPC. It will do
this .... someday soon.
Perhaps I should go back and push some of this dynamic 8M
page stuff. It would eliminate the tlbie() problems, give us
more flexibility. Damn, not enough hours in a day ....
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-27 20:46 ` Dan Malek
@ 2005-06-28 6:30 ` Benjamin Herrenschmidt
2005-06-28 13:42 ` [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue Guillaume Autran
2005-06-28 13:53 ` [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Dan Malek
0 siblings, 2 replies; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-28 6:30 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
>
> You see, this just keeps growing in features and complexity :-)
> It would be nice for ioremap() to consider multiple, dynamic 8M
> pages on 8xx like it does BATs on traditional PPC. It will do
> this .... someday soon.
You should consider 8Mb pages the way we do BATs, yes; that is, have
something like an array of those or use 2 PMD entries to represent them,
and then have ioremap treat them like it does with BATs. Such 8Mb pages
could then be set up using io_block_mapping().
Note that I'll soon send the patch I told you about that makes the
virtual address picked by io_block_mapping() dynamic, so we no longer
have to do crappy assumptions all over the place :)
> Perhaps I should go back and push some of this dynamic 8M
> page stuff. It would eliminate the tlbie() problems, give us
> more flexibility. Damn, not enough hours in a day ....
Looks like a good idea though :)
Ben.
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-28 6:30 ` Benjamin Herrenschmidt
@ 2005-06-28 13:42 ` Guillaume Autran
2005-06-29 4:15 ` Benjamin Herrenschmidt
2005-06-28 13:53 ` [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Dan Malek
1 sibling, 1 reply; 30+ messages in thread
From: Guillaume Autran @ 2005-06-28 13:42 UTC (permalink / raw)
To: linux-ppc-embedded
[-- Attachment #1: Type: text/plain, Size: 886 bytes --]
Hi,
I happened to notice a race condition in the mmu_context code for the 8xx
with very few contexts (16 MMU contexts) and kernel preemption enabled. It
is hard to reproduce as it shows up only when many processes are
created/destroyed and the system is doing a lot of IRQ processing.
In short, one process is trying to steal a context that is in the
process of being freed (mm->context == NO_CONTEXT) but not completely
freed (nr_free_contexts == 0).
The steal_context() function does not do anything and the process stays
in the loop forever.
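For reference, the loop in question is the FEW_CONTEXTS path at the top of
get_mmu_context() (quoted here as a sketch, not a patch):

#ifdef FEW_CONTEXTS
	/* spins forever once nr_free_contexts is stuck at 0 and
	 * steal_context() picks a victim whose mm->context is already
	 * NO_CONTEXT, since destroy_context() then never increments
	 * nr_free_contexts */
	while (atomic_dec_if_positive(&nr_free_contexts) < 0)
		steal_context();
#endif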
Anyway, I got a patch that fixes this part. Does not seem to affect
scheduling latency at all.
Comments are appreciated.
Guillaume.
--
=======================================
Guillaume Autran
Senior Software Engineer
MRV Communications, Inc.
Tel: (978) 952-4932 office
E-mail: gautran@mrv.com
=======================================
[-- Attachment #2: mmu_context.patch --]
[-- Type: text/plain, Size: 3222 bytes --]
diff -Nru --exclude=CVS linux-2.6.old/arch/ppc/mm/mmu_context.c linux-2.6/arch/ppc/mm/mmu_context.c
--- linux-2.6.old/arch/ppc/mm/mmu_context.c 2004-12-13 16:11:56.000000000 -0500
+++ linux-2.6/arch/ppc/mm/mmu_context.c 2005-06-28 09:08:13.000000000 -0400
@@ -31,9 +31,9 @@
#include <asm/tlbflush.h>
mm_context_t next_mmu_context;
+spinlock_t next_mmu_ctx_lock = SPIN_LOCK_UNLOCKED;
unsigned long context_map[LAST_CONTEXT / BITS_PER_LONG + 1];
#ifdef FEW_CONTEXTS
-atomic_t nr_free_contexts;
struct mm_struct *context_mm[LAST_CONTEXT+1];
void steal_context(void);
#endif /* FEW_CONTEXTS */
@@ -52,9 +52,6 @@
*/
context_map[0] = (1 << FIRST_CONTEXT) - 1;
next_mmu_context = FIRST_CONTEXT;
-#ifdef FEW_CONTEXTS
- atomic_set(&nr_free_contexts, LAST_CONTEXT - FIRST_CONTEXT + 1);
-#endif /* FEW_CONTEXTS */
}
#ifdef FEW_CONTEXTS
@@ -74,12 +71,21 @@
steal_context(void)
{
struct mm_struct *mm;
+ mm_context_t ctx = 0;
/* free up context `next_mmu_context' */
+ spin_lock(&next_mmu_ctx_lock);
+
/* if we shouldn't free context 0, don't... */
if (next_mmu_context < FIRST_CONTEXT)
next_mmu_context = FIRST_CONTEXT;
- mm = context_mm[next_mmu_context];
+
+ ctx = next_mmu_context;
+ next_mmu_context = (ctx + 1) & LAST_CONTEXT;
+
+ spin_unlock(&next_mmu_ctx_lock);
+
+ mm = context_mm[ctx];
flush_tlb_mm(mm);
destroy_context(mm);
}
diff -Nru --exclude=CVS linux-2.6.old/include/asm-ppc/mmu_context.h linux-2.6/include/asm-ppc/mmu_context.h
--- linux-2.6.old/include/asm-ppc/mmu_context.h 2004-12-13 16:11:21.000000000 -0500
+++ linux-2.6/include/asm-ppc/mmu_context.h 2005-06-28 09:08:13.000000000 -0400
@@ -100,6 +100,7 @@
* number to be free, but it usually will be.
*/
extern mm_context_t next_mmu_context;
+extern spinlock_t next_mmu_ctx_lock;
/*
* If we don't have sufficient contexts to give one to every task
@@ -108,7 +109,6 @@
*/
#if LAST_CONTEXT < 30000
#define FEW_CONTEXTS 1
-extern atomic_t nr_free_contexts;
extern struct mm_struct *context_mm[LAST_CONTEXT+1];
extern void steal_context(void);
#endif
@@ -119,24 +119,36 @@
static inline void get_mmu_context(struct mm_struct *mm)
{
mm_context_t ctx;
+ int flag;
if (mm->context != NO_CONTEXT)
return;
-#ifdef FEW_CONTEXTS
- while (atomic_dec_if_positive(&nr_free_contexts) < 0)
- steal_context();
-#endif
+
ctx = next_mmu_context;
+ flag = 0;
+
while (test_and_set_bit(ctx, context_map)) {
ctx = find_next_zero_bit(context_map, LAST_CONTEXT+1, ctx);
- if (ctx > LAST_CONTEXT)
+ if (ctx > LAST_CONTEXT) {
ctx = 0;
+#ifdef FEW_CONTEXTS
+ if( flag == 0 ) {
+ flag = 1;
+ } else {
+ ctx = next_mmu_context;
+ steal_context();
+ }
+#endif
+ }
}
+ spin_lock(&next_mmu_ctx_lock);
next_mmu_context = (ctx + 1) & LAST_CONTEXT;
mm->context = ctx;
#ifdef FEW_CONTEXTS
context_mm[ctx] = mm;
#endif
+ spin_unlock(&next_mmu_ctx_lock);
+
}
/*
@@ -150,11 +162,9 @@
static inline void destroy_context(struct mm_struct *mm)
{
if (mm->context != NO_CONTEXT) {
- clear_bit(mm->context, context_map);
+ mm_context_t ctx = mm->context;
mm->context = NO_CONTEXT;
-#ifdef FEW_CONTEXTS
- atomic_inc(&nr_free_contexts);
-#endif
+ clear_bit(ctx, context_map);
}
}
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-28 13:42 ` [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue Guillaume Autran
@ 2005-06-29 4:15 ` Benjamin Herrenschmidt
2005-06-29 15:32 ` Guillaume Autran
0 siblings, 1 reply; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-29 4:15 UTC (permalink / raw)
To: Guillaume Autran; +Cc: linux-ppc-embedded
On Tue, 2005-06-28 at 09:42 -0400, Guillaume Autran wrote:
> Hi,
>
> I happen to notice a race condition in the mmu_context code for the 8xx
> with very few context (16 MMU contexts) and kernel preemption enable. It
> is hard to reproduce has it shows only when many processes are
> created/destroy and the system is doing a lot of IRQ processing.
>
> In short, one process is trying to steal a context that is in the
> process of being freed (mm->context == NO_CONTEXT) but not completely
> freed (nr_free_contexts == 0).
> The steal_context() function does not do anything and the process stays
> in the loop forever.
>
> Anyway, I got a patch that fixes this part. Does not seem to affect
> scheduling latency at all.
>
> Comments are appreciated.
Your patch seems to do a hell of a lot more than fixing this race ... What
about just calling preempt_disable() in destroy_context() instead ?
Ben.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 4:15 ` Benjamin Herrenschmidt
@ 2005-06-29 15:32 ` Guillaume Autran
2005-06-29 15:54 ` Marcelo Tosatti
2005-06-29 23:24 ` Benjamin Herrenschmidt
0 siblings, 2 replies; 30+ messages in thread
From: Guillaume Autran @ 2005-06-29 15:32 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linux-ppc-embedded
[-- Attachment #1: Type: text/plain, Size: 1646 bytes --]
Benjamin Herrenschmidt wrote:
>On Tue, 2005-06-28 at 09:42 -0400, Guillaume Autran wrote:
>
>
>>Hi,
>>
>>I happen to notice a race condition in the mmu_context code for the 8xx
>>with very few context (16 MMU contexts) and kernel preemption enable. It
>>is hard to reproduce has it shows only when many processes are
>>created/destroy and the system is doing a lot of IRQ processing.
>>
>>In short, one process is trying to steal a context that is in the
>>process of being freed (mm->context == NO_CONTEXT) but not completely
>>freed (nr_free_contexts == 0).
>>The steal_context() function does not do anything and the process stays
>>in the loop forever.
>>
>>Anyway, I got a patch that fixes this part. Does not seem to affect
>>scheduling latency at all.
>>
>>Comments are appreciated.
>>
>>
>
>Your patch seems to do a hell lot more than fixing this race ... What
>about just calling preempt_disable() in destroy_context() instead ?
>
>
I'm still a bit confused with "kernel preemption". One thing for sure is
that disabling kernel preemption does indeed fix my problem.
So, my question is: if a task in the middle of being scheduled gets
preempted by an IRQ handler, where will this task restart execution?
Back at the beginning of schedule() or where it left off?
The idea behind my patch was to get rid of that nr_free_contexts counter
that is (I think) redundant with the context_map.
Regards,
Guillaume.
--
=======================================
Guillaume Autran
Senior Software Engineer
MRV Communications, Inc.
Tel: (978) 952-4932 office
E-mail: gautran@mrv.com
=======================================
[-- Attachment #2: Type: text/html, Size: 2182 bytes --]
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 15:32 ` Guillaume Autran
@ 2005-06-29 15:54 ` Marcelo Tosatti
2005-06-29 21:25 ` Guillaume Autran
2005-06-29 23:26 ` Benjamin Herrenschmidt
2005-06-29 23:24 ` Benjamin Herrenschmidt
1 sibling, 2 replies; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-29 15:54 UTC (permalink / raw)
To: Guillaume Autran, '; +Cc: linux-ppc-embedded
Hi Guillaume,
On Wed, Jun 29, 2005 at 11:32:19AM -0400, Guillaume Autran wrote:
>
> Benjamin Herrenschmidt wrote:
>
> >On Tue, 2005-06-28 at 09:42 -0400, Guillaume Autran wrote:
> >
> >
> >>Hi,
> >>
> >>I happen to notice a race condition in the mmu_context code for the 8xx
> >>with very few context (16 MMU contexts) and kernel preemption enable. It
> >>is hard to reproduce has it shows only when many processes are
> >>created/destroy and the system is doing a lot of IRQ processing.
> >>
> >>In short, one process is trying to steal a context that is in the
> >>process of being freed (mm->context == NO_CONTEXT) but not completely
> >>freed (nr_free_contexts == 0).
> >>The steal_context() function does not do anything and the process stays
> >>in the loop forever.
> >>
> >>Anyway, I got a patch that fixes this part. Does not seem to affect
> >>scheduling latency at all.
> >>
> >>Comments are appreciated.
> >>
> >>
> >
> >Your patch seems to do a hell lot more than fixing this race ... What
> >about just calling preempt_disable() in destroy_context() instead ?
> >
> >
> I'm still a bit confused with "kernel preemption". One thing for sure is
> that disabling kernel preemption does indeed fix my problem.
> So, my question is, what if a task in the middle of being schedule gets
> preempted by an IRQ handler, where will this task restart execution ?
> Back at the beginning of schedule or where it left of ?
Execution is resumed exactly where it has been interrupted.
> The idea behind my patch was to get rid of that nr_free_contexts counter
> that is (I thing) redundant with the context_map.
Apparently it's there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
I suppose that what happens is that get_mmu_context() gets preempted after stealing
a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
next entry
next_mmu_context = (ctx + 1) & LAST_CONTEXT;
So if the now running higher prio task calls switch_mm() (which is likely to happen)
it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
sees "mm->context == CONTEXT".
I think that you should try a "preempt_disable()/preempt_enable()" pair at entry and
exit of get_mmu_context() - I suppose around destroy_context() alone is not enough (you
can try that also).
spin_lock() ends up calling preempt_disable().
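Roughly like this (an untested sketch wrapping the existing body of
get_mmu_context(), nothing more):

static inline void get_mmu_context(struct mm_struct *mm)
{
	mm_context_t ctx;

	if (mm->context != NO_CONTEXT)
		return;

	preempt_disable();
#ifdef FEW_CONTEXTS
	/* can no longer be preempted between the steal and the
	 * next_mmu_context update below */
	while (atomic_dec_if_positive(&nr_free_contexts) < 0)
		steal_context();
#endif
	ctx = next_mmu_context;
	while (test_and_set_bit(ctx, context_map)) {
		ctx = find_next_zero_bit(context_map, LAST_CONTEXT+1, ctx);
		if (ctx > LAST_CONTEXT)
			ctx = 0;
	}
	next_mmu_context = (ctx + 1) & LAST_CONTEXT;
	mm->context = ctx;
#ifdef FEW_CONTEXTS
	context_mm[ctx] = mm;
#endif
	preempt_enable();
}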
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 15:54 ` Marcelo Tosatti
@ 2005-06-29 21:25 ` Guillaume Autran
2005-06-29 17:00 ` Marcelo Tosatti
2005-06-29 23:26 ` Benjamin Herrenschmidt
1 sibling, 1 reply; 30+ messages in thread
From: Guillaume Autran @ 2005-06-29 21:25 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: ', linux-ppc-embedded
[-- Attachment #1: Type: text/plain, Size: 3282 bytes --]
Hi Marcelo,
Marcelo Tosatti wrote:
>Hi Guillaume,
>
>On Wed, Jun 29, 2005 at 11:32:19AM -0400, Guillaume Autran wrote:
>
>
>>Benjamin Herrenschmidt wrote:
>>
>>
>>
>>>On Tue, 2005-06-28 at 09:42 -0400, Guillaume Autran wrote:
>>>
>>>
>>>
>>>
>>>>Hi,
>>>>
>>>>I happen to notice a race condition in the mmu_context code for the 8xx
>>>>with very few context (16 MMU contexts) and kernel preemption enable. It
>>>>is hard to reproduce has it shows only when many processes are
>>>>created/destroy and the system is doing a lot of IRQ processing.
>>>>
>>>>In short, one process is trying to steal a context that is in the
>>>>process of being freed (mm->context == NO_CONTEXT) but not completely
>>>>freed (nr_free_contexts == 0).
>>>>The steal_context() function does not do anything and the process stays
>>>>in the loop forever.
>>>>
>>>>Anyway, I got a patch that fixes this part. Does not seem to affect
>>>>scheduling latency at all.
>>>>
>>>>Comments are appreciated.
>>>>
>>>>
>>>>
>>>>
>>>Your patch seems to do a hell lot more than fixing this race ... What
>>>about just calling preempt_disable() in destroy_context() instead ?
>>>
>>>
>>>
>>>
>>I'm still a bit confused with "kernel preemption". One thing for sure is
>>that disabling kernel preemption does indeed fix my problem.
>>So, my question is, what if a task in the middle of being schedule gets
>>preempted by an IRQ handler, where will this task restart execution ?
>>Back at the beginning of schedule or where it left of ?
>>
>>
>
>Execution is resumed exactly where it has been interrupted.
>
In that case, what happens when a higher priority task steals the context
of the lower priority task after get_mmu_context() but before
set_context()?
Then when the lower priority task resumes, its context may no longer be
valid...
Do I get this right?
>>The idea behind my patch was to get rid of that nr_free_contexts counter
>>that is (I thing) redundant with the context_map.
>>
>>
>
>Apparently its there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
>
>I suppose that what happens is that get_mmu_context() gets preempted after stealing
>a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
>next entry
>
>next_mmu_context = (ctx + 1) & LAST_CONTEXT;
>
>So if the now running higher prio tasks calls switch_mm() (which is likely to happen)
>it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
>sees "mm->context == CONTEXT".
>
>I think that you should try "preempt_disable()/preempt_enable" pair at entry and
>exit of get_mmu_context() - I suppose around destroy_context() is not enough (you
>can try that also).
>
>spinlock ends up calling preempt_disable().
>
>
>
I'm going to do it like this instead of my previous attempt:
/* Setup new userspace context */
preempt_disable();
get_mmu_context(next);
set_context(next->context, next->pgd);
preempt_enable();
To make sure we don't lose our context in between.
Thanks.
Guillaume.
--
=======================================
Guillaume Autran
Senior Software Engineer
MRV Communications, Inc.
Tel: (978) 952-4932 office
E-mail: gautran@mrv.com
=======================================
[-- Attachment #2: Type: text/html, Size: 4293 bytes --]
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 21:25 ` Guillaume Autran
@ 2005-06-29 17:00 ` Marcelo Tosatti
0 siblings, 0 replies; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-29 17:00 UTC (permalink / raw)
To: Guillaume Autran; +Cc: linux-ppc-embedded
Hi!
On Wed, Jun 29, 2005 at 05:25:33PM -0400, Guillaume Autran wrote:
> In that case, what happen when a higher priority task steal the context
> of the lower priority task after get_mmu_context() but before
> set_mmu_context() ?
> Then when the lower priority task resumes, its context may no longer be
> valid...
> Do I get this right ?
Yep... but it's OK and expected for the "lower prio task" in question to have
its context invalidated: In this case it will call get_mmu_context() again
and reserve the next one available before executing.
> I'm going to do like this instead of my previous attempt:
>
> /* Setup new userspace context */
> preempt_disable();
> get_mmu_context(next);
> set_context(next->context, next->pgd);
> preempt_enable();
>
> To make sure we don't loose our context in between.
There should be no need - the window for the race is inside
get_mmu_context().
i.e. it should be safe to preempt after setting "next_mmu_context".
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 15:54 ` Marcelo Tosatti
2005-06-29 21:25 ` Guillaume Autran
@ 2005-06-29 23:26 ` Benjamin Herrenschmidt
2005-06-29 19:38 ` Marcelo Tosatti
2005-06-30 0:34 ` Eugene Surovegin
1 sibling, 2 replies; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-29 23:26 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: ', linux-ppc-embedded
> Execution is resumed exactly where it has been interrupted.
>
> > The idea behind my patch was to get rid of that nr_free_contexts counter
> > that is (I thing) redundant with the context_map.
>
> Apparently its there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
>
> I suppose that what happens is that get_mmu_context() gets preempted after stealing
> a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
> next entry
>
> next_mmu_context = (ctx + 1) & LAST_CONTEXT;
Ugh ? Can switch_mm() be preempted at all ? Did I miss yet another
"let's open 10 gazillion races for gun" Ingo patch ?
> So if the now running higher prio tasks calls switch_mm() (which is likely to happen)
> it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
> sees "mm->context == CONTEXT".
I think the race is only when destroy_context() is preempted, but maybe
I missed something.
> I think that you should try "preempt_disable()/preempt_enable" pair at entry and
> exit of get_mmu_context() - I suppose around destroy_context() is not enough (you
> can try that also).
>
> spinlock ends up calling preempt_disable().
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 23:26 ` Benjamin Herrenschmidt
@ 2005-06-29 19:38 ` Marcelo Tosatti
2005-06-30 13:54 ` Guillaume Autran
2005-06-30 0:34 ` Eugene Surovegin
1 sibling, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-29 19:38 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linux-ppc-embedded
On Thu, Jun 30, 2005 at 09:26:07AM +1000, Benjamin Herrenschmidt wrote:
>
> > Execution is resumed exactly where it has been interrupted.
> >
> > > The idea behind my patch was to get rid of that nr_free_contexts counter
> > > that is (I thing) redundant with the context_map.
> >
> > Apparently its there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
> >
> > I suppose that what happens is that get_mmu_context() gets preempted after stealing
> > a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
> > next entry
> >
> > next_mmu_context = (ctx + 1) & LAST_CONTEXT;
>
> Ugh ? Can switch_mm() be preempted at all ? Did I miss yet another
> "let's open 10 gazillion races for gun" Ingo patch ?
Doh nope it can't - my bad.
> > So if the now running higher prio tasks calls switch_mm() (which is likely to happen)
> > it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
> > sees "mm->context == CONTEXT".
>
> I think the race is only when destroy_context() is preempted, but maybe
> I missed something.
Nope, I think you are right. My "theory" is obviously flawed now.
There seem to be several contexts where destroy_context() could be called
with preempt enabled - I should have shut up in the first place :)
Let's wait for Guillaume to test...
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 19:38 ` Marcelo Tosatti
@ 2005-06-30 13:54 ` Guillaume Autran
2005-07-05 13:12 ` Guillaume Autran
0 siblings, 1 reply; 30+ messages in thread
From: Guillaume Autran @ 2005-06-30 13:54 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
[-- Attachment #1: Type: text/plain, Size: 1837 bytes --]
Well, disabling preemption in get_mmu_context() does not help much...
I'm trying to disable preemption only inside destroy_context() as
suggested.
Will keep you posted.
Guillaume.
Marcelo Tosatti wrote:
>On Thu, Jun 30, 2005 at 09:26:07AM +1000, Benjamin Herrenschmidt wrote:
>
>
>>>Execution is resumed exactly where it has been interrupted.
>>>
>>>
>>>
>>>>The idea behind my patch was to get rid of that nr_free_contexts counter
>>>>that is (I thing) redundant with the context_map.
>>>>
>>>>
>>>Apparently its there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
>>>
>>>I suppose that what happens is that get_mmu_context() gets preempted after stealing
>>>a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
>>>next entry
>>>
>>>next_mmu_context = (ctx + 1) & LAST_CONTEXT;
>>>
>>>
>>Ugh ? Can switch_mm() be preempted at all ? Did I miss yet another
>>"let's open 10 gazillion races for gun" Ingo patch ?
>>
>>
>
>Doh nope it can't - my bad.
>
>
>
>>>So if the now running higher prio tasks calls switch_mm() (which is likely to happen)
>>>it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
>>>sees "mm->context == CONTEXT".
>>>
>>>
>>I think the race is only when destroy_context() is preempted, but maybe
>>I missed something.
>>
>>
>
>Nope, I think you are right. My "theory" is obviously flawed now.
>
>There seem to be several contexts where destroy_context() could be called
>with preempt enabled - I should have been shutup in the first place :)
>
>Lets wait for Guillaume to test...
>
>
>
--
=======================================
Guillaume Autran
Senior Software Engineer
MRV Communications, Inc.
Tel: (978) 952-4932 office
E-mail: gautran@mrv.com
=======================================
[-- Attachment #2: Type: text/html, Size: 2658 bytes --]
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-30 13:54 ` Guillaume Autran
@ 2005-07-05 13:12 ` Guillaume Autran
0 siblings, 0 replies; 30+ messages in thread
From: Guillaume Autran @ 2005-07-05 13:12 UTC (permalink / raw)
To: linux-ppc-embedded
[-- Attachment #1.1: Type: text/plain, Size: 2596 bytes --]
Sorry for the late reply. I was away for the long weekend. However, my
validation test ran all the way through the long weekend! So, we can
consider this a fix.
See the patch attached.
Thanks,
Guillaume.
Guillaume Autran wrote:
> Well, disabling preemption in the get_mmu_context() does not help much...
> I'm trying to disable preemption only inside destroy_mmu_context() as
> suggested.
> Will keep you posted.
>
> Guillaume.
>
>
>
> Marcelo Tosatti wrote:
>
>>On Thu, Jun 30, 2005 at 09:26:07AM +1000, Benjamin Herrenschmidt wrote:
>>
>>
>>>>Execution is resumed exactly where it has been interrupted.
>>>>
>>>>
>>>>
>>>>>The idea behind my patch was to get rid of that nr_free_contexts counter
>>>>>that is (I thing) redundant with the context_map.
>>>>>
>>>>>
>>>>Apparently its there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
>>>>
>>>>I suppose that what happens is that get_mmu_context() gets preempted after stealing
>>>>a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
>>>>next entry
>>>>
>>>>next_mmu_context = (ctx + 1) & LAST_CONTEXT;
>>>>
>>>>
>>>Ugh ? Can switch_mm() be preempted at all ? Did I miss yet another
>>>"let's open 10 gazillion races for gun" Ingo patch ?
>>>
>>>
>>
>>Doh nope it can't - my bad.
>>
>>
>>
>>>>So if the now running higher prio tasks calls switch_mm() (which is likely to happen)
>>>>it loops forever on atomic_dec_if_positive(&nr_free_contexts), while steal_context()
>>>>sees "mm->context == CONTEXT".
>>>>
>>>>
>>>I think the race is only when destroy_context() is preempted, but maybe
>>>I missed something.
>>>
>>>
>>
>>Nope, I think you are right. My "theory" is obviously flawed now.
>>
>>There seem to be several contexts where destroy_context() could be called
>>with preempt enabled - I should have been shutup in the first place :)
>>
>>Lets wait for Guillaume to test...
>>
>>
>>
>
>--
>=======================================
>Guillaume Autran
>Senior Software Engineer
>MRV Communications, Inc.
>Tel: (978) 952-4932 office
>E-mail: gautran@mrv.com
>=======================================
>
>------------------------------------------------------------------------
>
>_______________________________________________
>Linuxppc-embedded mailing list
>Linuxppc-embedded@ozlabs.org
>https://ozlabs.org/mailman/listinfo/linuxppc-embedded
>
--
=======================================
Guillaume Autran
Senior Software Engineer
MRV Communications, Inc.
Tel: (978) 952-4932 office
E-mail: gautran@mrv.com
=======================================
[-- Attachment #1.2: Type: text/html, Size: 3781 bytes --]
[-- Attachment #2: preempt.patch --]
[-- Type: text/plain, Size: 650 bytes --]
diff -Nru linux-2.6.12/include/asm-ppc/mmu_context.h linux-2.6.12.new/include/asm-ppc/mmu_context.h
--- linux-2.6.12/include/asm-ppc/mmu_context.h 2005-06-17 15:48:29.000000000 -0400
+++ linux-2.6.12.new/include/asm-ppc/mmu_context.h 2005-07-05 08:58:46.000000000 -0400
@@ -149,6 +149,7 @@
*/
static inline void destroy_context(struct mm_struct *mm)
{
+ preempt_disable();
if (mm->context != NO_CONTEXT) {
clear_bit(mm->context, context_map);
mm->context = NO_CONTEXT;
@@ -156,6 +157,7 @@
atomic_inc(&nr_free_contexts);
#endif
}
+ preempt_enable();
}
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 23:26 ` Benjamin Herrenschmidt
2005-06-29 19:38 ` Marcelo Tosatti
@ 2005-06-30 0:34 ` Eugene Surovegin
1 sibling, 0 replies; 30+ messages in thread
From: Eugene Surovegin @ 2005-06-30 0:34 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linux-ppc-embedded
On Thu, Jun 30, 2005 at 09:26:07AM +1000, Benjamin Herrenschmidt wrote:
>
> > Execution is resumed exactly where it has been interrupted.
> >
> > > The idea behind my patch was to get rid of that nr_free_contexts counter
> > > that is (I thing) redundant with the context_map.
> >
> > Apparently its there to avoid the spinlock exactly on !FEW_CONTEXTS machines.
> >
> > I suppose that what happens is that get_mmu_context() gets preempted after stealing
> > a context (so nr_free_contexts = 0), but before setting next_mmu_context to the
> > next entry
> >
> > next_mmu_context = (ctx + 1) & LAST_CONTEXT;
>
> Ugh ? Can switch_mm() be preempted at all ? Did I miss yet another
> "let's open 10 gazillion races for gun" Ingo patch ?
No, it can't. schedule() disables preemption at the very beginning.
--
Eugene
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue
2005-06-29 15:32 ` Guillaume Autran
2005-06-29 15:54 ` Marcelo Tosatti
@ 2005-06-29 23:24 ` Benjamin Herrenschmidt
1 sibling, 0 replies; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-29 23:24 UTC (permalink / raw)
To: Guillaume Autran; +Cc: linux-ppc-embedded
> I'm still a bit confused with "kernel preemption". One thing for sure
> is that disabling kernel preemption does indeed fix my problem.
> So, my question is, what if a task in the middle of being schedule
> gets preempted by an IRQ handler, where will this task restart
> execution ? Back at the beginning of schedule or where it left of ?
I very much doubt that schedule itself can be preempted :)
> The idea behind my patch was to get rid of that nr_free_contexts
> counter that is (I thing) redundant with the context_map.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-28 6:30 ` Benjamin Herrenschmidt
2005-06-28 13:42 ` [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue Guillaume Autran
@ 2005-06-28 13:53 ` Dan Malek
2005-06-28 23:47 ` Benjamin Herrenschmidt
2005-06-29 17:19 ` Marcelo Tosatti
1 sibling, 2 replies; 30+ messages in thread
From: Dan Malek @ 2005-06-28 13:53 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linux-ppc-embedded
On Jun 28, 2005, at 2:30 AM, Benjamin Herrenschmidt wrote:
> You should consider 8Mb pages the way we do BATs yes,
It's always been considered, just never fully implemented :-)
> Note that I'll soon send the patch I told you about that makes the
> virtual address picked by io_block_mapping() dynamic, so we no longer
> have to do crappy assumptions all over the place :)
Whatever, I'll never use it that way and no one else should either.
All of the io_block_mapping() calls should be used to set these 8M
mapped IO spaces, everyone should use ioremap() to map them,
and ioremap() has to be modified to find them for the 8xx.
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-28 13:53 ` [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Dan Malek
@ 2005-06-28 23:47 ` Benjamin Herrenschmidt
2005-06-29 17:19 ` Marcelo Tosatti
1 sibling, 0 replies; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-28 23:47 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
On Tue, 2005-06-28 at 09:53 -0400, Dan Malek wrote:
> On Jun 28, 2005, at 2:30 AM, Benjamin Herrenschmidt wrote:
>
> > You should consider 8Mb pages the way we do BATs yes,
>
> It's always been considered, just never fully implemented :-)
>
> > Note that I'll soon send the patch I told you about that makes the
> > virtual address picked by io_block_mapping() dynamic, so we no longer
> > have to do crappy assumptions all over the place :)
>
> Whatever, I'll never use it that way and no one else should either.
That is stupid
> All of the io_block_mapping() calls should be used to set these 8M
I think you just never bothered actually reading all I wrote about
that... None of what you describe requires hard coding the virtual
address. This is just plain bad.
> mapped IO spaces, everyone should use ioremap() to map them,
> and ioremap() has to be modified to find them for the 8xx.
We agree here.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-28 13:53 ` [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Dan Malek
2005-06-28 23:47 ` Benjamin Herrenschmidt
@ 2005-06-29 17:19 ` Marcelo Tosatti
2005-06-29 23:31 ` Benjamin Herrenschmidt
2005-06-30 17:49 ` Dan Malek
1 sibling, 2 replies; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-29 17:19 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
On Tue, Jun 28, 2005 at 09:53:26AM -0400, Dan Malek wrote:
>
> On Jun 28, 2005, at 2:30 AM, Benjamin Herrenschmidt wrote:
>
> >You should consider 8Mb pages the way we do BATs yes,
>
> It's always been considered, just never fully implemented :-)
>
> >Note that I'll soon send the patch I told you about that makes the
> >virtual address picked by io_block_mapping() dynamic, so we no longer
> >have to do crappy assumptions all over the place :)
>
> Whatever, I'll never use it that way and no one else should either.
Why not? AFAICS the idea is to have the virtual mappings dynamic
and not static - this is the _whole_ point of ioremap() instead
of io_block_mapping(), isn't it?
I fail to see any practical arguments against it...
> All of the io_block_mapping() calls should be used to set these 8M
> mapped IO spaces, everyone should use ioremap() to map them,
> and ioremap() has to be modified to find them for the 8xx.
What do you mean "everyone should use ioremap() to map them"?
Once the physical->virtual mapping for device IO space are set
with io_block_mapping() (or with ioremap() for dynamic virtual
addresses), why would you want to ioremap() the physical address
again???
PS: I've had a quick try at converting the IMMAP to use
ioremap instead (and have that dynamic virtual address stored
in a pointer), changed drivers to use that pointer instead of
hardcoded "IMMAP". Didnt work immediately :) Its not that the
idea?
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-29 17:19 ` Marcelo Tosatti
@ 2005-06-29 23:31 ` Benjamin Herrenschmidt
2005-06-30 18:05 ` Dan Malek
2005-06-30 17:49 ` Dan Malek
1 sibling, 1 reply; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-29 23:31 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
On Wed, 2005-06-29 at 14:19 -0300, Marcelo Tosatti wrote:
> Once the physical->virtual mapping for device IO space are set
> with io_block_mapping() (or with ioremap() for dynamic virtual
> addresses), why would you want to ioremap() the physical address
> again???
>
> PS: I've had a quick try at converting the IMMAP to use
> ioremap instead (and have that dynamic virtual address stored
> in a pointer), changed drivers to use that pointer instead of
> hardcoded "IMMAP". Didnt work immediately :) Its not that the
> idea?
No, you are missing the point. The idea is:
- Everything should use ioremap(), that is the proper interface for a
driver or whatever else to get a virtual address for a physical area.
- The architecture may use io_block_mapping() to create large fixed
mappings (BATs, CAMs, 8M TLBs, ...) for some physical areas as an
optimisation. Those should be transparent to later code, that is you
shouldn't have to care about their existence when you are driver. You
just call ioremap. If that space was already part of a large fixed
mapping, then ioremap will just return an address within that range.
- The debate between Dan and me here is about the semantics of
io_block_mapping(). Currently, it takes both the physical and virtual
address. Thus you hard-code your mappings at known virtual addresses. I
find this unnecessary and source of problems, and I want to add to
io_block_mapping() a way to "allocate" virtual addresses dynamically (by
simply moving down ioremap_bot like ioremap would do if called that
early). That way, there is no "magic" hard coded virtual addresses at
all anymore.
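(A rough sketch of that model; p_mapped_by_block() is a made-up helper standing
in for whatever record of BATs/CAMs/pinned 8M entries the platform keeps:)

/* returns the virtual address for a physical address already covered by a
 * large fixed mapping set up via io_block_mapping(), or 0 if none */
extern unsigned long p_mapped_by_block(phys_addr_t addr);
extern void __iomem *__ioremap(phys_addr_t addr, unsigned long size,
			       unsigned long flags);

void __iomem *ioremap(phys_addr_t addr, unsigned long size)
{
	unsigned long v = p_mapped_by_block(addr);

	if (v)		/* reuse the existing block mapping */
		return (void __iomem *)v;

	/* otherwise build an ordinary page-table mapping */
	return __ioremap(addr, size, _PAGE_NO_CACHE | _PAGE_GUARDED);
}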
Ben.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-29 23:31 ` Benjamin Herrenschmidt
@ 2005-06-30 18:05 ` Dan Malek
2005-06-30 23:29 ` Benjamin Herrenschmidt
0 siblings, 1 reply; 30+ messages in thread
From: Dan Malek @ 2005-06-30 18:05 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linux-ppc-embedded
On Jun 29, 2005, at 7:31 PM, Benjamin Herrenschmidt wrote:
> - The debate between Dan and me here is about the semantics of
> io_block_mapping().
My point of discussion is this function needs to be much smarter than
simply allocating a virtual address space. We need to track the calls
so that we can "grow" previous spaces. A single io_block_mapping()
should not always allocate a new BAT, CAM or otherwise wired entry.
It has to know the alignment, size and amount of resource available.
For example, if an io_block_mapping() requests a 4M space, and it
isn't possible to wire such a size, we still need to keep track of that
such that a subsequent 4M request is combined into a space that
can be wired with an 8M entry. We need to make it smart enough
to coalesce the spaces to maximize the use of the available and
minimal mapping resources. If io_block_mapping() is just a simple
functions that decrements a pointer and sets a value in a register,
then you have already required the caller to know everything about
the mapping details, so why bother performing "hidden" arithmetic
that is likely to be known by the caller? If we are going to change
this,
let's make it truly useful, so it understands the capabilities of the
processor, optimizes the resources, and keeps generic mapping
information so ioremap() doesn't care if it is mapped by BATs, CAMs,
or large pages.
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-30 18:05 ` Dan Malek
@ 2005-06-30 23:29 ` Benjamin Herrenschmidt
2005-07-01 7:01 ` Pantelis Antoniou
0 siblings, 1 reply; 30+ messages in thread
From: Benjamin Herrenschmidt @ 2005-06-30 23:29 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
On Thu, 2005-06-30 at 14:05 -0400, Dan Malek wrote:
> On Jun 29, 2005, at 7:31 PM, Benjamin Herrenschmidt wrote:
>
> > - The debate between Dan and me here is about the semantics of
> > io_block_mapping().
>
> My point of discussion is this function needs to be much smarter than
> simply allocating a virtual address space. We need to track the calls
> so that we can "grow" previous spaces. A single io_block_mapping()
> should not always allocate a new BAT, CAM or otherwise wired entry.
> It has to know the alignment, size and amount of resource available.
> For example, if an io_block_mapping() requests a 4M space, and it
> isn't possible to wire such a size, we still need to keep track of that
> such that a subsequent 4M request is combined into a space that
> can be wired with an 8M entry. We need to make it smart enough
> to coalesce the spaces to maximize the use of the available and
> minimal mapping resources. If io_block_mapping() is just a simple
> functions that decrements a pointer and sets a value in a register,
> then you have already required the caller to know everything about
> the mapping details, so why bother performing "hidden" arithmetic
> that is likely to be known by the caller? If we are going to change
> this
Everything ... but the virtual address, which is quite a bit :) My
problem is really with virtual addresses being hard coded, which makes
things complicated every time we try to do something with the kernel
virtual space. But ....
> let's make it truly useful, so it understands the capabilities of the
> processor, optimizes the resources, and keeps generic mapping
> information so ioremap() doesn't care if it is mapped by BATs, CAMs,
> or large pages.
... I do agree that making it even smarter so it can coalesce block
mappings with the same attributes would be "interesting".
Ben.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-30 23:29 ` Benjamin Herrenschmidt
@ 2005-07-01 7:01 ` Pantelis Antoniou
0 siblings, 0 replies; 30+ messages in thread
From: Pantelis Antoniou @ 2005-07-01 7:01 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linux-ppc-embedded
Benjamin Herrenschmidt wrote:
> On Thu, 2005-06-30 at 14:05 -0400, Dan Malek wrote:
>
>>On Jun 29, 2005, at 7:31 PM, Benjamin Herrenschmidt wrote:
>>
>>
>>> - The debate between Dan and me here is about the semantics of
>>>io_block_mapping().
>>
>>My point of discussion is this function needs to be much smarter than
>>simply allocating a virtual address space. We need to track the calls
>>so that we can "grow" previous spaces. A single io_block_mapping()
>>should not always allocate a new BAT, CAM or otherwise wired entry.
>>It has to know the alignment, size and amount of resource available.
>>For example, if an io_block_mapping() requests a 4M space, and it
>>isn't possible to wire such a size, we still need to keep track of that
>>such that a subsequent 4M request is combined into a space that
>>can be wired with an 8M entry. We need to make it smart enough
>>to coalesce the spaces to maximize the use of the available and
>>minimal mapping resources. If io_block_mapping() is just a simple
>>functions that decrements a pointer and sets a value in a register,
>>then you have already required the caller to know everything about
>>the mapping details, so why bother performing "hidden" arithmetic
>>that is likely to be known by the caller? If we are going to change
>>this
>
>
> Everyting ... but the virtual address, which is quite a bit :) My
> problem is really with virtual addresses beeing hard coded, which makes
> things complicated every time we try to do something with the kenrel
> virtual space. But ....
>
>
>>let's make it truly useful, so it understands the capabilities of the
>>processor, optimizes the resources, and keeps generic mapping
>>information so ioremap() doesn't care if it is mapped by BATs, CAMs,
>>or large pages.
>
>
> ... I do agree that making it even smarter so it can coalesce block
> mappings with the same attributes would be "interesting".
>
> Ben.
>
>
Let me pop in here.
My remote heap allocator has these properties, i.e. it can
coalesce adjacent areas if they are of the same "key".
Back to the depths which I now reside...
Regards
Pantelis
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid
2005-06-29 17:19 ` Marcelo Tosatti
2005-06-29 23:31 ` Benjamin Herrenschmidt
@ 2005-06-30 17:49 ` Dan Malek
1 sibling, 0 replies; 30+ messages in thread
From: Dan Malek @ 2005-06-30 17:49 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
On Jun 29, 2005, at 1:19 PM, Marcelo Tosatti wrote:
> What do you mean "everyone should use ioremap() to map them"?
All software needs to invoke some kind of mapping function such
as ioremap() in the case of drivers or functions that access
peripherals.
You should not assume someone has done the mapping for you,
as we did in the past for some optimizations.
> Once the physical->virtual mapping for device IO space are set
> with io_block_mapping() (or with ioremap() for dynamic virtual
> addresses), why would you want to ioremap() the physical address
> again???
You shouldn't know that io_block_mapping() has done anything
for you. It is intended to be used as an optimization, not as a
replacement for ioremap(). The ioremap() function needs to be smart
enough to detect these optimizations and return efficient mappings
if that was done. If a platform decides to not use these optimizations
(which is sometimes beneficial for debugging), your software using
ioremap() should still work just fine.
> PS: I've had a quick try at converting the IMMAP to use
> ioremap instead (and have that dynamic virtual address stored
> in a pointer), changed drivers to use that pointer instead of
> hardcoded "IMMAP". Didnt work immediately :) Its not that the
> idea?
Yes, and we have done lots of this in the 82xx/83xx/85xx cpm2
drivers in 2.6. In fact, in 2.6 it has to be done.
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH] 8xx: tlbie debugging aid (try #2)
2005-06-25 22:24 ` Dan Malek
2005-06-26 14:30 ` Marcelo Tosatti
@ 2005-06-27 14:28 ` Marcelo Tosatti
2005-06-27 20:18 ` Dan Malek
1 sibling, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-27 14:28 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
On Sat, Jun 25, 2005 at 06:24:47PM -0400, Dan Malek wrote:
> We really want to see this generate an error. We shouldn't be
> calling this on any of the pinned spaces. In the case of initially
> mapping the kernel space, we should set up the page tables but
> not call this far down, so we never get here.
How about the following, addressing your comments:
1) handles more than a single contiguous region by keeping them on a list_head (I thought of a
binary tree for storing the ranges, but that sounds like overkill at first given the
small number of pinned regions).
2) implemented as C code.
It yields:
_tlbie on pinned range: c0000000-c1800000
Badness in _tlbie at arch/ppc/mm/pgtable.c:527
Call trace:
[c0005530] dump_stack+0x18/0x28
[c0003628] check_bug_trap+0x84/0xac
[c00037b0] ProgramCheckException+0x160/0x1a0
[c0002d50] ret_from_except_full+0x0/0x4c
[c000a91c] _tlbie+0x94/0xa0
[c902f018] alloc_init_module+0x18/0x40 [alloc]
[c002c4ac] sys_init_module+0x224/0x324
[c00026f0] ret_from_syscall+0x0/0x44
diff --git a/arch/ppc/Kconfig b/arch/ppc/Kconfig
--- a/arch/ppc/Kconfig
+++ b/arch/ppc/Kconfig
@@ -1296,6 +1296,11 @@ config BOOT_LOAD
config PIN_TLB
bool "Pinned Kernel TLBs (860 ONLY)"
depends on ADVANCED_OPTIONS && 8xx
+
+config DEBUG_PIN_TLBIE
+ bool "Check for overlapping TLB invalidates inside the pinned area"
+ depends on ADVANCED_OPTIONS && 8xx && PIN_TLB
+
endmenu
source "drivers/Kconfig"
diff --git a/arch/ppc/kernel/misc.S b/arch/ppc/kernel/misc.S
--- a/arch/ppc/kernel/misc.S
+++ b/arch/ppc/kernel/misc.S
@@ -494,7 +494,7 @@ _GLOBAL(_tlbia)
/*
* Flush MMU TLB for a particular address
*/
-_GLOBAL(_tlbie)
+_GLOBAL(__tlbie)
#if defined(CONFIG_40x)
tlbsx. r3, 0, r3
bne 10f
diff --git a/arch/ppc/mm/pgtable.c b/arch/ppc/mm/pgtable.c
--- a/arch/ppc/mm/pgtable.c
+++ b/arch/ppc/mm/pgtable.c
@@ -32,6 +32,7 @@
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/io.h>
+#include <asm/tlb.h>
#include "mmu_decl.h"
@@ -469,3 +470,48 @@ exit:
return ret;
}
+#ifndef CONFIG_DEBUG_PIN_TLBIE
+
+void _tlbie(unsigned long address)
+{
+ __tlbie(address);
+}
+
+#else
+LIST_HEAD(pin_range_root);
+
+static struct pinned_range kernelbase = {
+ start: KERNELBASE,
+ end: KERNELBASE+0x1800000,
+};
+
+static struct pinned_range immr = {
+ start: IMMR,
+ end: IMMR+0x800000,
+};
+
+inline void register_pinned_entries(void)
+{
+ list_add(&kernelbase.pin_list, &pin_range_root);
+ list_add(&immr.pin_list, &pin_range_root);
+}
+
+void _tlbie(unsigned long address)
+{
+ struct list_head *l;
+ struct pinned_range *r;
+
+ list_for_each(l, &pin_range_root) {
+ r = list_entry(l, struct pinned_range, pin_list);
+
+ if (address < r->start)
+ continue;
+ if (address >= r->end)
+ continue;
+ printk("_tlbie on pinned range: %lx-%lx\n", r->start, r->end);
+ WARN_ON(1);
+ }
+
+ __tlbie(address);
+}
+#endif
diff --git a/include/asm-ppc/tlb.h b/include/asm-ppc/tlb.h
--- a/include/asm-ppc/tlb.h
+++ b/include/asm-ppc/tlb.h
@@ -18,6 +18,18 @@
#include <asm/page.h>
#include <asm/mmu.h>
+#ifdef CONFIG_DEBUG_PIN_TLBIE
+struct pinned_range {
+ unsigned long start, end;
+ struct list_head pin_list;
+};
+inline void register_pinned_entries(void);
+#else
+inline void register_pinned_entries(void)
+{
+ return;
+}
+#endif
#ifdef CONFIG_PPC_STD_MMU
/* Classic PPC with hash-table based MMU... */
diff --git a/include/asm-ppc/tlbflush.h b/include/asm-ppc/tlbflush.h
--- a/include/asm-ppc/tlbflush.h
+++ b/include/asm-ppc/tlbflush.h
@@ -13,9 +13,14 @@
#include <linux/config.h>
#include <linux/mm.h>
+extern void __tlbie(unsigned long address);
extern void _tlbie(unsigned long address);
extern void _tlbia(void);
+#ifdef CONFIG_DEBUG_PIN_TLBIE
+extern struct list_head pin_range_root;
+#endif
+
#if defined(CONFIG_4xx)
#ifndef CONFIG_44x
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: tlbie debugging aid (try #2)
2005-06-27 14:28 ` [PATCH] 8xx: tlbie debugging aid (try #2) Marcelo Tosatti
@ 2005-06-27 20:18 ` Dan Malek
2005-06-27 14:56 ` Marcelo Tosatti
0 siblings, 1 reply; 30+ messages in thread
From: Dan Malek @ 2005-06-27 20:18 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
On Jun 27, 2005, at 10:28 AM, Marcelo Tosatti wrote:
> It yields:
>
> _tlbie on pinned range: c0000000-c1800000
> Badness in _tlbie at arch/ppc/mm/pgtable.c:527
> Call trace:
> [c0005530] dump_stack+0x18/0x28
> [c0003628] check_bug_trap+0x84/0xac
> [c00037b0] ProgramCheckException+0x160/0x1a0
> [c0002d50] ret_from_except_full+0x0/0x4c
> [c000a91c] _tlbie+0x94/0xa0
> [c902f018] alloc_init_module+0x18/0x40 [alloc]
> [c002c4ac] sys_init_module+0x224/0x324
> [c00026f0] ret_from_syscall+0x0/0x44
How much real memory on your board?
We need to ensure VMALLOC_START is beyond
the pinned entries. We should make all of the code
much smarter to pin on the real space that is on
the board. For testing now, just make VMALLOC_OFFSET
32M, which will push the start to the 32M boundary
after the kernel.
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: tlbie debugging aid (try #2)
2005-06-27 20:18 ` Dan Malek
@ 2005-06-27 14:56 ` Marcelo Tosatti
2005-06-27 20:53 ` Dan Malek
0 siblings, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2005-06-27 14:56 UTC (permalink / raw)
To: Dan Malek; +Cc: linux-ppc-embedded
Hi Dan,
On Mon, Jun 27, 2005 at 04:18:14PM -0400, Dan Malek wrote:
>
> On Jun 27, 2005, at 10:28 AM, Marcelo Tosatti wrote:
>
> >It yields:
> >
> >_tlbie on pinned range: c0000000-c1800000
> >Badness in _tlbie at arch/ppc/mm/pgtable.c:527
> >Call trace:
> > [c0005530] dump_stack+0x18/0x28
> > [c0003628] check_bug_trap+0x84/0xac
> > [c00037b0] ProgramCheckException+0x160/0x1a0
> > [c0002d50] ret_from_except_full+0x0/0x4c
> > [c000a91c] _tlbie+0x94/0xa0
> > [c902f018] alloc_init_module+0x18/0x40 [alloc]
> > [c002c4ac] sys_init_module+0x224/0x324
> > [c00026f0] ret_from_syscall+0x0/0x44
Note: this was just a test module doing tlbie(0xc0000100)...
Hmm, it should also print out the address in question...
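(The module was roughly of this shape; the reconstruction below is a guess
for illustration, not the actual code, and it assumes _tlbie is reachable
from module code on this tree:)

/* Guessed reconstruction of the test module that produced the trace above. */
#include <linux/module.h>
#include <linux/init.h>
#include <asm/tlbflush.h>

static int __init alloc_init(void)
{
	/* 0xc0000100 falls inside the pinned kernel region, so the
	 * CONFIG_DEBUG_PIN_TLBIE check should fire. */
	_tlbie(0xc0000100);
	return 0;
}

static void __exit alloc_exit(void)
{
}

module_init(alloc_init);
module_exit(alloc_exit);
MODULE_LICENSE("GPL");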
> How much real memory on your board?
128M
> We need to ensure VMALLOC_START is beyond
> the pinned entries.
Right now VMALLOC_START is before the IMMR pinned space.
> We should make all of the code
> much smarter to pin on the real space that is on
> the board.
Oh! What are the side effects of such pinning as the
code is today?
The only issue I see with pinning virtual address translations
farther than the physical addresses (real space) is that we lose
some virtual space, but nothing more than that.
Is it only that?
> For testing now, just make VMALLOC_OFFSET
> 32M, which will push the start to the 32M boundary
> after the kernel.
For what purpose? Sorry I don't get you, please be more
verbose.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] 8xx: tlbie debugging aid (try #2)
2005-06-27 14:56 ` Marcelo Tosatti
@ 2005-06-27 20:53 ` Dan Malek
0 siblings, 0 replies; 30+ messages in thread
From: Dan Malek @ 2005-06-27 20:53 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-ppc-embedded
On Jun 27, 2005, at 10:56 AM, Marcelo Tosatti wrote:
> Note: this was just a test module doing tlbie(0xc0000100)...
> Hum, it should also print out the address in question...
Oh, I see. I thought it was a bug with loading a module
in general, since it allocates from vmalloc() space.
>> How much real memory on your board?
>
> 128M
No problem.
>> We need to ensure VMALLOC_START is beyond
>> the pinned entries.
>
> Right now VMALLOC_START is before the IMMR pinned space.
Sorry, I wasn't clear :-) I meant we need to ensure VMALLOC_START
is beyond the pinned _data_ area.
> Oh! What are the side effects of such pinning as the
> code is today?
The code maps 24M of data space, plus 8M of IMMR (and anything
that follows). So, if you don't have enough real memory, you can
end up with both a pinned entry and vmalloc() trying to share the same
VM space, which is a bad thing. The problem depends on the amount
of real memory, plus the "offset" hole of the vmalloc() space.
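Put in numbers (illustrative values only, assuming the classic ppc32
VMALLOC_START formula; the default VMALLOC_OFFSET varies by tree):

/* Illustration only: with
 *     VMALLOC_START = (high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET - 1)
 * a board with 8M of RAM and a small offset puts the start of vmalloc
 * space inside the 24M pinned data window starting at KERNELBASE. */
#include <stdio.h>

int main(void)
{
	unsigned long kernelbase    = 0xc0000000UL;
	unsigned long ram           = 8UL << 20;		/* 8M board */
	unsigned long high_memory   = kernelbase + ram;		/* 0xc0800000 */
	unsigned long offset        = 8UL << 20;		/* assumed 8M hole */
	unsigned long vmalloc_start = (high_memory + offset) & ~(offset - 1);
	unsigned long pinned_end    = kernelbase + (24UL << 20);

	/* Prints 0xc1000000 vs 0xc1800000: vmalloc starts inside the pin. */
	printf("vmalloc starts at 0x%lx, pinned data ends at 0x%lx\n",
	       vmalloc_start, pinned_end);
	/* Bumping the offset to 32M gives 0xc2000000, past the pinned window. */
	return 0;
}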
> For what purpose? Sorry I don't get you, please be more
> verbose.
Just to prevent what I mentioned in the previous paragraph.
Since you have lots of real memory, this doesn't affect you.
Thanks.
-- Dan
^ permalink raw reply [flat|nested] 30+ messages in thread
end of thread, other threads: [~2005-07-05 13:10 UTC | newest]
Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
2005-06-25 14:53 [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Marcelo Tosatti
2005-06-25 22:24 ` Dan Malek
2005-06-26 14:30 ` Marcelo Tosatti
2005-06-27 13:39 ` Marcelo Tosatti
2005-06-27 20:46 ` Dan Malek
2005-06-28 6:30 ` Benjamin Herrenschmidt
2005-06-28 13:42 ` [PATCH] 8xx: get_mmu_context() for (very) FEW_CONTEXTS and KERNEL_PREEMPT race/starvation issue Guillaume Autran
2005-06-29 4:15 ` Benjamin Herrenschmidt
2005-06-29 15:32 ` Guillaume Autran
2005-06-29 15:54 ` Marcelo Tosatti
2005-06-29 21:25 ` Guillaume Autran
2005-06-29 17:00 ` Marcelo Tosatti
2005-06-29 23:26 ` Benjamin Herrenschmidt
2005-06-29 19:38 ` Marcelo Tosatti
2005-06-30 13:54 ` Guillaume Autran
2005-07-05 13:12 ` Guillaume Autran
2005-06-30 0:34 ` Eugene Surovegin
2005-06-29 23:24 ` Benjamin Herrenschmidt
2005-06-28 13:53 ` [PATCH] 8xx: map_page() skip pinned region and tlbie debugging aid Dan Malek
2005-06-28 23:47 ` Benjamin Herrenschmidt
2005-06-29 17:19 ` Marcelo Tosatti
2005-06-29 23:31 ` Benjamin Herrenschmidt
2005-06-30 18:05 ` Dan Malek
2005-06-30 23:29 ` Benjamin Herrenschmidt
2005-07-01 7:01 ` Pantelis Antoniou
2005-06-30 17:49 ` Dan Malek
2005-06-27 14:28 ` [PATCH] 8xx: tlbie debugging aid (try #2) Marcelo Tosatti
2005-06-27 20:18 ` Dan Malek
2005-06-27 14:56 ` Marcelo Tosatti
2005-06-27 20:53 ` Dan Malek