public inbox for linux-arch@vger.kernel.org
* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
       [not found]   ` <20070717010324.833fee7e.akpm@linux-foundation.org>
@ 2007-07-17 13:56     ` Martin Schwidefsky
  2007-07-17 18:18       ` Russell King
  2007-07-17 21:55       ` Luck, Tony
  0 siblings, 2 replies; 9+ messages in thread
From: Martin Schwidefsky @ 2007-07-17 13:56 UTC (permalink / raw)
  To: Andrew Morton, linux-arch; +Cc: davem, hugh

On Tue, 2007-07-17 at 01:03 -0700, Andrew Morton wrote:
> OK, I don't understand how this patch works - from a quick glance it
> appears to be forgetting to flush stuff altogether on arm and arm26 at
> least and I see no sign that Russell, Tony and Ian have even seen it.

Added linux-arch so that affected arch-maintainers can comment.

> And it should have been loudly pointed out to various arch maintainers so
> they have an opportunity to implement the optimisation which it offers.

The idea behind the optimization of this patch is that for a full mm
flush (tlb_gather_mmu called with full_mm_flush==1) a single
flush_tlb_mm is enough to remove all TLBs of the mm. New ones cannot be
created since full_mm_flush==1 only for exit_mmap. The same is true for
a normal unmap if there is only one user of the mm and the mm is the
currently active mm.

-- 
blue skies,
  Martin.

"Reality continues to ruin my life." - Calvin.

---
Subject: [PATCH] avoid tlb gather restarts.

From: Martin Schwidefsky <schwidefsky@de.ibm.com>

If need_resched() is false in the inner loop of unmap_vmas it is
unnecessary to do a full blown tlb_finish_mmu / tlb_gather_mmu for
each ZAP_BLOCK_SIZE ptes. Do a tlb_flush_mmu() instead. That gives
architectures with a non-generic tlb flush implementation room for
optimization. The tlb_flush_mmu primitive is available with the
generic tlb flush code; ia64_tlb_flush_mmu needs to be renamed
and a dummy function is added to arm and arm26.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
---

 include/asm-arm/tlb.h     |    5 +++++
 include/asm-arm26/tlb.h   |    5 +++++
 include/asm-ia64/tlb.h    |    6 +++---
 include/asm-sparc64/tlb.h |    6 +++---
 mm/memory.c               |   16 ++++++----------
 5 files changed, 22 insertions(+), 16 deletions(-)

diff -urpN linux-2.6/include/asm-arm/tlb.h linux-2.6-patched/include/asm-arm/tlb.h
--- linux-2.6/include/asm-arm/tlb.h	2006-11-08 10:45:43.000000000 +0100
+++ linux-2.6-patched/include/asm-arm/tlb.h	2007-07-17 15:18:02.000000000 +0200
@@ -52,6 +52,11 @@ tlb_gather_mmu(struct mm_struct *mm, uns
 }
 
 static inline void
+tlb_flush_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
+{
+}
+
+static inline void
 tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 {
 	if (tlb->fullmm)
diff -urpN linux-2.6/include/asm-arm26/tlb.h linux-2.6-patched/include/asm-arm26/tlb.h
--- linux-2.6/include/asm-arm26/tlb.h	2006-11-08 10:45:43.000000000 +0100
+++ linux-2.6-patched/include/asm-arm26/tlb.h	2007-07-17 15:18:02.000000000 +0200
@@ -29,6 +29,11 @@ tlb_gather_mmu(struct mm_struct *mm, uns
 }
 
 static inline void
+tlb_flush_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
+{
+}
+
+static inline void
 tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 {
         if (tlb->need_flush)
diff -urpN linux-2.6/include/asm-ia64/tlb.h linux-2.6-patched/include/asm-ia64/tlb.h
--- linux-2.6/include/asm-ia64/tlb.h	2006-11-08 10:45:45.000000000 +0100
+++ linux-2.6-patched/include/asm-ia64/tlb.h	2007-07-17 15:18:02.000000000 +0200
@@ -72,7 +72,7 @@ DECLARE_PER_CPU(struct mmu_gather, mmu_g
  * freed pages that where gathered up to this point.
  */
 static inline void
-ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
+tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
 {
 	unsigned int nr;
 
@@ -160,7 +160,7 @@ tlb_finish_mmu (struct mmu_gather *tlb, 
 	 * Note: tlb->nr may be 0 at this point, so we can't rely on tlb->start_addr and
 	 * tlb->end_addr.
 	 */
-	ia64_tlb_flush_mmu(tlb, start, end);
+	tlb_flush_mmu(tlb, start, end);
 
 	/* keep the page table cache within bounds */
 	check_pgt_cache();
@@ -184,7 +184,7 @@ tlb_remove_page (struct mmu_gather *tlb,
 	}
 	tlb->pages[tlb->nr++] = page;
 	if (tlb->nr >= FREE_PTE_NR)
-		ia64_tlb_flush_mmu(tlb, tlb->start_addr, tlb->end_addr);
+		tlb_flush_mmu(tlb, tlb->start_addr, tlb->end_addr);
 }
 
 /*
diff -urpN linux-2.6/include/asm-sparc64/tlb.h linux-2.6-patched/include/asm-sparc64/tlb.h
--- linux-2.6/include/asm-sparc64/tlb.h	2007-07-02 08:45:46.000000000 +0200
+++ linux-2.6-patched/include/asm-sparc64/tlb.h	2007-07-17 15:18:02.000000000 +0200
@@ -55,7 +55,7 @@ static inline struct mmu_gather *tlb_gat
 }
 
 
-static inline void tlb_flush_mmu(struct mmu_gather *mp)
+static inline void tlb_flush_mmu(struct mmu_gather *mp, unsigned long start, unsigned long end)
 {
 	if (mp->need_flush) {
 		free_pages_and_swap_cache(mp->pages, mp->pages_nr);
@@ -74,7 +74,7 @@ extern void smp_flush_tlb_mm(struct mm_s
 
 static inline void tlb_finish_mmu(struct mmu_gather *mp, unsigned long start, unsigned long end)
 {
-	tlb_flush_mmu(mp);
+	tlb_flush_mmu(mp, start, end);
 
 	if (mp->fullmm)
 		mp->fullmm = 0;
@@ -96,7 +96,7 @@ static inline void tlb_remove_page(struc
 	mp->need_flush = 1;
 	mp->pages[mp->pages_nr++] = page;
 	if (mp->pages_nr >= FREE_PTE_NR)
-		tlb_flush_mmu(mp);
+		tlb_flush_mmu(mp, 0, 0);
 }
 
 #define tlb_remove_tlb_entry(mp,ptep,addr) do { } while (0)
diff -urpN linux-2.6/mm/memory.c linux-2.6-patched/mm/memory.c
--- linux-2.6/mm/memory.c	2007-07-17 12:12:30.000000000 +0200
+++ linux-2.6-patched/mm/memory.c	2007-07-17 15:18:02.000000000 +0200
@@ -851,18 +851,15 @@ unsigned long unmap_vmas(struct mmu_gath
 				break;
 			}
 
-			tlb_finish_mmu(*tlbp, tlb_start, start);
-
 			if (need_resched() ||
 				(i_mmap_lock && need_lockbreak(i_mmap_lock))) {
-				if (i_mmap_lock) {
-					*tlbp = NULL;
+				if (i_mmap_lock)
 					goto out;
-				}
+				tlb_finish_mmu(*tlbp, tlb_start, start);
 				cond_resched();
-			}
-
-			*tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
+				*tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
+			} else
+				tlb_flush_mmu(*tlbp, tlb_start, start);
 			tlb_start_valid = 0;
 			zap_work = ZAP_BLOCK_SIZE;
 		}
@@ -890,8 +887,7 @@ unsigned long zap_page_range(struct vm_a
 	tlb = tlb_gather_mmu(mm, 0);
 	update_hiwater_rss(mm);
 	end = unmap_vmas(&tlb, vma, address, end, &nr_accounted, details);
-	if (tlb)
-		tlb_finish_mmu(tlb, address, end);
+	tlb_finish_mmu(tlb, address, end);
 	return end;
 }
 



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 13:56     ` + sparc64-rename-tlb_flush_mmu.patch added to -mm tree Martin Schwidefsky
@ 2007-07-17 18:18       ` Russell King
  2007-07-17 19:08         ` Andrew Morton
  2007-07-17 21:55       ` Luck, Tony
  1 sibling, 1 reply; 9+ messages in thread
From: Russell King @ 2007-07-17 18:18 UTC (permalink / raw)
  To: Martin Schwidefsky; +Cc: Andrew Morton, linux-arch, davem, hugh

On Tue, Jul 17, 2007 at 03:56:23PM +0200, Martin Schwidefsky wrote:
> On Tue, 2007-07-17 at 01:03 -0700, Andrew Morton wrote:
> > OK, I don't understand how this patch works - from a quick glance it
> > appears to be forgetting to flush stuff altogether on arm and arm26 at
> > least and I see no sign that Russell, Tony and Ian have even seen it.
> 
> Added linux-arch so that affected arch-maintainers can comment.

Having a little more information about Andrew's concern would be nice.
Under what circumstances do you think we're forgetting to flush stuff?

> > And it should have been loudly pointed out to various arch maintainers so
> > they have an opportunity to implement the optimisation which it offers.
> 
> The idea behind the optimization of this patch is that for a full mm
> flush (tlb_gather_mmu called with full_mm_flush==1) a single
> flush_tlb_mm is enough to remove all TLBs of the mm. New ones cannot be
> created since full_mm_flush==1 only for exit_mmap. The same is true for
> a normal unmap if there is only one user of the mm and the mm is the
> currently active mm.
> 
> -- 
> blue skies,
>   Martin.
> 
> "Reality continues to ruin my life." - Calvin.
> 
> ---
> Subject: [PATCH] avoid tlb gather restarts.
> 
> From: Martin Schwidefsky <schwidefsky@de.ibm.com>
> 
> If need_resched() is false in the inner loop of unmap_vmas it is
> unnecessary to do a full blown tlb_finish_mmu / tlb_gather_mmu for
> each ZAP_BLOCK_SIZE ptes. Do a tlb_flush_mmu() instead. That gives
> architectures with a non-generic tlb flush implementation room for
> optimization. The tlb_flush_mmu primitive is available with the
> generic tlb flush code; ia64_tlb_flush_mmu needs to be renamed
> and a dummy function is added to arm and arm26.

This description sounds sane.  Nothing really jumps out as a problem
from reading the patch.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 18:18       ` Russell King
@ 2007-07-17 19:08         ` Andrew Morton
  2007-07-17 21:14           ` Russell King
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2007-07-17 19:08 UTC (permalink / raw)
  To: Russell King; +Cc: Martin Schwidefsky, linux-arch, davem, hugh

On Tue, 17 Jul 2007 19:18:43 +0100 Russell King <rmk@arm.linux.org.uk> wrote:

> On Tue, Jul 17, 2007 at 03:56:23PM +0200, Martin Schwidefsky wrote:
> > On Tue, 2007-07-17 at 01:03 -0700, Andrew Morton wrote:
> > > OK, I don't understand how this patch works - from a quick glance it
> > > appears to be forgetting to flush stuff altogether on arm and arm26 at
> > > least and I see no sign that Russell, Tony and Ian have even seen it.
> > 
> > Added linux-arch so that affected arch-maintainers can comment.
> 
> Having a little more information about Andrew's concern would be nice.
> Under what circumstances do you think we're forgetting to flush stuff?
> 

"quick glance".  ARM's tlb_flush_mmu() becomes a no-op and I thought that
some real tlb_finish_mmu() got replaced by that.  But I didn't look very
closely.


* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 19:08         ` Andrew Morton
@ 2007-07-17 21:14           ` Russell King
  2007-07-17 21:42             ` Andrew Morton
  0 siblings, 1 reply; 9+ messages in thread
From: Russell King @ 2007-07-17 21:14 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Martin Schwidefsky, linux-arch, davem, hugh

On Tue, Jul 17, 2007 at 12:08:20PM -0700, Andrew Morton wrote:
> On Tue, 17 Jul 2007 19:18:43 +0100 Russell King <rmk@arm.linux.org.uk> wrote:
> 
> > On Tue, Jul 17, 2007 at 03:56:23PM +0200, Martin Schwidefsky wrote:
> > > On Tue, 2007-07-17 at 01:03 -0700, Andrew Morton wrote:
> > > > OK, I don't understand how this patch works - from a quick glance it
> > > > appears to be forgetting to flush stuff altogether on arm and arm26 at
> > > > least and I see no sign that Russell, Tony and Ian have even seen it.
> > > 
> > > Added linux-arch so that affected arch-maintainers can comment.
> > 
> > Having a little more information about Andrew's concern would be nice.
> > Under what circumstances do you think we're forgetting to flush stuff?
> > 
> 
> "quick glance".  ARM's tlb_flush_mmu() becomes a no-op and I thought that
> some real tlb_finish_mmu() got replaced by that.  But I didn't look very
> closely.

I don't think ARM will have a problem with this change.

In the fullmm case, tlb_finish_mmu() will flush the entire mm, so
missing out the flush for each chunk is itself a worthwhile optimisation.

In the !fullmm case, tlb_finish_mmu() does nothing as far as flushing
is concerned, and in any case does nothing with it's start and end
variables.

So I think this patch suits us just fine.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 21:14           ` Russell King
@ 2007-07-17 21:42             ` Andrew Morton
  2007-07-18  7:48               ` Martin Schwidefsky
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2007-07-17 21:42 UTC (permalink / raw)
  To: Russell King; +Cc: Martin Schwidefsky, linux-arch, davem, hugh

On Tue, 17 Jul 2007 22:14:45 +0100
Russell King <rmk@arm.linux.org.uk> wrote:

> On Tue, Jul 17, 2007 at 12:08:20PM -0700, Andrew Morton wrote:
> > On Tue, 17 Jul 2007 19:18:43 +0100 Russell King <rmk@arm.linux.org.uk> wrote:
> > 
> > > On Tue, Jul 17, 2007 at 03:56:23PM +0200, Martin Schwidefsky wrote:
> > > > On Tue, 2007-07-17 at 01:03 -0700, Andrew Morton wrote:
> > > > > OK, I don't understand how this patch works - from a quick glance it
> > > > > appears to be forgetting to flush stuff altogether on arm and arm26 at
> > > > > least and I see no sign that Russell, Tony and Ian have even seen it.
> > > > 
> > > > Added linux-arch so that affected arch-maintainers can comment.
> > > 
> > > Having a little more information about Andrew's concern would be nice.
> > > Under what circumstances do you think we're forgetting to flush stuff?
> > > 
> > 
> > "quick glance".  ARM's tlb_flush_mmu() becomes a no-op and I thought that
> > some real tlb_finish_mmu() got replaced by that.  But I didn't look very
> > closely.
> 
> I don't think ARM will have a problem with this change.
> 
> In the fullmm case, tlb_finish_mmu() will flush the entire mm, so
> missing out the flush for each chunk is itself a worthwhile optimisation.
> 
> In the !fullmm case, tlb_finish_mmu() does nothing as far as flushing
> is concerned, and in any case does nothing with its start and end
> variables.
> 
> So I think this patch suits us just fine.

umm, OK, well what is the spec for this new interface which Martin
is proposing to add?

It _seems_ to be that if the arch implements tlb_flush_mmu() then its
tlb_finish_mmu() can (should) be a no-op?

Or if the arch's tlb_finish_mmu() does a full mm "flush" (god I hate that
term - here we meant writeback, and perhaps invalidate??) then its
tlb_flush_mmu() can (should) be a no-op.

Or something like that.  Martin, I'd suggest that an update to
Documentation/cachetlb.txt is in order, spell this all out.

Doing this properly would require that Documentation/cachetlb.txt say
something about tlb_gather_mmu() and tlb_finish_mmu() too, I guess.
Please ;)




* RE: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 13:56     ` + sparc64-rename-tlb_flush_mmu.patch added to -mm tree Martin Schwidefsky
  2007-07-17 18:18       ` Russell King
@ 2007-07-17 21:55       ` Luck, Tony
  2007-07-17 22:04         ` Russell King
  1 sibling, 1 reply; 9+ messages in thread
From: Luck, Tony @ 2007-07-17 21:55 UTC (permalink / raw)
  To: schwidefsky, Andrew Morton, linux-arch; +Cc: davem, hugh

-			tlb_finish_mmu(*tlbp, tlb_start, start);
-
 			if (need_resched() ||
 				(i_mmap_lock && need_lockbreak(i_mmap_lock))) {
-				if (i_mmap_lock) {
-					*tlbp = NULL;
+				if (i_mmap_lock)
 					goto out;

If we take this "goto out" path, then we'll miss out on calling
the tlb_finish_mmu() which you deleted just above.  At the very
least this will leave preemption disabled (since we'll miss calling
the put_cpu_var(mmu_gathers)).

I think I'm also missing the big picture view of what you are
doing here.

-Tony


* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 21:55       ` Luck, Tony
@ 2007-07-17 22:04         ` Russell King
  2007-07-17 22:21           ` Luck, Tony
  0 siblings, 1 reply; 9+ messages in thread
From: Russell King @ 2007-07-17 22:04 UTC (permalink / raw)
  To: Luck, Tony; +Cc: schwidefsky, Andrew Morton, linux-arch, davem, hugh

On Tue, Jul 17, 2007 at 02:55:05PM -0700, Luck, Tony wrote:
> -			tlb_finish_mmu(*tlbp, tlb_start, start);
> -
>  			if (need_resched() ||
>  				(i_mmap_lock && need_lockbreak(i_mmap_lock))) {
> -				if (i_mmap_lock) {
> -					*tlbp = NULL;
> +				if (i_mmap_lock)
>  					goto out;
> 
> If we take this "goto out" path, then we'll miss out on calling
> the tlb_finish_mmu() which you deleted just above.

Look at the next hunk in the patch.  The old path set *tlbp to NULL
if we exit this function having called tlb_finish_mmu().  In that case,
we avoid calling tlb_finish_mmu() again.  Otherwise, *tlbp is left
pointing at the mmu_gather structure, and it's left for zap_page_range()
to call tlb_finish_mmu().

The new path actually cleans this up - we always exit unmap_vmas()
_with_ the tlb context requiring tlb_finish_mmu(), so the call in
zap_page_range() becomes unconditional.

So, if anything, this is a much needed cleanup of the behaviour of
unmap_vmas().

> At the very
> least this will leave preemption disabled (since we'll miss calling
> the put_cpu_var(mmu_gathers)).
> 
> I think I'm also missing the big picture view of what you are
> doing here.

Avoiding calling tlb_finish_mmu() and tlb_gather_mmu() unnecessarily,
and (eg) thereby avoiding some repetitive entire TLB invalidations on
ARM.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


* RE: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 22:04         ` Russell King
@ 2007-07-17 22:21           ` Luck, Tony
  0 siblings, 0 replies; 9+ messages in thread
From: Luck, Tony @ 2007-07-17 22:21 UTC (permalink / raw)
  To: Russell King; +Cc: schwidefsky, Andrew Morton, linux-arch, davem, hugh

> Look at the next hunk in the patch.  The old path set *tlbp to NULL
> if we exit this function having called tlb_finish_mmu().  In that case,
> we avoid calling tlb_finish_mmu() again.  Otherwise, *tlbp is left
> pointing at the mmu_gather structure, and it's left for zap_page_range()
> to call tlb_finish_mmu().

Ah yes.  That does look a bit cleaner.

So on ia64 we are just swapping some tlb_finish_mmu() calls for
tlb_flush_mmu() (formerly known as ia64_tlb_flush_mmu()).  The
only work we save are the calls to check_pgt_cache() and
put_cpu_var(mmu_gathers) ... which sounds like we may have less
preemption opportunities in this loop (in the cases where we take
the !need_resched() path).

-Tony


* Re: + sparc64-rename-tlb_flush_mmu.patch added to -mm tree
  2007-07-17 21:42             ` Andrew Morton
@ 2007-07-18  7:48               ` Martin Schwidefsky
  0 siblings, 0 replies; 9+ messages in thread
From: Martin Schwidefsky @ 2007-07-18  7:48 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Russell King, linux-arch, davem, hugh

On Tue, 2007-07-17 at 14:42 -0700, Andrew Morton wrote:
> umm, OK, well what is the spec for this new interface which Martin
> is proposing to add?

Actually the interface is already there but so far it has been internal
to the generic tlb gather code and most architecture variants.

> It _seems_ to be that if the arch implements tlb_flush_mmu() then its
> tlb_finish_mmu() can (should) be a no-op?

Yes, tlb_flush_mmu() can be a no-op, it basically says that "now is a
good time to get rid of the tlbs / pages gathered in the tlb gather
structure so far". In case you have already freed everything
tlb_flush_mmu is naturally a no-op, e.g. for arm. 

> Or if the arch's tlb_finish_mmu() does a full mm "flush" (god I hate that
> term - here we meant writeback, and perhaps invalidate??) then its
> tlb_flush_mmu() can (should) be a no-op.

This is one case where the architecture can flush at the beginning of
the tlb gather operation and tlb_flush_mmu turns into a nop.
There are four different batched tlb operations:
1) change_protection does a number of ptep_get_and_clear()/set_pte_at()
calls followed by a flush_tlb_range().
2) dup_mmap/copy_page_range does ptep_set_wrprotect() on a choice of
ptes followed by a flush_tlb_mm().
3) unmap_region() and zap_page_range() use the tlb gatherer with
fullmm==0 which will make zap_pte_range() call ptep_get_and_clear_full()
with fullmm==0.
4) exit_mmap() uses the tlb gatherer with fullmm==1 which will make
zap_pte_range() call ptep_get_and_clear_full() with fullmm==1.

So we have the "full" flush, the partial flush, the wrprotect flush and
the change protection flush. I got halfway through the code that
introduces the notion of these four different flush types before I gave
up because it resulted in too much code.

> Or something like that.  Martin, I'd suggest that an update to
> Documentation/cachetlb.txt is in order, spell this all out.

Ok, I will see what I can do.

> Doing this properly would require that Documentation/cachetlb.txt say
> something about tlb_gather_mmu() and tlb_finish_mmu() too, I guess.
> Please ;)

That is the sore spot: we (I?) have to write documentation for all the
primitives of the tlb gatherer.

-- 
blue skies,
  Martin.

"Reality continues to ruin my life." - Calvin.



