public inbox for linux-arch@vger.kernel.org
 help / color / mirror / Atom feed
* Changing  update_mmu_cache()
@ 2005-02-22  4:53 Benjamin Herrenschmidt
  2005-02-22  5:43 ` David S. Miller
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Benjamin Herrenschmidt @ 2005-02-22  4:53 UTC (permalink / raw)
  To: Linux Arch list; +Cc: David S. Miller

Hi !

I'm doing some work on the ppc32 MMU stuff and I'm facing a problem
related to HIGHMEM, and more specifically to PTE pages in HIGHMEM:

update_mmu_cache() currently doesn't take the pte pointer. This means it
has to look it up on ppc, and possibly map the pte page (gack !). But
if you look at all the call sites for update_mmu_cache(), they all have
the pte pointer, and the PTE page already kmap'ed, either just before or
around the call to update_mmu_cache().

My changes require me to write to the PTE in update_mmu_cache() (well,
we already did that, but with the MMU off, which was sort-of ok; my new
code is different).

So I want to change all call sites to pass the ptep to
update_mmu_cache() with the semantics that it will always be kmap'ed by
the caller.

Is that ok with everybody ?

Ben.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache()
  2005-02-22  4:53 Changing update_mmu_cache() Benjamin Herrenschmidt
@ 2005-02-22  5:43 ` David S. Miller
  2005-02-22  9:07 ` Russell King
  2005-02-23  5:35 ` Changing update_mmu_cache() or set_pte() ? Benjamin Herrenschmidt
  2 siblings, 0 replies; 20+ messages in thread
From: David S. Miller @ 2005-02-22  5:43 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linux-arch

On Tue, 22 Feb 2005 15:53:17 +1100
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> So I want to change all call sites to pass the ptep to
> update_mmu_cache() with the semantics that it will always be kmap'ed by
> the caller.
> 
> Is that ok with everybody ?

No objection.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache()
  2005-02-22  4:53 Changing update_mmu_cache() Benjamin Herrenschmidt
  2005-02-22  5:43 ` David S. Miller
@ 2005-02-22  9:07 ` Russell King
  2005-02-22 18:08   ` David S. Miller
  2005-02-22 20:51   ` Benjamin Herrenschmidt
  2005-02-23  5:35 ` Changing update_mmu_cache() or set_pte() ? Benjamin Herrenschmidt
  2 siblings, 2 replies; 20+ messages in thread
From: Russell King @ 2005-02-22  9:07 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: Linux Arch list, David S. Miller

On Tue, Feb 22, 2005 at 03:53:17PM +1100, Benjamin Herrenschmidt wrote:
> Is that ok with everybody ?

I object, on the grounds that I can't get my MM changes, which I need
for newer ARM CPUs, past Linus; I don't see why this change, which adds
extra code (the very complaint against my changes), should be considered.

Sorry if I'm getting cranky in my old age.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache()
  2005-02-22  9:07 ` Russell King
@ 2005-02-22 18:08   ` David S. Miller
  2005-02-25 20:15     ` Russell King
  2005-02-22 20:51   ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 20+ messages in thread
From: David S. Miller @ 2005-02-22 18:08 UTC (permalink / raw)
  To: Russell King; +Cc: benh, linux-arch

On Tue, 22 Feb 2005 09:07:41 +0000
Russell King <rmk@arm.linux.org.uk> wrote:

> On Tue, Feb 22, 2005 at 03:53:17PM +1100, Benjamin Herrenschmidt wrote:
> > Is that ok with everybody ?
> 
> I object, on the grounds that I can't get my MM changes, which I need
> for newer ARM CPUs, past Linus; I don't see why this change, which adds
> extra code (the very complaint against my changes), should be considered.

It doesn't add extra code:

1) the pte gets deref'd currently anyways
2) platforms not using the pte pointer arg will simply get that
   argument optimized away

Russell, send me what you're having trouble merging under separate
cover and I'll give you a hand, from one old man to another :-)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache()
  2005-02-22  9:07 ` Russell King
  2005-02-22 18:08   ` David S. Miller
@ 2005-02-22 20:51   ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 20+ messages in thread
From: Benjamin Herrenschmidt @ 2005-02-22 20:51 UTC (permalink / raw)
  To: Russell King; +Cc: Linux Arch list, David S. Miller

On Tue, 2005-02-22 at 09:07 +0000, Russell King wrote:
> On Tue, Feb 22, 2005 at 03:53:17PM +1100, Benjamin Herrenschmidt wrote:
> > Is that ok with everybody ?
> 
> I object, on the grounds that I can't get my MM changes, which I need
> for newer ARM CPUs, past Linus; I don't see why this change, which adds
> extra code (the very complaint against my changes), should be considered.
> 
> Sorry if I'm getting cranky in my old age.

You are, since it's not adding extra code, just moving a function around
and changing the parameters :) Ok well, the extra parameter may be extra
code if you don't inline ... bugger... ;)

Ben.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache() or set_pte() ?
  2005-02-22  4:53 Changing update_mmu_cache() Benjamin Herrenschmidt
  2005-02-22  5:43 ` David S. Miller
  2005-02-22  9:07 ` Russell King
@ 2005-02-23  5:35 ` Benjamin Herrenschmidt
  2005-02-23  5:47   ` Benjamin Herrenschmidt
  2 siblings, 1 reply; 20+ messages in thread
From: Benjamin Herrenschmidt @ 2005-02-23  5:35 UTC (permalink / raw)
  To: Linux Arch list

On Tue, 2005-02-22 at 15:53 +1100, Benjamin Herrenschmidt wrote:
> Hi !
> 
> I'm doing some work on the ppc32 MMU stuff and I'm facing a problem
> related to HIGHMEM, and more specifically to PTE pages in HIGHMEM:
> 
> update_mmu_cache() currently doesn't take the pte pointer. This means it
> has to look it up on ppc, and possibly map the pte page (gack !). But
> if you look at all the call sites for update_mmu_cache(), they all have
> the pte pointer, and the PTE page already kmap'ed, either just before or
> around the call to update_mmu_cache().
> .../...

Ok, the story gets more complicated... While digging around, I found an
SMP race on ppc32 with the dcache/icache sync that could get a simple
fix if set_pte() and update_mmu_cache could be made "atomic" (that is,
if set_pte() was told "put that PTE in your cache too" in places where
update_mmu_cache would normally be called just after set_pte).

What do you guys think about this ? It's relatively trivial to fix
everybody to add that argument to set_pte() ... It would be constant at
compile time anyway, so totally optimized away (0 or 1 depending on the
call site). And we would then kill update_mmu_cache() completely.

Or maybe the simpler option is to define a set_pte_cache() that takes
all 3 arguments and is overridable by the architecture ? With a default
in asm-generic that just does set_pte() and update_mmu_cache() ?

The only "special case" that prevents just changing set_pte() as is and
being done with update_mmu_cache() completely is ptep_set_access_flags(),
which has an update_mmu_cache() right after it too. But I wonder...
arches that do care about update_mmu_cache() could just have their own
ptep_set_access_flags() that does the right thing..

Or somebody has a better idea ?

(There _is_ a complicated fix which is to do the dcache/icache flush
from the hash fault handler, thus impacting the performance of all hash
faults, and it's all in asm etc..., but I've always disliked the
set_pte/update_mmu_cache separation so ...)

Ben.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache() or set_pte() ?
  2005-02-23  5:35 ` Changing update_mmu_cache() or set_pte() ? Benjamin Herrenschmidt
@ 2005-02-23  5:47   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 20+ messages in thread
From: Benjamin Herrenschmidt @ 2005-02-23  5:47 UTC (permalink / raw)
  To: Linux Arch list


> Ok, the story gets more complicated... While digging around, I found an
> SMP race on ppc32 with the dcache/icache sync that could get a simple
> fix if set_pte() and update_mmu_cache could be made "atomic" (that is,
> if set_pte() was told "put that PTE in your cache too" in places where
> update_mmu_cache would normally be called just after set_pte).
> 
> What do you guys think about this ? It's relatively trivial to fix
> everybody to add that argument to set_pte() ... It would be constant at
> compile time anyway, so totally optimized away (0 or 1 depending on the
> call site). And we would then kill update_mmu_cache() completely.
> 
> Or maybe the simpler option is to define a set_pte_cache() that takes
> all 3 arguments and is overridable by the architecture ? With a default
> in asm-generic that just does set_pte() and update_mmu_cache() ?
> 
> The only "special case" that prevents just changing set_pte() as is and
> being done with update_mmu_cache() completely is ptep_set_access_flags(),
> which has an update_mmu_cache() right after it too. But I wonder...
> arches that do care about update_mmu_cache() could just have their own
> ptep_set_access_flags() that does the right thing..
> 
> Or somebody has a better idea ?

Hrm... ok, I may have an idea to fix it differently without touching
set_pte() and update_mmu_cache() (well, my other need is still there,
that is, adding the ptep argument).

Also, I have an idea to improve the performance of executable pages on
some ppc64 CPUs like the G5. It would need keeping update_mmu_cache()
around, and adding another argument to it, passed all the way from
do_page_fault(), indicating whether it's an execute access...

Nothing quite firm yet, but I'd like to change the "write_access"
argument of handle_mm_fault() into some "access_type" which could be a
bitmask of "read", "execute" and "write". The current test of
"write_access" would just become if (access_type & MM_WRITE_FAULT),
while architectures that can't differentiate execute faults would just
have exec set all the time.

The idea here is that currently, on ppc64, we always map hardware PTEs
with the non-exec bit first.

Then, when taking a fault, we eventually allow execution _and_ flush the
cache appropriately.

Which means that upon a normal fault on paged code, we end up taking a
first hash fault, not finding the PTE, and going to do_page_fault(),
which fills the PTE in and calls update_mmu_cache(), which puts a
(useless) read HPTE in the hash; we go back to userland, re-fault due
to lack of execute permission, flush the cache, fix the permission, and
go back to userland.

The only way I see to fix that properly and avoid this double fault is
to know, either at set_pte() time or at update_mmu_cache() time (the
latter, I suppose, is much simpler), whether we originate from an exec
fault or not, in which case it would do the cache cleaning before
filling the HPTE with the execute permission set.

I will prototype this, using a thread flag to "remember" whether we are
coming from do_page_fault() as the result of an execute fault, measure
the perf difference, and let you know if it makes a worthwhile
difference.

Ben.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Changing  update_mmu_cache()
  2005-02-22 18:08   ` David S. Miller
@ 2005-02-25 20:15     ` Russell King
  2005-02-25 21:43       ` Andrew Morton
                         ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Russell King @ 2005-02-25 20:15 UTC (permalink / raw)
  To: David S. Miller; +Cc: linux-arch

[-- Attachment #1: Type: text/plain, Size: 2454 bytes --]

On Tue, Feb 22, 2005 at 10:08:58AM -0800, David S. Miller wrote:
> On Tue, 22 Feb 2005 09:07:41 +0000
> Russell King <rmk@arm.linux.org.uk> wrote:
> 
> > On Tue, Feb 22, 2005 at 03:53:17PM +1100, Benjamin Herrenschmidt wrote:
> > > Is that ok with everybody ?
> > 
> > I object, on the grounds that I can't get my MM changes, which I need
> > for newer ARM CPUs, past Linus; I don't see why this change, which adds
> > extra code (the very complaint against my changes), should be considered.
> 
> It doesn't add extra code:
> 
> 1) the pte gets deref'd currently anyways
> 2) platforms not using the pte pointer arg will simply get that
>    argument optimized away
> 
> Russell, send me what you're having trouble merging under seperate
> cover and I'll give you a hand, from one old man to another :-)

Sorry David, I've been off doing other things for the last week or so,
since they have been more personally interesting and productive than
chasing kernel stuff at the moment.

The item I was referring to was my flush_cache_page() changes from
January 11th (attached), posted to both linux-arch and lkml, and
previously, some time in November, along with Linus' reply and my
somewhat later reply.

To be completely honest, because it has been such a long time since
the solution was first developed, I no longer even know if this
solution still works.  I also suspect, since I don't follow VM
progress, that my knowledge of the VM is now rather out of date.

On the plus side, a couple of architecture people have come forward
to say that it could be beneficial to their architecture as well.

I just find it extremely hard to do these architecture-wide changes
and get them past Linus with little or no help from any other
architecture people, especially when I'm then asked to prove that
the changes do not hurt other architectures.

I'm not really expecting anyone to do lots of hard work on this
though... maybe enough positive feedback from architecture people to
Linus will be sufficient.

The problem I now face is that we're almost at 2.6.11, and it's been
almost three months, so I think it's safe to assume that Linus will
have forgotten everything about this, and will probably hate the
patch next time around.  But maybe I'm underestimating Linus.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core

[-- Attachment #2: Type: message/rfc822, Size: 34495 bytes --]

From: Russell King <rmk+lkml@arm.linux.org.uk>
To: Linus Torvalds <torvalds@osdl.org>, Andrew Morton <akpm@osdl.org>, Linux Kernel List <linux-kernel@vger.kernel.org>
Subject: Fwd: Re: flush_cache_page()
Date: Tue, 11 Jan 2005 22:36:52 +0000
Message-ID: <20050111223652.D30946@flint.arm.linux.org.uk>

Any responses on this?  Didn't get any last time I mailed this out.

I guess we're now at a point where we can start considering whether
to merge this or not.

However, since it's been rather a long time, I will need to go
back and redo this patch, along with all the other patches which
get ARMv6 VIPT aliasing caches working, and then confirm that this
does indeed end up with something which works.

I just don't want to go chasing my tail on something which essentially
is unacceptable.

----- Forwarded message from Russell King <rmk@arm.linux.org.uk> -----
Date:	Mon, 15 Nov 2004 20:37:44 +0000
From:	Russell King <rmk@arm.linux.org.uk>
To:	Linus Torvalds <torvalds@osdl.org>, Andrew Morton <akpm@osdl.org>,
	linux-arch@vger.kernel.org
Subject: Re: flush_cache_page()

On Thu, Aug 26, 2004 at 11:18:54AM -0700, Linus Torvalds wrote:
> On Thu, 26 Aug 2004, Russell King wrote:
> > 
> > Not quite.  Take an example of, say, two binaries mapped at 0x8000. With
> > one set of page tables, lets say physical address 0x1000 is mapped.  
> > The other process has physical address 0x2000 mapped there.
> 
> I'd have assumed that a virtual flush would just flush _all_ entries with 
> that virtual tag.
> 
> But if not, then I guess passing in the physical page wouldn't be too bad.
> 
> What do we have there? I assume it can't be "struct page *", since you
> might have mapped IO pages etc that don't have a "struct page" associated
> with them. So it would either have to be the PFN of the page, or the PTE
> entry. Are those available (or can they be made available easily?) in the
> routines that need this?
> 
> If the generic code would need to do a page table walk anyway to get the 
> information, then I'd just ask that you do it by hand.

It's been a while since this thread started, but unfortunately other
stuff took precedence. 8(

I wish to widen the audience and include a patch for people to *think*
about and definitely _NOT_ (underlined in triplicate) for applying!

This merely changes all callsites of flush_cache_page() to take the PFN
in addition to the other arguments, so we know which alias to flush on
a VIPT write-back cached system - which is what newer ARM CPUs have.

I believe that there is an invariant that the page being flushed by
flush_cache_page() will also be mapped (someone who knows this stuff
better than me needs to confirm that), so I suspect this may be a win
for those who are walking page tables.

sh and sh64 people may like to note that this saves them walking the
page table - from the PFN they can derive the physical address directly.

Why a PFN ?  It seems to be the only data within easy reach for all
flush_cache_page callers... unless people don't object to finding
some way to pass a PTE from places like fs/binfmt_elf.c and
include/asm-*/cacheflush.h for copy_*_user_page().

TBH, I'm no longer convinced that this actually benefits anyone except
ARM VIPT CPUs, where knowing which alias to kick seems to be the right
thing to do... rather than the alternative of mapping all four aliases
and just flushing each 32 byte cache line in 16K just for the hell of
it for every 4K page.

Comments?

(PS, there is another way which jejb pointed out, and wants to implement
for PA-RISC, if given the chance.  However, although it sounded like a
good idea, it seems to be a fair amount of work to achieve it, and I'm
not entirely sure that it's appropriate at this time in the 2.6
lifecycle.)

===== arch/arm/mm/fault-armv.c 1.35 vs edited =====
--- 1.35/arch/arm/mm/fault-armv.c	2004-09-07 14:36:20 +01:00
+++ edited/arch/arm/mm/fault-armv.c	2004-11-15 19:43:50 +00:00
@@ -54,7 +54,7 @@
 	 * fault (ie, is old), we can safely ignore any issues.
 	 */
 	if (pte_present(entry) && pte_val(entry) & shared_pte_mask) {
-		flush_cache_page(vma, address);
+		flush_cache_page(vma, address, pte_pfn(entry));
 		pte_val(entry) &= ~shared_pte_mask;
 		set_pte(pte, entry);
 		flush_tlb_page(vma, address);
@@ -115,7 +115,7 @@
 	if (aliases)
 		adjust_pte(vma, addr);
 	else
-		flush_cache_page(vma, addr);
+		flush_cache_page(vma, addr, page_to_pfn(page));
 }
 
 /*
===== arch/arm/mm/flush.c 1.3 vs edited =====
--- 1.3/arch/arm/mm/flush.c	2004-09-07 14:36:20 +01:00
+++ edited/arch/arm/mm/flush.c	2004-11-15 19:43:50 +00:00
@@ -56,7 +56,7 @@
 		if (!(mpnt->vm_flags & VM_MAYSHARE))
 			continue;
 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		flush_cache_page(mpnt, mpnt->vm_start + offset);
+		flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page));
 		if (cache_is_vipt())
 			break;
 	}
===== arch/mips/mm/c-r3k.c 1.3 vs edited =====
--- 1.3/arch/mips/mm/c-r3k.c	2004-04-20 07:53:22 +01:00
+++ edited/arch/mips/mm/c-r3k.c	2004-11-15 20:00:12 +00:00
@@ -255,7 +255,7 @@
 }
 
 static void r3k_flush_cache_page(struct vm_area_struct *vma,
-	unsigned long page)
+	unsigned long page, unsigned long pfn)
 {
 }
 
===== arch/mips/mm/c-r4k.c 1.4 vs edited =====
--- 1.4/arch/mips/mm/c-r4k.c	2004-04-20 07:53:22 +01:00
+++ edited/arch/mips/mm/c-r4k.c	2004-11-15 20:00:12 +00:00
@@ -317,7 +317,7 @@
 }
 
 static void r4k_flush_cache_page(struct vm_area_struct *vma,
-					unsigned long page)
+				 unsigned long page, unsigned long pfn)
 {
 	int exec = vma->vm_flags & VM_EXEC;
 	struct mm_struct *mm = vma->vm_mm;
===== arch/mips/mm/c-sb1.c 1.4 vs edited =====
--- 1.4/arch/mips/mm/c-sb1.c	2004-04-20 07:53:22 +01:00
+++ edited/arch/mips/mm/c-sb1.c	2004-11-15 20:00:12 +00:00
@@ -158,7 +158,7 @@
  * executable, nothing is required.
  */
 static void local_sb1_flush_cache_page(struct vm_area_struct *vma,
-	unsigned long addr)
+	unsigned long addr, unsigned long pfn)
 {
 	int cpu = smp_processor_id();
 
@@ -180,17 +180,18 @@
 struct flush_cache_page_args {
 	struct vm_area_struct *vma;
 	unsigned long addr;
+	unsigned long pfn;
 };
 
 static void sb1_flush_cache_page_ipi(void *info)
 {
 	struct flush_cache_page_args *args = info;
 
-	local_sb1_flush_cache_page(args->vma, args->addr);
+	local_sb1_flush_cache_page(args->vma, args->addr, args->pfn);
 }
 
 /* Dirty dcache could be on another CPU, so do the IPIs */
-static void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+static void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 {
 	struct flush_cache_page_args args;
 
@@ -200,10 +201,11 @@
 	addr &= PAGE_MASK;
 	args.vma = vma;
 	args.addr = addr;
+	args.pfn = pfn;
 	on_each_cpu(sb1_flush_cache_page_ipi, (void *) &args, 1, 1);
 }
 #else
-void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 	__attribute__((alias("local_sb1_flush_cache_page")));
 #endif
 
===== arch/mips/mm/c-tx39.c 1.4 vs edited =====
--- 1.4/arch/mips/mm/c-tx39.c	2004-04-20 07:53:22 +01:00
+++ edited/arch/mips/mm/c-tx39.c	2004-11-15 20:00:12 +00:00
@@ -179,7 +179,7 @@
 }
 
 static void tx39_flush_cache_page(struct vm_area_struct *vma,
-				   unsigned long page)
+				   unsigned long page, unsigned long pfn)
 {
 	int exec = vma->vm_flags & VM_EXEC;
 	struct mm_struct *mm = vma->vm_mm;
===== arch/mips/mm/cache.c 1.6 vs edited =====
--- 1.6/arch/mips/mm/cache.c	2004-04-20 07:53:22 +01:00
+++ edited/arch/mips/mm/cache.c	2004-11-15 19:57:04 +00:00
@@ -23,7 +23,8 @@
 void (*flush_cache_mm)(struct mm_struct *mm);
 void (*flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
 	unsigned long end);
-void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page);
+void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page,
+	unsigned long pfn);
 void (*flush_icache_range)(unsigned long start, unsigned long end);
 void (*flush_icache_page)(struct vm_area_struct *vma, struct page *page);
 
===== arch/sh/mm/cache-sh4.c 1.9 vs edited =====
--- 1.9/arch/sh/mm/cache-sh4.c	2004-10-19 06:26:44 +01:00
+++ edited/arch/sh/mm/cache-sh4.c	2004-11-15 20:04:20 +00:00
@@ -346,7 +346,7 @@
  *
  * ADDR: Virtual Address (U0 address)
  */
-void flush_cache_page(struct vm_area_struct *vma, unsigned long address)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long address, unsigned long pfn)
 {
 	pgd_t *dir;
 	pmd_t *pmd;
===== arch/sh/mm/cache-sh7705.c 1.1 vs edited =====
--- 1.1/arch/sh/mm/cache-sh7705.c	2004-10-19 06:26:41 +01:00
+++ edited/arch/sh/mm/cache-sh7705.c	2004-11-15 20:04:19 +00:00
@@ -186,7 +186,7 @@
  *
  * ADDRESS: Virtual Address (U0 address)
  */
-void flush_cache_page(struct vm_area_struct *vma, unsigned long address)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long address, unsigned long pfn)
 {
 	pgd_t *dir;
 	pmd_t *pmd;
===== arch/sh64/mm/cache.c 1.1 vs edited =====
--- 1.1/arch/sh64/mm/cache.c	2004-06-29 15:44:46 +01:00
+++ edited/arch/sh64/mm/cache.c	2004-11-15 20:03:28 +00:00
@@ -904,7 +904,7 @@
 
 /****************************************************************************/
 
-void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr, unsigned long pfn)
 {
 	/* Invalidate any entries in either cache for the vma within the user
 	   address space vma->vm_mm for the page starting at virtual address
===== fs/binfmt_elf.c 1.91 vs edited =====
--- 1.91/fs/binfmt_elf.c	2004-11-10 17:45:38 +00:00
+++ edited/fs/binfmt_elf.c	2004-11-15 19:43:50 +00:00
@@ -1533,7 +1533,7 @@
 					DUMP_SEEK (file->f_pos + PAGE_SIZE);
 				} else {
 					void *kaddr;
-					flush_cache_page(vma, addr);
+					flush_cache_page(vma, addr, page_to_pfn(page));
 					kaddr = kmap(page);
 					if ((size += PAGE_SIZE) > limit ||
 					    !dump_write(file, kaddr,
===== include/asm-alpha/cacheflush.h 1.4 vs edited =====
--- 1.4/include/asm-alpha/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-alpha/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-arm/cacheflush.h 1.18 vs edited =====
--- 1.18/include/asm-arm/cacheflush.h	2004-11-05 10:53:14 +00:00
+++ edited/include/asm-arm/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -237,16 +237,16 @@
  * space" model to handle this.
  */
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
-		flush_dcache_page(page);	\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
+		flush_dcache_page(page);			\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 /*
@@ -269,7 +269,7 @@
 }
 
 static inline void
-flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr)
+flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
 {
 	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
 		unsigned long addr = user_addr & PAGE_MASK;
===== include/asm-arm26/cacheflush.h 1.3 vs edited =====
--- 1.3/include/asm-arm26/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-arm26/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -23,7 +23,7 @@
 #define flush_cache_all()                       do { } while (0)
 #define flush_cache_mm(mm)                      do { } while (0)
 #define flush_cache_range(vma,start,end)        do { } while (0)
-#define flush_cache_page(vma,vmaddr)            do { } while (0)
+#define flush_cache_page(vma,vmaddr,pfn)        do { } while (0)
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
 
===== include/asm-cris/cacheflush.h 1.3 vs edited =====
--- 1.3/include/asm-cris/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-cris/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -10,7 +10,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-h8300/cacheflush.h 1.3 vs edited =====
--- 1.3/include/asm-h8300/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-h8300/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -13,7 +13,7 @@
 #define flush_cache_all()
 #define	flush_cache_mm(mm)
 #define	flush_cache_range(vma,a,b)
-#define	flush_cache_page(vma,p)
+#define	flush_cache_page(vma,p,pfn)
 #define	flush_dcache_page(page)
 #define	flush_dcache_mmap_lock(mapping)
 #define	flush_dcache_mmap_unlock(mapping)
===== include/asm-i386/cacheflush.h 1.6 vs edited =====
--- 1.6/include/asm-i386/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-i386/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-ia64/cacheflush.h 1.6 vs edited =====
--- 1.6/include/asm-ia64/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-ia64/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -19,7 +19,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_icache_page(vma,page)		do { } while (0)
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
===== include/asm-m32r/cacheflush.h 1.1 vs edited =====
--- 1.1/include/asm-m32r/cacheflush.h	2004-09-17 08:06:56 +01:00
+++ edited/include/asm-m32r/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -11,7 +11,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
@@ -31,7 +31,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
@@ -43,7 +43,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-m68k/cacheflush.h 1.10 vs edited =====
--- 1.10/include/asm-m68k/cacheflush.h	2004-09-17 15:15:21 +01:00
+++ edited/include/asm-m68k/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -100,7 +100,8 @@
 }
 
 static inline void flush_cache_page(struct vm_area_struct *vma,
-				    unsigned long vmaddr)
+				    unsigned long vmaddr,
+				    unsigned long pfn)
 {
 	if (vma->vm_mm == current->mm)
 	        __flush_cache_030();
@@ -134,15 +135,15 @@
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 extern void flush_icache_range(unsigned long address, unsigned long endaddr);
===== include/asm-m68knommu/cacheflush.h 1.7 vs edited =====
--- 1.7/include/asm-m68knommu/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-m68knommu/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -9,7 +9,7 @@
 #define flush_cache_all()			__flush_cache_all()
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_range(start,len)		do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
===== include/asm-mips/cacheflush.h 1.6 vs edited =====
--- 1.6/include/asm-mips/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-mips/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -17,7 +17,7 @@
  *
  *  - flush_cache_all() flushes entire cache
  *  - flush_cache_mm(mm) flushes the specified mm context's cache lines
- *  - flush_cache_page(mm, vmaddr) flushes a single page
+ *  - flush_cache_page(mm, vmaddr, pfn) flushes a single page
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *  - flush_icache_range(start, end) flush a range of instructions
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
@@ -35,7 +35,7 @@
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma,
-	unsigned long page);
+	unsigned long page, unsigned long pfn);
 extern void __flush_dcache_page(struct page *page);
 
 static inline void flush_dcache_page(struct page *page)
===== include/asm-parisc/cacheflush.h 1.12 vs edited =====
--- 1.12/include/asm-parisc/cacheflush.h	2004-09-17 16:04:02 +01:00
+++ edited/include/asm-parisc/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -78,14 +78,14 @@
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 do { \
-	flush_cache_page(vma, vaddr); \
+	flush_cache_page(vma, vaddr, page_to_pfn(page)); \
 	memcpy(dst, src, len); \
 	flush_kernel_dcache_range_asm((unsigned long)dst, (unsigned long)dst + len); \
 } while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 do { \
-	flush_cache_page(vma, vaddr); \
+	flush_cache_page(vma, vaddr, page_to_pfn(page)); \
 	memcpy(dst, src, len); \
 } while (0)
 
@@ -181,7 +181,8 @@
 }
 
 static inline void
-flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr)
+flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
+		 unsigned long pfn)
 {
 	BUG_ON(!vma->vm_mm->context);
 
===== include/asm-ppc/cacheflush.h 1.8 vs edited =====
--- 1.8/include/asm-ppc/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-ppc/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -22,7 +22,7 @@
 #define flush_cache_all()		do { } while (0)
 #define flush_cache_mm(mm)		do { } while (0)
 #define flush_cache_range(vma, a, b)	do { } while (0)
-#define flush_cache_page(vma, p)	do { } while (0)
+#define flush_cache_page(vma, p, pfn)	do { } while (0)
 #define flush_icache_page(vma, page)	do { } while (0)
 #define flush_cache_vmap(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)	do { } while (0)
===== include/asm-ppc64/cacheflush.h 1.8 vs edited =====
--- 1.8/include/asm-ppc64/cacheflush.h	2004-05-30 20:50:14 +01:00
+++ edited/include/asm-ppc64/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -12,7 +12,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_icache_page(vma, page)		do { } while (0)
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
===== include/asm-s390/cacheflush.h 1.4 vs edited =====
--- 1.4/include/asm-s390/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-s390/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-sh/cacheflush.h 1.3 vs edited =====
--- 1.3/include/asm-sh/cacheflush.h	2004-09-17 15:29:45 +01:00
+++ edited/include/asm-sh/cacheflush.h	2004-11-15 19:52:27 +00:00
@@ -15,14 +15,14 @@
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 		flush_icache_user_range(vma, page, vaddr, len);	\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 	} while (0)
 
===== include/asm-sh/cpu-sh2/cacheflush.h 1.2 vs edited =====
--- 1.2/include/asm-sh/cpu-sh2/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-sh/cpu-sh2/cacheflush.h	2004-11-15 19:56:22 +00:00
@@ -28,7 +28,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-sh/cpu-sh3/cacheflush.h 1.4 vs edited =====
--- 1.4/include/asm-sh/cpu-sh3/cacheflush.h	2004-10-19 06:26:41 +01:00
+++ edited/include/asm-sh/cpu-sh3/cacheflush.h	2004-11-15 19:56:22 +00:00
@@ -15,7 +15,7 @@
  *
  *  - flush_cache_all() flushes entire cache
  *  - flush_cache_mm(mm) flushes the specified mm context's cache lines
- *  - flush_cache_page(mm, vmaddr) flushes a single page
+ *  - flush_cache_page(mm, vmaddr, pfn) flushes a single page
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
@@ -43,7 +43,8 @@
 extern void flush_cache_mm(struct mm_struct *mm);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
                               unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
+			     unsigned long pfn);
 extern void flush_dcache_page(struct page *pg);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
@@ -68,7 +69,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== include/asm-sh/cpu-sh4/cacheflush.h 1.2 vs edited =====
--- 1.2/include/asm-sh/cpu-sh4/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-sh/cpu-sh4/cacheflush.h	2004-11-15 19:56:22 +00:00
@@ -28,7 +28,8 @@
 extern void flush_cache_mm(struct mm_struct *mm);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
+			     unsigned long pfn);
 extern void flush_dcache_page(struct page *pg);
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
===== include/asm-sh64/cacheflush.h 1.2 vs edited =====
--- 1.2/include/asm-sh64/cacheflush.h	2004-09-17 15:31:42 +01:00
+++ edited/include/asm-sh64/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -14,7 +14,7 @@
 extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
+			     unsigned long pfn);
 extern void flush_dcache_page(struct page *pg);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void flush_icache_user_range(struct vm_area_struct *vma,
@@ -31,14 +31,14 @@
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 		flush_icache_user_range(vma, page, vaddr, len);	\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 	} while (0)
 
===== include/asm-sparc/cacheflush.h 1.6 vs edited =====
--- 1.6/include/asm-sparc/cacheflush.h	2004-09-17 15:57:49 +01:00
+++ edited/include/asm-sparc/cacheflush.h	2004-11-15 20:06:35 +00:00
@@ -50,21 +50,21 @@
 #define flush_cache_all() BTFIXUP_CALL(flush_cache_all)()
 #define flush_cache_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
 #define flush_cache_range(vma,start,end) BTFIXUP_CALL(flush_cache_range)(vma,start,end)
-#define flush_cache_page(vma,addr) BTFIXUP_CALL(flush_cache_page)(vma,addr)
+#define flush_cache_page(vma,addr,pfn) BTFIXUP_CALL(flush_cache_page)(vma,addr)
 #define flush_icache_range(start, end)		do { } while (0)
 #define flush_icache_page(vma, pg)		do { } while (0)
 
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 BTFIXUPDEF_CALL(void, __flush_page_to_ram, unsigned long)
===== include/asm-sparc64/cacheflush.h 1.7 vs edited =====
--- 1.7/include/asm-sparc64/cacheflush.h	2004-09-17 15:58:35 +01:00
+++ edited/include/asm-sparc64/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -11,7 +11,7 @@
 	do { if ((__mm) == current->mm) flushw_user(); } while(0)
 #define flush_cache_range(vma, start, end) \
 	flush_cache_mm((vma)->vm_mm)
-#define flush_cache_page(vma, page) \
+#define flush_cache_page(vma, page, pfn) \
 	flush_cache_mm((vma)->vm_mm)
 
 /* 
@@ -38,15 +38,15 @@
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 extern void flush_dcache_page(struct page *page);
===== include/asm-v850/cacheflush.h 1.6 vs edited =====
--- 1.6/include/asm-v850/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-v850/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -25,7 +25,7 @@
 #define flush_cache_all()			((void)0)
 #define flush_cache_mm(mm)			((void)0)
 #define flush_cache_range(vma, start, end)	((void)0)
-#define flush_cache_page(vma, vmaddr)		((void)0)
+#define flush_cache_page(vma, vmaddr, pfn)	((void)0)
 #define flush_dcache_page(page)			((void)0)
 #define flush_dcache_mmap_lock(mapping)		((void)0)
 #define flush_dcache_mmap_unlock(mapping)	((void)0)
===== include/asm-x86_64/cacheflush.h 1.6 vs edited =====
--- 1.6/include/asm-x86_64/cacheflush.h	2004-05-22 22:56:28 +01:00
+++ edited/include/asm-x86_64/cacheflush.h	2004-11-15 19:55:13 +00:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
===== mm/fremap.c 1.29 vs edited =====
--- 1.29/mm/fremap.c	2004-10-19 06:26:38 +01:00
+++ edited/mm/fremap.c	2004-11-15 19:43:50 +00:00
@@ -30,7 +30,7 @@
 	if (pte_present(pte)) {
 		unsigned long pfn = pte_pfn(pte);
 
-		flush_cache_page(vma, addr);
+		flush_cache_page(vma, addr, pfn);
 		pte = ptep_clear_flush(vma, addr, ptep);
 		if (pfn_valid(pfn)) {
 			struct page *page = pfn_to_page(pfn);
===== mm/memory.c 1.197 vs edited =====
--- 1.197/mm/memory.c	2004-10-28 08:39:54 +01:00
+++ edited/mm/memory.c	2004-11-15 19:43:50 +00:00
@@ -1029,7 +1029,6 @@
 {
 	pte_t entry;
 
-	flush_cache_page(vma, address);
 	entry = maybe_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)),
 			      vma);
 	ptep_establish(vma, address, page_table, entry);
@@ -1081,7 +1080,7 @@
 		int reuse = can_share_swap_page(old_page);
 		unlock_page(old_page);
 		if (reuse) {
-			flush_cache_page(vma, address);
+			flush_cache_page(vma, address, pfn);
 			entry = maybe_mkwrite(pte_mkyoung(pte_mkdirty(pte)),
 					      vma);
 			ptep_set_access_flags(vma, address, page_table, entry, 1);
@@ -1119,6 +1118,7 @@
 			++mm->rss;
 		else
 			page_remove_rmap(old_page);
+		flush_cache_page(vma, address, pfn);
 		break_cow(vma, new_page, address, page_table);
 		lru_cache_add_active(new_page);
 		page_add_anon_rmap(new_page, vma, address);
===== mm/rmap.c 1.80 vs edited =====
--- 1.80/mm/rmap.c	2004-10-28 08:39:47 +01:00
+++ edited/mm/rmap.c	2004-11-15 19:43:50 +00:00
@@ -564,7 +564,7 @@
 	}
 
 	/* Nuke the page table entry. */
-	flush_cache_page(vma, address);
+	flush_cache_page(vma, address, page_to_pfn(page));
 	pteval = ptep_clear_flush(vma, address, pte);
 
 	/* Move the dirty bit to the physical page now the pte is gone. */
@@ -676,7 +676,7 @@
 			continue;
 
 		/* Nuke the page table entry. */
-		flush_cache_page(vma, address);
+		flush_cache_page(vma, address, pfn);
 		pteval = ptep_clear_flush(vma, address, pte);
 
 		/* If nonlinear, store the file page offset in the pte. */


-- 
Russell King


----- End forwarded message -----

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core

[-- Attachment #3: Type: message/rfc822, Size: 1567 bytes --]

From: Linus Torvalds <torvalds@osdl.org>
To: Russell King <rmk+lkml@arm.linux.org.uk>
Cc: Andrew Morton <akpm@osdl.org>, Linux Kernel List <linux-kernel@vger.kernel.org>
Subject: Re: Fwd: Re: flush_cache_page()
Date: Tue, 11 Jan 2005 16:07:09 -0800 (PST)
Message-ID: <Pine.LNX.4.58.0501111605570.2373@ppc970.osdl.org>



On Tue, 11 Jan 2005, Russell King wrote:
>
> Any responses on this?  Didn't get any last time I mailed this out.

I don't have any real objections. I'd like it verified that gcc can
compile away all the overhead on the architectures that don't use the pfn, 
since "page_to_pfn()" can be a bit expensive otherwise.. But I don't see 
anything wrong with the approach.

		Linus

[-- Attachment #4: Type: message/rfc822, Size: 3530 bytes --]

From: Russell King <rmk+lkml@arm.linux.org.uk>
To: Linus Torvalds <torvalds@osdl.org>, Philippe Robin <Philippe.Robin@arm.com>
Cc: Andrew Morton <akpm@osdl.org>, Linux Kernel List <linux-kernel@vger.kernel.org>
Subject: Re: Fwd: Re: flush_cache_page()
Date: Sat, 29 Jan 2005 11:37:08 +0000
Message-ID: <20050129113707.B2233@flint.arm.linux.org.uk>

On Tue, Jan 11, 2005 at 04:07:09PM -0800, Linus Torvalds wrote:
> On Tue, 11 Jan 2005, Russell King wrote:
> > Any responses on this?  Didn't get any last time I mailed this out.
> 
> I don't have any real objections. I'd like it verified that gcc can
> compile away all the overhead on the architectures that don't use the pfn, 
> since "page_to_pfn()" can be a bit expensive otherwise.. But I don't see 
> anything wrong with the approach.

Thanks for the response.  However, apart from Ralph, Paul and yourself,
it seems none of the other architecture maintainers care about this
patch - the original mail was BCC'd to the architecture list.  Maybe
that's an implicit acceptance of this patch, I don't know.

I do know that page_to_pfn() will generate code on some platforms which
don't require it due to them declaring flush_cache_page() as a function.
However, I assert that if they don't need this overhead, that's for them
to fix up.  I don't know all their quirks so it isn't something I can
tackle.

In other words, unless I actually receive some real help from the other
architecture maintainers on this to address your concerns, ARM version 6
CPUs with aliasing L1 caches (== >16K) will remain a dead dodo with
mainline Linux kernels.

(This mail BCC'd to the architecture list again in the vain hope that
someone will offer assistance.)

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core


* Re: Changing  update_mmu_cache()
  2005-02-25 20:15     ` Russell King
@ 2005-02-25 21:43       ` Andrew Morton
  2005-02-25 21:46         ` William Lee Irwin III
  2005-02-25 22:48         ` Russell King
  2005-02-26  1:09       ` David S. Miller
  2005-02-26  1:10       ` Benjamin Herrenschmidt
  2 siblings, 2 replies; 20+ messages in thread
From: Andrew Morton @ 2005-02-25 21:43 UTC (permalink / raw)
  To: Russell King; +Cc: davem, linux-arch

Russell King <rmk@arm.linux.org.uk> wrote:
>
> The item I was referring to was my flush_cache_page() changes from
>  January 11th (attached), posted to both linux-arch and lkml, and
>  previous to that in November some time, along with Linus' reply,
>  and my somewhat later reply.
> 
>  To be completely honest, because it has been such a long time since
>  the solution was first developed, I no longer even know if this
>  solution still works.  I also suspect, since I don't follow VM
>  progress, that my knowledge of the VM is now rather out of date.
> 
>  On the plus side, a couple of architecture people have come forward
>  to say that it could be beneficial to their architecture as well.
> 
>  I just find it extremely hard to do these architecture-wide changes
>  and get them past Linus with little or no help from any other
>  architecture people, especially when I'm then asked to prove that
>  the changes do not hurt other architectures.
> 
>  I'm not really expecting anyone to do lots of hard work on this
>  though... maybe just enough satisfactory feedback from architecture
>  people to Linus will be sufficient.
> 
>  The problem I now face is that we're almost at 2.6.11, and it's been
>  almost three months, so I think it's safe to assume that Linus will
>  have forgotten everything about this, and will probably hate the
>  patch next time around.  But maybe I'm underestimating Linus.

What does it do?  Just adds a pfn arg to flush_cache_page()?  We do that
sort of thing quite a lot, and I can help.

A typical approach would be to send me a patch for the core kernel, a patch
for x86 and a patch for arm.  Any additional best-effort per-architecture
patches would be appreciated as well, of course.

I test on four architectures and compile on seven.  arch maintainers will
develop, test and submit their bits and when all the ducks are lined up
I'll send it all off to Linus.

The main problem is that people are hacking on mm/* all the damn time, so
I have to live with massive reject storms during the changeover period. 
But that's my problem, not yours ;)


* Re: Changing  update_mmu_cache()
  2005-02-25 21:43       ` Andrew Morton
@ 2005-02-25 21:46         ` William Lee Irwin III
  2005-02-25 22:48         ` Russell King
  1 sibling, 0 replies; 20+ messages in thread
From: William Lee Irwin III @ 2005-02-25 21:46 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Russell King, davem, linux-arch

Russell King <rmk@arm.linux.org.uk> wrote:
>> The problem I now face is that we're almost at 2.6.11, and it's been
>> almost three months, so I think it's safe to assume that Linus will
>> have forgotten everything about this, and will probably hate the
>> patch next time around.  But maybe I'm underestimating Linus.

On Fri, Feb 25, 2005 at 01:43:22PM -0800, Andrew Morton wrote:
> What does it do?  Just adds a pfn arg to flush_cache_page()?  We do that
> sort of thing quite a lot, and I can help.
> A typical approach would be to send me a patch for the core kernel, a patch
> for x86 and a patch for arm.  Any additional best-effort per-architecture
> patches would be appreciated as well, of course.
> I test on four architectures and compile on seven.  arch maintainers will
> develop, test and submit their bits and when all the ducks are lined up
> I'll send it all off to Linus.
> The main problem is that people are hacking on mm/* all the damn time, so
> I have to live with massive reject storms during the changeover period. 
> But that's my problem, not yours ;)

I do many-architecture testing also, and I'd be willing to help with
sweeps.


-- wli


* Re: Changing  update_mmu_cache()
  2005-02-25 21:43       ` Andrew Morton
  2005-02-25 21:46         ` William Lee Irwin III
@ 2005-02-25 22:48         ` Russell King
  2005-02-25 22:59           ` William Lee Irwin III
  2005-02-25 23:07           ` Andrew Morton
  1 sibling, 2 replies; 20+ messages in thread
From: Russell King @ 2005-02-25 22:48 UTC (permalink / raw)
  To: Andrew Morton; +Cc: davem, linux-arch

On Fri, Feb 25, 2005 at 01:43:22PM -0800, Andrew Morton wrote:
> Russell King <rmk@arm.linux.org.uk> wrote:
> >
> > The item I was referring to was my flush_cache_page() changes from
> >  January 11th (attached), posted to both linux-arch and lkml, and
> >  previous to that in November some time, along with Linus' reply,
> >  and my somewhat later reply.
> > 
> >  To be completely honest, because it has been such a long time since
> >  the solution was first developed, I no longer even know if this
> >  solution still works.  I also suspect, since I don't follow VM
> >  progress, that my knowledge of the VM is now rather out of date.
> > 
> >  On the plus side, a couple of architecture people have come forward
> >  to say that it could be beneficial to their architecture as well.
> > 
> >  I just find it extremely hard to do these architecture-wide changes
> >  and get them past Linus with little or no help from any other
> >  architecture people, especially when I'm then asked to prove that
> >  the changes do not hurt other architectures.
> > 
> >  I'm not really expecting anyone to do lots of hard work on this
> >  though... maybe just enough satisfactory feedback from architecture
> >  people to Linus will be sufficient.
> > 
> >  The problem I now face is that we're almost at 2.6.11, and it's been
> >  almost three months, so I think it's safe to assume that Linus will
> >  have forgotten everything about this, and will probably hate the
> >  patch next time around.  But maybe I'm underestimating Linus.
> 
> What does it do?  Just adds a pfn arg to flush_cache_page()?  We do that
> sort of thing quite a lot, and I can help.

Correct, and that's one of the small steps towards being able to map
the PFN in an architecture private mapping in such a way that it is
coherent with the alias we want to flush.

Essentially, ARM VIPT can only flush the whole cache, or a specific
set of cache lines determined by the VI and PT gained from a valid
page mapping.  What this gives us is the virtual address and the
pfn to be able to setup such a mapping, and flush the relevant
cache lines.


I'm sorry, I'm completely at the end of my rag over this.  I, for
one, can't operate with keeping up with the mainline kernel while
having these kinds of invasive patches outstanding for 4+ months
with zero help, and little prospect of them getting merged.

The same happened with the DMA mmap API.  With that, I've now
resorted to merging the ARM version of it and said "bugger the
other architectures."

What it basically comes down to is that a stable kernel series is
not the place for development.  It's obvious we're trying to meet
opposing demands with the existing development model, and it just
isn't working.  Well, it isn't for me at least.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core


* Re: Changing  update_mmu_cache()
  2005-02-25 22:48         ` Russell King
@ 2005-02-25 22:59           ` William Lee Irwin III
  2005-02-25 23:07           ` Andrew Morton
  1 sibling, 0 replies; 20+ messages in thread
From: William Lee Irwin III @ 2005-02-25 22:59 UTC (permalink / raw)
  To: Russell King; +Cc: Andrew Morton, davem, linux-arch

On Fri, Feb 25, 2005 at 10:48:09PM +0000, Russell King wrote:
> I'm sorry, I'm completely at the end of my rag over this.  I, for
> one, can't operate with keeping up with the mainline kernel while
> having these kinds of invasive patches outstanding for 4+ months
> with zero help, and little prospect of them getting merged.
> The same happened with the DMA mmap API.  With that, I've now
> resorted to merging the ARM version of it and said "bugger the
> other architectures."
> What it basically comes down to is that a stable kernel series is
> not the place for development.  It's obvious we're trying to meet
> opposing demands with the existing development model, and it just
> isn't working.  Well, it isn't for me at least.

Well, my efforts on the DMA mmap() issue got completely thwarted by
holy penguin pee, so it wasn't for lack of trying.


-- wli


* Re: Changing  update_mmu_cache()
  2005-02-25 22:48         ` Russell King
  2005-02-25 22:59           ` William Lee Irwin III
@ 2005-02-25 23:07           ` Andrew Morton
  1 sibling, 0 replies; 20+ messages in thread
From: Andrew Morton @ 2005-02-25 23:07 UTC (permalink / raw)
  To: Russell King; +Cc: davem, linux-arch

Russell King <rmk@arm.linux.org.uk> wrote:
>
> I'm sorry, I'm completely at the end of my rag over this.  I, for
> one, can't operate with keeping up with the mainline kernel while
> having these kinds of invasive patches outstanding for 4+ months
> with zero help,

I can help.  Send 'em over.  It's what I do.

> and little prospect of them getting merged.

Is there a technical problem, or are the problems merely general time and
patch monkeying things?


* Re: Changing  update_mmu_cache()
  2005-02-25 20:15     ` Russell King
  2005-02-25 21:43       ` Andrew Morton
@ 2005-02-26  1:09       ` David S. Miller
  2005-02-27 18:55         ` Paul Mundt
  2005-02-27 19:27         ` Russell King
  2005-02-26  1:10       ` Benjamin Herrenschmidt
  2 siblings, 2 replies; 20+ messages in thread
From: David S. Miller @ 2005-02-26  1:09 UTC (permalink / raw)
  To: Russell King; +Cc: linux-arch

On Fri, 25 Feb 2005 20:15:38 +0000
Russell King <rmk@arm.linux.org.uk> wrote:

> The item I was referring to was my flush_cache_page() changes from
> January 11th (attached), posted to both linux-arch and lkml, and
> previous to that in November some time, along with Linus' reply,
> and my somewhat later reply.

Awesome, thanks.

I've applied this to a current tree, hit the architectures missed
(because they simply didn't exist when you wrote your patch, not
your fault) and verified the build on sparc64, sparc, i386, ppc,
and ppc64.

I took the liberty of making sure every declaration of
flush_cache_page consisted of a single line so that full tree
greps would see the whole list of arguments.

I also enhanced SH platform so that the cache-sh4.c code no
longer calculates the physical address by hand for flush_cache_page(),
it's there now via pfn << PAGE_SHIFT.

Extra "super BONUS": Documentation/cachetlb.txt is updated as well.

I'll push this off to Linus when 2.6.12 opens up.  If folks could
build test this against current 2.6.x and report any failures that
need fixing, I would appreciate that.

# This is a BitKeeper generated diff -Nru style patch.
#
# ChangeSet
#   2005/02/25 16:36:06-08:00 davem@nuts.davemloft.net 
#   [MM]: Add 'pfn' arg to flush_cache_page().
#   
#   Based almost entirely upon a patch by Russell King.
#   
#   Signed-off-by: David S. Miller <davem@davemloft.net>
# 
# mm/rmap.c
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +2 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# mm/memory.c
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +2 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# mm/fremap.c
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-x86_64/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-v850/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sparc64/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +7 -7
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sparc/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +7 -7
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sh64/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +3 -3
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sh/cpu-sh4/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sh/cpu-sh3/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +3 -3
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sh/cpu-sh2/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +2 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-sh/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +2 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-s390/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-ppc64/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-ppc/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-parisc/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +3 -3
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-mips/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +2 -3
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-m68knommu/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-m68k/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +7 -8
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-m32r/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +3 -3
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-ia64/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-i386/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-h8300/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-frv/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-cris/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-arm26/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-arm/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +8 -8
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# include/asm-alpha/cacheflush.h
#   2005/02/25 16:35:20-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# fs/binfmt_elf.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/sparc/mm/srmmu.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/sh64/mm/cache.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +4 -24
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/sh/mm/cache-sh7705.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +2 -18
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/sh/mm/cache-sh4.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +11 -30
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/mips/mm/cache.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/mips/mm/c-tx39.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/mips/mm/c-sb1.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +6 -5
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/mips/mm/c-r4k.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/mips/mm/c-r3k.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/arm/mm/flush.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +1 -1
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# arch/arm/mm/fault-armv.c
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +2 -2
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
# Documentation/cachetlb.txt
#   2005/02/25 16:35:19-08:00 davem@nuts.davemloft.net +9 -3
#   [MM]: Add 'pfn' arg to flush_cache_page().
# 
diff -Nru a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
--- a/Documentation/cachetlb.txt	2005-02-25 16:36:35 -08:00
+++ b/Documentation/cachetlb.txt	2005-02-25 16:36:35 -08:00
@@ -155,7 +155,7 @@
 	   change_range_of_page_tables(mm, start, end);
 	   flush_tlb_range(vma, start, end);
 
-	3) flush_cache_page(vma, addr);
+	3) flush_cache_page(vma, addr, pfn);
 	   set_pte(pte_pointer, new_pte_val);
 	   flush_tlb_page(vma, addr);
 
@@ -203,7 +203,7 @@
 	call flush_cache_page (see below) for each entry which may be
 	modified.
 
-3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 
 	This time we need to remove a PAGE_SIZE sized range
 	from the cache.  The 'vma' is the backing structure used by
@@ -213,8 +213,14 @@
 	executable (and thus could be in the 'instruction cache' in
 	"Harvard" type cache layouts).
 
+	The 'pfn' indicates the physical page frame (shift this value
+	left by PAGE_SHIFT to get the physical address) that 'addr'
+	translates to.  It is this mapping which should be removed from
+	the cache.
+
 	After running, there will be no entries in the cache for
-	'vma->vm_mm' for virtual address 'addr'.
+	'vma->vm_mm' for virtual address 'addr' which translates
+	to 'pfn'.
 
 	This is used primarily during fault processing.
 
diff -Nru a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
--- a/arch/arm/mm/fault-armv.c	2005-02-25 16:36:35 -08:00
+++ b/arch/arm/mm/fault-armv.c	2005-02-25 16:36:35 -08:00
@@ -54,7 +54,7 @@
 	 * fault (ie, is old), we can safely ignore any issues.
 	 */
 	if (pte_present(entry) && pte_val(entry) & shared_pte_mask) {
-		flush_cache_page(vma, address);
+		flush_cache_page(vma, address, pte_pfn(entry));
 		pte_val(entry) &= ~shared_pte_mask;
 		set_pte(pte, entry);
 		flush_tlb_page(vma, address);
@@ -115,7 +115,7 @@
 	if (aliases)
 		adjust_pte(vma, addr);
 	else
-		flush_cache_page(vma, addr);
+		flush_cache_page(vma, addr, page_to_pfn(page));
 }
 
 /*
diff -Nru a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
--- a/arch/arm/mm/flush.c	2005-02-25 16:36:35 -08:00
+++ b/arch/arm/mm/flush.c	2005-02-25 16:36:35 -08:00
@@ -56,7 +56,7 @@
 		if (!(mpnt->vm_flags & VM_MAYSHARE))
 			continue;
 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		flush_cache_page(mpnt, mpnt->vm_start + offset);
+		flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page));
 		if (cache_is_vipt())
 			break;
 	}
diff -Nru a/arch/mips/mm/c-r3k.c b/arch/mips/mm/c-r3k.c
--- a/arch/mips/mm/c-r3k.c	2005-02-25 16:36:35 -08:00
+++ b/arch/mips/mm/c-r3k.c	2005-02-25 16:36:35 -08:00
@@ -254,8 +254,7 @@
 {
 }
 
-static void r3k_flush_cache_page(struct vm_area_struct *vma,
-	unsigned long page)
+static void r3k_flush_cache_page(struct vm_area_struct *vma, unsigned long page, unsigned long pfn)
 {
 }
 
diff -Nru a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
--- a/arch/mips/mm/c-r4k.c	2005-02-25 16:36:35 -08:00
+++ b/arch/mips/mm/c-r4k.c	2005-02-25 16:36:35 -08:00
@@ -426,8 +426,7 @@
 	}
 }
 
-static void r4k_flush_cache_page(struct vm_area_struct *vma,
-	unsigned long page)
+static void r4k_flush_cache_page(struct vm_area_struct *vma, unsigned long page, unsigned long pfn)
 {
 	struct flush_cache_page_args args;
 
diff -Nru a/arch/mips/mm/c-sb1.c b/arch/mips/mm/c-sb1.c
--- a/arch/mips/mm/c-sb1.c	2005-02-25 16:36:35 -08:00
+++ b/arch/mips/mm/c-sb1.c	2005-02-25 16:36:35 -08:00
@@ -160,8 +160,7 @@
  * dcache first, then invalidate the icache.  If the page isn't
  * executable, nothing is required.
  */
-static void local_sb1_flush_cache_page(struct vm_area_struct *vma,
-	unsigned long addr)
+static void local_sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 {
 	int cpu = smp_processor_id();
 
@@ -183,17 +182,18 @@
 struct flush_cache_page_args {
 	struct vm_area_struct *vma;
 	unsigned long addr;
+	unsigned long pfn;
 };
 
 static void sb1_flush_cache_page_ipi(void *info)
 {
 	struct flush_cache_page_args *args = info;
 
-	local_sb1_flush_cache_page(args->vma, args->addr);
+	local_sb1_flush_cache_page(args->vma, args->addr, args->pfn);
 }
 
 /* Dirty dcache could be on another CPU, so do the IPIs */
-static void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+static void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 {
 	struct flush_cache_page_args args;
 
@@ -203,10 +203,11 @@
 	addr &= PAGE_MASK;
 	args.vma = vma;
 	args.addr = addr;
+	args.pfn = pfn;
 	on_each_cpu(sb1_flush_cache_page_ipi, (void *) &args, 1, 1);
 }
 #else
-void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 	__attribute__((alias("local_sb1_flush_cache_page")));
 #endif
 
diff -Nru a/arch/mips/mm/c-tx39.c b/arch/mips/mm/c-tx39.c
--- a/arch/mips/mm/c-tx39.c	2005-02-25 16:36:35 -08:00
+++ b/arch/mips/mm/c-tx39.c	2005-02-25 16:36:35 -08:00
@@ -178,8 +178,7 @@
 	}
 }
 
-static void tx39_flush_cache_page(struct vm_area_struct *vma,
-				   unsigned long page)
+static void tx39_flush_cache_page(struct vm_area_struct *vma, unsigned long page, unsigned long pfn)
 {
 	int exec = vma->vm_flags & VM_EXEC;
 	struct mm_struct *mm = vma->vm_mm;
diff -Nru a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
--- a/arch/mips/mm/cache.c	2005-02-25 16:36:35 -08:00
+++ b/arch/mips/mm/cache.c	2005-02-25 16:36:35 -08:00
@@ -23,7 +23,7 @@
 void (*flush_cache_mm)(struct mm_struct *mm);
 void (*flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
 	unsigned long end);
-void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page);
+void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
 void (*flush_icache_range)(unsigned long start, unsigned long end);
 void (*flush_icache_page)(struct vm_area_struct *vma, struct page *page);
 
diff -Nru a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
--- a/arch/sh/mm/cache-sh4.c	2005-02-25 16:36:35 -08:00
+++ b/arch/sh/mm/cache-sh4.c	2005-02-25 16:36:35 -08:00
@@ -258,10 +258,16 @@
 	flush_cache_all();
 }
 
-static void __flush_cache_page(struct vm_area_struct *vma,
-			       unsigned long address,
-			       unsigned long phys)
+/*
+ * Write back and invalidate I/D-caches for the page.
+ *
+ * ADDR: Virtual Address (U0 address)
+ * PFN: Physical page number
+ */
+void flush_cache_page(struct vm_area_struct *vma, unsigned long address, unsigned long pfn)
 {
+	unsigned long phys = pfn << PAGE_SHIFT;
+
 	/* We only need to flush D-cache when we have alias */
 	if ((address^phys) & CACHE_ALIAS) {
 		/* Loop 4K of the D-cache */
@@ -342,32 +348,6 @@
 }
 
 /*
- * Write back and invalidate I/D-caches for the page.
- *
- * ADDR: Virtual Address (U0 address)
- */
-void flush_cache_page(struct vm_area_struct *vma, unsigned long address)
-{
-	pgd_t *dir;
-	pmd_t *pmd;
-	pte_t *pte;
-	pte_t entry;
-	unsigned long phys;
-
-	dir = pgd_offset(vma->vm_mm, address);
-	pmd = pmd_offset(dir, address);
-	if (pmd_none(*pmd) || pmd_bad(*pmd))
-		return;
-	pte = pte_offset_kernel(pmd, address);
-	entry = *pte;
-	if (!(pte_val(entry) & _PAGE_PRESENT))
-		return;
-
-	phys = pte_val(entry)&PTE_PHYS_MASK;
-	__flush_cache_page(vma, address, phys);
-}
-
-/*
  * flush_icache_user_range
  * @vma: VMA of the process
  * @page: page
@@ -377,6 +357,7 @@
 void flush_icache_user_range(struct vm_area_struct *vma,
 			     struct page *page, unsigned long addr, int len)
 {
-	__flush_cache_page(vma, addr, PHYSADDR(page_address(page)));
+	__flush_cache_page(vma, addr,
+			   PHYSADDR(page_address(page)) >> PAGE_SHIFT);
 }
 
diff -Nru a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c
--- a/arch/sh/mm/cache-sh7705.c	2005-02-25 16:36:35 -08:00
+++ b/arch/sh/mm/cache-sh7705.c	2005-02-25 16:36:35 -08:00
@@ -186,25 +186,9 @@
  *
  * ADDRESS: Virtual Address (U0 address)
  */
-void flush_cache_page(struct vm_area_struct *vma, unsigned long address)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long address, unsigned long pfn)
 {
-	pgd_t *dir;
-	pmd_t *pmd;
-	pte_t *pte;
-	pte_t entry;
-	unsigned long phys;
-
-	dir = pgd_offset(vma->vm_mm, address);
-	pmd = pmd_offset(dir, address);
-	if (pmd_none(*pmd) || pmd_bad(*pmd))
-		return;
-	pte = pte_offset(pmd, address);
-	entry = *pte;
-	if (pte_none(entry) || !pte_present(entry))
-		return;
-
-	phys = pte_val(entry)&PTE_PHYS_MASK;
-	__flush_dcache_page(phys);
+	__flush_dcache_page(pfn << PAGE_SHIFT);
 }
 
 /*
diff -Nru a/arch/sh64/mm/cache.c b/arch/sh64/mm/cache.c
--- a/arch/sh64/mm/cache.c	2005-02-25 16:36:35 -08:00
+++ b/arch/sh64/mm/cache.c	2005-02-25 16:36:35 -08:00
@@ -573,29 +573,9 @@
 	}
 }
 
-static void sh64_dcache_purge_virt_page(struct mm_struct *mm, unsigned long eaddr)
+static inline void sh64_dcache_purge_virt_page(struct mm_struct *mm, unsigned long eaddr, unsigned long pfn)
 {
-	unsigned long phys;
-	pgd_t *pgd;
-	pmd_t *pmd;
-	pte_t *pte;
-	pte_t entry;
-
-	pgd = pgd_offset(mm, eaddr);
-	pmd = pmd_offset(pgd, eaddr);
-
-	if (pmd_none(*pmd) || pmd_bad(*pmd))
-		return;
-
-	pte = pte_offset_kernel(pmd, eaddr);
-	entry = *pte;
-
-	if (pte_none(entry) || !pte_present(entry))
-		return;
-
-	phys = pte_val(entry) & PAGE_MASK;
-
-	sh64_dcache_purge_phy_page(phys);
+	sh64_dcache_purge_phy_page(pfn << PAGE_SHIFT);
 }
 
 static void sh64_dcache_purge_user_page(struct mm_struct *mm, unsigned long eaddr)
@@ -904,7 +884,7 @@
 
 /****************************************************************************/
 
-void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr, unsigned long pfn)
 {
 	/* Invalidate any entries in either cache for the vma within the user
 	   address space vma->vm_mm for the page starting at virtual address
@@ -915,7 +895,7 @@
 	   Note(1), this is called with mm->page_table_lock held.
 	   */
 
-	sh64_dcache_purge_virt_page(vma->vm_mm, eaddr);
+	sh64_dcache_purge_virt_page(vma->vm_mm, eaddr, pfn);
 
 	if (vma->vm_flags & VM_EXEC) {
 		sh64_icache_inv_user_page(vma, eaddr);
diff -Nru a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
--- a/arch/sparc/mm/srmmu.c	2005-02-25 16:36:35 -08:00
+++ b/arch/sparc/mm/srmmu.c	2005-02-25 16:36:35 -08:00
@@ -1003,8 +1003,7 @@
 extern void viking_flush_cache_mm(struct mm_struct *mm);
 extern void viking_flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 				     unsigned long end);
-extern void viking_flush_cache_page(struct vm_area_struct *vma,
-				    unsigned long page);
+extern void viking_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
 extern void viking_flush_page_to_ram(unsigned long page);
 extern void viking_flush_page_for_dma(unsigned long page);
 extern void viking_flush_sig_insns(struct mm_struct *mm, unsigned long addr);
diff -Nru a/fs/binfmt_elf.c b/fs/binfmt_elf.c
--- a/fs/binfmt_elf.c	2005-02-25 16:36:35 -08:00
+++ b/fs/binfmt_elf.c	2005-02-25 16:36:35 -08:00
@@ -1585,7 +1585,7 @@
 					DUMP_SEEK (file->f_pos + PAGE_SIZE);
 				} else {
 					void *kaddr;
-					flush_cache_page(vma, addr);
+					flush_cache_page(vma, addr, page_to_pfn(page));
 					kaddr = kmap(page);
 					if ((size += PAGE_SIZE) > limit ||
 					    !dump_write(file, kaddr,
diff -Nru a/include/asm-alpha/cacheflush.h b/include/asm-alpha/cacheflush.h
--- a/include/asm-alpha/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-alpha/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-arm/cacheflush.h b/include/asm-arm/cacheflush.h
--- a/include/asm-arm/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-arm/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -237,16 +237,16 @@
  * space" model to handle this.
  */
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
-		flush_dcache_page(page);	\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
+		flush_dcache_page(page);			\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 /*
@@ -269,7 +269,7 @@
 }
 
 static inline void
-flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr)
+flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
 {
 	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
 		unsigned long addr = user_addr & PAGE_MASK;
diff -Nru a/include/asm-arm26/cacheflush.h b/include/asm-arm26/cacheflush.h
--- a/include/asm-arm26/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-arm26/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -23,7 +23,7 @@
 #define flush_cache_all()                       do { } while (0)
 #define flush_cache_mm(mm)                      do { } while (0)
 #define flush_cache_range(vma,start,end)        do { } while (0)
-#define flush_cache_page(vma,vmaddr)            do { } while (0)
+#define flush_cache_page(vma,vmaddr,pfn)        do { } while (0)
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
 
diff -Nru a/include/asm-cris/cacheflush.h b/include/asm-cris/cacheflush.h
--- a/include/asm-cris/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-cris/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -10,7 +10,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-frv/cacheflush.h b/include/asm-frv/cacheflush.h
--- a/include/asm-frv/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-frv/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -21,7 +21,7 @@
 #define flush_cache_all()			do {} while(0)
 #define flush_cache_mm(mm)			do {} while(0)
 #define flush_cache_range(mm, start, end)	do {} while(0)
-#define flush_cache_page(vma, vmaddr)		do {} while(0)
+#define flush_cache_page(vma, vmaddr, pfn)	do {} while(0)
 #define flush_cache_vmap(start, end)		do {} while(0)
 #define flush_cache_vunmap(start, end)		do {} while(0)
 #define flush_dcache_mmap_lock(mapping)		do {} while(0)
diff -Nru a/include/asm-h8300/cacheflush.h b/include/asm-h8300/cacheflush.h
--- a/include/asm-h8300/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-h8300/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -13,7 +13,7 @@
 #define flush_cache_all()
 #define	flush_cache_mm(mm)
 #define	flush_cache_range(vma,a,b)
-#define	flush_cache_page(vma,p)
+#define	flush_cache_page(vma,p,pfn)
 #define	flush_dcache_page(page)
 #define	flush_dcache_mmap_lock(mapping)
 #define	flush_dcache_mmap_unlock(mapping)
diff -Nru a/include/asm-i386/cacheflush.h b/include/asm-i386/cacheflush.h
--- a/include/asm-i386/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-i386/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-ia64/cacheflush.h b/include/asm-ia64/cacheflush.h
--- a/include/asm-ia64/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-ia64/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -19,7 +19,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_icache_page(vma,page)		do { } while (0)
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
diff -Nru a/include/asm-m32r/cacheflush.h b/include/asm-m32r/cacheflush.h
--- a/include/asm-m32r/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-m32r/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -11,7 +11,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
@@ -31,7 +31,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
@@ -43,7 +43,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-m68k/cacheflush.h b/include/asm-m68k/cacheflush.h
--- a/include/asm-m68k/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-m68k/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -99,8 +99,7 @@
 	        __flush_cache_030();
 }
 
-static inline void flush_cache_page(struct vm_area_struct *vma,
-				    unsigned long vmaddr)
+static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
 {
 	if (vma->vm_mm == current->mm)
 	        __flush_cache_030();
@@ -134,15 +133,15 @@
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 extern void flush_icache_range(unsigned long address, unsigned long endaddr);
diff -Nru a/include/asm-m68knommu/cacheflush.h b/include/asm-m68knommu/cacheflush.h
--- a/include/asm-m68knommu/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-m68knommu/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -9,7 +9,7 @@
 #define flush_cache_all()			__flush_cache_all()
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_range(start,len)		do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff -Nru a/include/asm-mips/cacheflush.h b/include/asm-mips/cacheflush.h
--- a/include/asm-mips/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-mips/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -17,7 +17,7 @@
  *
  *  - flush_cache_all() flushes entire cache
  *  - flush_cache_mm(mm) flushes the specified mm context's cache lines
- *  - flush_cache_page(mm, vmaddr) flushes a single page
+ *  - flush_cache_page(mm, vmaddr, pfn) flushes a single page
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *  - flush_icache_range(start, end) flush a range of instructions
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
@@ -34,8 +34,7 @@
 extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end);
-extern void (*flush_cache_page)(struct vm_area_struct *vma,
-	unsigned long page);
+extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
 extern void __flush_dcache_page(struct page *page);
 
 static inline void flush_dcache_page(struct page *page)
diff -Nru a/include/asm-parisc/cacheflush.h b/include/asm-parisc/cacheflush.h
--- a/include/asm-parisc/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-parisc/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -67,14 +67,14 @@
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 do { \
-	flush_cache_page(vma, vaddr); \
+	flush_cache_page(vma, vaddr, page_to_pfn(page)); \
 	memcpy(dst, src, len); \
 	flush_kernel_dcache_range_asm((unsigned long)dst, (unsigned long)dst + len); \
 } while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 do { \
-	flush_cache_page(vma, vaddr); \
+	flush_cache_page(vma, vaddr, page_to_pfn(page)); \
 	memcpy(dst, src, len); \
 } while (0)
 
@@ -170,7 +170,7 @@
 }
 
 static inline void
-flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr)
+flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
 {
 	BUG_ON(!vma->vm_mm->context);
 
diff -Nru a/include/asm-ppc/cacheflush.h b/include/asm-ppc/cacheflush.h
--- a/include/asm-ppc/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-ppc/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -22,7 +22,7 @@
 #define flush_cache_all()		do { } while (0)
 #define flush_cache_mm(mm)		do { } while (0)
 #define flush_cache_range(vma, a, b)	do { } while (0)
-#define flush_cache_page(vma, p)	do { } while (0)
+#define flush_cache_page(vma, p, pfn)	do { } while (0)
 #define flush_icache_page(vma, page)	do { } while (0)
 #define flush_cache_vmap(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)	do { } while (0)
diff -Nru a/include/asm-ppc64/cacheflush.h b/include/asm-ppc64/cacheflush.h
--- a/include/asm-ppc64/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-ppc64/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -12,7 +12,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_icache_page(vma, page)		do { } while (0)
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
diff -Nru a/include/asm-s390/cacheflush.h b/include/asm-s390/cacheflush.h
--- a/include/asm-s390/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-s390/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-sh/cacheflush.h b/include/asm-sh/cacheflush.h
--- a/include/asm-sh/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sh/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -15,14 +15,14 @@
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 		flush_icache_user_range(vma, page, vaddr, len);	\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 	} while (0)
 
diff -Nru a/include/asm-sh/cpu-sh2/cacheflush.h b/include/asm-sh/cpu-sh2/cacheflush.h
--- a/include/asm-sh/cpu-sh2/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sh/cpu-sh2/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -15,7 +15,7 @@
  *
  *  - flush_cache_all() flushes entire cache
  *  - flush_cache_mm(mm) flushes the specified mm context's cache lines
- *  - flush_cache_page(mm, vmaddr) flushes a single page
+ *  - flush_cache_page(mm, vmaddr, pfn) flushes a single page
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
@@ -28,7 +28,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-sh/cpu-sh3/cacheflush.h b/include/asm-sh/cpu-sh3/cacheflush.h
--- a/include/asm-sh/cpu-sh3/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sh/cpu-sh3/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -15,7 +15,7 @@
  *
  *  - flush_cache_all() flushes entire cache
  *  - flush_cache_mm(mm) flushes the specified mm context's cache lines
- *  - flush_cache_page(mm, vmaddr) flushes a single page
+ *  - flush_cache_page(mm, vmaddr, pfn) flushes a single page
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
@@ -43,7 +43,7 @@
 extern void flush_cache_mm(struct mm_struct *mm);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
                               unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
 extern void flush_dcache_page(struct page *pg);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
@@ -68,7 +68,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/include/asm-sh/cpu-sh4/cacheflush.h b/include/asm-sh/cpu-sh4/cacheflush.h
--- a/include/asm-sh/cpu-sh4/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sh/cpu-sh4/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -28,7 +28,7 @@
 extern void flush_cache_mm(struct mm_struct *mm);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
 extern void flush_dcache_page(struct page *pg);
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff -Nru a/include/asm-sh64/cacheflush.h b/include/asm-sh64/cacheflush.h
--- a/include/asm-sh64/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sh64/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -14,7 +14,7 @@
 extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
 extern void flush_dcache_page(struct page *pg);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void flush_icache_user_range(struct vm_area_struct *vma,
@@ -31,14 +31,14 @@
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 		flush_icache_user_range(vma, page, vaddr, len);	\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
-		flush_cache_page(vma, vaddr);			\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
 		memcpy(dst, src, len);				\
 	} while (0)
 
diff -Nru a/include/asm-sparc/cacheflush.h b/include/asm-sparc/cacheflush.h
--- a/include/asm-sparc/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sparc/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -50,21 +50,21 @@
 #define flush_cache_all() BTFIXUP_CALL(flush_cache_all)()
 #define flush_cache_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
 #define flush_cache_range(vma,start,end) BTFIXUP_CALL(flush_cache_range)(vma,start,end)
-#define flush_cache_page(vma,addr) BTFIXUP_CALL(flush_cache_page)(vma,addr)
+#define flush_cache_page(vma,addr,pfn) BTFIXUP_CALL(flush_cache_page)(vma,addr)
 #define flush_icache_range(start, end)		do { } while (0)
 #define flush_icache_page(vma, pg)		do { } while (0)
 
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 BTFIXUPDEF_CALL(void, __flush_page_to_ram, unsigned long)
diff -Nru a/include/asm-sparc64/cacheflush.h b/include/asm-sparc64/cacheflush.h
--- a/include/asm-sparc64/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-sparc64/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -11,7 +11,7 @@
 	do { if ((__mm) == current->mm) flushw_user(); } while(0)
 #define flush_cache_range(vma, start, end) \
 	flush_cache_mm((vma)->vm_mm)
-#define flush_cache_page(vma, page) \
+#define flush_cache_page(vma, page, pfn) \
 	flush_cache_mm((vma)->vm_mm)
 
 /* 
@@ -38,15 +38,15 @@
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	do {					\
-		flush_cache_page(vma, vaddr);	\
-		memcpy(dst, src, len);		\
+	do {							\
+		flush_cache_page(vma, vaddr, page_to_pfn(page));\
+		memcpy(dst, src, len);				\
 	} while (0)
 
 extern void flush_dcache_page(struct page *page);
diff -Nru a/include/asm-v850/cacheflush.h b/include/asm-v850/cacheflush.h
--- a/include/asm-v850/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-v850/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -25,7 +25,7 @@
 #define flush_cache_all()			((void)0)
 #define flush_cache_mm(mm)			((void)0)
 #define flush_cache_range(vma, start, end)	((void)0)
-#define flush_cache_page(vma, vmaddr)		((void)0)
+#define flush_cache_page(vma, vmaddr, pfn)	((void)0)
 #define flush_dcache_page(page)			((void)0)
 #define flush_dcache_mmap_lock(mapping)		((void)0)
 #define flush_dcache_mmap_unlock(mapping)	((void)0)
diff -Nru a/include/asm-x86_64/cacheflush.h b/include/asm-x86_64/cacheflush.h
--- a/include/asm-x86_64/cacheflush.h	2005-02-25 16:36:35 -08:00
+++ b/include/asm-x86_64/cacheflush.h	2005-02-25 16:36:35 -08:00
@@ -8,7 +8,7 @@
 #define flush_cache_all()			do { } while (0)
 #define flush_cache_mm(mm)			do { } while (0)
 #define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr)		do { } while (0)
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
 #define flush_dcache_page(page)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff -Nru a/mm/fremap.c b/mm/fremap.c
--- a/mm/fremap.c	2005-02-25 16:36:35 -08:00
+++ b/mm/fremap.c	2005-02-25 16:36:35 -08:00
@@ -30,7 +30,7 @@
 	if (pte_present(pte)) {
 		unsigned long pfn = pte_pfn(pte);
 
-		flush_cache_page(vma, addr);
+		flush_cache_page(vma, addr, pfn);
 		pte = ptep_clear_flush(vma, addr, ptep);
 		if (pfn_valid(pfn)) {
 			struct page *page = pfn_to_page(pfn);
diff -Nru a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c	2005-02-25 16:36:35 -08:00
+++ b/mm/memory.c	2005-02-25 16:36:35 -08:00
@@ -1247,7 +1247,6 @@
 {
 	pte_t entry;
 
-	flush_cache_page(vma, address);
 	entry = maybe_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)),
 			      vma);
 	ptep_establish(vma, address, page_table, entry);
@@ -1299,7 +1298,7 @@
 		int reuse = can_share_swap_page(old_page);
 		unlock_page(old_page);
 		if (reuse) {
-			flush_cache_page(vma, address);
+			flush_cache_page(vma, address, pfn);
 			entry = maybe_mkwrite(pte_mkyoung(pte_mkdirty(pte)),
 					      vma);
 			ptep_set_access_flags(vma, address, page_table, entry, 1);
@@ -1344,6 +1343,7 @@
 			update_mem_hiwater();
 		} else
 			page_remove_rmap(old_page);
+		flush_cache_page(vma, address, pfn);
 		break_cow(vma, new_page, address, page_table);
 		lru_cache_add_active(new_page);
 		page_add_anon_rmap(new_page, vma, address);
diff -Nru a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c	2005-02-25 16:36:35 -08:00
+++ b/mm/rmap.c	2005-02-25 16:36:35 -08:00
@@ -574,7 +574,7 @@
 	}
 
 	/* Nuke the page table entry. */
-	flush_cache_page(vma, address);
+	flush_cache_page(vma, address, page_to_pfn(page));
 	pteval = ptep_clear_flush(vma, address, pte);
 
 	/* Move the dirty bit to the physical page now the pte is gone. */
@@ -692,7 +692,7 @@
 			continue;
 
 		/* Nuke the page table entry. */
-		flush_cache_page(vma, address);
+		flush_cache_page(vma, address, pfn);
 		pteval = ptep_clear_flush(vma, address, pte);
 
 		/* If nonlinear, store the file page offset in the pte. */

* Re: Changing  update_mmu_cache()
  2005-02-25 20:15     ` Russell King
  2005-02-25 21:43       ` Andrew Morton
  2005-02-26  1:09       ` David S. Miller
@ 2005-02-26  1:10       ` Benjamin Herrenschmidt
  2 siblings, 0 replies; 20+ messages in thread
From: Benjamin Herrenschmidt @ 2005-02-26  1:10 UTC (permalink / raw)
  To: Russell King; +Cc: David S. Miller, Linux Arch list

On Fri, 2005-02-25 at 20:15 +0000, Russell King wrote:

> The item I was referring to was my flush_cache_page() changes from
> January 11th (attached), posted to both linux-arch and lkml, and
> previous to that in November some time, along with Linus' reply,
> and my somewhat later reply.

ppc & ppc64 don't use flush_cache_page(), so I have little experience
with it, but your patch is fine in the sense that it will have no
impact for me :)

Ben.

* Re: Changing  update_mmu_cache()
  2005-02-26  1:09       ` David S. Miller
@ 2005-02-27 18:55         ` Paul Mundt
  2005-02-28  4:12           ` David S. Miller
  2005-02-27 19:27         ` Russell King
  1 sibling, 1 reply; 20+ messages in thread
From: Paul Mundt @ 2005-02-27 18:55 UTC (permalink / raw)
  To: David S. Miller; +Cc: Russell King, linux-arch

On Fri, Feb 25, 2005 at 05:09:05PM -0800, David S. Miller wrote:
> I also enhanced SH platform so that the cache-sh4.c code no
> longer calculates the physical address by hand for flush_cache_page(),
> it's there now via pfn << PAGE_SHIFT.
> 
Looks good, thanks. flush_icache_user_range() needed one minor change:

--- orig/arch/sh/mm/cache-sh4.c
+++ mod/arch/sh/mm/cache-sh4.c
@@ -377,6 +357,6 @@
 void flush_icache_user_range(struct vm_area_struct *vma,
 			     struct page *page, unsigned long addr, int len)
 {
-	__flush_cache_page(vma, addr, PHYSADDR(page_address(page)));
+	flush_cache_page(vma, addr, page_to_pfn(page));
 }
 
> I'll push this off to Linus when 2.6.12 opens up.  If folks could
> build test this against current 2.6.x and report any failures that
> need fixing, I would appreciate that.
> 
You missed fs/binfmt_elf.c; this gets it working:

===== fs/binfmt_elf.c 1.102 vs edited =====
--- 1.102/fs/binfmt_elf.c	2005-02-06 13:29:02 +02:00
+++ edited/fs/binfmt_elf.c	2005-02-27 20:31:54 +02:00
@@ -1584,7 +1584,7 @@
 					DUMP_SEEK (file->f_pos + PAGE_SIZE);
 				} else {
 					void *kaddr;
-					flush_cache_page(vma, addr);
+					flush_cache_page(vma, addr, page_to_pfn(page));
 					kaddr = kmap(page);
 					if ((size += PAGE_SIZE) > limit ||
 					    !dump_write(file, kaddr,


On another note, for sh64 we don't actually need to keep
sh64_dcache_purge_virt_page() around for anything after this change.
flush_cache_page() was the only user of it anyway, so it makes more
sense to just have it call sh64_dcache_purge_phy_page() directly.

Here's a patch for arch/sh64/mm/cache.c that you can use in place of the
one you have now; it builds and boots.

--- orig/arch/sh64/mm/cache.c
+++ mod/arch/sh64/mm/cache.c
@@ -584,31 +584,6 @@
 	}
 }
 
-static void sh64_dcache_purge_virt_page(struct mm_struct *mm, unsigned long eaddr)
-{
-	unsigned long phys;
-	pgd_t *pgd;
-	pmd_t *pmd;
-	pte_t *pte;
-	pte_t entry;
-
-	pgd = pgd_offset(mm, eaddr);
-	pmd = pmd_offset(pgd, eaddr);
-
-	if (pmd_none(*pmd) || pmd_bad(*pmd))
-		return;
-
-	pte = pte_offset_kernel(pmd, eaddr);
-	entry = *pte;
-
-	if (pte_none(entry) || !pte_present(entry))
-		return;
-
-	phys = pte_val(entry) & PAGE_MASK;
-
-	sh64_dcache_purge_phy_page(phys);
-}
-
 static void sh64_dcache_purge_user_page(struct mm_struct *mm, unsigned long eaddr)
 {
 	pgd_t *pgd;
@@ -915,7 +890,7 @@
 
 /****************************************************************************/
 
-void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr, unsigned long pfn)
 {
 	/* Invalidate any entries in either cache for the vma within the user
 	   address space vma->vm_mm for the page starting at virtual address
@@ -926,7 +901,7 @@
 	   Note(1), this is called with mm->page_table_lock held.
 	   */
 
-	sh64_dcache_purge_virt_page(vma->vm_mm, eaddr);
+	sh64_dcache_purge_phy_page(pfn << PAGE_SHIFT);
 
 	if (vma->vm_flags & VM_EXEC) {
 		sh64_icache_inv_user_page(vma, eaddr);

* Re: Changing  update_mmu_cache()
  2005-02-26  1:09       ` David S. Miller
  2005-02-27 18:55         ` Paul Mundt
@ 2005-02-27 19:27         ` Russell King
  1 sibling, 0 replies; 20+ messages in thread
From: Russell King @ 2005-02-27 19:27 UTC (permalink / raw)
  To: David S. Miller; +Cc: linux-arch

On Fri, Feb 25, 2005 at 05:09:05PM -0800, David S. Miller wrote:
> On Fri, 25 Feb 2005 20:15:38 +0000
> Russell King <rmk@arm.linux.org.uk> wrote:
> > The item I was referring to was my flush_cache_page() changes from
> > January 11th (attached), posted to both linux-arch and lkml, and
> > previous to that in November some time, along with Linus' reply,
> > and my somewhat later reply.
> 
> Awesome, thanks.

Thanks David - I appreciate your help with this.  I owe you something
for this... beer?

> I'll push this off to Linus when 2.6.12 opens up.  If folks could
> build test this against current 2.6.x and report any failures that
> need fixing, I would appreciate that.

Once this is in, I can follow it up with the ARM-arch changes for the
VIPT caches on ARMv6.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core

* Re: Changing  update_mmu_cache()
  2005-02-27 18:55         ` Paul Mundt
@ 2005-02-28  4:12           ` David S. Miller
  2005-02-28  9:18             ` Paul Mundt
  0 siblings, 1 reply; 20+ messages in thread
From: David S. Miller @ 2005-02-28  4:12 UTC (permalink / raw)
  To: Paul Mundt; +Cc: rmk, linux-arch

On Sun, 27 Feb 2005 20:55:37 +0200
Paul Mundt <lethal@linux-sh.org> wrote:

> Looks good, thanks. flush_icache_user_range() needed one minor change:
 ...
>  void flush_icache_user_range(struct vm_area_struct *vma,
>  			     struct page *page, unsigned long addr, int len)
>  {
> -	__flush_cache_page(vma, addr, PHYSADDR(page_address(page)));
> +	flush_cache_page(vma, addr, page_to_pfn(page));
>  }

What are you patching against?  In the patch I sent, which you are
replying to, I made flush_icache_user_range() in this file be:

void flush_icache_user_range(struct vm_area_struct *vma,
                             struct page *page, unsigned long addr, int len)
{
        __flush_cache_page(vma, addr,
                           PHYSADDR(page_address(page)) >> PAGE_SHIFT);
}

> > I'll push this off to Linus when 2.6.12 opens up.  If folks could
> > build test this against current 2.6.x and report any failures that
> > need fixing, I would appreciate that.
> > 
> You missed fs/binfmt_elf.c; this gets it working:

Again, what the heck are you patching against?
It's definitely not the patch I posted which you are
replying to.  The tree would not have built for me
on 4 platforms with this error :-)

> On another note, for sh64 we don't actually need to keep
> sh64_dcache_purge_virt_page() around for anything after this change.
> flush_cache_page() was the only user of it anyway, so it makes more
> sense to just have it call sh64_dcache_purge_phy_page() directly.
> 
> Here's a patch for arch/sh64/mm/cache.c that you can use in place of the
> one you have now; it builds and boots.

Again, what are you patching against?  None of your patches are against
my patch at all.

* Re: Changing  update_mmu_cache()
  2005-02-28  4:12           ` David S. Miller
@ 2005-02-28  9:18             ` Paul Mundt
  2005-03-06  5:15               ` David S. Miller
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Mundt @ 2005-02-28  9:18 UTC (permalink / raw)
  To: David S. Miller; +Cc: rmk, linux-arch

On Sun, Feb 27, 2005 at 08:12:08PM -0800, David S. Miller wrote:
> What are you patching against?  In the patch I sent, which you are
> replying to, I made flush_icache_user_range() in this file be:
> 
> void flush_icache_user_range(struct vm_area_struct *vma,
>                              struct page *page, unsigned long addr, int len)
> {
>         __flush_cache_page(vma, addr,
>                            PHYSADDR(page_address(page)) >> PAGE_SHIFT);
> }
> 
Yes, you're right, wrong tree, sorry for the confusion. Here are the
fixes against your patch:

===== arch/sh/mm/cache-sh4.c 1.10 vs edited =====
--- 1.10/arch/sh/mm/cache-sh4.c	2005-02-28 11:14:14 +02:00
+++ edited/arch/sh/mm/cache-sh4.c	2005-02-28 11:15:27 +02:00
@@ -357,7 +357,6 @@
 void flush_icache_user_range(struct vm_area_struct *vma,
 			     struct page *page, unsigned long addr, int len)
 {
-	__flush_cache_page(vma, addr,
-			   PHYSADDR(page_address(page)) >> PAGE_SHIFT);
+	flush_cache_page(vma, addr, page_to_pfn(page));
 }
 
===== arch/sh64/mm/cache.c 1.2 vs edited =====
--- 1.2/arch/sh64/mm/cache.c	2005-02-28 11:14:14 +02:00
+++ edited/arch/sh64/mm/cache.c	2005-02-28 11:15:01 +02:00
@@ -573,11 +573,6 @@
 	}
 }
 
-static inline void sh64_dcache_purge_virt_page(struct mm_struct *mm, unsigned long eaddr, unsigned long pfn)
-{
-	sh64_dcache_purge_phy_page(pfn << PAGE_SHIFT);
-}
-
 static void sh64_dcache_purge_user_page(struct mm_struct *mm, unsigned long eaddr)
 {
 	pgd_t *pgd;
@@ -895,7 +890,7 @@
 	   Note(1), this is called with mm->page_table_lock held.
 	   */
 
-	sh64_dcache_purge_virt_page(vma->vm_mm, eaddr, pfn);
+	sh64_dcache_purge_phy_page(pfn << PAGE_SHIFT);
 
 	if (vma->vm_flags & VM_EXEC) {
 		sh64_icache_inv_user_page(vma, eaddr);

* Re: Changing  update_mmu_cache()
  2005-02-28  9:18             ` Paul Mundt
@ 2005-03-06  5:15               ` David S. Miller
  0 siblings, 0 replies; 20+ messages in thread
From: David S. Miller @ 2005-03-06  5:15 UTC (permalink / raw)
  To: Paul Mundt; +Cc: rmk, linux-arch

On Mon, 28 Feb 2005 11:18:43 +0200
Paul Mundt <lethal@linux-sh.org> wrote:

> On Sun, Feb 27, 2005 at 08:12:08PM -0800, David S. Miller wrote:
> > What are you patching against?  In the patch I sent, which you are
> > replying to, I made flush_icache_user_range() in this file be:
> > 
> > void flush_icache_user_range(struct vm_area_struct *vma,
> >                              struct page *page, unsigned long addr, int len)
> > {
> >         __flush_cache_page(vma, addr,
> >                            PHYSADDR(page_address(page)) >> PAGE_SHIFT);
> > }
> > 
> Yes, you're right, wrong tree, sorry for the confusion. Here are the
> fixes against your patch:

Applied, thanks Paul.

Thread overview: 20+ messages:
2005-02-22  4:53 Changing update_mmu_cache() Benjamin Herrenschmidt
2005-02-22  5:43 ` David S. Miller
2005-02-22  9:07 ` Russell King
2005-02-22 18:08   ` David S. Miller
2005-02-25 20:15     ` Russell King
2005-02-25 21:43       ` Andrew Morton
2005-02-25 21:46         ` William Lee Irwin III
2005-02-25 22:48         ` Russell King
2005-02-25 22:59           ` William Lee Irwin III
2005-02-25 23:07           ` Andrew Morton
2005-02-26  1:09       ` David S. Miller
2005-02-27 18:55         ` Paul Mundt
2005-02-28  4:12           ` David S. Miller
2005-02-28  9:18             ` Paul Mundt
2005-03-06  5:15               ` David S. Miller
2005-02-27 19:27         ` Russell King
2005-02-26  1:10       ` Benjamin Herrenschmidt
2005-02-22 20:51   ` Benjamin Herrenschmidt
2005-02-23  5:35 ` Changing update_mmu_cache() or set_pte() ? Benjamin Herrenschmidt
2005-02-23  5:47   ` Benjamin Herrenschmidt
