linux-arch.vger.kernel.org archive mirror
* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
  2009-09-09 15:52 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
@ 2009-09-09 15:52 ` James Bottomley
  0 siblings, 0 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-09 15:52 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley

On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA.  On some architectures
(like arm) we cannot prevent the CPU from doing data move-in along
the alias (and thus returning stale read data), so we not only have
to introduce a flush API to push dirty cache lines out, but also an
invalidate API to kill inconsistent cache lines that may have moved
in before DMA changed the data.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif
 
 #include <asm/kmap_types.h>
-- 
1.6.3.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work
@ 2009-09-17 23:06 James Bottomley
  2009-09-17 23:06 ` James Bottomley
  2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
  0 siblings, 2 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley

From: James Bottomley <James.Bottomley@HansenPartnership.com>

Here's version three of the patch series.  This one makes sure the
invalidate works correctly.  I verified it on parisc by making my
system print out the virtual addresses it was invalidating and
matching them up with the ones that were initially flushed, but since
invalidate is a nop on parisc, I can't verify live that the issue is
fixed.  I'd really appreciate testing from someone on arm or sh.

Thanks,

James

---

James Bottomley (6):
  mm: add coherence API for DMA to vmalloc/vmap areas
  parisc: add mm API for DMA to vmalloc/vmap areas
  arm: add mm API for DMA to vmalloc/vmap areas
  sh: add mm API for DMA to vmalloc/vmap areas
  block: permit I/O to vmalloc/vmap kernel pages
  xfs: fix xfs to work with Virtually Indexed architectures

 arch/arm/include/asm/cacheflush.h    |   10 ++++++++++
 arch/parisc/include/asm/cacheflush.h |    8 ++++++++
 arch/sh/include/asm/cacheflush.h     |    8 ++++++++
 fs/bio.c                             |   20 ++++++++++++++++++--
 fs/xfs/linux-2.6/xfs_buf.c           |   20 ++++++++++++++++++++
 include/linux/highmem.h              |    6 ++++++
 6 files changed, 70 insertions(+), 2 deletions(-)



* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
  2009-09-17 23:06 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
  2009-09-17 23:06 ` James Bottomley
@ 2009-09-17 23:06 ` James Bottomley
  2009-09-17 23:06   ` James Bottomley
  2009-09-17 23:06   ` [PATCH 2/6] parisc: add mm " James Bottomley
  1 sibling, 2 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA.  On some architectures
(like arm) we cannot prevent the CPU from doing data move-in along
the alias (and thus returning stale read data), so we not only have
to introduce a flush API to push dirty cache lines out, but also an
invalidate API to kill inconsistent cache lines that may have moved
in before DMA changed the data.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif
 
 #include <asm/kmap_types.h>
-- 
1.6.3.3



* [PATCH 2/6] parisc: add mm API for DMA to vmalloc/vmap areas
  2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
  2009-09-17 23:06   ` James Bottomley
@ 2009-09-17 23:06   ` James Bottomley
  2009-09-17 23:06     ` James Bottomley
  2009-09-17 23:06     ` [PATCH 3/6] arm: " James Bottomley
  1 sibling, 2 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

We already have an API to flush a kernel page along an alias
address, so use it.  The TLB purge prevents the CPU from doing
speculative move-ins on the flushed address, so we don't need to
implement an invalidate.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 arch/parisc/include/asm/cacheflush.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 7243951..2536a00 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -90,6 +90,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
 {
 	flush_kernel_dcache_page_addr(page_address(page));
 }
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+	flush_kernel_dcache_page_addr(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+	/* nop .. the flush prevents move in until the page is touched */
+}
 
 #ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void);
-- 
1.6.3.3



* [PATCH 3/6] arm: add mm API for DMA to vmalloc/vmap areas
  2009-09-17 23:06   ` [PATCH 2/6] parisc: add mm " James Bottomley
  2009-09-17 23:06     ` James Bottomley
@ 2009-09-17 23:06     ` James Bottomley
  2009-09-17 23:06       ` [PATCH 4/6] sh: " James Bottomley
  1 sibling, 1 reply; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

ARM cannot prevent cache move-in, so this patch implements both the
flush and invalidate pieces of the API.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 arch/arm/include/asm/cacheflush.h |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 1a711ea..1104ee9 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -436,6 +436,16 @@ static inline void flush_kernel_dcache_page(struct page *page)
 	if ((cache_is_vivt() || cache_is_vipt_aliasing()) && !PageHighMem(page))
 		__cpuc_flush_dcache_page(page_address(page));
 }
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+	if ((cache_is_vivt() || cache_is_vipt_aliasing()))
+		__cpuc_flush_dcache_page(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+	if ((cache_is_vivt() || cache_is_vipt_aliasing()))
+		__cpuc_flush_dcache_page(addr);
+}
 
 #define flush_dcache_mmap_lock(mapping) \
 	spin_lock_irq(&(mapping)->tree_lock)
-- 
1.6.3.3


* [PATCH 4/6] sh: add mm API for DMA to vmalloc/vmap areas
  2009-09-17 23:06     ` [PATCH 3/6] arm: " James Bottomley
@ 2009-09-17 23:06       ` James Bottomley
  2009-09-17 23:06         ` James Bottomley
  2009-09-17 23:07         ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
  0 siblings, 2 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <James.Bottomley@HansenPartnership.com>

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 arch/sh/include/asm/cacheflush.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 4c5462d..3cb8824 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -48,6 +48,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
 {
 	flush_dcache_page(page);
 }
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+	__flush_wback_region(addr, PAGE_SIZE);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+	__flush_invalidate_region(addr, PAGE_SIZE);
+}
 
 #if defined(CONFIG_CPU_SH4) && !defined(CONFIG_CACHE_OFF)
 extern void copy_to_user_page(struct vm_area_struct *vma,
-- 
1.6.3.3



* [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
  2009-09-17 23:06       ` [PATCH 4/6] sh: " James Bottomley
  2009-09-17 23:06         ` James Bottomley
@ 2009-09-17 23:07         ` James Bottomley
  2009-09-17 23:07           ` James Bottomley
  2009-09-17 23:07           ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
  1 sibling, 2 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:07 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

This updates bio_map_kern() to check whether its pages lie in the
vmalloc address range and, if they do, call the new kernel flushing
APIs.  This should allow any kernel user to pass a vmalloc/vmap area
to the block layer.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 fs/bio.c |   20 ++++++++++++++++++--
 1 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/fs/bio.c b/fs/bio.c
index 7673800..0cf7b79 100644
--- a/fs/bio.c
+++ b/fs/bio.c
@@ -1120,6 +1120,14 @@ void bio_unmap_user(struct bio *bio)
 
 static void bio_map_kern_endio(struct bio *bio, int err)
 {
+	void *kaddr = bio->bi_private;
+
+	if (is_vmalloc_addr(kaddr)) {
+		int i;
+
+		for (i = 0; i < bio->bi_vcnt; i++)
+			invalidate_kernel_dcache_addr(kaddr + i * PAGE_SIZE);
+	}
 	bio_put(bio);
 }
 
@@ -1138,9 +1146,12 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
 
+	bio->bi_private = data;
+
 	offset = offset_in_page(kaddr);
 	for (i = 0; i < nr_pages; i++) {
 		unsigned int bytes = PAGE_SIZE - offset;
+		struct page *page;
 
 		if (len <= 0)
 			break;
@@ -1148,8 +1159,13 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
 		if (bytes > len)
 			bytes = len;
 
-		if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
-				    offset) < bytes)
+		if (is_vmalloc_addr(data)) {
+			flush_kernel_dcache_addr(data);
+			page = vmalloc_to_page(data);
+		} else
+			page = virt_to_page(data);
+
+		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
 			break;
 
 		data += bytes;
-- 
1.6.3.3



* [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures
  2009-09-17 23:07         ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
  2009-09-17 23:07           ` James Bottomley
@ 2009-09-17 23:07           ` James Bottomley
  1 sibling, 0 replies; 18+ messages in thread
From: James Bottomley @ 2009-09-17 23:07 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

xfs_buf.c includes what is essentially a hand-rolled version of
blk_rq_map_kern().  In order to work properly with the vmalloc buffers
that xfs uses, this hand-rolled routine must also implement the
flushing API for vmap/vmalloc areas.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 fs/xfs/linux-2.6/xfs_buf.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 965df12..320a6e4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1132,12 +1132,26 @@ xfs_buf_bio_end_io(
 	xfs_buf_t		*bp = (xfs_buf_t *)bio->bi_private;
 	unsigned int		blocksize = bp->b_target->bt_bsize;
 	struct bio_vec		*bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+	void			*vaddr = NULL;
+	int			i;
 
 	xfs_buf_ioerror(bp, -error);
 
+	if (is_vmalloc_addr(bp->b_addr))
+		for (i = 0; i < bp->b_page_count; i++)
+			if (bvec->bv_page == bp->b_pages[i]) {
+				vaddr = bp->b_addr + i*PAGE_SIZE;
+				break;
+			}
+
 	do {
 		struct page	*page = bvec->bv_page;
 
+		if (is_vmalloc_addr(bp->b_addr)) {
+			invalidate_kernel_dcache_addr(vaddr);
+			vaddr -= PAGE_SIZE;
+		}
+
 		ASSERT(!PagePrivate(page));
 		if (unlikely(bp->b_error)) {
 			if (bp->b_flags & XBF_READ)
@@ -1202,6 +1216,9 @@ _xfs_buf_ioapply(
 		bio->bi_end_io = xfs_buf_bio_end_io;
 		bio->bi_private = bp;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			flush_kernel_dcache_addr(bp->b_addr);
+
 		bio_add_page(bio, bp->b_pages[0], PAGE_CACHE_SIZE, 0);
 		size = 0;
 
@@ -1228,6 +1245,9 @@ next_chunk:
 		if (nbytes > size)
 			nbytes = size;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			flush_kernel_dcache_addr(bp->b_addr + PAGE_SIZE*map_i);
+
 		rbytes = bio_add_page(bio, bp->b_pages[map_i], nbytes, offset);
 		if (rbytes < nbytes)
 			break;
-- 
1.6.3.3


* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
  2009-11-17 17:03 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
  2009-11-17 17:03   ` James Bottomley
  2009-11-18 14:38   ` Ralf Baechle
  0 siblings, 2 replies; 18+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
  To: linux-arch, linux-parisc; +Cc: James Bottomley

On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA.  On some architectures
(like arm) we cannot prevent the CPU from doing data move-in along
the alias (and thus returning stale read data), so we not only have
to introduce a flush API to push dirty cache lines out, but also an
invalidate API to kill inconsistent cache lines that may have moved
in before DMA changed the data.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif
 
 #include <asm/kmap_types.h>
-- 
1.6.3.3



* Re: [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
  2009-11-17 17:03 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
  2009-11-17 17:03   ` James Bottomley
@ 2009-11-18 14:38   ` Ralf Baechle
  2009-11-18 14:38     ` Ralf Baechle
  2009-11-18 15:13     ` James Bottomley
  1 sibling, 2 replies; 18+ messages in thread
From: Ralf Baechle @ 2009-11-18 14:38 UTC (permalink / raw)
  To: James Bottomley; +Cc: linux-arch, linux-parisc

On Tue, Nov 17, 2009 at 11:03:47AM -0600, James Bottomley wrote:

> On Virtually Indexed architectures (which don't do automatic alias
> resolution in their caches), we have to flush via the correct
> virtual address to prepare pages for DMA.  On some architectures
> (like arm) we cannot prevent the CPU from doing data movein along
> the alias (and thus giving stale read data), so we not only have to
> introduce a flush API to push dirty cache lines out, but also an invalidate
> API to kill inconsistent cache lines that may have moved in before
> DMA changed the data

The API looks right for MIPS and trivial to implement based on existing
code, so feel free to throw in my Ack on the generic parts.

The new APIs deserve documentation in Documentation/cachetlb.txt.

  Ralf


* Re: [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
  2009-11-18 14:38   ` Ralf Baechle
  2009-11-18 14:38     ` Ralf Baechle
@ 2009-11-18 15:13     ` James Bottomley
  1 sibling, 0 replies; 18+ messages in thread
From: James Bottomley @ 2009-11-18 15:13 UTC (permalink / raw)
  To: Ralf Baechle; +Cc: linux-arch, linux-parisc

On Wed, 2009-11-18 at 15:38 +0100, Ralf Baechle wrote:
> On Tue, Nov 17, 2009 at 11:03:47AM -0600, James Bottomley wrote:
> 
> > On Virtually Indexed architectures (which don't do automatic alias
> > resolution in their caches), we have to flush via the correct
> > virtual address to prepare pages for DMA.  On some architectures
> > (like arm) we cannot prevent the CPU from doing data movein along
> > the alias (and thus giving stale read data), so we not only have to
> > introduce a flush API to push dirty cache lines out, but also an invalidate
> > API to kill inconsistent cache lines that may have moved in before
> > DMA changed the data
> 
> The API looks right for MIPS and trivial to implement based on existing
> code, so feel free to throw in my Ack on the generic parts.
> 
> The new APIs deserve documentation in Documentation/cachetlb.txt.

True (mutter, hate doing docs, mutter).

How about this?

James

---

diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index da42ab4..7d1055c 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -377,3 +377,27 @@ maps this page at its virtual address.
 	All the functionality of flush_icache_page can be implemented in
 	flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
 	remove this interface completely.
+
+For machines where aliasing can be a problem, there exist two
+additional APIs to handle I/O to vmap/vmalloc areas within the
+kernel.  These are areas that have two kernel mappings: one via the
+regular page offset map, through which the page has likely been
+previously accessed, and the other via the new contiguous map in the
+kernel virtual map area.  This dual mapping sets up aliasing within the
+kernel and, in particular since all kernel flushing goes through the
+offset map, must be handled separately for I/O.  To declare your architecture as
+needing to use these functions, you must define
+ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE in asm/cacheflush.h and add two API
+helpers (usually as static inlines in cacheflush.h).  The two new APIs
+are:
+
+  void flush_kernel_dcache_addr(void *addr)
+       Flush a single page through the vmap alias for addr.  This is
+       usually executed prior to performing I/O on the page to make
+       sure the underlying physical page is up to date.
+
+  void invalidate_kernel_dcache_addr(void *addr)
+       Invalidate the page after I/O has completed.  This is necessary
+       on machines whose cache mechanisms might trigger cache movein
+       during I/O.  If you can ensure architecturally that this movein
+       never occurs, this function can be empty on your architecture.

